In Riemannian geometry, a Jacobi field is a vector field along a geodesic γ {\displaystyle \gamma } in a Riemannian manifold describing the difference between the geodesic and an "infinitesimally close" geodesic. In other words, the Jacobi fields along a geodesic form the tangent space to the geodesic in the space of all geodesics. They are named after Carl Jacobi. == Definitions and properties == Jacobi fields can be obtained in the following way: Take a smooth one parameter family of geodesics γ τ {\displaystyle \gamma _{\tau }} with γ 0 = γ {\displaystyle \gamma _{0}=\gamma } , then J ( t ) = ∂ γ τ ( t ) ∂ τ | τ = 0 {\displaystyle J(t)=\left.{\frac {\partial \gamma _{\tau }(t)}{\partial \tau }}\right|_{\tau =0}} is a Jacobi field, and describes the behavior of the geodesics in an infinitesimal neighborhood of a given geodesic γ {\displaystyle \gamma } . A vector field J along a geodesic γ {\displaystyle \gamma } is said to be a Jacobi field if it satisfies the Jacobi equation: D 2 d t 2 J ( t ) + R ( J ( t ) , γ ˙ ( t ) ) γ ˙ ( t ) = 0 , {\displaystyle {\frac {D^{2}}{dt^{2}}}J(t)+R(J(t),{\dot {\gamma }}(t)){\dot {\gamma }}(t)=0,} where D denotes the covariant derivative with respect to the Levi-Civita connection, R the Riemann curvature tensor, γ ˙ ( t ) = d γ ( t ) / d t {\displaystyle {\dot {\gamma }}(t)=d\gamma (t)/dt} the tangent vector field, and t is the parameter of the geodesic. On a complete Riemannian manifold, for any Jacobi field there is a family of geodesics γ τ {\displaystyle \gamma _{\tau }} describing the field (as in the preceding paragraph). The Jacobi equation is a linear, second order ordinary differential equation; in particular, values of J {\displaystyle J} and D d t J {\displaystyle {\frac {D}{dt}}J} at one point of γ {\displaystyle \gamma } uniquely determine the Jacobi field. Furthermore, the set of Jacobi fields along a given geodesic forms a real vector space of dimension twice the dimension of the manifold. 
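The defining construction above can be sketched numerically. The following is a minimal sketch, assuming the unit sphere in R^3 and the family of great circles through the north pole (the function name and discretization are illustrative): differentiating γ_τ(t) in τ by central differences produces a field whose length is sin t, the normal Jacobi field with J(0) = 0.

```python
import numpy as np

def gamma(tau, t):
    """Great circle on the unit sphere through the north pole,
    departing in a direction rotated by angle tau."""
    north = np.array([0.0, 0.0, 1.0])
    d = np.array([np.cos(tau), np.sin(tau), 0.0])  # departure direction
    return np.cos(t) * north + np.sin(t) * d

# J(t) = d/dtau gamma_tau(t) at tau = 0, by central finite differences
eps = 1e-6
ts = np.linspace(0.0, np.pi, 201)
J = np.array([(gamma(eps, t) - gamma(-eps, t)) / (2 * eps) for t in ts])

# The normal Jacobi field with J(0) = 0 on the unit sphere has |J(t)| = sin(t)
assert np.allclose(np.linalg.norm(J, axis=1), np.sin(ts), atol=1e-5)
```

Here the τ-derivative is taken in the ambient R^3 coordinates; since the whole family lies on the sphere, the result is automatically tangent to the sphere and no projection is needed.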
As trivial examples of Jacobi fields one can consider γ ˙ ( t ) {\displaystyle {\dot {\gamma }}(t)} and t γ ˙ ( t ) {\displaystyle t{\dot {\gamma }}(t)} . These correspond respectively to the following families of reparametrizations: γ τ ( t ) = γ ( τ + t ) {\displaystyle \gamma _{\tau }(t)=\gamma (\tau +t)} and γ τ ( t ) = γ ( ( 1 + τ ) t ) {\displaystyle \gamma _{\tau }(t)=\gamma ((1+\tau )t)} . Any Jacobi field J {\displaystyle J} can be represented in a unique way as a sum T + I {\displaystyle T+I} , where T = a γ ˙ ( t ) + b t γ ˙ ( t ) {\displaystyle T=a{\dot {\gamma }}(t)+bt{\dot {\gamma }}(t)} is a linear combination of trivial Jacobi fields and I ( t ) {\displaystyle I(t)} is orthogonal to γ ˙ ( t ) {\displaystyle {\dot {\gamma }}(t)} , for all t {\displaystyle t} . The field I {\displaystyle I} then corresponds to the same variation of geodesics as J {\displaystyle J} , only with changed parametrizations. == Motivating example == On a unit sphere, the geodesics through the North pole are great circles. Consider two such geodesics γ 0 {\displaystyle \gamma _{0}} and γ τ {\displaystyle \gamma _{\tau }} with natural parameter, t ∈ [ 0 , π ] {\displaystyle t\in [0,\pi ]} , separated by an angle τ {\displaystyle \tau } . The geodesic distance d ( γ 0 ( t ) , γ τ ( t ) ) {\displaystyle d(\gamma _{0}(t),\gamma _{\tau }(t))\,} is d ( γ 0 ( t ) , γ τ ( t ) ) = sin − 1 ⁡ ( sin ⁡ t sin ⁡ τ 1 + cos 2 ⁡ t tan 2 ⁡ ( τ / 2 ) ) . {\displaystyle d(\gamma _{0}(t),\gamma _{\tau }(t))=\sin ^{-1}{\bigg (}\sin t\sin \tau {\sqrt {1+\cos ^{2}t\tan ^{2}(\tau /2)}}{\bigg )}.} Computing this requires knowing the geodesics. The most interesting information is just that d ( γ 0 ( π ) , γ τ ( π ) ) = 0 {\displaystyle d(\gamma _{0}(\pi ),\gamma _{\tau }(\pi ))=0\,} , for any τ {\displaystyle \tau } . Instead, we can consider the derivative with respect to τ {\displaystyle \tau } at τ = 0 {\displaystyle \tau =0} : ∂ ∂ τ | τ = 0 d ( γ 0 ( t ) , γ τ ( t ) ) = | J ( t ) | = sin ⁡ t . 
{\displaystyle {\frac {\partial }{\partial \tau }}{\bigg |}_{\tau =0}d(\gamma _{0}(t),\gamma _{\tau }(t))=|J(t)|=\sin t.} Notice that we still detect the intersection of the geodesics at t = π {\displaystyle t=\pi } . Notice further that to calculate this derivative we do not actually need to know d ( γ 0 ( t ) , γ τ ( t ) ) {\displaystyle d(\gamma _{0}(t),\gamma _{\tau }(t))\,} , rather, all we need do is solve the equation y ″ + y = 0 {\displaystyle y''+y=0\,} , for some given initial data. Jacobi fields give a natural generalization of this phenomenon to arbitrary Riemannian manifolds. == Solving the Jacobi equation == Let e 1 ( 0 ) = γ ˙ ( 0 ) / | γ ˙ ( 0 ) | {\displaystyle e_{1}(0)={\dot {\gamma }}(0)/|{\dot {\gamma }}(0)|} and complete this to get an orthonormal basis { e i ( 0 ) } {\displaystyle {\big \{}e_{i}(0){\big \}}} at T γ ( 0 ) M {\displaystyle T_{\gamma (0)}M} . Parallel transport it to get a basis { e i ( t ) } {\displaystyle \{e_{i}(t)\}} all along γ {\displaystyle \gamma } . This gives an orthonormal basis with e 1 ( t ) = γ ˙ ( t ) / | γ ˙ ( t ) | {\displaystyle e_{1}(t)={\dot {\gamma }}(t)/|{\dot {\gamma }}(t)|} . The Jacobi field can be written in co-ordinates in terms of this basis as J ( t ) = y k ( t ) e k ( t ) {\displaystyle J(t)=y^{k}(t)e_{k}(t)} and thus D d t J = ∑ k d y k d t e k ( t ) , D 2 d t 2 J = ∑ k d 2 y k d t 2 e k ( t ) , {\displaystyle {\frac {D}{dt}}J=\sum _{k}{\frac {dy^{k}}{dt}}e_{k}(t),\quad {\frac {D^{2}}{dt^{2}}}J=\sum _{k}{\frac {d^{2}y^{k}}{dt^{2}}}e_{k}(t),} and the Jacobi equation can be rewritten as a system d 2 y k d t 2 + | γ ˙ | 2 ∑ j y j ( t ) ⟨ R ( e j ( t ) , e 1 ( t ) ) e 1 ( t ) , e k ( t ) ⟩ = 0 {\displaystyle {\frac {d^{2}y^{k}}{dt^{2}}}+|{\dot {\gamma }}|^{2}\sum _{j}y^{j}(t)\langle R(e_{j}(t),e_{1}(t))e_{1}(t),e_{k}(t)\rangle =0} for each k {\displaystyle k} . This way we get a linear ordinary differential equation (ODE). 
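For a unit-speed geodesic in constant sectional curvature K one has ⟨R(e_j(t), e_1(t))e_1(t), e_k(t)⟩ = K δ_jk in the normal directions, so the system above decouples into scalar equations y'' + K y = 0. A minimal sketch integrating one such component with classical RK4 (the step count and initial data are illustrative choices):

```python
import numpy as np

def jacobi_component(K, t_end, n_steps=2000):
    """Integrate y'' + K*y = 0 -- the Jacobi equation for a normal component
    along a unit-speed geodesic in constant sectional curvature K -- with
    initial data y(0) = 0, y'(0) = 1, using classical RK4."""
    h = t_end / n_steps
    state = np.array([0.0, 1.0])                    # (y, y')
    f = lambda s: np.array([s[1], -K * s[0]])
    for _ in range(n_steps):
        k1 = f(state)
        k2 = f(state + 0.5 * h * k1)
        k3 = f(state + 0.5 * h * k2)
        k4 = f(state + h * k3)
        state = state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state[0]

# K = 1 (unit sphere): y(t) = sin(t), vanishing again at t = pi
assert abs(jacobi_component(1.0, np.pi)) < 1e-8
assert abs(jacobi_component(1.0, 1.0) - np.sin(1.0)) < 1e-8
# K = 0 (flat space): y(t) = t, linear growth
assert abs(jacobi_component(0.0, 2.5) - 2.5) < 1e-10
```

The two cases recover the examples discussed below: trigonometric solutions in positive curvature and solutions linear in t in flat space.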
Since this ODE has smooth coefficients we have that solutions exist for all t {\displaystyle t} and are unique, given y k ( 0 ) {\displaystyle y^{k}(0)} and y k ′ ( 0 ) {\displaystyle {y^{k}}'(0)} , for all k {\displaystyle k} . == Examples == Consider a geodesic γ ( t ) {\displaystyle \gamma (t)} with parallel orthonormal frame e i ( t ) {\displaystyle e_{i}(t)} , e 1 ( t ) = γ ˙ ( t ) / | γ ˙ | {\displaystyle e_{1}(t)={\dot {\gamma }}(t)/|{\dot {\gamma }}|} , constructed as above. The vector fields along γ {\displaystyle \gamma } given by γ ˙ ( t ) {\displaystyle {\dot {\gamma }}(t)} and t γ ˙ ( t ) {\displaystyle t{\dot {\gamma }}(t)} are Jacobi fields. In Euclidean space (as well as for spaces of constant zero sectional curvature) Jacobi fields are simply those fields linear in t {\displaystyle t} . For Riemannian manifolds of constant negative sectional curvature − k 2 {\displaystyle -k^{2}} , any Jacobi field is a linear combination of γ ˙ ( t ) {\displaystyle {\dot {\gamma }}(t)} , t γ ˙ ( t ) {\displaystyle t{\dot {\gamma }}(t)} and exp ⁡ ( ± k t ) e i ( t ) {\displaystyle \exp(\pm kt)e_{i}(t)} , where i > 1 {\displaystyle i>1} . For Riemannian manifolds of constant positive sectional curvature k 2 {\displaystyle k^{2}} , any Jacobi field is a linear combination of γ ˙ ( t ) {\displaystyle {\dot {\gamma }}(t)} , t γ ˙ ( t ) {\displaystyle t{\dot {\gamma }}(t)} , sin ⁡ ( k t ) e i ( t ) {\displaystyle \sin(kt)e_{i}(t)} and cos ⁡ ( k t ) e i ( t ) {\displaystyle \cos(kt)e_{i}(t)} , where i > 1 {\displaystyle i>1} . The restriction of a Killing vector field to a geodesic is a Jacobi field in any Riemannian manifold. == See also == Conjugate points Geodesic deviation equation Rauch comparison theorem N-Jacobi field == References == Manfredo Perdigão do Carmo. Riemannian geometry. Translated from the second Portuguese edition by Francis Flaherty. Mathematics: Theory & Applications. Birkhäuser Boston, Inc., Boston, MA, 1992. xiv+300 pp. 
ISBN 0-8176-3490-8 Jeff Cheeger and David G. Ebin. Comparison theorems in Riemannian geometry. Revised reprint of the 1975 original. AMS Chelsea Publishing, Providence, RI, 2008. x+168 pp. ISBN 978-0-8218-4417-5 Shoshichi Kobayashi and Katsumi Nomizu. Foundations of differential geometry. Vol. II. Reprint of the 1969 original. Wiley Classics Library. A Wiley-Interscience Publication. John Wiley & Sons, Inc., New York, 1996. xvi+468 pp. ISBN 0-471-15732-5 Barrett O'Neill. Semi-Riemannian geometry. With applications to relativity. Pure and Applied Mathematics, 103. Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], New York, 1983. xiii+468 pp. ISBN 0-12-526740-1
In applied mathematics, discontinuous Galerkin methods (DG methods) form a class of numerical methods for solving differential equations. They combine features of the finite element and the finite volume framework and have been successfully applied to hyperbolic, elliptic, parabolic and mixed form problems arising from a wide range of applications. DG methods have in particular received considerable interest for problems with a dominant first-order part, e.g. in electrodynamics, fluid mechanics and plasma physics. Indeed, the solutions of such problems may involve strong gradients (and even discontinuities) so that classical finite element methods fail, while finite volume methods are restricted to low order approximations. Discontinuous Galerkin methods were first proposed and analyzed in the early 1970s as a technique to numerically solve partial differential equations. In 1973 Reed and Hill introduced a DG method to solve the hyperbolic neutron transport equation. The origin of the DG method for elliptic problems cannot be traced back to a single publication as features such as jump penalization in the modern sense were developed gradually. However, among the early influential contributors were Babuška, J.-L. Lions, Joachim Nitsche and Miloš Zlámal. DG methods for elliptic problems were already developed in a paper by Garth Baker in the setting of 4th order equations in 1977. A more complete account of the historical development and an introduction to DG methods for elliptic problems is given in a publication by Arnold, Brezzi, Cockburn and Marini. A number of research directions and challenges on DG methods are collected in the proceedings volume edited by Cockburn, Karniadakis and Shu. == Overview == Much like the continuous Galerkin (CG) method, the discontinuous Galerkin (DG) method is a finite element method formulated relative to a weak formulation of a particular model system. 
Unlike traditional CG methods that are conforming, the DG method works over a trial space of functions that are only piecewise continuous, and thus often comprise more inclusive function spaces than the finite-dimensional inner product subspaces utilized in conforming methods. As an example, consider the continuity equation for a scalar unknown ρ {\displaystyle \rho } in a spatial domain Ω {\displaystyle \Omega } without "sources" or "sinks" : ∂ ρ ∂ t + ∇ ⋅ J = 0 , {\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \mathbf {J} =0,} where J {\displaystyle \mathbf {J} } is the flux of ρ {\displaystyle \rho } . Now consider the finite-dimensional space of discontinuous piecewise polynomial functions over the spatial domain Ω {\displaystyle \Omega } restricted to a discrete triangulation Ω h {\displaystyle \Omega _{h}} , written as S h p ( Ω h ) = { v | Ω e i ∈ P p ( Ω e i ) , ∀ Ω e i ∈ Ω h } {\displaystyle S_{h}^{p}(\Omega _{h})=\{v_{|\Omega _{e_{i}}}\in P^{p}(\Omega _{e_{i}}),\ \ \forall \Omega _{e_{i}}\in \Omega _{h}\}} for P p ( Ω e i ) {\displaystyle P^{p}(\Omega _{e_{i}})} the space of polynomials with degrees less than or equal to p {\displaystyle p} over element Ω e i {\displaystyle \Omega _{e_{i}}} indexed by i {\displaystyle i} . Then for finite element shape functions N j ∈ P p {\displaystyle N_{j}\in P^{p}} the solution is represented by ρ h i = ∑ j = 1 dofs ρ j i ( t ) N j i ( x ) , ∀ x ∈ Ω e i . 
{\displaystyle \rho _{h}^{i}=\sum _{j=1}^{\text{dofs}}\rho _{j}^{i}(t)N_{j}^{i}({\boldsymbol {x}}),\quad \forall {\boldsymbol {x}}\in \Omega _{e_{i}}.} Then similarly choosing a test function φ h i ( x ) = ∑ j = 1 dofs φ j i N j i ( x ) , ∀ x ∈ Ω e i , {\displaystyle \varphi _{h}^{i}({\boldsymbol {x}})=\sum _{j=1}^{\text{dofs}}\varphi _{j}^{i}N_{j}^{i}({\boldsymbol {x}}),\quad \forall {\boldsymbol {x}}\in \Omega _{e_{i}},} multiplying the continuity equation by φ h i {\displaystyle \varphi _{h}^{i}} and integrating by parts in space, the semidiscrete DG formulation becomes: d d t ∫ Ω e i ρ h i φ h i d x + ∫ ∂ Ω e i φ h i J h ⋅ n d x = ∫ Ω e i J h ⋅ ∇ φ h i d x . {\displaystyle {\frac {d}{dt}}\int _{\Omega _{e_{i}}}\rho _{h}^{i}\varphi _{h}^{i}\,d{\boldsymbol {x}}+\int _{\partial \Omega _{e_{i}}}\varphi _{h}^{i}\mathbf {J} _{h}\cdot {\boldsymbol {n}}\,d{\boldsymbol {x}}=\int _{\Omega _{e_{i}}}\mathbf {J} _{h}\cdot \nabla \varphi _{h}^{i}\,d{\boldsymbol {x}}.} == Scalar hyperbolic conservation law == A scalar hyperbolic conservation law is of the form ∂ t u + ∂ x f ( u ) = 0 for t > 0 , x ∈ R u ( 0 , x ) = u 0 ( x ) , {\displaystyle {\begin{aligned}\partial _{t}u+\partial _{x}f(u)&=0\quad {\text{for}}\quad t>0,\,x\in \mathbb {R} \\u(0,x)&=u_{0}(x)\,,\end{aligned}}} where one tries to solve for the unknown scalar function u ≡ u ( t , x ) {\displaystyle u\equiv u(t,x)} , and the functions f , u 0 {\displaystyle f,u_{0}} are typically given. === Space discretization === The x {\displaystyle x} -space will be discretized as R = ⋃ k I k , I k := ( x k , x k + 1 ) for x k < x k + 1 . {\displaystyle \mathbb {R} =\bigcup _{k}I_{k}\,,\quad I_{k}:=\left(x_{k},x_{k+1}\right)\quad {\text{for}}\quad x_{k}<x_{k+1}\,.} Furthermore, we need the following definitions h k := | I k | , h := sup k h k , x ^ k := x k + h k 2 . 
{\displaystyle h_{k}:=|I_{k}|\,,\quad h:=\sup _{k}h_{k}\,,\quad {\hat {x}}_{k}:=x_{k}+{\frac {h_{k}}{2}}\,.} === Basis for function space === We derive the basis representation for the function space of our solution u {\displaystyle u} . The function space is defined as S h p := { v ∈ L 2 ( R ) : v | I k ∈ Π p } for p ∈ N 0 , {\displaystyle S_{h}^{p}:=\left\lbrace v\in L^{2}(\mathbb {R} ):v{\Big |}_{I_{k}}\in \Pi _{p}\right\rbrace \quad {\text{for}}\quad p\in \mathbb {N} _{0}\,,} where v | I k {\displaystyle {v|}_{I_{k}}} denotes the restriction of v {\displaystyle v} onto the interval I k {\displaystyle I_{k}} , and Π p {\displaystyle \Pi _{p}} denotes the space of polynomials of maximal degree p {\displaystyle p} . The index h {\displaystyle h} should show the relation to an underlying discretization given by ( x k ) k {\displaystyle \left(x_{k}\right)_{k}} . Note here that v {\displaystyle v} is not uniquely defined at the intersection points ( x k ) k {\displaystyle (x_{k})_{k}} . At first we make use of a specific polynomial basis on the interval [ − 1 , 1 ] {\displaystyle [-1,1]} , the Legendre polynomials ( P n ) n ∈ N 0 {\displaystyle (P_{n})_{n\in \mathbb {N} _{0}}} , i.e., P 0 ( x ) = 1 , P 1 ( x ) = x , P 2 ( x ) = 1 2 ( 3 x 2 − 1 ) , … {\displaystyle P_{0}(x)=1\,,\quad P_{1}(x)=x\,,\quad P_{2}(x)={\frac {1}{2}}(3x^{2}-1)\,,\quad \dots } Note especially the orthogonality relations ⟨ P i , P j ⟩ L 2 ( [ − 1 , 1 ] ) = 2 2 i + 1 δ i j ∀ i , j ∈ N 0 . 
{\displaystyle \left\langle P_{i},P_{j}\right\rangle _{L^{2}([-1,1])}={\frac {2}{2i+1}}\delta _{ij}\quad \forall \,i,j\in \mathbb {N} _{0}\,.} Transformation onto the interval [ 0 , 1 ] {\displaystyle [0,1]} , and normalization is achieved by functions ( φ i ) i {\displaystyle (\varphi _{i})_{i}} φ i ( x ) := 2 i + 1 P i ( 2 x − 1 ) for x ∈ [ 0 , 1 ] , {\displaystyle \varphi _{i}(x):={\sqrt {2i+1}}P_{i}(2x-1)\quad {\text{for}}\quad x\in [0,1]\,,} which fulfill the orthonormality relation ⟨ φ i , φ j ⟩ L 2 ( [ 0 , 1 ] ) = δ i j ∀ i , j ∈ N 0 . {\displaystyle \left\langle \varphi _{i},\varphi _{j}\right\rangle _{L^{2}([0,1])}=\delta _{ij}\quad \forall \,i,j\in \mathbb {N} _{0}\,.} Transformation onto an interval I k {\displaystyle I_{k}} is given by ( φ ¯ k i ) i {\displaystyle \left({\bar {\varphi }}_{ki}\right)_{i}} φ ¯ k i := 1 h k φ i ( x − x k h k ) for x ∈ I k , {\displaystyle {\bar {\varphi }}_{ki}:={\frac {1}{\sqrt {h_{k}}}}\varphi _{i}\left({\frac {x-x_{k}}{h_{k}}}\right)\quad {\text{for}}\quad x\in I_{k}\,,} which fulfill ⟨ φ ¯ k i , φ ¯ k j ⟩ L 2 ( I k ) = δ i j ∀ i , j ∈ N 0 ∀ k . {\displaystyle \left\langle {\bar {\varphi }}_{ki},{\bar {\varphi }}_{kj}\right\rangle _{L^{2}(I_{k})}=\delta _{ij}\quad \forall \,i,j\in \mathbb {N} _{0}\forall \,k\,.} For L ∞ {\displaystyle L^{\infty }} -normalization we define φ k i := h k φ ¯ k i {\displaystyle \varphi _{ki}:={\sqrt {h_{k}}}{\bar {\varphi }}_{ki}} , and for L 1 {\displaystyle L^{1}} -normalization we define φ ~ k i := 1 h k φ ¯ k i {\displaystyle {\tilde {\varphi }}_{ki}:={\frac {1}{\sqrt {h_{k}}}}{\bar {\varphi }}_{ki}} , s.t. ‖ φ k i ‖ L ∞ ( I k ) = ‖ φ i ‖ L ∞ ( [ 0 , 1 ] ) =: c i , ∞ and ‖ φ ~ k i ‖ L 1 ( I k ) = ‖ φ i ‖ L 1 ( [ 0 , 1 ] ) =: c i , 1 . 
{\displaystyle \|\varphi _{ki}\|_{L^{\infty }(I_{k})}=\|\varphi _{i}\|_{L^{\infty }([0,1])}=:c_{i,\infty }\quad {\text{and}}\quad \|{\tilde {\varphi }}_{ki}\|_{L^{1}(I_{k})}=\|\varphi _{i}\|_{L^{1}([0,1])}=:c_{i,1}\,.} Finally, we can define the basis representation of our solutions u h {\displaystyle u_{h}} u h ( t , x ) := ∑ i = 0 p u k i ( t ) φ k i ( x ) for x ∈ ( x k , x k + 1 ) u k i ( t ) = ⟨ u h ( t , ⋅ ) , φ ~ k i ⟩ L 2 ( I k ) . {\displaystyle {\begin{aligned}u_{h}(t,x):=&\sum _{i=0}^{p}u_{ki}(t)\varphi _{ki}(x)\quad {\text{for}}\quad x\in (x_{k},x_{k+1})\\u_{ki}(t)=&\left\langle u_{h}(t,\cdot ),{\tilde {\varphi }}_{ki}\right\rangle _{L^{2}(I_{k})}\,.\end{aligned}}} Note here, that u h {\displaystyle u_{h}} is not defined at the interface positions. Besides, prism bases are employed for planar-like structures, and are capable for 2-D/3-D hybridation. === DG-scheme === The conservation law is transformed into its weak form by multiplying with test functions, and integration over test intervals ∂ t u + ∂ x f ( u ) = 0 ⇒ ⟨ ∂ t u , v ⟩ L 2 ( I k ) + ⟨ ∂ x f ( u ) , v ⟩ L 2 ( I k ) = 0 for ∀ v ∈ S h p ⇔ ⟨ ∂ t u , φ ~ k i ⟩ L 2 ( I k ) + ⟨ ∂ x f ( u ) , φ ~ k i ⟩ L 2 ( I k ) = 0 for ∀ k ∀ i ≤ p . {\displaystyle {\begin{aligned}\partial _{t}u+\partial _{x}f(u)&=0\\\Rightarrow \quad \left\langle \partial _{t}u,v\right\rangle _{L^{2}(I_{k})}+\left\langle \partial _{x}f(u),v\right\rangle _{L^{2}(I_{k})}&=0\quad {\text{for}}\quad \forall \,v\in S_{h}^{p}\\\Leftrightarrow \quad \left\langle \partial _{t}u,{\tilde {\varphi }}_{ki}\right\rangle _{L^{2}(I_{k})}+\left\langle \partial _{x}f(u),{\tilde {\varphi }}_{ki}\right\rangle _{L^{2}(I_{k})}&=0\quad {\text{for}}\quad \forall \,k\;\forall \,i\leq p\,.\end{aligned}}} By using partial integration one is left with d d t u k i ( t ) + f ( u ( t , x k + 1 ) ) φ ~ k i ( x k + 1 ) − f ( u ( t , x k ) ) φ ~ k i ( x k ) − ⟨ f ( u ( t , ⋅ ) ) , φ ~ k i ′ ⟩ L 2 ( I k ) = 0 for ∀ k ∀ i ≤ p . 
{\displaystyle {\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} t}}u_{ki}(t)+f(u(t,x_{k+1})){\tilde {\varphi }}_{ki}(x_{k+1})-f(u(t,x_{k})){\tilde {\varphi }}_{ki}(x_{k})-\left\langle f(u(t,\,\cdot \,)),{\tilde {\varphi }}_{ki}'\right\rangle _{L^{2}(I_{k})}=0\quad {\text{for}}\quad \forall \,k\;\forall \,i\leq p\,.\end{aligned}}} The fluxes at the interfaces are approximated by numerical fluxes g {\displaystyle g} with g k := g ( u k − , u k + ) , u k ± := u ( t , x k ± ) , {\displaystyle g_{k}:=g(u_{k}^{-},u_{k}^{+})\,,\quad u_{k}^{\pm }:=u(t,x_{k}^{\pm })\,,} where u k ± {\displaystyle u_{k}^{\pm }} denotes the left- and right-hand sided limits. Finally, the DG-Scheme can be written as d d t u k i ( t ) + g k + 1 φ ~ k i ( x k + 1 ) − g k φ ~ k i ( x k ) − ⟨ f ( u ( t , ⋅ ) ) , φ ~ k i ′ ⟩ L 2 ( I k ) = 0 for ∀ k ∀ i ≤ p . {\displaystyle {\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} t}}u_{ki}(t)+g_{k+1}{\tilde {\varphi }}_{ki}(x_{k+1})-g_{k}{\tilde {\varphi }}_{ki}(x_{k})-\left\langle f(u(t,\,\cdot \,)),{\tilde {\varphi }}_{ki}'\right\rangle _{L^{2}(I_{k})}=0\quad {\text{for}}\quad \forall \,k\;\forall \,i\leq p\,.\end{aligned}}} == Scalar elliptic equation == A scalar elliptic equation is of the form − ∂ x x u = f ( x ) for x ∈ ( a , b ) u ( x ) = g ( x ) for x = a , b {\displaystyle {\begin{aligned}-\partial _{xx}u&=f(x)\quad {\text{for}}\quad x\in (a,b)\\u(x)&=g(x)\,\quad {\text{for}}\,\quad x=a,b\end{aligned}}} This equation is the steady-state heat equation, where u {\displaystyle u} is the temperature. Space discretization is the same as above. We recall that the interval ( a , b ) {\displaystyle (a,b)} is partitioned into N + 1 {\displaystyle N+1} intervals of length h {\displaystyle h} . 
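As a concrete illustration of the DG scheme for the scalar conservation law above, take linear advection f(u) = u on a uniform periodic grid with piecewise-constant elements (p = 0) and the upwind numerical flux g(u⁻, u⁺) = u⁻; in this lowest-order case the scheme reduces to the classical first-order upwind finite-volume method. The grid size, CFL number, and initial data below are illustrative choices:

```python
import numpy as np

# DG with piecewise-constant (p = 0) elements and upwind flux g_k = u_{k-1}
# for u_t + u_x = 0 on the periodic unit interval; forward Euler in time.
N = 200
h = 1.0 / N
x = (np.arange(N) + 0.5) * h                  # cell midpoints
u = np.sin(2 * np.pi * x)                     # periodic initial data

lam = 0.5                                     # CFL number dt/h (stable for lam <= 1)
dt = lam * h
for _ in range(int(round(1.0 / dt))):         # advect for one full period
    u = u - lam * (u - np.roll(u, 1))         # u_k <- u_k - lam*(u_k - u_{k-1})

# first-order accuracy: after one period u is a slightly damped copy of u(0)
assert np.max(np.abs(u - np.sin(2 * np.pi * x))) < 0.1
```

Higher-order variants replace the cell average by a Legendre expansion per cell, as in the basis construction above, and typically use a strong-stability-preserving Runge–Kutta time integrator instead of forward Euler.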
We introduce jump [ ⋅ ] {\displaystyle [{}\cdot {}]} and average { ⋅ } {\displaystyle \{{}\cdot {}\}} of functions at the node x k {\displaystyle x_{k}} : [ v ] | x k = v ( x k + ) − v ( x k − ) , { v } | x k = 0.5 ( v ( x k + ) + v ( x k − ) ) {\displaystyle [v]{\Big |}_{x_{k}}=v(x_{k}^{+})-v(x_{k}^{-}),\quad \{v\}{\Big |}_{x_{k}}=0.5(v(x_{k}^{+})+v(x_{k}^{-}))} The interior penalty discontinuous Galerkin (IPDG) method is: find u h {\displaystyle u_{h}} satisfying A ( u h , v h ) + A ∂ ( u h , v h ) = ℓ ( v h ) + ℓ ∂ ( v h ) {\displaystyle A(u_{h},v_{h})+A_{\partial }(u_{h},v_{h})=\ell (v_{h})+\ell _{\partial }(v_{h})} where the bilinear forms A {\displaystyle A} and A ∂ {\displaystyle A_{\partial }} are A ( u h , v h ) = ∑ k = 1 N + 1 ∫ x k − 1 x k ∂ x u h ∂ x v h − ∑ k = 1 N { ∂ x u h } x k [ v h ] x k + ε ∑ k = 1 N { ∂ x v h } x k [ u h ] x k + σ h ∑ k = 1 N [ u h ] x k [ v h ] x k {\displaystyle A(u_{h},v_{h})=\sum _{k=1}^{N+1}\int _{x_{k-1}}^{x_{k}}\partial _{x}u_{h}\partial _{x}v_{h}-\sum _{k=1}^{N}\{\partial _{x}u_{h}\}_{x_{k}}[v_{h}]_{x_{k}}+\varepsilon \sum _{k=1}^{N}\{\partial _{x}v_{h}\}_{x_{k}}[u_{h}]_{x_{k}}+{\frac {\sigma }{h}}\sum _{k=1}^{N}[u_{h}]_{x_{k}}[v_{h}]_{x_{k}}} and A ∂ ( u h , v h ) = ∂ x u h ( a ) v h ( a ) − ∂ x u h ( b ) v h ( b ) − ε ∂ x v h ( a ) u h ( a ) + ε ∂ x v h ( b ) u h ( b ) + σ h ( u h ( a ) v h ( a ) + u h ( b ) v h ( b ) ) {\displaystyle A_{\partial }(u_{h},v_{h})=\partial _{x}u_{h}(a)v_{h}(a)-\partial _{x}u_{h}(b)v_{h}(b)-\varepsilon \partial _{x}v_{h}(a)u_{h}(a)+\varepsilon \partial _{x}v_{h}(b)u_{h}(b)+{\frac {\sigma }{h}}{\big (}u_{h}(a)v_{h}(a)+u_{h}(b)v_{h}(b){\big )}} The linear forms ℓ {\displaystyle \ell } and ℓ ∂ {\displaystyle \ell _{\partial }} are ℓ ( v h ) = ∫ a b f v h {\displaystyle \ell (v_{h})=\int _{a}^{b}fv_{h}} and ℓ ∂ ( v h ) = − ε ∂ x v h ( a ) g ( a ) + ε ∂ x v h ( b ) g ( b ) + σ h ( g ( a ) v h ( a ) + g ( b ) v h ( b ) ) {\displaystyle \ell _{\partial }(v_{h})=-\varepsilon \partial 
_{x}v_{h}(a)g(a)+\varepsilon \partial _{x}v_{h}(b)g(b)+{\frac {\sigma }{h}}{\big (}g(a)v_{h}(a)+g(b)v_{h}(b){\big )}} The penalty parameter σ {\displaystyle \sigma } is a positive constant. Increasing its value will reduce the jumps in the discontinuous solution. The term ε {\displaystyle \varepsilon } is chosen to be equal to − 1 {\displaystyle -1} for the symmetric interior penalty Galerkin method; it is equal to + 1 {\displaystyle +1} for the non-symmetric interior penalty Galerkin method. == Direct discontinuous Galerkin method == The direct discontinuous Galerkin (DDG) method is a discontinuous Galerkin method for solving diffusion problems, first proposed by Liu and Yan in 2009. Its advantage over other DG methods for diffusion is that it derives the numerical scheme by taking numerical fluxes of the function and its first derivative directly, without introducing intermediate (auxiliary) variables. The method still yields accurate numerical results, and the simpler derivation greatly reduces the amount of computation. The direct discontinuous finite element method is a branch of the discontinuous Galerkin methods. It mainly consists of transforming the problem into variational form, partitioning the domain into elements, constructing basis functions, forming and solving the discontinuous finite element equations, and carrying out convergence and error analysis. For example, consider the one-dimensional nonlinear diffusion equation U t − ( a ( U ) ⋅ U x ) x = 0 in ( 0 , 1 ) × ( 0 , T ) {\displaystyle U_{t}-{(a(U)\cdot U_{x})}_{x}=0\quad {\text{in}}\ (0,1)\times (0,T)} , in which U ( x , 0 ) = U 0 ( x ) on ( 0 , 1 ) {\displaystyle U(x,0)=U_{0}(x)\quad {\text{on}}\ (0,1)} === Space discretization === Firstly, define { I j = ( x j − 1 2 , x j + 1 2 ) , j = 1...
N } {\displaystyle \left\{I_{j}=\left(x_{j-{\frac {1}{2}}},\ x_{j+{\frac {1}{2}}}\right),j=1...N\right\}} , and Δ x j = x j + 1 2 − x j − 1 2 {\displaystyle \Delta x_{j}=x_{j+{\frac {1}{2}}}-x_{j-{\frac {1}{2}}}} . Therefore we have done the space discretization of x {\displaystyle x} . Also, define Δ x = max 1 ≤ j < N Δ x j {\displaystyle \Delta x=\max _{1\leq j<N}\ \Delta x_{j}} . We want to find an approximation u {\displaystyle u} to U {\displaystyle U} such that ∀ t ∈ [ 0 , T ] {\displaystyle \forall t\in \left[0,T\right]} , u ∈ V Δ x {\displaystyle u\in \mathbb {V} _{\Delta x}} , V Δ x := { v ∈ L 2 ( 0 , 1 ) : v | I j ∈ P k ( I j ) , j = 1 , . . . , N } {\displaystyle \mathbb {V} _{\Delta x}:=\left\{v\in L^{2}\left(0,1\right):{v|}_{I_{j}}\in P^{k}\left(I_{j}\right),\ j=1,...,N\right\}} , P k ( I j ) {\displaystyle P^{k}\left(I_{j}\right)} is the polynomials space in I j {\displaystyle I_{j}} with degree at most k {\displaystyle k} . ==== Formulation of the scheme ==== Flux: h := h ( U , U x ) = a ( U ) U x {\displaystyle h:=h\left(U,U_{x}\right)=a\left(U\right)U_{x}} . U {\displaystyle U} : the exact solution of the equation. Multiply the equation with a smooth function v ∈ H 1 ( 0 , 1 ) {\displaystyle v\in H^{1}\left(0,1\right)} so that we obtain the following equations: ∫ I j U t v d x − h j + 1 2 v j + 1 2 + h j − 1 2 v j − 1 2 + ∫ a ( U ) U x v x d x = 0 {\displaystyle \int _{I_{j}}U_{t}vdx-h_{j+{\frac {1}{2}}}v_{j+{\frac {1}{2}}}+h_{j-{\frac {1}{2}}}v_{j-{\frac {1}{2}}}+\int a\left(U\right)U_{x}v_{x}dx=0} , ∫ I j U ( x , 0 ) v ( x ) d x = ∫ I j U 0 ( x ) v ( x ) d x {\displaystyle \int _{I_{j}}U\left(x,0\right)v\left(x\right)dx=\int _{I_{j}}U_{0}\left(x\right)v\left(x\right)dx} Here v {\displaystyle v} is arbitrary, the exact solution U {\displaystyle U} of the equation is replaced by the approximate solution u {\displaystyle u} , that is to say, the numerical solution we need is obtained by solving the differential equations. 
=== The numerical flux === Choosing a proper numerical flux is critical for the accuracy of the DDG method. The numerical flux needs to satisfy the following conditions: it is consistent with h = b ( u ) x = a ( u ) u x {\displaystyle h={b\left(u\right)}_{x}=a\left(u\right)u_{x}} ; it is conservative, taking a single value at x j + 1 2 {\displaystyle x_{j+{\frac {1}{2}}}} ; it is L 2 {\displaystyle L^{2}} -stable; and it allows the method to attain its optimal accuracy. Thus, a general scheme for the numerical flux is given: h ^ = D x b ( u ) = β 0 [ b ( u ) ] Δ x + b ( u ) x ¯ + ∑ m = 1 k 2 β m ( Δ x ) 2 m − 1 [ ∂ x 2 m b ( u ) ] {\displaystyle {\widehat {h}}=D_{x}b(u)=\beta _{0}{\frac {\left[b\left(u\right)\right]}{\Delta x}}+{\overline {{b\left(u\right)}_{x}}}+\sum _{m=1}^{\frac {k}{2}}\beta _{m}{\left(\Delta x\right)}^{2m-1}\left[\partial _{x}^{2m}b\left(u\right)\right]} In this flux, k {\displaystyle k} is the maximum order of polynomials in two neighboring computing cells, and [ ⋅ ] {\displaystyle \left[\cdot \right]} is the jump of a function. Note that on non-uniform grids Δ x {\displaystyle \Delta x} should be taken as ( Δ x j + Δ x j + 1 2 ) {\displaystyle \left({\frac {\Delta x_{j}+\Delta x_{j+1}}{2}}\right)} , and as 1 N {\displaystyle {\frac {1}{N}}} on uniform grids. === Error estimates === Denote the error between the exact solution U {\displaystyle U} and the numerical solution u {\displaystyle u} by e = u − U {\displaystyle e=u-U} . 
We measure the error with the following norm: | | | v ( ⋅ , t ) | | | = ( ∫ 0 1 v 2 d x + ( 1 − γ ) ∫ 0 t ∑ j = 1 N ∫ I j v x 2 d x d τ + α ∫ 0 t ∑ j = 1 N [ v ] 2 / Δ x ⋅ d τ ) 0.5 {\displaystyle \left|\left|\left|v(\cdot ,t)\right|\right|\right|={\left(\int _{0}^{1}v^{2}dx+\left(1-\gamma \right)\int _{0}^{t}\sum _{j=1}^{N}\int _{I_{j}}v_{x}^{2}dxd\tau +\alpha \int _{0}^{t}\sum _{j=1}^{N}{\left[v\right]}^{2}/\Delta x\cdot d\tau \right)}^{0.5}} and we have | | | U ( ⋅ , T ) | | | ≤ | | | U ( ⋅ , 0 ) | | | {\displaystyle \left|\left|\left|U(\cdot ,T)\right|\right|\right|\leq \left|\left|\left|U(\cdot ,0)\right|\right|\right|} , | | | u ( ⋅ , T ) | | | ≤ | | | U ( ⋅ , 0 ) | | | {\displaystyle \left|\left|\left|u(\cdot ,T)\right|\right|\right|\leq \left|\left|\left|U(\cdot ,0)\right|\right|\right|} == See also == Galerkin method == References == D.N. Arnold, F. Brezzi, B. Cockburn and L.D. Marini, Unified analysis of discontinuous Galerkin methods for elliptic problems, SIAM J. Numer. Anal. 39(5):1749–1779, 2002. G. Baker, Finite element methods for elliptic equations using nonconforming elements, Math. Comp. 31 (1977), no. 137, 45–59. A. Cangiani, Z. Dong, E.H. Georgoulis, and P. Houston, hp-Version Discontinuous Galerkin Methods on Polygonal and Polyhedral Meshes, SpringerBriefs in Mathematics, (December 2017). W. Mai, J. Hu, P. Li, and H. Zhao, “An efficient and stable 2-D/3-D hybrid discontinuous Galerkin time-domain analysis with adaptive criterion for arbitrarily shaped antipads in dispersive parallel-plate pair,” IEEE Trans. Microw. Theory Techn., vol. 65, no. 10, pp. 3671–3681, Oct. 2017. W. Mai et al., “A straightforward updating criterion for 2-D/3-D hybrid discontinuous Galerkin time-domain method controlling comparative error,” IEEE Trans. Microw. Theory Techn., vol. 66, no. 4, pp. 1713–1722, Apr. 2018. B. Cockburn, G. E. Karniadakis and C.-W. Shu (eds.), Discontinuous Galerkin methods. 
Theory, computation and applications, Lecture Notes in Computational Science and Engineering, 11. Springer-Verlag, Berlin, 2000. P. Lesaint, and P. A. Raviart. "On a finite element method for solving the neutron transport equation." Mathematical aspects of finite elements in partial differential equations 33 (1974): 89–123. D.A. Di Pietro and A. Ern, Mathematical Aspects of Discontinuous Galerkin Methods. Mathématiques et Applications, Vol. 69, Springer-Verlag, Berlin, 2011. J.S. Hesthaven and T. Warburton, Nodal Discontinuous Galerkin Methods: Algorithms, Analysis, and Applications. Springer Texts in Applied Mathematics 54. Springer Verlag, New York, 2008. B. Rivière, Discontinuous Galerkin Methods for Solving Elliptic and Parabolic Equations: Theory and Implementation. SIAM Frontiers in Applied Mathematics, 2008. CFD Wiki http://www.cfd-online.com/Wiki/Discontinuous_Galerkin W.H. Reed and T.R. Hill, Triangular mesh methods for the neutron transport equation, Tech. Report LA-UR-73–479, Los Alamos Scientific Laboratory, 1973.
In mathematics, an elliptic partial differential equation is a type of partial differential equation (PDE). In mathematical modeling, elliptic PDEs are frequently used to model steady states, unlike parabolic and hyperbolic PDEs, which generally model phenomena that change in time. The canonical examples of elliptic PDEs are Laplace's equation and Poisson's equation. Elliptic PDEs are also important in pure mathematics, where they are fundamental to various fields of research such as differential geometry and optimal transport.

== Definition ==

Elliptic differential equations appear in many different contexts and levels of generality. First consider a second-order linear PDE for an unknown function of two variables {\displaystyle u=u(x,y)}, written in the form {\displaystyle Au_{xx}+2Bu_{xy}+Cu_{yy}+Du_{x}+Eu_{y}+Fu+G=0,} where A, B, C, D, E, F, and G are functions of {\displaystyle (x,y)}, using subscript notation for the partial derivatives. The PDE is called elliptic if {\displaystyle B^{2}-AC<0,} by analogy to the equation for a planar ellipse. Equations with {\displaystyle B^{2}-AC=0} are termed parabolic, while those with {\displaystyle B^{2}-AC>0} are hyperbolic. For a general linear second-order PDE, the unknown u can be a function of any number of independent variables, {\displaystyle u=u(x_{1},\ldots ,x_{n})}, satisfying an equation of the form {\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}(x_{1},\ldots ,x_{n})u_{x_{i}x_{j}}+\sum _{i=1}^{n}b_{i}(x_{1},\ldots ,x_{n})u_{x_{i}}+c(x_{1},\ldots ,x_{n})u=f(x_{1},\ldots ,x_{n}),} where {\displaystyle a_{ij},b_{i},c,f} are functions defined on the domain, subject to the symmetry {\displaystyle a_{ij}=a_{ji}}. This equation is called elliptic if, viewing {\displaystyle a=(a_{ij})} as a function of {\displaystyle (x_{1},\ldots ,x_{n})} valued in the space of {\displaystyle n\times n} symmetric matrices, all eigenvalues are greater than some positive constant: that is, there is a positive number θ such that {\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}(x_{1},\ldots ,x_{n})\xi _{i}\xi _{j}\geq \theta (\xi _{1}^{2}+\cdots +\xi _{n}^{2})} for every point {\displaystyle (x_{1},\ldots ,x_{n})} in the domain and all real numbers ξ1, ..., ξn. The simplest example of a second-order linear elliptic PDE is the Laplace equation, in which the coefficients are the constant functions {\displaystyle a_{ij}=0} for {\displaystyle i\neq j}, {\displaystyle a_{ii}=1}, and {\displaystyle b_{i}=c=f=0}. The Poisson equation is a slightly more general second-order linear elliptic PDE, in which f is not required to vanish. For both of these equations, the ellipticity constant θ can be taken to be 1. The terminology is not used consistently throughout the literature: what is called "elliptic" by some authors is called "strictly elliptic" or "uniformly elliptic" by others.

=== Nonlinear and higher-order equations ===

Ellipticity can also be formulated for much more general classes of equations.
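The two-variable classification by the discriminant, and the uniform ellipticity bound on the eigenvalues, can both be checked mechanically. A minimal sketch in Python (the function names are ours, purely illustrative):

```python
import math

def classify_2nd_order(A, B, C):
    """Classify A*u_xx + 2*B*u_xy + C*u_yy + (lower order) = 0
    at a point via the discriminant B**2 - A*C."""
    d = B * B - A * C
    if d < 0:
        return "elliptic"
    elif d == 0:
        return "parabolic"
    else:
        return "hyperbolic"

def is_uniformly_elliptic_2x2(a11, a12, a22, theta):
    """Check the bound xi^T a xi >= theta * |xi|^2 for a symmetric
    2x2 coefficient matrix, i.e. that its smallest eigenvalue is
    at least theta."""
    # Eigenvalues of [[a11, a12], [a12, a22]] in closed form.
    mean = (a11 + a22) / 2.0
    radius = math.hypot((a11 - a22) / 2.0, a12)
    return mean - radius >= theta

# Laplace's equation u_xx + u_yy = 0:        A=1, B=0, C=1
print(classify_2nd_order(1, 0, 1))    # elliptic
# Wave equation u_tt - u_xx = 0:             A=1, B=0, C=-1
print(classify_2nd_order(1, 0, -1))   # hyperbolic
# Heat equation u_t = u_xx (no u_tt term):   A=1, B=0, C=0
print(classify_2nd_order(1, 0, 0))    # parabolic

# For the Laplace equation the ellipticity constant theta = 1 works:
print(is_uniformly_elliptic_2x2(1, 0, 1, 1))   # True
```

Note that `is_uniformly_elliptic_2x2` checks the bound at a single point; uniform ellipticity on a domain requires one θ valid at every point.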
For the most general second-order PDE, which is of the form {\displaystyle F(D^{2}u,Du,u,x_{1},\ldots ,x_{n})=0} for some given function F, ellipticity is defined by linearizing the equation and applying the above linear definition. Since linearization is done at a particular function u, this means that ellipticity of a nonlinear second-order PDE depends not only on the equation itself but also on the solutions under consideration. For example, the simplest Monge–Ampère equation involves the determinant of the Hessian matrix of the unknown function: {\displaystyle \det D^{2}u=f.} As follows from Jacobi's formula for the derivative of a determinant, this equation is elliptic if f is a positive function and solutions satisfy the constraint of being uniformly convex. There are also higher-order elliptic PDEs, the simplest example being the fourth-order biharmonic equation. Even more generally, there is an important class of elliptic systems, which consist of coupled partial differential equations for multiple unknown functions. For example, the Cauchy–Riemann equations from complex analysis can be viewed as a first-order elliptic system for a pair of two-variable real functions. Moreover, the class of elliptic PDEs (of any order, including systems) is subject to various notions of weak solutions, i.e., reformulations of the equations that allow for solutions with various irregularities (e.g. non-differentiability, singularities or discontinuities), so as to model non-smooth physical phenomena. Such solutions are also important in variational calculus, where the direct method often produces weak solutions of elliptic systems of Euler equations.
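The solution-dependence can be made concrete for the Monge–Ampère example: by Jacobi's formula, the linearization of det D²u at u has the cofactor matrix of D²u as its coefficient matrix, so the linearized operator is elliptic exactly where D²u is positive definite. A small illustrative sketch (our own helper names, restricted to quadratic trial solutions so the Hessian is constant):

```python
def hessian_of_quadratic(a, b, c):
    """Hessian of u(x, y) = a*x**2 + 2*b*x*y + c*y**2 (constant in x, y)."""
    return [[2 * a, 2 * b], [2 * b, 2 * c]]

def cofactor_2x2(m):
    """Cofactor matrix of a 2x2 matrix: by Jacobi's formula, these are
    the second-order coefficients of the linearization of det D^2 u."""
    return [[m[1][1], -m[1][0]], [-m[0][1], m[0][0]]]

def is_positive_definite_2x2(m):
    """Sylvester's criterion for a symmetric 2x2 matrix."""
    return m[0][0] > 0 and m[0][0] * m[1][1] - m[0][1] * m[1][0] > 0

# Uniformly convex solution u = x^2 + y^2: the linearization is elliptic.
H = hessian_of_quadratic(1, 0, 1)
print(is_positive_definite_2x2(cofactor_2x2(H)))   # True

# Saddle u = x^2 - y^2: the linearized operator is not elliptic.
H = hessian_of_quadratic(1, 0, -1)
print(is_positive_definite_2x2(cofactor_2x2(H)))   # False
```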
== Canonical form ==

Consider a second-order elliptic partial differential equation {\displaystyle A(x,y)u_{xx}+2B(x,y)u_{xy}+C(x,y)u_{yy}+f(u_{x},u_{y},u,x,y)=0} for a two-variable function {\displaystyle u=u(x,y)}. This equation is linear in the "leading" highest-order terms, but allows nonlinear expressions involving the function values and their first derivatives; this is sometimes called a quasilinear equation. A canonical form asks for a transformation {\displaystyle (w,z)=(w(x,y),z(x,y))} of the {\displaystyle (x,y)} domain so that, when u is viewed as a function of w and z, the above equation takes the form {\displaystyle u_{ww}+u_{zz}+F(u_{w},u_{z},u,w,z)=0} for some new function F. The existence of such a transformation can be established locally if A, B, and C are real-analytic functions and, with more elaborate work, even if they are only continuously differentiable. Locality means that the necessary coordinate transformations may fail to be defined on the entire domain of u, only in some small region surrounding any particular point of the domain. Formally establishing the existence of such transformations uses the existence of solutions to the Beltrami equation. From the perspective of differential geometry, the existence of a canonical form is equivalent to the existence of isothermal coordinates for the associated Riemannian metric {\displaystyle A(x,y)\,dx^{2}+2B(x,y)\,dx\,dy+C(x,y)\,dy^{2}} on the domain. (The ellipticity condition for the PDE, namely the positivity of {\displaystyle AC-B^{2}}, is what ensures that either this tensor or its negation is indeed a Riemannian metric.)
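In the constant-coefficient case the canonical transformation can be written explicitly: assuming AC - B^2 > 0, the substitution w = x, z = (A*y - B*x)/sqrt(A*C - B^2) turns A*u_xx + 2*B*u_xy + C*u_yy into A*(u_ww + u_zz). The following finite-difference check of this identity is our own illustrative sketch, not a general canonical-form solver:

```python
import math

# Elliptic constant coefficients: B^2 - A*C = 1 - 6 = -5 < 0.
A, B, C = 2.0, 1.0, 3.0
D = math.sqrt(A * C - B * B)

def u(x, y):
    """An arbitrary smooth test function."""
    return math.exp(0.3 * x) * math.sin(0.5 * y) + x * y * y

def v(w, z):
    """u expressed in the canonical variables w = x, z = (A*y - B*x)/D."""
    x = w
    y = (D * z + B * w) / A
    return u(x, y)

h = 1e-4
def d2(f, p, q, i, j):
    """Central-difference second partial derivative of f at (p, q)."""
    if i == j == 0:
        return (f(p + h, q) - 2 * f(p, q) + f(p - h, q)) / h**2
    if i == j == 1:
        return (f(p, q + h) - 2 * f(p, q) + f(p, q - h)) / h**2
    return (f(p + h, q + h) - f(p + h, q - h)
            - f(p - h, q + h) + f(p - h, q - h)) / (4 * h**2)

x0, y0 = 0.7, -0.4
w0, z0 = x0, (A * y0 - B * x0) / D   # image of (x0, y0)

lhs = (A * d2(u, x0, y0, 0, 0) + 2 * B * d2(u, x0, y0, 0, 1)
       + C * d2(u, x0, y0, 1, 1))
rhs = A * (d2(v, w0, z0, 0, 0) + d2(v, w0, z0, 1, 1))
print(abs(lhs - rhs) < 1e-4)   # the two sides agree numerically
```

For variable coefficients no such global linear substitution exists, which is why the text appeals to the Beltrami equation and why the result is only local.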
For second-order quasilinear elliptic partial differential equations in more than two variables, a canonical form does not usually exist. This corresponds to the fact that isothermal coordinates do not exist for general Riemannian metrics in higher dimensions, only for very particular ones.

=== Characteristics and regularity ===

For the general second-order linear PDE, characteristics are defined as the null directions for the associated tensor {\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}(x_{1},\ldots ,x_{n})\,dx^{i}\,dx^{j},} called the principal symbol. Using the technology of the wave front set, characteristics are significant in understanding how irregular points of f propagate to the solution u of the PDE. Informally, the wave front set of a function consists of the points of non-smoothness, in addition to the directions in frequency space causing the lack of smoothness. It is a fundamental fact that the application of a linear differential operator with smooth coefficients can only have the effect of removing points from the wave front set. However, all points of the original wave front set (and possibly more) are recovered by adding back in the (real) characteristic directions of the operator. In the case of a linear elliptic operator P with smooth coefficients, the principal symbol is a Riemannian metric and there are no real characteristic directions. According to the previous paragraph, it follows that the wave front set of a solution u coincides exactly with that of Pu = f. This sets up a basic regularity theorem, which says that if f is smooth (so that its wave front set is empty) then the solution u is smooth as well. More generally, the points where u fails to be smooth coincide with the points where f is not smooth.
This regularity phenomenon is in sharp contrast with, for example, hyperbolic PDEs, in which discontinuities can form even when all the coefficients of an equation are smooth. Solutions of elliptic PDEs are naturally associated with time-independent solutions of parabolic PDEs or hyperbolic PDEs. For example, a time-independent solution of the heat equation solves Laplace's equation. That is, if parabolic and hyperbolic PDEs are associated with modeling dynamical systems, then the solutions of elliptic PDEs are associated with steady states. Informally, this is reflective of the above regularity theorem, as steady states are generally smoothed-out versions of truly dynamical solutions. However, PDEs used in modeling are often nonlinear and the above regularity theorem only applies to linear elliptic equations; moreover, the regularity theory for nonlinear elliptic equations is much more subtle, with solutions not always being smooth.

== See also ==

Elliptic boundary value problem Elliptic operator Hyperbolic partial differential equation Parabolic partial differential equation Maximum principle (property of solutions) Sobolev space

== External links ==

"Elliptic partial differential equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994] "Elliptic partial differential equation, numerical methods", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Elliptic Partial Differential Equation". MathWorld.
Wikipedia/Elliptic_partial_differential_equation
The extended finite element method (XFEM) is a numerical technique based on the generalized finite element method (GFEM) and the partition of unity method (PUM). It extends the classical finite element method (FEM) approach by enriching the solution space for solutions to differential equations with discontinuous functions.

== History ==

The extended finite element method (XFEM) was developed in 1999 by Ted Belytschko and collaborators to help alleviate shortcomings of the finite element method, and has been used to model the propagation of various discontinuities: strong (cracks) and weak (material interfaces). The idea behind XFEM is to retain most advantages of meshfree methods while alleviating their negative sides.

== Rationale ==

The extended finite element method was developed to ease difficulties in solving problems with localized features that are not efficiently resolved by mesh refinement. One of the initial applications was the modelling of fractures in a material. In this original implementation, discontinuous basis functions are added to standard polynomial basis functions for nodes belonging to elements intersected by a crack, to provide a basis that includes crack-opening displacements. A key advantage of XFEM is that in such problems the finite element mesh does not need to be updated to track the crack path. Subsequent research has illustrated the more general use of the method for problems involving singularities, material interfaces, regular meshing of microstructural features such as voids, and other problems where a localized feature can be described by an appropriate set of basis functions.

== Principle ==

Enriched finite element methods extend, or enrich, the approximation space so that it is able to naturally reproduce the challenging feature associated with the problem of interest: the discontinuity, singularity, boundary layer, etc.
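The enrichment idea can be illustrated in one dimension: nodes of the element cut by a "crack" receive extra Heaviside degrees of freedom, so a displacement jump is representable without the mesh conforming to the crack. The following is a minimal hypothetical sketch (real XFEM codes typically use shifted enrichments and assemble a weak form; all names here are ours):

```python
# Enriched 1D approximation:
#   u_h(x) = sum_i N_i(x) u_i  +  sum_{j in enriched} N_j(x) H(x) a_j

nodes = [0.0, 0.4, 0.8]
xc = 0.6                # crack location, inside the element [0.4, 0.8]
enriched = {1, 2}       # nodes of the cut element get Heaviside dofs

def hat(i, x):
    """Standard piecewise-linear (hat) shape function of node i."""
    left = nodes[i - 1] if i > 0 else None
    right = nodes[i + 1] if i < len(nodes) - 1 else None
    if left is not None and left <= x <= nodes[i]:
        return (x - left) / (nodes[i] - left)
    if right is not None and nodes[i] <= x <= right:
        return (right - x) / (right - nodes[i])
    return 0.0

def heaviside(x):
    return 1.0 if x >= xc else 0.0

def u_h(x, u, a):
    std = sum(hat(i, x) * u[i] for i in range(len(nodes)))
    enr = sum(hat(j, x) * heaviside(x) * a[j] for j in enriched)
    return std + enr

# Target field: u(x) = x with a unit jump at xc. The continuous part is
# carried by the standard dofs, the jump by the enriched dofs alone.
u_std = [0.0, 0.4, 0.8]
a_enr = {1: 1.0, 2: 1.0}

for x in (0.2, 0.5, 0.7):
    exact = x + (1.0 if x >= xc else 0.0)
    print(abs(u_h(x, u_std, a_enr) - exact) < 1e-12)   # True each time
```

Because the hat functions of the cut element sum to one on that element (a partition of unity), the Heaviside term reproduces the jump exactly while leaving the rest of the mesh untouched.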
It was shown that for some problems, such an embedding of the problem's feature into the approximation space can significantly improve convergence rates and accuracy. Moreover, treating problems with discontinuities with eXtended Finite Element Methods suppresses the need to mesh and remesh the discontinuity surfaces, thus alleviating the computational costs and projection errors associated with conventional finite element methods, at the cost of restricting the discontinuities to mesh edges.

== Existing XFEM codes ==

There exist several research codes implementing this technique to various degrees: GetFEM++, xfem++, openxfem++, Dynaflow, eXlibris, and ngsxfem. XFEM has also been implemented in codes such as Altair Radioss, ASTER, Morfeo, and Abaqus. It is increasingly being adopted by other commercial finite element software, with a few plugins and actual core implementations available (ANSYS, SAMCEF, OOFELIE, etc.).
Wikipedia/Extended_finite_element_method
In computer science, the Actor model and process calculi are two closely related approaches to the modelling of concurrent digital computation. See Actor model and process calculi history. There are many similarities between the two approaches, but also several differences (some philosophical, some technical): There is only one Actor model (although it has numerous formal systems for design, analysis, verification, modeling, etc.); there are numerous process calculi, developed for reasoning about a variety of different kinds of concurrent systems at various levels of detail (including calculi that incorporate time, stochastic transitions, or constructs specific to application areas such as security analysis). The Actor model was inspired by the laws of physics and depends on them for its fundamental axioms, i.e. physical laws (see Actor model theory); the process calculi were originally inspired by algebra (Milner 1993). Processes in the process calculi are anonymous, and communicate by sending messages either through named channels (synchronous or asynchronous), or via ambients (which can also be used to model channel-like communications (Cardelli and Gordon 1998)). In contrast, actors in the Actor model possess an identity, and communicate by sending messages to the mailing addresses of other actors (this style of communication can also be used to model channel-like communications—see below). The publications on the Actor model and on process calculi have a fair number of cross-references, acknowledgments, and reciprocal citations (see Actor model and process calculi history). == How channels work == Indirect communication using channels (e.g. Gilles Kahn and David MacQueen [1977]) has been an important issue for communication in parallel and concurrent computation affecting both semantics and performance. Some process calculi differ from the Actor model in their use of channels as opposed to direct communication. 
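The contrast between named channels and direct actor addressing, and the claim that channel-like communication can be modeled with actors, can be sketched in a few lines. A minimal single-threaded illustration (all class and function names are ours, not from any Actor implementation; real semantics are concurrent):

```python
from collections import deque

# Process-calculus style: anonymous processes share a *named channel*.
class Channel:
    def __init__(self):
        self._queue = deque()
    def put(self, msg):
        self._queue.append(msg)
    def get(self):
        return self._queue.popleft()

# Actor style: messages go to the *address* (identity) of an actor,
# which buffers them in a mailbox and handles them one at a time.
class Actor:
    def __init__(self, handler):
        self._mailbox = deque()
        self._handler = handler
    def send(self, msg):
        self._mailbox.append(msg)
    def step(self):
        if self._mailbox:
            self._handler(self._mailbox.popleft())

# Channel-like communication on top of actors: an actor accepting
# ("put", message) and ("get", reply_actor) communications, FIFO order.
def make_channel_actor():
    buffered = deque()
    waiting = deque()
    def handler(msg):
        kind, payload = msg
        if kind == "put":
            buffered.append(payload)
        else:                       # ("get", reply_to)
            waiting.append(payload)
        while buffered and waiting:
            waiting.popleft().send(("reply", buffered.popleft()))
    return Actor(handler)

# Demo: the direct channel and the actor-modeled channel behave alike.
ch = Channel()
ch.put("hello")
received = []
sink = Actor(lambda m: received.append(m[1]))
chan_actor = make_channel_actor()
chan_actor.send(("put", "hello")); chan_actor.step()
chan_actor.send(("get", sink));    chan_actor.step()
sink.step()
print(ch.get(), received[0])      # hello hello
```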
== Synchronous channels ==

Synchronous channels have the property that a sender putting a message in the channel must wait for a receiver to get the message out of the channel before the sender can proceed.

=== Simple synchronous channels ===

A synchronous channel can be modeled by an Actor that receives put and get communications. The following is the behavior of an Actor for a simple synchronous channel: Each put communication has a message and an address to which an acknowledgment is sent when the message is received by a get communication from the channel in FIFO order. Each get communication has an address to which the received message is sent.

=== Synchronous channels in process calculi ===

However, simple synchronous channels do not suffice for process calculi such as Communicating Sequential Processes (CSP) [Hoare 1978 and 1985] because of the use of the guarded choice command (after Dijkstra; called the alternative command in CSP). In a guarded choice command multiple offers (called guards) can be made concurrently on multiple channels to put and get messages; however, at most one of the guards can be chosen for each execution of the guarded choice command. Because only one guard can be chosen, a guarded choice command in general effectively requires a kind of two-phase commit protocol, or perhaps even a three-phase commit protocol if time-outs are allowed in guards (as in Occam 3 [1992]). Consider the following program written in CSP [Hoare 1978]:

[X :: Z!stop()
|| Y :: guard: boolean; guard := true; *[guard → Z!go(); Z?guard]
|| Z :: n: integer; n := 0; *[X?stop() → Y!false; print!n; [] Y?go() → n := n+1; Y!true]
]

According to Clinger [1981], this program illustrates global nondeterminism, since the nondeterminism arises from incomplete specification of the timing of signals between the three processes X, Y, and Z.
The repetitive guarded command in the definition of Z has two alternatives: the stop message is accepted from X, in which case Y is sent the value false and print is sent the value n; or a go message is accepted from Y, in which case n is incremented and Y is sent the value true. If Z ever accepts the stop message from X, then X terminates. Accepting the stop causes Y to be sent false, which when input as the value of its guard will cause Y to terminate. When both X and Y have terminated, Z terminates because it no longer has live processes providing input. In the above program, there are synchronous channels from X to Z, Y to Z, and Z to Y.

=== Analogy with the committee coordination problem ===

According to Knabe [1992], Chandy and Misra [1988] characterized this as analogous to the committee coordination problem: Professors in a university are assigned to various committees. Occasionally a professor will decide to attend a meeting of any of her committees, and will wait until that is possible. Meetings may begin only if there is full attendance. The task is to ensure that if all the members of a committee are waiting, then at least one of them will attend some meeting. The crux of this problem is that two or more committees might share a professor. When that professor becomes available, she can only choose one of the meetings, while the others continue to wait.

=== A simple distributed protocol ===

This section presents a simple distributed protocol for channels in synchronous process calculi. The protocol has some problems that are addressed in the sections below. The behavior of a guarded choice command is as follows: The command sends a message to each of its guards to prepare. When it receives the first response from one of its guards that it is prepared, then it sends a message to that guard to prepare to commit and sends messages to all of the other guards to abort.
When it receives a message from the guard that it is prepared to commit, then it sends the guard a commit message. However, if the guard throws an exception that it cannot prepare to commit, then the guarded choice command starts the whole process all over again. If all of its guards respond that they cannot prepare, then the guarded command does nothing. The behavior of a guard is as follows: When a message to prepare is received, then the guard sends a prepare message to each of the channels with which it is offering to communicate. If the guard has booleans such that it cannot prepare, or if any of the channels respond that they cannot prepare, then it sends abort messages to the other channels and then responds that it cannot prepare. When a message to prepare to commit is received, then the guard sends a prepare to commit message to each of the channels. If any of the channels respond that they cannot prepare to commit, then it sends abort messages to the other channels and then throws an exception that it cannot prepare to commit. When a message to commit is received, then the guard sends a commit message to each of the channels. When a message to abort is received, then the guard sends an abort message to each of the channels. The behavior of a channel is as follows: When a prepare to put communication is received, then respond that it is prepared if there is a prepare to get communication pending, unless a terminate communication has been received, in which case throw an exception that it cannot prepare to put. When a prepare to get communication is received, then respond that it is prepared if there is a prepare to put communication pending, unless a terminate communication has been received, in which case throw an exception that it cannot prepare to get.
When a prepare to commit to put communication is received, then respond that it is prepared if there is a prepare to commit to get communication pending, unless a terminate communication has been received, in which case throw an exception that it cannot prepare to commit to put. When a prepare to commit to get communication is received, then respond that it is prepared if there is a prepare to commit to put communication pending, unless a terminate communication has been received, in which case throw an exception that it cannot prepare to commit to get. When a commit put communication is received, then depending on which of the following is received: when a commit get communication is received, then if not already done perform the put and get and clean up the preparations; when an abort get communication is received, then cancel the preparations. When a commit get communication is received, then depending on which of the following is received: when a commit put communication is received, then if not already done perform the get and put and clean up the preparations; when an abort put communication is received, then cancel the preparations. When an abort put communication is received, then cancel the preparations. When an abort get communication is received, then cancel the preparations.

=== Starvation on getting from multiple channels ===

Again consider the program written in CSP (discussed in Synchronous channels in process calculi above):

[X :: Z!stop()
|| Y :: guard: boolean; guard := true; *[guard → Z!go(); Z?guard]
|| Z :: n: integer; n := 0; *[X?stop() → Y!false; print!n; [] Y?go() → n := n+1; Y!true]
]

As pointed out in Knabe [1992], a problem with the above protocol (A simple distributed protocol) is that the process Z might never accept the stop message from X (a phenomenon called starvation) and consequently the above program might never print anything.
In contrast, consider a simple Actor system that consists of Actors X, Y, Z, and print, where the Actor X is created with the following behavior: If the message "start" is received, then send Z the message "stop". The Actor Y is created with the following behavior: If the message "start" is received, then send Z the message "go"; if the message true is received, then send Z the message "go"; if the message false is received, then do nothing. The Actor Z is created with the following behavior, with a count n that is initially 0: If the message "start" is received, then do nothing. If the message "stop" is received, then send Y the message false and send print the count n. If the message "go" is received, then send Y the message true and process the next message received with count n being n+1. By the laws of Actor semantics, the above Actor system will always halt when the Actors X, Y, and Z are each sent a "start" message, resulting in sending print a number that can be unboundedly large. The difference between the CSP program and the Actor system is that the Actor Z does not get messages using a guarded choice command from multiple channels. Instead it processes messages in arrival ordering, and by the laws for Actor systems, the stop message is guaranteed to arrive.

=== Livelock on getting from multiple channels ===

Consider the following program written in CSP [Hoare 1978]:

[Bidder1 :: b: bid; *[Bids1?b → process1!b; [] Bids2?b → process1!b;]
|| Bidder2 :: b: bid; *[Bids1?b → process2!b; [] Bids2?b → process2!b;]
]

As pointed out in Knabe [1992], an issue with the above protocol (A simple distributed protocol) is that the process Bidder2 might never accept a bid from Bids1 or Bids2 (a phenomenon called livelock) and consequently process2 might never be sent anything.
In each attempt to accept a message, Bidder2 is thwarted because the bid that was offered by Bids1 or Bids2 is snatched away by Bidder1 because it turns out that Bidder1 has much faster access than Bidder2 to Bids1 and Bids2. Consequently, Bidder1 can accept a bid, process it and accept another bid before Bidder2 can commit to accepting a bid. === Efficiency === As pointed out in Knabe [1992], an issue with the above protocol (A simple distributed protocol) is the large number of communications that must be sent in order to perform the handshaking in order to send a message through a synchronous channel. Indeed, as shown in the previous section (Livelock), the number of communications can be unbounded. === Summary of Issues === The subsections above have articulated the following three issues concerned with the use of synchronous channels for process calculi: Starvation. The use of synchronous channels can cause starvation when a process attempts to get messages from multiple channels in a guarded choice command. Livelock. The use of synchronous channels can cause a process to be caught in livelock when it attempts to get messages from multiple channels in a guarded choice command. Efficiency. The use of synchronous channels can require a large number of communications in order to get messages from multiple channels in a guarded choice command. It is notable that in all of the above, issues arise from the use of a guarded choice command to get messages from multiple channels. == Asynchronous channels == Asynchronous channels have the property that a sender putting a message in the channel need not wait for a receiver to get the message out of the channel. === Simple asynchronous channels === An asynchronous channel can be modeled by an Actor that receives put and get communications. 
The following is the behavior of an Actor for a simple asynchronous channel: Each put communication has a message and an address to which an acknowledgment is sent immediately (without waiting for the message to be gotten by a get communication). Each get communication has an address to which the gotten message is sent. === Asynchronous channels in process calculi === The Join-calculus programming language (published in 1996) implemented local and distributed concurrent computations. It incorporated asynchronous channels as well as a kind of synchronous channel that is used for procedure calls. Agha's Aπ Actor calculus (Agha and Thati 2004) is based on a typed version of the asynchronous π-calculus. == Algebras == The use of algebraic techniques was pioneered in the process calculi. Subsequently, several different process calculi intended to provide algebraic reasoning about Actor systems have been developed in (Gaspari and Zavattaro 1997), (Gaspari and Zavattaro 1999), (Agha and Thati 2004). == Denotational semantics == Will Clinger (building on the work of Irene Greif [1975], Gordon Plotkin [1976], Henry Baker [1978], Michael Smyth [1978], and Francez, Hoare, Lehmann, and de Roever [1979]) published the first satisfactory mathematical denotational theory of the Actor model using domain theory in his dissertation in 1981. His semantics contrasted the unbounded nondeterminism of the Actor model with the bounded nondeterminism of CSP [Hoare 1978] and Concurrent Processes [Milne and Milner 1979] (see denotational semantics). Roscoe [2005] has developed a denotational semantics with unbounded nondeterminism for a subsequent version of Communicating Sequential Processes Hoare [1985]. More recently Carl Hewitt [2006b] developed a denotational semantics for Actors based on timed diagrams. Ugo Montanari and Carolyn Talcott [1998] have contributed to attempting to reconcile Actors with process calculi. == References == Carl Hewitt, Peter Bishop and Richard Steiger. 
A Universal Modular Actor Formalism for Artificial Intelligence IJCAI 1973. Robin Milner. Processes: A Mathematical Model of Computing Agents in Logic Colloquium 1973. Irene Greif and Carl Hewitt. Actor Semantics of PLANNER-73 Conference Record of ACM Symposium on Principles of Programming Languages. January 1975. Irene Greif. Semantics of Communicating Parallel Processes MIT EECS Doctoral Dissertation. August 1975. Gordon Plotkin. A powerdomain construction SIAM Journal on Computing September 1976. Carl Hewitt and Henry Baker Actors and Continuous Functionals Proceeding of IFIP Working Conference on Formal Description of Programming Concepts. August 1–5, 1977. Gilles Kahn and David MacQueen. Coroutines and networks of parallel processes IFIP. 1977 Aki Yonezawa Specification and Verification Techniques for Parallel Programs Based on Message Passing Semantics MIT EECS Doctoral Dissertation. December 1977. Michael Smyth. Power domains Journal of Computer and System Sciences. 1978. George Milne and Robin Milner. Concurrent processes and their syntax JACM. April, 1979. CAR Hoare. Communicating Sequential Processes CACM. August, 1978. Nissim Francez, C.A.R. Hoare, Daniel Lehmann, and Willem de Roever. Semantics of nondeterminism, concurrency, and communication Journal of Computer and System Sciences. December 1979. Mathew Hennessy and Robin Milner. On Observing Nondeterminism and Concurrency LNCS 85. 1980. Will Clinger. Foundations of Actor Semantics MIT Mathematics Doctoral Dissertation. June 1981. Mathew Hennessy. A Term Model for Synchronous Processes Computer Science Dept. Edinburgh University. CSR-77-81. 1981. J.A. Bergstra and J.W. Klop. Process algebra for synchronous communication Information and Control. 1984. Luca Cardelli. An implementation model of rendezvous communication Seminar on Concurrency. Lecture Notes in Computer Science 197. Springer-Verlag. 1985 Robert van Glabbeek. 
Bounded nondeterminism and the approximation induction principle in process algebra Symposium on Theoretical Aspects of Computer Sciences on STACS 1987. K. Mani Chandy and Jayadev Misra. Parallel Program Design: A Foundation Addison-Wesley 1988. Robin Milner, Joachim Parrow and David Walker. A calculus of mobile processes Computer Science Dept. Edinburgh. Reports ECS-LFCS-89-85 and ECS-LFCS-89-86. June 1989. Revised Sept. 1990 and Oct. 1990 respectively. Robin Milner. The Polyadic pi-Calculus: A Tutorial Edinburgh University. LFCS report ECS-LFCS-91-180. 1991. Kohei Honda and Mario Tokoro. An Object Calculus for Asynchronous Communication ECOOP 91. José Meseguer. Conditional rewriting logic as a unified model of concurrency in Selected papers of the Second Workshop on Concurrency and compositionality. 1992. Frederick Knabe. A Distributed Protocol for Channel-Based Communication with Choice PARLE 1992. Geoff Barrett. Occam 3 reference manual INMOS. 1992. Benjamin Pierce, Didier Rémy and David Turner. A typed higher-order programming language based on the pi-calculus Workshop on type Theory and its application to computer Systems. Kyoto University. July 1993. Milner, Robin (January 1993), "Elements of interaction: Turing award lecture", Communications of the ACM, 36, CACM: 78–89, doi:10.1145/151233.151240. R. Amadio and S. Prasad. Locations and failures Foundations of Software Technology and Theoretical Computer Science Conference. 1994. Cédric Fournet and Georges Gonthier. The reflexive chemical abstract machine and the join-calculus POPL 1996. Cédric Fournet, Georges Gonthier, Jean-Jacques Lévy, Luc Maranget, and Didier Rémy. A Calculus of Mobile Agents CONCUR 1996. Tatsurou Sekiguchi and Akinori Yonezawa. A Calculus with Code Mobility FMOODS 1997. Gaspari, Mauro; Zavattaro, Gianluigi (May 1997), An Algebra of Actors (Technical Report), University of Bologna Cardelli, Luca; Gordon, Andrew D. 
(1998), "Mobile ambients", Foundations of Software Science and Computation Structures, Lecture Notes in Computer Science, vol. 1378, pp. 140–155, doi:10.1007/BFb0053547, ISBN 978-3-540-64300-5 Ugo Montanari and Carolyn Talcott. Can Actors and Pi-Agents Live Together? Electronic Notes in Theoretical Computer Science. 1998. Robin Milner. Communicating and Mobile Systems: the Pi-Calculus Cambridge University Press. 1999. Gaspari, Mauro; Zavattaro, Gianluigi (1999), "An Algebra of Actors", Formal Methods for Open Object-Based Distributed Systems, pp. 3–18, doi:10.1007/978-0-387-35562-7_2, ISBN 978-1-4757-5266-3 Davide Sangiorgi and David Walker. The Pi-Calculus: A Theory of Mobile Processes Cambridge University Press. 2001. P. Thati, R. Ziaei, and G. Agha. A theory of may testing for asynchronous calculi with locality and no name matching Algebraic Methodology and Software Technology. Springer Verlag. September 2002. LNCS 2422. Agha, Gul; Thati, Prasanna (2004), "An Algebraic Theory of Actors and Its Application to a Simple Object-Based Language", From Object-Orientation to Formal Methods (PDF), Lecture Notes in Computer Science, vol. 2635, pp. 26–57, doi:10.1007/978-3-540-39993-3_4, ISBN 978-3-540-21366-6, archived from the original (PDF) on 2004-04-20, retrieved 2005-12-15 J.C.M. Baeten, T. Basten, and M.A. Reniers. Algebra of Communicating Processes Cambridge University Press. 2005. He Jifeng and C.A.R. Hoare. Linking Theories of Concurrency United Nations University International Institute for Software Technology UNU-IIST Report No. 328. July, 2005. Luca Aceto and Andrew D. Gordon (editors). Algebraic Process Calculi: The First Twenty Five Years and Beyond Process Algebra. Bertinoro, Forlì, Italy, August 1–5, 2005. Roscoe, A. W. (2005), The Theory and Practice of Concurrency, Prentice Hall, ISBN 978-0-13-674409-2 Carl Hewitt (2006b) What is Commitment? Physical, Organizational, and Social COIN@AAMAS. 2006.
Wikipedia/Actor_model_and_process_calculi
The calculus of communicating systems (CCS) is a process calculus introduced by Robin Milner around 1980 and the title of a book describing the calculus. Its actions model indivisible communications between exactly two participants. The formal language includes primitives for describing parallel composition, summation (choice) between actions, and scope restriction. CCS is useful for evaluating the qualitative correctness of properties of a system, such as the absence of deadlock or livelock. According to Milner, "There is nothing canonical about the choice of the basic combinators, even though they were chosen with great attention to economy. What characterises our calculus is not the exact choice of combinators, but rather the choice of interpretation and of mathematical framework". The expressions of the language are interpreted as labelled transition systems, and bisimilarity is used as the semantic equivalence between these models.

== Syntax ==
Given a set of action names, the set of CCS processes is defined by the following BNF grammar:

P ::= 0 | a.P1 | A | P1 + P2 | P1 | P2 | P1[b/a] | P1\a

The parts of the syntax are, in the order given above:

inactive process: the inactive process 0 is a valid CCS process
action: the process a.P1 can perform an action a and continue as the process P1
process identifier: write A =def P1 to use the identifier A to refer to the process P1 (which may contain the identifier A itself, i.e., recursive definitions are allowed)
summation: the process P1 + P2 can proceed either as the process P1 or as the process P2
parallel composition: P1|P2 tells that the processes P1 and P2 exist simultaneously
renaming: P1[b/a] is the process P1 with all actions named a renamed as b
restriction: P1\a is the process P1 without action a

== Related calculi, models, and languages ==
Communicating sequential processes (CSP), developed by Tony Hoare, is a formal language that arose at a similar time to CCS. The Algebra of Communicating Processes (ACP) was developed by Jan Bergstra and Jan Willem Klop in 1982, and uses an axiomatic approach (in the style of universal algebra) to reason about a similar class of processes as CCS. The pi-calculus, developed by Robin Milner, Joachim Parrow, and David Walker in the late 1980s, extends CCS with mobility of communication links, by allowing processes to communicate the names of communication channels themselves. PEPA, developed by Jane Hillston, introduces activity timing in terms of exponentially distributed rates and probabilistic choice, allowing performance metrics to be evaluated.
Reversible Communicating Concurrent Systems (RCCS), introduced by Vincent Danos, Jean Krivine, and others, introduces (partial) reversibility into the execution of CCS processes.

Some other languages based on CCS:
- Calculus of broadcasting systems
- Language Of Temporal Ordering Specification (LOTOS)
- Process Calculus for Spatially-Explicit Ecological Models (PALPS), an extension of CCS with probabilistic choice, locations, and attributes for locations
- Java Orchestration Language Interpreter Engine (Jolie)

Models that have been used in the study of CCS-like systems:
- History monoid
- Actor model

== References ==
- Robin Milner: A Calculus of Communicating Systems, Springer Verlag, ISBN 0-387-10235-3. 1980.
- Robin Milner: Communication and Concurrency, Prentice Hall, International Series in Computer Science, ISBN 0-13-115007-3. 1989.
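The combinators described in the syntax section above have a standard operational reading as a labelled transition system. The following is a minimal sketch in Python of the prefix, summation, and parallel-composition rules, including synchronization of complementary actions into the silent action tau (restriction and renaming are omitted); all class and function names are illustrative, not from any CCS tool.

```python
from dataclasses import dataclass

# Minimal CCS process terms (illustrative names).
@dataclass(frozen=True)
class Nil:            # the inactive process 0
    pass

@dataclass(frozen=True)
class Prefix:         # a.P : perform action a, continue as P
    action: str
    cont: object

@dataclass(frozen=True)
class Choice:         # P + Q : proceed as P or as Q
    left: object
    right: object

@dataclass(frozen=True)
class Par:            # P | Q : P and Q exist simultaneously
    left: object
    right: object

def co(a):
    """Complement of an action name: a <-> ~a (used for synchronization)."""
    return a[1:] if a.startswith("~") else "~" + a

def transitions(p):
    """Enumerate (action, successor) pairs via the usual SOS rules."""
    if isinstance(p, Prefix):
        yield p.action, p.cont
    elif isinstance(p, Choice):
        yield from transitions(p.left)
        yield from transitions(p.right)
    elif isinstance(p, Par):
        for a, q in transitions(p.left):
            yield a, Par(q, p.right)
        for a, q in transitions(p.right):
            yield a, Par(p.left, q)
        # synchronization: complementary actions combine into tau
        for a, q in transitions(p.left):
            for b, r in transitions(p.right):
                if b == co(a):
                    yield "tau", Par(q, r)

# (a.0 | ~a.0) can step by a alone, by ~a alone, or synchronize on tau.
system = Par(Prefix("a", Nil()), Prefix("~a", Nil()))
labels = sorted(a for a, _ in transitions(system))
```

Enumerating the transitions of a term this way is exactly what makes properties such as deadlock checkable by exhaustive state-space exploration.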
Wikipedia/Calculus_of_Communicating_Systems
The join-calculus is a process calculus developed at INRIA. The join-calculus was developed to provide a formal basis for the design of distributed programming languages, and therefore intentionally avoids communications constructs found in other process calculi, such as rendezvous communications, which are difficult to implement in a distributed setting. Despite this limitation, the join-calculus is as expressive as the full π-calculus. Encodings of the π-calculus in the join-calculus, and vice versa, have been demonstrated. The join-calculus is a member of the π-calculus family of process calculi, and can be considered, at its core, an asynchronous π-calculus with several strong restrictions: Scope restriction, reception, and replicated reception are syntactically merged into a single construct, the definition; Communication occurs only on defined names; For every defined name there is exactly one replicated reception. However, as a language for programming, the join-calculus offers at least one convenience over the π-calculus — namely the use of multi-way join patterns, the ability to match against messages from multiple channels simultaneously. == Implementations == === Languages based on the join-calculus === The join-calculus programming language is a new language based on the join-calculus process calculus. It is implemented as an interpreter written in OCaml, and supports statically typed distributed programming, transparent remote communication, agent-based mobility, and some failure-detection. Though not explicitly based on join-calculus, the rule system of CLIPS implements it if every rule deletes its inputs when triggered (retracts the relevant facts when fired). 
Many implementations of the join-calculus were made as extensions of existing programming languages:
- JoCaml, a version of OCaml extended with join-calculus primitives
- Polyphonic C# and its successor Cω, which extend C#
- MC# and Parallel C#, which extend Polyphonic C#
- Join Java, which extends Java
- A Concurrent Basic proposal that uses the join-calculus
- JErlang (the J is for Join; Erjang is Erlang for the JVM)

=== Embeddings in other programming languages ===
These implementations do not change the underlying programming language but introduce join-calculus operations through a custom library or DSL:
- The ScalaJoins and Chymyst libraries in Scala
- JoinHs by Einar Karttunen and syallop/Join-Language by Samuel Yallop, DSLs for the join-calculus in Haskell
- Joinads, various implementations of the join-calculus in F#
- CocoaJoin, an experimental implementation in Objective-C for iOS and Mac OS X
- The Join Python library in Python 3
- C++ via Boost (for Boost from 2009, ca. v. 40; current (Dec '19) is 72)

== References ==

== External links ==
INRIA, Join Calculus homepage
Microsoft Research, The Join Calculus: a Language for Distributed Mobile Programming
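The multi-way join pattern mentioned above, matching against messages from multiple channels simultaneously, can be sketched in a few lines of Python. This is a toy, single-threaded illustration, not any of the implementations listed; the class name `JoinPattern` and the `credit`/`debit` channels are invented for the example.

```python
from collections import deque

class JoinPattern:
    """Fire a handler only when every joined channel has a pending message
    (a toy, single-threaded sketch of a multi-way join pattern)."""
    def __init__(self, channels, handler):
        self.queues = {c: deque() for c in channels}
        self.handler = handler

    def send(self, channel, message):
        self.queues[channel].append(message)
        # The pattern matches only when all channels are non-empty.
        if all(self.queues.values()):
            args = {c: q.popleft() for c, q in self.queues.items()}
            return self.handler(**args)
        return None  # incomplete match: the message stays queued

# Join definition: credit(x) & debit(y) triggers one balance update.
results = []
pattern = JoinPattern(["credit", "debit"],
                      lambda credit, debit: results.append(credit - debit))

pattern.send("credit", 100)   # no match yet: the debit queue is empty
pattern.send("debit", 30)     # both channels non-empty -> handler fires
```

The "exactly one replicated reception per defined name" restriction corresponds here to the single fixed handler bound to the pattern at definition time.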
Wikipedia/Join-calculus
The actor model in computer science is a mathematical model of concurrent computation that treats the actor as its basic building block. In response to a message it receives, an actor can make local decisions, create more actors, send more messages, and determine how to respond to the next message received. Actors may modify their own private state, but can only affect each other indirectly through messaging (removing the need for lock-based synchronization). The actor model originated in 1973. It has been used both as a framework for a theoretical understanding of computation and as the theoretical basis for several practical implementations of concurrent systems. The relationship of the model to other work is discussed in actor model and process calculi.

== History ==
According to Carl Hewitt, unlike previous models of computation, the actor model was inspired by physics, including general relativity and quantum mechanics. It was also influenced by the programming languages Lisp, Simula, early versions of Smalltalk, capability-based systems, and packet switching. Its development was "motivated by the prospect of highly parallel computing machines consisting of dozens, hundreds, or even thousands of independent microprocessors, each with its own local memory and communications processor, communicating via a high-performance communications network." Since that time, the advent of massive concurrency through multi-core and manycore computer architectures has revived interest in the actor model. Following Hewitt, Bishop, and Steiger's 1973 publication, Irene Greif developed an operational semantics for the actor model as part of her doctoral research. Two years later, Henry Baker and Hewitt published a set of axiomatic laws for actor systems.
Other major milestones include William Clinger's 1981 dissertation introducing a denotational semantics based on power domains and Gul Agha's 1985 dissertation which further developed a transition-based semantic model complementary to Clinger's. This resulted in the full development of actor model theory. Major software implementation work was done by Russ Atkinson, Giuseppe Attardi, Henry Baker, Gerry Barber, Peter Bishop, Peter de Jong, Ken Kahn, Henry Lieberman, Carl Manning, Tom Reinhardt, Richard Steiger and Dan Theriault in the Message Passing Semantics Group at Massachusetts Institute of Technology (MIT). Research groups led by Chuck Seitz at California Institute of Technology (Caltech) and Bill Dally at MIT constructed computer architectures that further developed the message passing in the model. See Actor model implementation. Research on the actor model has been carried out at California Institute of Technology, Kyoto University Tokoro Laboratory, Microelectronics and Computer Technology Corporation (MCC), MIT Artificial Intelligence Laboratory, SRI, Stanford University, University of Illinois at Urbana–Champaign, Pierre and Marie Curie University (University of Paris 6), University of Pisa, University of Tokyo Yonezawa Laboratory, Centrum Wiskunde & Informatica (CWI) and elsewhere. == Fundamental concepts == The actor model adopts the philosophy that everything is an actor. This is similar to the everything is an object philosophy used by some object-oriented programming languages. An actor is a computational entity that, in response to a message it receives, can concurrently: send a finite number of messages to other actors; create a finite number of new actors; designate the behavior to be used for the next message it receives. There is no assumed sequence to the above actions and they could be carried out in parallel. 
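The three capabilities listed above (send messages, create actors, designate the behavior for the next message) can be sketched as a minimal mailbox-driven actor in Python. This is an illustrative toy, not any particular actor library: the `Actor` class, the `counter` behavior, and the message shapes are all invented for the example.

```python
import queue
import threading

class Actor:
    """A toy actor: a mailbox and a thread that, per message, may send
    messages, create new actors, and designate the behavior to be used
    for the next message it receives."""
    def __init__(self, behavior):
        self.behavior = behavior
        self.mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        # Asynchronous: enqueue and return immediately, no handshake.
        self.mailbox.put(msg)

    def _run(self):
        while True:
            msg = self.mailbox.get()
            # The behavior may return a replacement behavior ("become").
            nxt = self.behavior(self, msg)
            if nxt is not None:
                self.behavior = nxt

# A counter whose state lives entirely in its behavior function:
# processing "inc" designates a new behavior closing over n + 1.
def counter(n=0):
    def behavior(self, msg):
        kind, reply_to = msg
        if kind == "inc":
            return counter(n + 1)   # next message sees the new count
        if kind == "get":
            reply_to.put(n)         # reply via an address in the message
        return None
    return behavior

c = Actor(counter())
c.send(("inc", None))
c.send(("inc", None))
out = queue.Queue()
c.send(("get", out))
result = out.get(timeout=1)
```

Note that the sender never blocks: within one mailbox the messages are processed in arrival order, but nothing in the model itself requires that ordering across actors.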
Decoupling the sender from communications sent was a fundamental advance of the actor model, enabling asynchronous communication and control structures as patterns of passing messages. Recipients of messages are identified by address, sometimes called "mailing address". Thus an actor can only communicate with actors whose addresses it has. It can obtain addresses from messages it receives, or it may already hold the addresses of actors it has itself created. The actor model is characterized by inherent concurrency of computation within and among actors, dynamic creation of actors, inclusion of actor addresses in messages, and interaction only through direct asynchronous message passing with no restriction on message arrival order.

== Formal systems ==
Over the years, several different formal systems have been developed which permit reasoning about systems in the actor model. These include:
- Operational semantics
- Laws for actor systems
- Denotational semantics
- Transition semantics

There are also formalisms that are not fully faithful to the actor model in that they do not formalize the guaranteed delivery of messages, including the following (see Attempts to relate actor semantics to algebra and linear logic):
- Several different actor algebras
- Linear logic

== Applications ==
The actor model can be used as a framework for modeling, understanding, and reasoning about a wide range of concurrent systems. For example:
- Electronic mail (email) can be modeled as an actor system. Accounts are modeled as actors and email addresses as actor addresses.
- Web services can be modeled as actors with Simple Object Access Protocol (SOAP) endpoints modeled as actor addresses.
- Objects with locks (e.g., as in Java and C#) can be modeled as a serializer, provided that their implementations are such that messages can continually arrive (perhaps by being stored in an internal queue).
A serializer is an important kind of actor defined by the property that it is continually available to the arrival of new messages; every message sent to a serializer is guaranteed to arrive. Testing and Test Control Notation (TTCN), in both TTCN-2 and TTCN-3, follows the actor model rather closely. In TTCN, an actor is a test component: either a parallel test component (PTC) or the main test component (MTC). Test components can send and receive messages to and from remote partners (peer test components or the test system interface), the latter being identified by its address. Each test component has a behaviour tree bound to it; test components run in parallel and can be dynamically created by parent test components. Built-in language constructs allow the definition of actions to be taken when an expected message is received from the internal message queue, such as sending a message to another peer entity or creating new test components.

== Message-passing semantics ==
The actor model is about the semantics of message passing.

=== Unbounded nondeterminism controversy ===
Arguably, the first concurrent programs were interrupt handlers. During the course of its normal operation a computer needed to be able to receive information from outside (characters from a keyboard, packets from a network, etc.). So when the information arrived, the execution of the computer was interrupted and special code (called an interrupt handler) was called to put the information in a data buffer where it could be subsequently retrieved. In the early 1960s, interrupts began to be used to simulate the concurrent execution of several programs on one processor. Having concurrency with shared memory gave rise to the problem of concurrency control. Originally, this problem was conceived as being one of mutual exclusion on a single computer. Edsger Dijkstra developed semaphores and later, between 1971 and 1973, Tony Hoare and Per Brinch Hansen developed monitors to solve the mutual exclusion problem.
However, neither of these solutions provided a programming language construct that encapsulated access to shared resources. This encapsulation was later accomplished by the serializer construct ([Hewitt and Atkinson 1977, 1979] and [Atkinson 1980]). The first models of computation (e.g., Turing machines, Post productions, the lambda calculus, etc.) were based on mathematics and made use of a global state to represent a computational step (later generalized in [McCarthy and Hayes 1969] and [Dijkstra 1976] see Event orderings versus global state). Each computational step was from one global state of the computation to the next global state. The global state approach was continued in automata theory for finite-state machines and push down stack machines, including their nondeterministic versions. Such nondeterministic automata have the property of bounded nondeterminism; that is, if a machine always halts when started in its initial state, then there is a bound on the number of states in which it halts. Edsger Dijkstra further developed the nondeterministic global state approach. Dijkstra's model gave rise to a controversy concerning unbounded nondeterminism (also called unbounded indeterminacy), a property of concurrency by which the amount of delay in servicing a request can become unbounded as a result of arbitration of contention for shared resources while still guaranteeing that the request will eventually be serviced. Hewitt argued that the actor model should provide the guarantee of service. In Dijkstra's model, although there could be an unbounded amount of time between the execution of sequential instructions on a computer, a (parallel) program that started out in a well defined state could terminate in only a bounded number of states [Dijkstra 1976]. Consequently, his model could not provide the guarantee of service. Dijkstra argued that it was impossible to implement unbounded nondeterminism. 
Hewitt argued otherwise: there is no bound that can be placed on how long it takes a computational circuit called an arbiter to settle (see metastability (electronics)). Arbiters are used in computers to deal with the circumstance that computer clocks operate asynchronously with respect to input from outside, e.g., keyboard input, disk access, network input, etc. So it could take an unbounded time for a message sent to a computer to be received and in the meantime the computer could traverse an unbounded number of states. The actor model features unbounded nondeterminism which was captured in a mathematical model by Will Clinger using domain theory. In the actor model, there is no global state. === Direct communication and asynchrony === Messages in the actor model are not necessarily buffered. This was a sharp break with previous approaches to models of concurrent computation. The lack of buffering caused a great deal of misunderstanding at the time of the development of the actor model and is still a controversial issue. Some researchers argued that the messages are buffered in the "ether" or the "environment". Also, messages in the actor model are simply sent (like packets in IP); there is no requirement for a synchronous handshake with the recipient. === Actor creation plus addresses in messages means variable topology === A natural development of the actor model was to allow addresses in messages. Influenced by packet switched networks [1961 and 1964], Hewitt proposed the development of a new model of concurrent computation in which communications would not have any required fields at all: they could be empty. Of course, if the sender of a communication desired a recipient to have access to addresses which the recipient did not already have, the address would have to be sent in the communication. 
For example, an actor might need to send a message to a recipient actor from which it later expects to receive a response, but the response will actually be handled by a third actor component that has been configured to receive and handle the response (for example, a different actor implementing the observer pattern). The original actor could accomplish this by sending a communication that includes the message it wishes to send, along with the address of the third actor that will handle the response. This third actor that will handle the response is called the resumption (sometimes also called a continuation or stack frame). When the recipient actor is ready to send a response, it sends the response message to the resumption actor address that was included in the original communication. So, the ability of actors to create new actors with which they can exchange communications, along with the ability to include the addresses of other actors in messages, gives actors the ability to create and participate in arbitrarily variable topological relationships with one another, much as the objects in Simula and other object-oriented languages may also be relationally composed into variable topologies of message-exchanging objects. === Inherently concurrent === As opposed to the previous approach based on composing sequential processes, the actor model was developed as an inherently concurrent model. In the actor model sequentiality was a special case that derived from concurrent computation as explained in actor model theory. === No requirement on order of message arrival === Hewitt argued against adding the requirement that messages must arrive in the order in which they are sent to the actor. If output message ordering is desired, then it can be modeled by a queue actor that provides this functionality. Such a queue actor would queue the messages that arrived so that they could be retrieved in FIFO order. 
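The queue actor described above, which restores FIFO order on top of an unordered transport, can be sketched concisely. This assumes each message carries the sender's sequence number, which the actor model itself does not provide; the class and names here are purely illustrative.

```python
import heapq

class FIFOQueueActor:
    """Buffers out-of-order messages and releases them in send order,
    assuming each message carries the sender's sequence number."""
    def __init__(self, deliver):
        self.deliver = deliver      # downstream handler
        self.expected = 0           # next sequence number to release
        self.pending = []           # min-heap of (seq, payload)

    def receive(self, seq, payload):
        heapq.heappush(self.pending, (seq, payload))
        # Release every buffered message that is next in sequence.
        while self.pending and self.pending[0][0] == self.expected:
            _, m = heapq.heappop(self.pending)
            self.deliver(m)
            self.expected += 1

# Messages M2 and M1 arrive in the "wrong" order; delivery is still FIFO.
delivered = []
q = FIFOQueueActor(delivered.append)
q.receive(1, "M2")   # buffered: message 0 has not arrived yet
q.receive(0, "M1")   # releases M1, then the buffered M2
```

This mirrors how reliable transport protocols reorder packets at the receiver while the network underneath makes no ordering guarantee.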
So if an actor X sent a message M1 to an actor Y, and later X sent another message M2 to Y, there is no requirement that M1 arrives at Y before M2. In this respect the actor model mirrors packet switching systems, which do not guarantee that packets must be received in the order sent. Not providing the order-of-delivery guarantee allows packet switching to buffer packets, use multiple paths to send packets, resend damaged packets, and provide other optimizations. For example, actors are allowed to pipeline the processing of messages: in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing. Whether a message is pipelined is an engineering tradeoff. How would an external observer know whether the processing of a message by an actor has been pipelined? There is no ambiguity in the definition of an actor created by the possibility of pipelining. Of course, it is possible to perform the pipeline optimization incorrectly in some implementations, in which case unexpected behavior may occur.

=== Locality ===
Another important characteristic of the actor model is locality. Locality means that in processing a message, an actor can send messages only to addresses that it receives in the message, addresses that it already had before it received the message, and addresses for actors that it creates while processing the message. (But see Synthesizing addresses of actors.) Also, locality means that there is no simultaneous change in multiple locations. In this way it differs from some other models of concurrency, e.g., the Petri net model, in which tokens are simultaneously removed from multiple locations and placed in other locations.
=== Composing actor systems ===
The idea of composing actor systems into larger ones is an important aspect of modularity that was developed in Gul Agha's doctoral dissertation, and developed later by Gul Agha, Ian Mason, Scott Smith, and Carolyn Talcott.

=== Behaviors ===
A key innovation was the introduction of behavior specified as a mathematical function to express what an actor does when it processes a message, including specifying a new behavior to process the next message that arrives. Behaviors provided a mechanism to mathematically model the sharing in concurrency. Behaviors also freed the actor model from implementation details, e.g., the Smalltalk-72 token stream interpreter. However, the efficient implementation of systems described by the actor model requires extensive optimization. See Actor model implementation for details.

=== Modeling other concurrency systems ===
Other concurrency systems (e.g., process calculi) can be modeled in the actor model using a two-phase commit protocol.

=== Computational Representation Theorem ===
There is a Computational Representation Theorem in the actor model for systems which are closed in the sense that they do not receive communications from outside. The mathematical denotation of a closed system S is constructed from an initial behavior ⊥_S and a behavior-approximating function progression_S. These obtain increasingly better approximations and construct a denotation (meaning) for S as follows [Hewitt 2008; Clinger 1981]:

Denote_S ≡ lim_{i→∞} progression_S^i(⊥_S)

In this way, S can be mathematically characterized in terms of all its possible behaviors (including those involving unbounded nondeterminism). Although Denote_S is not an implementation of S, it can be used to prove a generalization of the Church–Turing–Rosser–Kleene thesis [Kleene 1943]. A consequence of the above theorem is that a finite actor can nondeterministically respond with an uncountable number of different outputs.

=== Relationship to logic programming ===
One of the key motivations for the development of the actor model was to understand and deal with the control structure issues that arose in development of the Planner programming language. Once the actor model was initially defined, an important challenge was to understand the power of the model relative to Robert Kowalski's thesis that "computation can be subsumed by deduction". Hewitt argued that Kowalski's thesis turned out to be false for the concurrent computation in the actor model (see Indeterminacy in concurrent computation). Nevertheless, attempts were made to extend logic programming to concurrent computation. However, Hewitt and Agha [1991] claimed that the resulting systems were not deductive in the following sense: computational steps of the concurrent logic programming systems do not follow deductively from previous steps (see Indeterminacy in concurrent computation).
Recently, logic programming has been integrated into the actor model in a way that maintains logical semantics.

=== Migration ===
Migration in the actor model is the ability of actors to change locations. E.g., in his dissertation, Aki Yonezawa modeled a post office that customer actors could enter, change locations within while operating, and exit. An actor that can migrate can be modeled by having a location actor that changes when the actor migrates. However, the faithfulness of this modeling is controversial and the subject of research.

=== Security ===
The security of actors can be protected in the following ways:
- hardwiring, in which actors are physically connected
- computer hardware, as in Burroughs B5000, Lisp machine, etc.
- virtual machines, as in Java virtual machine, Common Language Runtime, etc.
- operating systems, as in capability-based systems
- signing and/or encryption of actors and their addresses

=== Synthesizing addresses of actors ===
A delicate point in the actor model is the ability to synthesize the address of an actor. In some cases security can be used to prevent the synthesis of addresses (see Security). However, if an actor address is simply a bit string then clearly it can be synthesized, although it may be difficult or even infeasible to guess the address of an actor if the bit strings are long enough. SOAP uses a URL for the address of an endpoint where an actor can be reached. Since a URL is a character string, it can clearly be synthesized, although encryption can make it virtually impossible to guess. Synthesizing the addresses of actors is usually modeled using mapping. The idea is to use an actor system to perform the mapping to the actual actor addresses. For example, on a computer the memory structure of the computer can be modeled as an actor system that does the mapping. In the case of SOAP addresses, it is the DNS and the rest of the URL mapping that perform the mapping.
=== Contrast with other models of message-passing concurrency === Robin Milner's initial published work on concurrency was also notable in that it was not based on composing sequential processes. His work differed from the actor model because it was based on a fixed number of processes of fixed topology communicating numbers and strings using synchronous communication. The original communicating sequential processes (CSP) model published by Tony Hoare differed from the actor model because it was based on the parallel composition of a fixed number of sequential processes connected in a fixed topology, and communicating using synchronous message-passing based on process names (see Actor model and process calculi history). Later versions of CSP abandoned communication based on process names in favor of anonymous communication via channels, an approach also used in Milner's work on the calculus of communicating systems (CCS) and the π-calculus. These early models by Milner and Hoare both had the property of bounded nondeterminism. Modern, theoretical CSP ([Hoare 1985] and [Roscoe 2005]) explicitly provides unbounded nondeterminism. Petri nets and their extensions (e.g., coloured Petri nets) are like actors in that they are based on asynchronous message passing and unbounded nondeterminism, while they are like early CSP in that they define fixed topologies of elementary processing steps (transitions) and message repositories (places). == Influence == The actor model has been influential on both theory development and practical software development. === Theory === The actor model has influenced the development of the π-calculus and subsequent process calculi. In his Turing lecture, Robin Milner wrote: Now, the pure lambda-calculus is built with just two kinds of thing: terms and variables. Can we achieve the same economy for a process calculus? 
Carl Hewitt, with his actors model, responded to this challenge long ago; he declared that a value, an operator on values, and a process should all be the same kind of thing: an actor. This goal impressed me, because it implies the homogeneity and completeness of expression ... But it was long before I could see how to attain the goal in terms of an algebraic calculus... So, in the spirit of Hewitt, our first step is to demand that all things denoted by terms or accessed by names—values, registers, operators, processes, objects—are all of the same kind of thing; they should all be processes. === Practice === The actor model has had extensive influence on commercial practice. For example, Twitter has used actors for scalability. Also, Microsoft has used the actor model in the development of its Asynchronous Agents Library. There are multiple other actor libraries listed in the actor libraries and frameworks section below. == Addressed issues == According to Hewitt [2006], the actor model addresses issues in computer and communications architecture, concurrent programming languages, and Web services including the following: Scalability: the challenge of scaling up concurrency both locally and nonlocally. Transparency: bridging the chasm between local and nonlocal concurrency. Transparency is currently a controversial issue. Some researchers have advocated a strict separation between local concurrency using concurrent programming languages (e.g., Java and C#) from nonlocal concurrency using SOAP for Web services. Strict separation produces a lack of transparency that causes problems when it is desirable/necessary to change between local and nonlocal access to Web services (see Distributed computing). Inconsistency: inconsistency is the norm because all large knowledge systems about human information system interactions are inconsistent. 
This inconsistency extends to the documentation and specifications of large systems (e.g., Microsoft Windows software, etc.), which are internally inconsistent. Many of the ideas introduced in the actor model are now also finding application in multi-agent systems for these same reasons [Hewitt 2006b 2007b]. The key difference is that agent systems (in most definitions) impose extra constraints upon the actors, typically requiring that they make use of commitments and goals.

== Programming with actors ==
A number of different programming languages employ the actor model or some variation of it. These languages include:

=== Early actor programming languages ===
- Act 1, 2 and 3
- Acttalk
- Ani
- Cantor
- Rosette

=== Later actor programming languages ===

=== Actor libraries and frameworks ===
Actor libraries or frameworks have also been implemented to permit actor-style programming in languages that don't have actors built-in.

== See also ==
- Autonomous agent
- Data flow
- Gordon Pask
- Input/output automaton
- Scientific community metaphor

== References ==

== Further reading ==

== External links ==
- Hewitt, Meijer and Szyperski: The Actor Model (everything you wanted to know, but were afraid to ask). Microsoft Channel 9. April 9, 2012. Video on YouTube
- Functional Java – a Java library that includes an implementation of concurrent actors with code examples in standard Java and Java 7 BGGA style.
- ActorFoundry – a Java-based library for actor programming. The familiar Java syntax, an ant build file and a bunch of examples make the entry barrier low.
- ActiveJava – a prototype Java language extension for actor programming.
- Akka – actor-based library in Scala and Java, from Lightbend Inc.
- GPars – a concurrency library for Apache Groovy and Java
- Asynchronous Agents Library – Microsoft actor library for Visual C++.
"The Agents Library is a C++ template library that promotes an actor-based programming model and in-process message passing for coarse-grained dataflow and pipelining tasks. " ActorThread in C++11 – base template providing the gist of the actor model over naked threads in standard C++11
Wikipedia/Actor_model
In information technology and computer science, especially in the fields of computer programming, operating systems, multiprocessors, and databases, concurrency control ensures that correct results for concurrent operations are generated, while getting those results as quickly as possible. Computer systems, both software and hardware, consist of modules, or components. Each component is designed to operate correctly, i.e., to obey or to meet certain consistency rules. When components that operate concurrently interact by messaging or by sharing accessed data (in memory or storage), a certain component's consistency may be violated by another component. The general area of concurrency control provides rules, methods, design methodologies, and theories to maintain the consistency of components operating concurrently while interacting, and thus the consistency and correctness of the whole system. Introducing concurrency control into a system means applying operation constraints which typically result in some performance reduction. Operation consistency and correctness should be achieved with the best possible efficiency, without reducing performance below reasonable levels. Concurrency control can require significant additional complexity and overhead in a concurrent algorithm compared to the simpler sequential algorithm. For example, a failure in concurrency control can result in data corruption from torn read or write operations. == Concurrency control in databases == Comments: This section is applicable to all transactional systems, i.e., to all systems that use database transactions (atomic transactions; e.g., transactional objects in Systems management and in networks of smartphones which typically implement private, dedicated database systems), not only general-purpose database management systems (DBMSs). DBMSs also need to deal with concurrency control issues that are not typical just of database transactions but rather of operating systems in general. 
These issues (e.g., see Concurrency control in operating systems below) are out of the scope of this section. Concurrency control in Database management systems (DBMS; e.g., Bernstein et al. 1987, Weikum and Vossen 2001), other transactional objects, and related distributed applications (e.g., Grid computing and Cloud computing) ensures that database transactions are performed concurrently without violating the data integrity of the respective databases. Thus concurrency control is an essential element for correctness in any system where two or more database transactions, executed with time overlap, can access the same data, e.g., virtually in any general-purpose database system. Consequently, a vast body of related research has been accumulated since database systems emerged in the early 1970s. A well-established concurrency control theory for database systems is outlined in the references mentioned above: serializability theory, which makes it possible to design and analyze concurrency control methods and mechanisms effectively. An alternative theory for concurrency control of atomic transactions over abstract data types is presented in (Lynch et al. 1993), but is not utilized below. This theory is more refined and complex, with a wider scope, and has been less utilized in the database literature than the classical theory above. Each theory has its pros and cons, emphasis and insight. To some extent they are complementary, and their merging may be useful. To ensure correctness, a DBMS usually guarantees that only serializable transaction schedules are generated, unless serializability is intentionally relaxed to increase performance, but only in cases where application correctness is not harmed. For maintaining correctness in cases of failed (aborted) transactions (which can always happen for many reasons), schedules also need to have the recoverability (from abort) property. 
A DBMS also guarantees that no effect of committed transactions is lost, and no effect of aborted (rolled back) transactions remains in the related database. Overall transaction characterization is usually summarized by the ACID rules below. As databases have become distributed, or needed to cooperate in distributed environments (e.g., Federated databases in the early 1990s, and Cloud computing currently), the effective distribution of concurrency control mechanisms has received special attention. === Database transaction and the ACID rules === The concept of a database transaction (or atomic transaction) has evolved in order to enable both a well-understood database system behavior in a faulty environment where crashes can happen any time, and recovery from a crash to a well-understood database state. A database transaction is a unit of work, typically encapsulating a number of operations over a database (e.g., reading a database object, writing, acquiring a lock, etc.), an abstraction supported in database systems and other systems. Each transaction has well-defined boundaries in terms of which program/code executions are included in that transaction (determined by the transaction's programmer via special transaction commands). Every database transaction obeys the following rules (by support in the database system; i.e., a database system is designed to guarantee them for the transactions it runs): Atomicity - Either the effects of all or none of its operations remain ("all or nothing" semantics) when a transaction is completed (committed or aborted respectively). In other words, to the outside world a committed transaction appears (by its effects on the database) to be indivisible (atomic), and an aborted transaction does not affect the database at all. Either all the operations are done or none of them are. 
Consistency - Every transaction must leave the database in a consistent (correct) state, i.e., maintain the predetermined integrity rules of the database (constraints upon and among the database's objects). A transaction must transform a database from one consistent state to another consistent state (however, it is the responsibility of the transaction's programmer to make sure that the transaction itself is correct, i.e., performs correctly what it intends to perform (from the application's point of view) while the predefined integrity rules are enforced by the DBMS). Thus, since a database can normally be changed only by transactions, all the database's states are consistent. Isolation - Transactions cannot interfere with each other (as an end result of their executions). Moreover, usually (depending on the concurrency control method) the effects of an incomplete transaction are not even visible to another transaction. Providing isolation is the main goal of concurrency control. Durability - Effects of successful (committed) transactions must persist through crashes (typically by recording the transaction's effects and its commit event in a non-volatile memory). The concept of the atomic transaction has been extended over the years to what have become business transactions, which actually implement types of workflow and are not atomic. However, even such enhanced transactions typically utilize atomic transactions as components. === Why is concurrency control needed? === If transactions are executed serially, i.e., sequentially with no overlap in time, no transaction concurrency exists. 
However, if concurrent transactions with interleaving operations are allowed in an uncontrolled manner, some unexpected, undesirable results may occur, such as: The lost update problem: A second transaction writes a second value of a data-item (datum) on top of a first value written by a first concurrent transaction, and the first value is lost to other transactions running concurrently which need, by their precedence, to read the first value. The transactions that have read the wrong value end with incorrect results. The dirty read problem: Transactions read a value written by a transaction that was later aborted. This value disappears from the database upon abort, and should not have been read by any transaction ("dirty read"). The reading transactions end with incorrect results. The incorrect summary problem: While one transaction takes a summary over the values of all the instances of a repeated data-item, a second transaction updates some instances of that data-item. The resulting summary does not reflect a correct result for any (usually needed for correctness) precedence order between the two transactions (if one is executed before the other), but rather some random result, depending on the timing of the updates, and whether certain update results have been included in the summary or not. Most high-performance transactional systems need to run transactions concurrently to meet their performance requirements. Thus, without concurrency control such systems can neither provide correct results nor maintain their databases consistently. === Concurrency control mechanisms === ==== Categories ==== The main categories of concurrency control mechanisms are: Optimistic - Allow transactions to proceed without blocking any of their (read, write) operations ("...and be optimistic about the rules being met..."), and only check for violations of the desired integrity rules (e.g., serializability and recoverability) at each transaction's commit. 
If violations are detected upon a transaction's commit, the transaction is aborted and restarted. This approach is very efficient when few transactions are aborted. Pessimistic - Block an operation of a transaction, if it may cause a violation of the rules (e.g., serializability and recoverability), until the possibility of violation disappears. Blocking operations typically reduces performance. Semi-optimistic - Responds pessimistically or optimistically depending on the type of violation and how quickly it can be detected. Different categories provide different performance, i.e., different average transaction completion rates (throughput), depending on the mix of transaction types, the level of parallelism, and other factors. If selection and knowledge about trade-offs are available, then category and method should be chosen to provide the highest performance. Mutual blocking between two or more transactions (where each one blocks another) results in a deadlock, where the transactions involved are stalled and cannot reach completion. Most non-optimistic mechanisms (with blocking) are prone to deadlocks, which are resolved by an intentional abort of a stalled transaction (which releases the other transactions in that deadlock), and its immediate restart and re-execution. The likelihood of a deadlock is typically low. Blocking, deadlocks, and aborts all result in performance reduction, and hence the trade-offs between the categories. ==== Methods ==== Many methods for concurrency control exist. Most of them can be implemented within either main category above. The major methods, each of which has many variants and which in some cases may overlap or be combined, are: Locking (e.g., Two-phase locking - 2PL) - Controlling access to data by locks assigned to the data. Access of a transaction to a data item (database object) locked by another transaction may be blocked (depending on lock type and access operation type) until lock release. 
Serialization graph checking (also called Serializability, or Conflict, or Precedence graph checking) - Checking for cycles in the schedule's graph and breaking them by aborts. Timestamp ordering (TO) - Assigning timestamps to transactions, and controlling or checking access to data by timestamp order. Other major concurrency control types that are utilized in conjunction with the methods above include: Multiversion concurrency control (MVCC) - Increasing concurrency and performance by generating a new version of a database object each time the object is written, and allowing transactions to read one of several last relevant versions of each object, depending on the scheduling method. Index concurrency control - Synchronizing access operations to indexes, rather than to user data. Specialized methods provide substantial performance gains. Private workspace model (Deferred update) - Each transaction maintains a private workspace for its accessed data, and its changed data become visible outside the transaction only upon its commit (e.g., Weikum and Vossen 2001). This model provides a different concurrency control behavior with benefits in many cases. The most common mechanism type in database systems since their early days in the 1970s has been Strong strict Two-phase locking (SS2PL; also called Rigorous scheduling or Rigorous 2PL), which is a special case (variant) of Two-phase locking (2PL). It is pessimistic. In spite of its long name (for historical reasons), the idea of the SS2PL mechanism is simple: "Release all locks applied by a transaction only after the transaction has ended." SS2PL (or Rigorousness) is also the name of the set of all schedules that can be generated by this mechanism, i.e., these SS2PL (or Rigorous) schedules have the SS2PL (or Rigorousness) property. 
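The lost update anomaly described earlier, and the SS2PL rule of releasing locks only at transaction end, can be illustrated with a deterministic simulation. This is plain Python with invented names (no real DBMS or lock manager); the dict-based lock table only sketches the idea.

```python
# Two "transactions" each read a balance, add 100, and write it back.
# Names and the dict-based lock table are invented for illustration.

balance = {"x": 0}

# Uncontrolled schedule r1(x) r2(x) w1(x) w2(x): T1's update is lost.
t1_read = balance["x"]            # T1 reads 0
t2_read = balance["x"]            # T2 also reads 0, before T1 writes
balance["x"] = t1_read + 100      # T1 writes 100
balance["x"] = t2_read + 100      # T2 overwrites it, also with 100
uncontrolled_result = balance["x"]
print(uncontrolled_result)        # 100, not the expected 200

# SS2PL-style control: an exclusive lock taken before the first access
# and released only when the transaction ends serializes the two.
balance = {"x": 0}
lock_table = {}                   # item -> transaction holding its lock

def run_transaction(name):
    assert lock_table.get("x") is None   # a real scheduler would block here
    lock_table["x"] = name               # acquire exclusive lock on x
    v = balance["x"]                     # read under the lock
    balance["x"] = v + 100               # write under the lock
    del lock_table["x"]                  # release only at transaction end

run_transaction("T1")             # T2 cannot interleave while T1 holds the lock
run_transaction("T2")
print(balance["x"])               # 200: both updates survive
```

Holding every lock until the end of the transaction is what distinguishes SS2PL from plain 2PL, where locks may be released after the growing phase but before the transaction ends.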
=== Major goals of concurrency control mechanisms === Concurrency control mechanisms first need to operate correctly, i.e., to maintain each transaction's integrity rules (as related to concurrency; application-specific integrity rules are out of scope here) while transactions are running concurrently, and thus the integrity of the entire transactional system. Correctness needs to be achieved with performance as good as possible. In addition, there is an increasing need to operate effectively while transactions are distributed over processes, computers, and computer networks. Other subjects that may affect concurrency control are recovery and replication. ==== Correctness ==== ===== Serializability ===== For correctness, a common major goal of most concurrency control mechanisms is generating schedules with the Serializability property. Without serializability, undesirable phenomena may occur, e.g., money may disappear from accounts, or be generated from nowhere. Serializability of a schedule means equivalence (in the resulting database values) to some serial schedule with the same transactions (i.e., in which transactions are sequential with no overlap in time, and thus completely isolated from each other: No concurrent access by any two transactions to the same data is possible). Serializability is considered the highest level of isolation among database transactions, and the major correctness criterion for concurrent transactions. In some cases, compromised (relaxed) forms of serializability are allowed for better performance (e.g., the popular Snapshot isolation mechanism) or to meet availability requirements in highly distributed systems (see Eventual consistency), but only if the application's correctness is not violated by the relaxation (e.g., no relaxation is allowed for money transactions, since by relaxation money can disappear, or appear from nowhere). 
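Whether a concrete schedule is conflict-equivalent to some serial schedule can be tested mechanically by building a precedence graph and checking it for cycles, as in the serialization graph checking method listed earlier. A minimal Python sketch; the schedule encoding as (transaction, operation, item) triples is an invented convention for this example.

```python
# A schedule is a list of (transaction, operation, item) steps, with
# operation "r" (read) or "w" (write). Two steps conflict when they touch
# the same item, belong to different transactions, and at least one writes.

def precedence_graph(schedule):
    """Edge Ti -> Tj for each conflicting pair where Ti's step comes first."""
    edges = set()
    for i, (ti, op_i, x) in enumerate(schedule):
        for tj, op_j, y in schedule[i + 1:]:
            if x == y and ti != tj and "w" in (op_i, op_j):
                edges.add((ti, tj))
    return edges

def is_conflict_serializable(schedule):
    """The schedule is conflict serializable iff its graph is acyclic."""
    edges = precedence_graph(schedule)
    nodes = {t for t, _, _ in schedule}
    while nodes:  # repeatedly remove nodes with no incoming edge (Kahn)
        roots = [n for n in nodes
                 if not any(dst == n and src in nodes for src, dst in edges)]
        if not roots:
            return False          # a cycle remains: not serializable
        nodes -= set(roots)
    return True

serial_like = [("T1", "r", "x"), ("T1", "w", "x"), ("T2", "w", "x")]
print(is_conflict_serializable(serial_like))   # True: only edge T1 -> T2

interleaved = [("T1", "r", "x"), ("T2", "w", "x"), ("T1", "w", "x")]
print(is_conflict_serializable(interleaved))   # False: cycle T1 -> T2 -> T1
```

A topological order of the acyclic graph gives an equivalent serial order of the transactions, which is exactly the equivalence to a serial schedule described above.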
Almost all implemented concurrency control mechanisms achieve serializability by providing Conflict serializability, a broad special case of serializability (i.e., it covers and enables most serializable schedules and does not impose significant additional delay-causing constraints), which can be implemented efficiently. ===== Recoverability ===== See Recoverability in Serializability Concurrency control typically also ensures the Recoverability property of schedules for maintaining correctness in cases of aborted transactions (which can always happen for many reasons). Recoverability (from abort) means that no committed transaction in a schedule has read data written by an aborted transaction. Such data disappear from the database (upon the abort) and are part of an incorrect database state. Reading such data violates the consistency rule of ACID. Unlike Serializability, Recoverability cannot be compromised or relaxed in any case, since any relaxation results in quick database integrity violation upon aborts. The major methods listed above provide serializability mechanisms. None of them in its general form automatically provides recoverability, and special considerations and mechanism enhancements are needed to support recoverability. A commonly utilized special case of recoverability is Strictness, which allows efficient database recovery from failure (but excludes optimistic implementations). ==== Distribution ==== With the fast technological development of computing, the difference between local and distributed computing over low-latency networks or buses is blurring. Thus the quite effective utilization of local techniques in such distributed environments is common, e.g., in computer clusters and multi-core processors. However, the local techniques have their limitations and use multi-processes (or threads) supported by multi-processors (or multi-cores) to scale. This often turns transactions into distributed ones, if they themselves need to span multi-processes. 
In these cases, most local concurrency control techniques do not scale well. ===== Recovery ===== All systems are prone to failures, and handling recovery from failure is a must. The properties of the generated schedules, which are dictated by the concurrency control mechanism, may affect the effectiveness and efficiency of recovery. For example, the Strictness property (mentioned in the section Recoverability above) is often desirable for an efficient recovery. ===== Replication ===== For high availability, database objects are often replicated. Updates of replicas of the same database object need to be kept synchronized. This may affect the way concurrency control is done (e.g., Gray et al. 1996). == Concurrency control in operating systems == Multitasking operating systems, especially real-time operating systems, need to maintain the illusion that all tasks running on top of them are running at the same time, even though only one or a few tasks really are running at any given moment due to the limitations of the hardware the operating system is running on. Such multitasking is fairly simple when all tasks are independent of each other. However, when several tasks try to use the same resource, or when tasks try to share information, it can lead to confusion and inconsistency. The task of concurrency control is to solve that problem. Some solutions involve "locks" similar to the locks used in databases, but they risk causing problems of their own such as deadlock. Other solutions are Non-blocking algorithms and Read-copy-update. 
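The "locks" mentioned here map directly onto the mutual-exclusion primitives offered by threading libraries. A small Python sketch of protecting a shared counter with a lock; without the lock, the read-modify-write of `counter += 1` could interleave between threads and lose increments (the lost update problem in miniature).

```python
import threading

# Four threads each increment a shared counter 10,000 times.
# The lock makes each increment mutually exclusive, so none are lost.

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # acquire/release around the critical section
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: no increments are lost
```

As the article notes, such locks carry their own risks: two threads acquiring two locks in opposite orders can deadlock, which is why lock-free and read-copy-update designs exist.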
== See also == Linearizability – Property of some operation(s) in concurrent programming Lock (computer science) – Synchronization mechanism for enforcing limits on access to a resource Mutual exclusion – In computing, restricting data to be accessible by one thread at a time Search engine indexing – Method for data management Semaphore (programming) – Variable used in a concurrent system Software transactional memory – Concurrency control mechanism in software Transactional Synchronization Extensions – Extension to the x86 instruction set architecture that adds hardware transactional memory support Database transaction schedule Isolation (computer science) Distributed concurrency control == References == Andrew S. Tanenbaum, Albert S Woodhull (2006): Operating Systems Design and Implementation, 3rd Edition, Prentice Hall, ISBN 0-13-142938-8 Silberschatz, Avi; Galvin, Peter; Gagne, Greg (2008). Operating Systems Concepts, 8th edition. John Wiley & Sons. ISBN 978-0-470-12872-5. Philip A. Bernstein, Vassos Hadzilacos, Nathan Goodman (1987): Concurrency Control and Recovery in Database Systems (free PDF download), Addison Wesley Publishing Company, 1987, ISBN 0-201-10715-5 Gerhard Weikum, Gottfried Vossen (2001): Transactional Information Systems, Elsevier, ISBN 1-55860-508-8 Nancy Lynch, Michael Merritt, William Weihl, Alan Fekete (1993): Atomic Transactions in Concurrent and Distributed Systems , Morgan Kaufmann (Elsevier), August 1993, ISBN 978-1-55860-104-8, ISBN 1-55860-104-X Yoav Raz (1992): "The Principle of Commitment Ordering, or Guaranteeing Serializability in a Heterogeneous Environment of Multiple Autonomous Resource Managers Using Atomic Commitment." (PDF), Proceedings of the Eighteenth International Conference on Very Large Data Bases (VLDB), pp. 292-312, Vancouver, Canada, August 1992. (also DEC-TR 841, Digital Equipment Corporation, November 1990) == Citations ==
Wikipedia/Concurrency_control
Universal algebra (sometimes called general algebra) is the field of mathematics that studies algebraic structures themselves, not examples ("models") of algebraic structures. For instance, rather than take particular groups as the object of study, in universal algebra one takes the class of groups as an object of study. == Basic idea == In universal algebra, an algebra (or algebraic structure) is a set A together with a collection of operations on A. === Arity === An n-ary operation on A is a function that takes n elements of A and returns a single element of A. Thus, a 0-ary operation (or nullary operation) can be represented simply as an element of A, or a constant, often denoted by a letter like a. A 1-ary operation (or unary operation) is simply a function from A to A, often denoted by a symbol placed in front of its argument, like ~x. A 2-ary operation (or binary operation) is often denoted by a symbol placed between its arguments (also called infix notation), like x ∗ y. Operations of higher or unspecified arity are usually denoted by function symbols, with the arguments placed in parentheses and separated by commas, like f(x,y,z) or f(x1,...,xn). One way of talking about an algebra, then, is by referring to it as an algebra of a certain type Ω, where Ω is an ordered sequence of natural numbers representing the arity of the operations of the algebra. However, some researchers also allow infinitary operations, such as ⋀α∈J xα, where J is an infinite index set, which is an operation in the algebraic theory of complete lattices. === Equations === After the operations have been specified, the nature of the algebra is further defined by axioms, which in universal algebra often take the form of identities, or equational laws. An example is the associative axiom for a binary operation, which is given by the equation x ∗ (y ∗ z) = (x ∗ y) ∗ z. 
The axiom is intended to hold for all elements x, y, and z of the set A. == Varieties == A collection of algebraic structures defined by identities is called a variety or equational class. Restricting one's study to varieties rules out: quantification, including universal quantification (∀), except before an equation, and existential quantification (∃); logical connectives other than conjunction (∧); and relations other than equality, in particular inequalities, both a ≠ b and order relations. The study of equational classes can be seen as a special branch of model theory, typically dealing with structures having operations only (i.e. the type can have symbols for functions but not for relations other than equality), and in which the language used to talk about these structures uses equations only. Not all algebraic structures in a wider sense fall into this scope. For example, ordered groups involve an ordering relation, so would not fall within this scope. The class of fields is not an equational class because there is no type (or "signature") in which all field laws can be written as equations (inverses of elements are defined for all non-zero elements in a field, so inversion cannot be added to the type). One advantage of this restriction is that the structures studied in universal algebra can be defined in any category that has finite products. For example, a topological group is just a group in the category of topological spaces. === Examples === Most of the usual algebraic systems of mathematics are examples of varieties, but not always in an obvious way, since the usual definitions often involve quantification or inequalities. ==== Groups ==== As an example, consider the definition of a group. Usually a group is defined in terms of a single binary operation ∗, subject to the axioms: Associativity (as in the previous section): x ∗ (y ∗ z) = (x ∗ y) ∗ z; formally: ∀x,y,z. x∗(y∗z)=(x∗y)∗z. 
Identity element: There exists an element e such that for each element x, one has e ∗ x = x = x ∗ e; formally: ∃e ∀x. e∗x=x=x∗e. Inverse element: The identity element is easily seen to be unique, and is usually denoted by e. Then for each x, there exists an element i such that x ∗ i = e = i ∗ x; formally: ∀x ∃i. x∗i=e=i∗x. (Some authors also use the "closure" axiom that x ∗ y belongs to A whenever x and y do, but here this is already implied by calling ∗ a binary operation.) This definition of a group does not immediately fit the point of view of universal algebra, because the axioms of the identity element and inversion are not stated purely in terms of equational laws which hold universally "for all ..." elements, but also involve the existential quantifier "there exists ...". The group axioms can be phrased as universally quantified equations by specifying, in addition to the binary operation ∗, a nullary operation e and a unary operation ~, with ~x usually written as x−1. The axioms become: Associativity: x ∗ (y ∗ z) = (x ∗ y) ∗ z. Identity element: e ∗ x = x = x ∗ e; formally: ∀x. e∗x=x=x∗e. Inverse element: x ∗ (~x) = e = (~x) ∗ x; formally: ∀x. x∗~x=e=~x∗x. To summarize, the usual definition has: a single binary operation (signature (2)) 1 equational law (associativity) 2 quantified laws (identity and inverse) while the universal algebra definition has: 3 operations: one binary, one unary, and one nullary (signature (2, 1, 0)) 3 equational laws (associativity, identity, and inverse) no quantified laws (except outermost universal quantifiers, which are allowed in varieties) A key point is that the extra operations do not add information, but follow uniquely from the usual definition of a group. Although the usual definition did not uniquely specify the identity element e, an easy exercise shows that it is unique, as is the inverse of each element. The universal algebra point of view is well adapted to category theory. 
For example, when defining a group object in category theory, where the object in question may not be a set, one must use equational laws (which make sense in general categories), rather than quantified laws (which refer to individual elements). Further, the inverse and identity are specified as morphisms in the category. For example, in a topological group, the inverse must not only exist element-wise, but must give a continuous mapping (a morphism). Some authors also require the identity map to be a closed inclusion (a cofibration). ==== Other examples ==== Most algebraic structures are examples of universal algebras. Rings, semigroups, quasigroups, groupoids, magmas, loops, and others. Vector spaces over a fixed field and modules over a fixed ring are universal algebras. These have a binary addition and a family of unary scalar multiplication operators, one for each element of the field or ring. Examples of relational algebras include semilattices, lattices, and Boolean algebras. == Basic constructions == We assume that the type Ω has been fixed. Then there are three basic constructions in universal algebra: homomorphic image, subalgebra, and product. A homomorphism between two algebras A and B is a function h : A → B from the set A to the set B such that, for every operation fA of A and corresponding fB of B (of arity, say, n), h(fA(x1, ..., xn)) = fB(h(x1), ..., h(xn)). (Sometimes the subscripts on f are taken off when it is clear from context which algebra the function is from.) For example, if e is a constant (nullary operation), then h(eA) = eB. If ~ is a unary operation, then h(~x) = ~h(x). If ∗ is a binary operation, then h(x ∗ y) = h(x) ∗ h(y). And so on. A few of the things that can be done with homomorphisms, as well as definitions of certain special kinds of homomorphisms, are listed under Homomorphism. In particular, we can take the homomorphic image of an algebra, h(A). 
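For small finite algebras, the homomorphism condition above can be checked exhaustively: h must commute with every operation of the signature, here the group signature (2, 1, 0) from the previous section. A Python sketch; the encoding of an algebra as a (carrier, binary, unary, constant) tuple is an invented convention for this example.

```python
# The cyclic group Z_n in the group signature (2, 1, 0): one binary
# operation (+ mod n), one unary operation (negation mod n), one constant (0).

def z(n):
    """Z_n as a tuple (carrier, binary op, unary inverse, nullary constant)."""
    return (range(n),
            lambda x, y: (x + y) % n,   # binary *
            lambda x: -x % n,           # unary ~
            0)                          # nullary e

def is_homomorphism(h, A, B):
    """Check h(f_A(...)) = f_B(h(...), ...) for every operation, exhaustively."""
    elems, mul_a, inv_a, e_a = A
    _, mul_b, inv_b, e_b = B
    if h(e_a) != e_b:                            # nullary: h(e_A) = e_B
        return False
    for x in elems:
        if h(inv_a(x)) != inv_b(h(x)):           # unary: h(~x) = ~h(x)
            return False
        for y in elems:
            if h(mul_a(x, y)) != mul_b(h(x), h(y)):  # binary: h(x*y) = h(x)*h(y)
                return False
    return True

print(is_homomorphism(lambda x: x % 3, z(6), z(3)))  # True
print(is_homomorphism(lambda x: x % 4, z(6), z(4)))  # False
```

Reduction mod 3 commutes with all three operations because 3 divides 6, whereas reduction mod 4 already fails on the unary inverse (h(~1) = 1 but ~h(1) = 3 in Z_4).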
A subalgebra of A is a subset of A that is closed under all the operations of A. A product of some set of algebraic structures is the cartesian product of the sets with the operations defined coordinatewise. == Some basic theorems == The isomorphism theorems, which encompass the isomorphism theorems of groups, rings, modules, etc. Birkhoff's HSP Theorem, which states that a class of algebras is a variety if and only if it is closed under homomorphic images, subalgebras, and arbitrary direct products. == Motivations and applications == In addition to its unifying approach, universal algebra also gives deep theorems and important examples and counterexamples. It provides a useful framework for those who intend to start the study of new classes of algebras. It can enable the use of methods invented for some particular classes of algebras to other classes of algebras, by recasting the methods in terms of universal algebra (if possible), and then interpreting these as applied to other classes. It has also provided conceptual clarification; as J.D.H. Smith puts it, "What looks messy and complicated in a particular framework may turn out to be simple and obvious in the proper general one." In particular, universal algebra can be applied to the study of monoids, rings, and lattices. Before universal algebra came along, many theorems (most notably the isomorphism theorems) were proved separately in all of these classes, but with universal algebra, they can be proven once and for all for every kind of algebraic system. The 1956 paper by Higgins referenced below has been well followed up for its framework for a range of particular algebraic systems, while his 1963 paper is notable for its discussion of algebras with operations which are only partially defined, typical examples for this being categories and groupoids. 
This leads on to the subject of higher-dimensional algebra which can be defined as the study of algebraic theories with partial operations whose domains are defined under geometric conditions. Notable examples of these are various forms of higher-dimensional categories and groupoids. === Constraint satisfaction problem === Universal algebra provides a natural language for the constraint satisfaction problem (CSP). CSP refers to an important class of computational problems where, given a relational algebra A and an existential sentence φ over this algebra, the question is to find out whether φ can be satisfied in A. The algebra A is often fixed, so that CSPA refers to the problem whose instance is only the existential sentence φ. It is proved that every computational problem can be formulated as CSPA for some algebra A. For example, the n-coloring problem can be stated as CSP of the algebra ({0, 1, ..., n−1}, ≠), i.e. an algebra with n elements and a single relation, inequality. == Generalizations == Universal algebra has also been studied using the techniques of category theory. In this approach, instead of writing a list of operations and equations obeyed by those operations, one can describe an algebraic structure using categories of a special sort, known as Lawvere theories or more generally algebraic theories. Alternatively, one can describe algebraic structures using monads. The two approaches are closely related, each having its own advantages. In particular, every Lawvere theory gives a monad on the category of sets, while any "finitary" monad on the category of sets arises from a Lawvere theory. However, a monad describes algebraic structures within one particular category (for example the category of sets), while algebraic theories describe structure within any of a large class of categories (namely those having finite products). 
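The n-coloring formulation of CSP mentioned in the previous subsection can be made concrete: an instance is a conjunction of inequality constraints over the algebra ({0, ..., n−1}, ≠), one constraint per graph edge, and a solution is an assignment of colors satisfying all of them. A small illustrative Python sketch using brute-force search (fine for tiny instances only, since CSP is hard in general).

```python
from itertools import product

def csp_coloring(num_vertices, edges, n):
    """Search for an assignment satisfying every inequality constraint.

    Each edge (u, v) is the constraint colors[u] != colors[v] over the
    domain {0, ..., n-1}. Returns the first satisfying assignment, or None.
    """
    for colors in product(range(n), repeat=num_vertices):
        if all(colors[u] != colors[v] for u, v in edges):
            return colors
    return None

# A triangle: every pair of its three vertices is constrained to differ.
triangle = [(0, 1), (1, 2), (0, 2)]
print(csp_coloring(3, triangle, 2))  # None: a triangle is not 2-colorable
print(csp_coloring(3, triangle, 3))  # (0, 1, 2): the first 3-coloring found
```

The existential sentence of the CSP instance here is ∃c0 ∃c1 ∃c2 (c0 ≠ c1 ∧ c1 ≠ c2 ∧ c0 ≠ c2), interpreted in the algebra ({0, 1, ..., n−1}, ≠).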
A more recent development in category theory is operad theory – an operad is a set of operations, similar to a universal algebra, but restricted in that equations are only allowed between expressions with the variables, with no duplication or omission of variables allowed. Thus, rings can be described as the so-called "algebras" of some operad, but not groups, since the law gg−1 = 1 duplicates the variable g on the left side and omits it on the right side. At first this may seem to be a troublesome restriction, but the payoff is that operads have certain advantages: for example, one can hybridize the concepts of ring and vector space to obtain the concept of associative algebra, but one cannot form a similar hybrid of the concepts of group and vector space. Another development is partial algebra where the operators can be partial functions. Certain partial functions can also be handled by a generalization of Lawvere theories known as "essentially algebraic theories". Another generalization of universal algebra is model theory, which is sometimes described as "universal algebra + logic". == History == In Alfred North Whitehead's book A Treatise on Universal Algebra, published in 1898, the term universal algebra had essentially the same meaning that it has today. Whitehead credits William Rowan Hamilton and Augustus De Morgan as originators of the subject matter, and James Joseph Sylvester with coining the term itself.: v  At the time structures such as Lie algebras and hyperbolic quaternions drew attention to the need to expand algebraic structures beyond the associatively multiplicative class. In a review Alexander Macfarlane wrote: "The main idea of the work is not unification of the several methods, nor generalization of ordinary algebra so as to include them, but rather the comparative study of their several structures." 
At the time George Boole's algebra of logic made a strong counterpoint to ordinary number algebra, so the term "universal" served to calm strained sensibilities. Whitehead's early work sought to unify quaternions (due to Hamilton), Grassmann's Ausdehnungslehre, and Boole's algebra of logic. Whitehead wrote in his book: "Such algebras have an intrinsic value for separate detailed study; also they are worthy of comparative study, for the sake of the light thereby thrown on the general theory of symbolic reasoning, and on algebraic symbolism in particular. The comparative study necessarily presupposes some previous separate study, comparison being impossible without knowledge." Whitehead, however, had no results of a general nature. Work on the subject was minimal until the early 1930s, when Garrett Birkhoff and Øystein Ore began publishing on universal algebras. Developments in metamathematics and category theory in the 1940s and 1950s furthered the field, particularly the work of Abraham Robinson, Alfred Tarski, Andrzej Mostowski, and their students. In the period between 1935 and 1950, most papers were written along the lines suggested by Birkhoff's papers, dealing with free algebras, congruence and subalgebra lattices, and homomorphism theorems. Although the development of mathematical logic had made applications to algebra possible, they came about slowly; results published by Anatoly Maltsev in the 1940s went unnoticed because of the war. Tarski's lecture at the 1950 International Congress of Mathematicians in Cambridge ushered in a new period in which model-theoretic aspects were developed, mainly by Tarski himself, as well as C.C. Chang, Leon Henkin, Bjarni Jónsson, Roger Lyndon, and others. In the late 1950s, Edward Marczewski emphasized the importance of free algebras, leading to the publication of more than 50 papers on the algebraic theory of free algebras by Marczewski himself, together with Jan Mycielski, Władysław Narkiewicz, Witold Nitka, J. Płonka, S. 
Świerczkowski, K. Urbanik, and others. Starting with William Lawvere's thesis in 1963, techniques from category theory have become important in universal algebra. == See also == Equational logic Graph algebra Term algebra Clone Universal algebraic geometry Simple algebra (universal algebra) == Footnotes == == References == == External links == Algebra Universalis—a journal dedicated to Universal Algebra.
Wikipedia/Equational_reasoning
The algebra of communicating processes (ACP) is an algebraic approach to reasoning about concurrent systems. It is a member of the family of mathematical theories of concurrency known as process algebras or process calculi. ACP was initially developed by Jan Bergstra and Jan Willem Klop in 1982, as part of an effort to investigate the solutions of unguarded recursive equations. More so than the other seminal process calculi (CCS and CSP), the development of ACP focused on the algebra of processes, and sought to create an abstract, generalized axiomatic system for processes, and in fact the term process algebra was coined during the research that led to ACP. == Informal description == ACP is fundamentally an algebra, in the sense of universal algebra. This algebra is a way to describe systems in terms of algebraic process expressions that define compositions of other processes, or of certain primitive elements. === Primitives === ACP uses instantaneous, atomic actions ( a , b , c , . . . {\displaystyle {\mathit {a,b,c,...}}} ) as its primitives. Some actions have special meaning, such as the action δ {\displaystyle \delta } , which represents deadlock or stagnation, and the action τ {\displaystyle \tau } , which represents a silent action (abstracted actions that have no specific identity). === Algebraic operators === Actions can be combined to form processes using a variety of operators. These operators can be roughly categorized as providing a basic process algebra, concurrency, and communication. Choice and sequencing – the most fundamental of algebraic operators are the alternative operator ( + {\displaystyle +} ), which provides a choice between actions, and the sequencing operator ( ⋅ {\displaystyle \cdot } ), which specifies an ordering on actions. 
So, for example, the process ( a + b ) ⋅ c {\displaystyle (a+b)\cdot c} first chooses to perform either a {\displaystyle {\mathit {a}}} or b {\displaystyle {\mathit {b}}} , and then performs action c {\displaystyle {\mathit {c}}} . How the choice between a {\displaystyle {\mathit {a}}} and b {\displaystyle {\mathit {b}}} is made does not matter and is left unspecified. Note that alternative composition is commutative but sequential composition is not (because time flows forward). Concurrency – to allow the description of concurrency, ACP provides the merge and left-merge operators. The merge operator, | | {\displaystyle \vert \vert } , represents the parallel composition of two processes, the individual actions of which are interleaved. The left-merge operator, | ⌊ {\displaystyle \vert \lfloor } , is an auxiliary operator with similar semantics to the merge, but with a commitment to always choose its initial step from the left-hand process. As an example, the process ( a ⋅ b ) | | ( c ⋅ d ) {\displaystyle (a\cdot b)\vert \vert (c\cdot d)} may perform the actions a , b , c , d {\displaystyle a,b,c,d} in any of the sequences a b c d , a c b d , a c d b , c a b d , c a d b , c d a b {\displaystyle abcd,acbd,acdb,cabd,cadb,cdab} . On the other hand, the process ( a ⋅ b ) | ⌊ ( c ⋅ d ) {\displaystyle (a\cdot b)\vert \lfloor (c\cdot d)} may only perform the sequences a b c d , a c b d , a c d b {\displaystyle abcd,acbd,acdb} since the left-merge operator ensures that the action a {\displaystyle {\mathit {a}}} occurs first. Communication — interaction (or communication) between processes is represented using the binary communications operator, | {\displaystyle \vert } . For example, the actions r ( d ) {\displaystyle r(d)} and w ( d ) {\displaystyle w(d)} might be interpreted as the reading and writing of a data item d ∈ D = { 1 , 2 , 3 , … } {\displaystyle d\in D=\{1,2,3,\ldots \}} , respectively. 
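The interleavings claimed in the merge example above can be checked mechanically. A minimal Python sketch (the function name is illustrative, not from any ACP toolset) enumerates the interleavings of two action sequences:

```python
def interleavings(p, q):
    """All interleavings of the action sequences p and q that
    preserve the internal order of each (the merge operator ||)."""
    if not p:
        return [q]
    if not q:
        return [p]
    return ([p[0] + s for s in interleavings(p[1:], q)] +
            [q[0] + s for s in interleavings(p, q[1:])])

merge = sorted(interleavings("ab", "cd"))
print(merge)  # ['abcd', 'acbd', 'acdb', 'cabd', 'cadb', 'cdab']

# The left-merge commits the first step to the left-hand process.
left_merge = [s for s in merge if s.startswith("a")]
print(left_merge)  # ['abcd', 'acbd', 'acdb']
```

This reproduces the six traces of (a·b)||(c·d) and the three traces of (a·b)⌊(c·d) given above.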
Then the process ( ∑ d ∈ D r ( d ) ⋅ y ) | ( w ( 1 ) ⋅ z ) {\displaystyle \left(\sum _{d\in D}r(d)\cdot y\right)\vert (w(1)\cdot z)} will communicate the value 1 {\displaystyle 1} from the right component process to the left component process (i.e. the identifier d {\displaystyle {\mathit {d}}} is bound to the value 1 {\displaystyle 1} , and free instances of d {\displaystyle {\mathit {d}}} in the process y {\displaystyle {\mathit {y}}} take on that value), and then behave as the merge of y {\displaystyle {\mathit {y}}} and z {\displaystyle {\mathit {z}}} . Abstraction — the abstraction operator, τ I {\displaystyle \tau _{I}} , is a way to "hide" certain actions, and treat them as events that are internal to the systems being modelled. Abstracted actions are converted to the silent step action τ {\displaystyle \tau } . In some cases, these silent steps can also be removed from the process expression as part of the abstraction process. For example, τ { c } ( ( a + b ) ⋅ c ) = ( a + b ) ⋅ τ {\displaystyle \tau _{\{c\}}((a+b)\cdot c)=(a+b)\cdot \tau } which, in this case, can be reduced to a + b {\displaystyle a+b} since the event c {\displaystyle {\mathit {c}}} is no longer observable and has no observable effects. == Formal definition == ACP fundamentally adopts an axiomatic, algebraic approach to the formal definition of its various operators. The axioms presented below comprise the full axiomatic system for ACP τ {\displaystyle \tau } (ACP with abstraction). 
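The abstraction example above, τ_{c}((a + b)·c) = (a + b)·τ, can be mimicked on a toy term representation. This sketch (nested tuples, with the string "tau" standing for τ) is illustrative and not taken from any ACP implementation:

```python
TAU = "tau"

def abstract(term, hidden):
    """The abstraction operator tau_I: rename every atomic action
    in the set `hidden` to the silent step tau.  A term is either
    an action name (a string) or a tuple (op, left, right) with
    op in {"+", "."} for alternative and sequential composition."""
    if isinstance(term, str):
        return TAU if term in hidden else term
    op, left, right = term
    return (op, abstract(left, hidden), abstract(right, hidden))

# tau_{c}((a + b) . c) = (a + b) . tau
process = (".", ("+", "a", "b"), "c")
print(abstract(process, {"c"}))  # ('.', ('+', 'a', 'b'), 'tau')
```

Removing the trailing silent step, as in the reduction to a + b mentioned above, corresponds to the axiom x·τ = x from the abstraction axioms below; this sketch performs only the renaming.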
=== Basic process algebra === Using the alternative and sequential composition operators, ACP defines a basic process algebra which satisfies the axioms x + y = y + x ( x + y ) + z = x + ( y + z ) x + x = x ( x + y ) ⋅ z = ( x ⋅ z ) + ( y ⋅ z ) ( x ⋅ y ) ⋅ z = x ⋅ ( y ⋅ z ) {\displaystyle {\begin{matrix}x+y&=&y+x\\(x+y)+z&=&x+(y+z)\\x+x&=&x\\(x+y)\cdot z&=&(x\cdot z)+(y\cdot z)\\(x\cdot y)\cdot z&=&x\cdot (y\cdot z)\end{matrix}}} === Deadlock === Beyond the basic algebra, two additional axioms define the relationships between the alternative and sequencing operators, and the deadlock action, δ {\displaystyle \delta } δ + x = x δ ⋅ x = δ {\displaystyle {\begin{matrix}\delta +x&=&x\\\delta \cdot x&=&\delta \end{matrix}}} === Concurrency and interaction === The axioms associated with the merge, left-merge, and communication operators are x | | y = x | ⌊ y + y | ⌊ x + x | y a ⋅ x | ⌊ y = a ⋅ ( x | | y ) a | ⌊ y = a ⋅ y ( x + y ) | ⌊ z = ( x | ⌊ z ) + ( y | ⌊ z ) a ⋅ x | b = ( a | b ) ⋅ x a | b ⋅ x = ( a | b ) ⋅ x a ⋅ x | b ⋅ y = ( a | b ) ⋅ ( x | | y ) ( x + y ) | z = x | z + y | z x | ( y + z ) = x | y + x | z {\displaystyle {\begin{matrix}x\vert \vert y&=&x\vert \lfloor y+y\vert \lfloor x+x\vert y\\a\cdot x\vert \lfloor y&=&a\cdot (x\vert \vert y)\\a\vert \lfloor y&=&a\cdot y\\(x+y)\vert \lfloor z&=&(x\vert \lfloor z)+(y\vert \lfloor z)\\a\cdot x\vert b&=&(a\vert b)\cdot x\\a\vert b\cdot x&=&(a\vert b)\cdot x\\a\cdot x\vert b\cdot y&=&(a\vert b)\cdot (x\vert \vert y)\\(x+y)\vert z&=&x\vert z+y\vert z\\x\vert (y+z)&=&x\vert y+x\vert z\end{matrix}}} When the communications operator is applied to actions alone, rather than processes, it is interpreted as a binary function from actions to actions, | : A × A → A {\displaystyle \vert :A\times A\rightarrow A} . 
The definition of this function defines the possible interactions between processes — those pairs of actions that do not constitute interactions are mapped to the deadlock action, δ {\displaystyle \delta } , while permitted interaction pairs are mapped to corresponding single actions representing the occurrence of an interaction. For example, the communications function might specify that a | a → c {\displaystyle a\vert a\rightarrow c} which indicates that a successful interaction a | a {\displaystyle a\vert a} will be reduced to the action c {\displaystyle c} . ACP also includes an encapsulation operator, ∂ H {\displaystyle \partial _{H}} for some H ⊆ A {\displaystyle H\subseteq A} , which is used to convert unsuccessful communication attempts (i.e. elements of H {\displaystyle H} that have not been reduced via the communication function) to the deadlock action. The axioms associated with the communications function and encapsulation operator are a | b = b | a ( a | b ) | c = a | ( b | c ) a | δ = δ ∂ H ( a ) = a if a ∉ H ∂ H ( a ) = δ if a ∈ H ∂ H ( x + y ) = ∂ H ( x ) + ∂ H ( y ) ∂ H ( x ⋅ y ) = ∂ H ( x ) ⋅ ∂ H ( y ) {\displaystyle {\begin{matrix}a\vert b&=&b\vert a\\(a\vert b)\vert c&=&a\vert (b\vert c)\\a\vert \delta &=&\delta \\\partial _{H}(a)&=&a{\mbox{ if }}a\notin H\\\partial _{H}(a)&=&\delta {\mbox{ if }}a\in H\\\partial _{H}(x+y)&=&\partial _{H}(x)+\partial _{H}(y)\\\partial _{H}(x\cdot y)&=&\partial _{H}(x)\cdot \partial _{H}(y)\\\end{matrix}}} === Abstraction === The axioms associated with the abstraction operator are τ I ( τ ) = τ τ I ( a ) = a if a ∉ I τ I ( a ) = τ if a ∈ I τ I ( x + y ) = τ I ( x ) + τ I ( y ) τ I ( x ⋅ y ) = τ I ( x ) ⋅ τ I ( y ) ∂ H ( τ ) = τ x ⋅ τ = x τ ⋅ x = τ ⋅ x + x a ⋅ ( τ ⋅ x + y ) = a ⋅ ( τ ⋅ x + y ) + a ⋅ x τ ⋅ x | ⌊ y = τ ⋅ ( x | | y ) τ | ⌊ x = τ ⋅ x τ | x = δ x | τ = δ τ ⋅ x | y = x | y x | τ ⋅ y = x | y {\displaystyle {\begin{matrix}\tau _{I}(\tau )&=&\tau \\\tau _{I}(a)&=&a{\mbox{ if }}a\notin I\\\tau _{I}(a)&=&\tau 
{\mbox{ if }}a\in I\\\tau _{I}(x+y)&=&\tau _{I}(x)+\tau _{I}(y)\\\tau _{I}(x\cdot y)&=&\tau _{I}(x)\cdot \tau _{I}(y)\\\partial _{H}(\tau )&=&\tau \\x\cdot \tau &=&x\\\tau \cdot x&=&\tau \cdot x+x\\a\cdot (\tau \cdot x+y)&=&a\cdot (\tau \cdot x+y)+a\cdot x\\\tau \cdot x\vert \lfloor y&=&\tau \cdot (x\vert \vert y)\\\tau \vert \lfloor x&=&\tau \cdot x\\\tau \vert x&=&\delta \\x\vert \tau &=&\delta \\\tau \cdot x\vert y&=&x\vert y\\x\vert \tau \cdot y&=&x\vert y\end{matrix}}} Note that the action a in the above list may take the value δ (but of course, δ cannot belong to the abstraction set I). == Related formalisms == ACP has served as the basis or inspiration for several other formalisms that can be used to describe and analyze concurrent systems, including: PSF μCRL mCRL2 HyPA — a process algebra for hybrid systems == References ==
Wikipedia/Algebra_of_Communicating_Processes
In theoretical computer science, the π-calculus (or pi-calculus) is a process calculus. The π-calculus allows channel names to be communicated along the channels themselves, and in this way it is able to describe concurrent computations whose network configuration may change during the computation. The π-calculus has few terms and is a small, yet expressive language (see § Syntax). Functional programs can be encoded into the π-calculus, and the encoding emphasises the dialogue nature of computation, drawing connections with game semantics. Extensions of the π-calculus, such as the spi calculus and applied π, have been successful in reasoning about cryptographic protocols. Besides its original use in describing concurrent systems, the π-calculus has also been used to reason about business processes, molecular biology, and autonomous agents in artificial intelligence. == Informal definition == The π-calculus belongs to the family of process calculi, mathematical formalisms for describing and analyzing properties of concurrent computation. In fact, the π-calculus, like the λ-calculus, is so minimal that it does not contain primitives such as numbers, booleans, data structures, variables, functions, or even the usual control flow statements (such as if-then-else, while). === Process constructs === Central to the π-calculus is the notion of name. The simplicity of the calculus lies in the dual role that names play as communication channels and variables. The process constructs available in the calculus are the following (a precise definition is given in the following section): concurrency, written P ∣ Q {\displaystyle P\mid Q} , where P {\displaystyle P} and Q {\displaystyle Q} are two processes or threads executed concurrently. communication, where input prefixing c ( x ) . 
P {\displaystyle c\left(x\right).P} is a process waiting for a message that was sent on a communication channel named c {\displaystyle c} before proceeding as P {\displaystyle P} , binding the name received to the name x. Typically, this models either a process expecting a communication from the network or a label c usable only once by a goto c operation. output prefixing c ¯ ⟨ y ⟩ . P {\displaystyle {\overline {c}}\langle y\rangle .P} describes that the name y {\displaystyle y} is emitted on channel c {\displaystyle c} before proceeding as P {\displaystyle P} . Typically, this models either sending a message on the network or a goto c operation. replication, written ! P {\displaystyle !\,P} , which may be seen as a process which can always create a new copy of P {\displaystyle P} . Typically, this models either a network service or a label c waiting for any number of goto c operations. creation of a new name, written ( ν x ) P {\displaystyle \left(\nu x\right)P} , which may be seen as a process allocating a new constant x within P {\displaystyle P} . The constants of π-calculus are defined by their names only and are always communication channels. Creation of a new name in a process is also called restriction. the nil process, written 0 {\displaystyle 0} , is a process whose execution is complete and has stopped. Although the minimalism of the π-calculus prevents us from writing programs in the normal sense, it is easy to extend the calculus. In particular, it is easy to define both control structures such as recursion, loops and sequential composition and datatypes such as first-order functions, truth values, lists and integers. Moreover, extensions of the π-calculus have been proposed which take into account distribution or public-key cryptography. The applied π-calculus due to Abadi and Fournet [1] puts these various extensions on a formal footing by extending the π-calculus with arbitrary datatypes. 
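The process constructs above admit a direct representation as a small abstract syntax tree. The following Python sketch is illustrative (the class names and the `free_names` helper are assumptions, not a standard API); `free_names` follows the usual inductive definition, under which restriction and input prefixes bind names:

```python
from dataclasses import dataclass

@dataclass
class Nil:            # 0 : the stopped process
    pass

@dataclass
class Input:          # c(x).P : receive on c, bind to x, continue as P
    chan: str
    bind: str
    cont: object

@dataclass
class Output:         # c<y>.P : send y on c, continue as P
    chan: str
    msg: str
    cont: object

@dataclass
class Par:            # P | Q : parallel composition
    left: object
    right: object

@dataclass
class New:            # (nu x)P : restrict a fresh name x to P
    name: str
    body: object

@dataclass
class Bang:           # !P : replication
    body: object

def free_names(p):
    """Free names of a process; input prefix and restriction bind."""
    if isinstance(p, Nil):
        return set()
    if isinstance(p, Input):
        return {p.chan} | (free_names(p.cont) - {p.bind})
    if isinstance(p, Output):
        return {p.chan, p.msg} | free_names(p.cont)
    if isinstance(p, Par):
        return free_names(p.left) | free_names(p.right)
    if isinstance(p, New):
        return free_names(p.body) - {p.name}
    if isinstance(p, Bang):
        return free_names(p.body)

# (nu x)( x<z>.0 | x(y).0 ) : only z is free
example = New("x", Par(Output("x", "z", Nil()), Input("x", "y", Nil())))
print(free_names(example))  # {'z'}
```

The same structure can host the other definitions of the calculus (structural congruence, reduction) as functions over these node types.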
=== A small example === Below is a tiny example of a process which consists of three parallel components. The channel name x is only known by the first two components. ( ν x ) ( x ¯ ⟨ z ⟩ . 0 | x ( y ) . y ¯ ⟨ x ⟩ . x ( y ) . 0 ) | z ( v ) . v ¯ ⟨ v ⟩ .0 {\displaystyle {\begin{aligned}(\nu x)&\;(\;{\overline {x}}\langle z\rangle .\;0\\&\;|\;x(y).\;{\overline {y}}\langle x\rangle .\;x(y).\;0\;)\\&\;|\;z(v).\;{\overline {v}}\langle v\rangle .0\end{aligned}}} The first two components are able to communicate on the channel x, and the name y becomes bound to z. The next step in the process is therefore ( ν x ) ( 0 | z ¯ ⟨ x ⟩ . x ( y ) . 0 ) | z ( v ) . v ¯ ⟨ v ⟩ . 0 {\displaystyle {\begin{aligned}(\nu x)&\;(\;0\\&\;|\;{\overline {z}}\langle x\rangle .\;x(y).\;0\;)\\&\;|\;z(v).\;{\overline {v}}\langle v\rangle .\;0\end{aligned}}} Note that the remaining y is not affected because it is defined in an inner scope. The second and third parallel components can now communicate on the channel name z, and the name v becomes bound to x. The next step in the process is now ( ν x ) ( 0 | x ( y ) . 0 | x ¯ ⟨ x ⟩ . 0 ) {\displaystyle {\begin{aligned}(\nu x)&\;(\;0\\&\;|\;x(y).\;0\\&\;|\;{\overline {x}}\langle x\rangle .\;0\;)\end{aligned}}} Note that since the local name x has been output, the scope of x is extended to cover the third component as well. Finally, the channel x can be used for sending the name x. After that all concurrently executing processes have stopped ( ν x ) ( 0 | 0 | 0 ) {\displaystyle {\begin{aligned}(\nu x)&\;(\;0\\&\;|\;0\\&\;|\;0\;)\end{aligned}}} == Formal definition == === Syntax === Let Χ be a set of objects called names. The abstract syntax for the π-calculus is built from the following BNF grammar (where x and y are any names from Χ): P , Q ::= x ( y ) . P Receive on channel x , bind the result to y , then run P | x ¯ ⟨ y ⟩ . 
P Send the value y over channel x , then run P | P | Q Run P and Q simultaneously | ( ν x ) P Create a new channel x and run P | ! P Repeatedly spawn copies of P | 0 Terminate the process {\displaystyle {\begin{aligned}P,Q::=&\;x(y).P\,\,\,\,\,&{\text{Receive on channel }}x{\text{, bind the result to }}y{\text{, then run }}P\\&\;|\;{\overline {x}}\langle y\rangle .P\,\,\,\,\,&{\text{Send the value }}y{\text{ over channel }}x{\text{, then run }}P\\&\;|\;P|Q\,\,\,\,\,\,\,\,\,&{\text{Run }}P{\text{ and }}Q{\text{ simultaneously}}\\&\;|\;(\nu x)P\,\,\,&{\text{Create a new channel }}x{\text{ and run }}P\\&\;|\;!P\,\,\,&{\text{Repeatedly spawn copies of }}P\\&\;|\;0&{\text{Terminate the process}}\end{aligned}}} In the concrete syntax below, the prefixes bind more tightly than the parallel composition (|), and parentheses are used to disambiguate. Names are bound by the restriction and input prefix constructs. Formally, the set of free names fn(P) of a process P in π–calculus is defined inductively: fn(0) = ∅; fn(x(y).P) = {x} ∪ (fn(P) ∖ {y}); fn(x̄⟨y⟩.P) = {x, y} ∪ fn(P); fn(P | Q) = fn(P) ∪ fn(Q); fn((νx)P) = fn(P) ∖ {x}; fn(!P) = fn(P). The set of bound names of a process is defined as the names of a process that are not in the set of free names. === Structural congruence === Central to both the reduction semantics and the labelled transition semantics is the notion of structural congruence. Two processes are structurally congruent if they are identical up to structure. In particular, parallel composition is commutative and associative. More precisely, structural congruence is defined as the least equivalence relation preserved by the process constructs and satisfying: Alpha-conversion: P ≡ Q {\displaystyle P\equiv Q} if Q {\displaystyle Q} can be obtained from P {\displaystyle P} by renaming one or more bound names in P {\displaystyle P} . 
Axioms for parallel composition: P | Q ≡ Q | P {\displaystyle P|Q\equiv Q|P} ( P | Q ) | R ≡ P | ( Q | R ) {\displaystyle (P|Q)|R\equiv P|(Q|R)} P | 0 ≡ P {\displaystyle P|0\equiv P} Axioms for restriction: ( ν x ) ( ν y ) P ≡ ( ν y ) ( ν x ) P {\displaystyle (\nu x)(\nu y)P\equiv (\nu y)(\nu x)P} ( ν x ) 0 ≡ 0 {\displaystyle (\nu x)0\equiv 0} Axiom for replication: ! P ≡ P | ! P {\displaystyle !P\equiv P|!P} Axiom relating restriction and parallel: ( ν x ) ( P | Q ) ≡ ( ν x ) P | Q {\displaystyle (\nu x)(P|Q)\equiv (\nu x)P|Q} if x is not a free name of Q {\displaystyle Q} . This last axiom is known as the "scope extension" axiom. This axiom is central, since it describes how a bound name x may be extruded by an output action, causing the scope of x to be extended. In cases where x is a free name of Q {\displaystyle Q} , alpha-conversion may be used to allow extension to proceed. === Reduction semantics === We write P → P ′ {\displaystyle P\rightarrow P'} if P {\displaystyle P} can perform a computation step, following which it is now P ′ {\displaystyle P'} . This reduction relation → {\displaystyle \rightarrow } is defined as the least relation closed under a set of reduction rules. The main reduction rule which captures the ability of processes to communicate through channels is the following: x ¯ ⟨ z ⟩ . P | x ( y ) . Q → P | Q [ z / y ] {\displaystyle {\overline {x}}\langle z\rangle .P|x(y).Q\rightarrow P|Q[z/y]} where Q [ z / y ] {\displaystyle Q[z/y]} denotes the process Q {\displaystyle Q} in which the free name z {\displaystyle z} has been substituted for the free occurrences of y {\displaystyle y} . If a free occurrence of y {\displaystyle y} occurs in a location where z {\displaystyle z} would not be free, alpha-conversion may be required. There are three additional rules: If P → Q {\displaystyle P\rightarrow Q} then also P | R → Q | R {\displaystyle P|R\rightarrow Q|R} . This rule says that parallel composition does not inhibit computation. 
If P → Q {\displaystyle P\rightarrow Q} , then also ( ν x ) P → ( ν x ) Q {\displaystyle (\nu x)P\rightarrow (\nu x)Q} . This rule ensures that computation can proceed underneath a restriction. If P ≡ P ′ {\displaystyle P\equiv P'} and P ′ → Q ′ {\displaystyle P'\rightarrow Q'} and Q ′ ≡ Q {\displaystyle Q'\equiv Q} , then also P → Q {\displaystyle P\rightarrow Q} . The latter rule states that processes that are structurally congruent have the same reductions. === The example revisited === Consider again the process ( ν x ) ( x ¯ ⟨ z ⟩ .0 | x ( y ) . y ¯ ⟨ x ⟩ . x ( y ) .0 ) | z ( v ) . v ¯ ⟨ v ⟩ .0 {\displaystyle (\nu x)({\overline {x}}\langle z\rangle .0|x(y).{\overline {y}}\langle x\rangle .x(y).0)|z(v).{\overline {v}}\langle v\rangle .0} Applying the definition of the reduction semantics, we get the reduction ( ν x ) ( x ¯ ⟨ z ⟩ .0 | x ( y ) . y ¯ ⟨ x ⟩ . x ( y ) .0 ) | z ( v ) . v ¯ ⟨ v ⟩ .0 → ( ν x ) ( 0 | z ¯ ⟨ x ⟩ . x ( y ) .0 ) | z ( v ) . v ¯ ⟨ v ⟩ .0 {\displaystyle (\nu x)({\overline {x}}\langle z\rangle .0|x(y).{\overline {y}}\langle x\rangle .x(y).0)|z(v).{\overline {v}}\langle v\rangle .0\rightarrow (\nu x)(0|{\overline {z}}\langle x\rangle .x(y).0)|z(v).{\overline {v}}\langle v\rangle .0} Note how, applying the reduction substitution axiom, free occurrences of y {\displaystyle y} are now labeled as z {\displaystyle z} . Next, we get the reduction ( ν x ) ( 0 | z ¯ ⟨ x ⟩ . x ( y ) .0 ) | z ( v ) . v ¯ ⟨ v ⟩ .0 → ( ν x ) ( 0 | x ( y ) .0 | x ¯ ⟨ x ⟩ .0 ) {\displaystyle (\nu x)(0|{\overline {z}}\langle x\rangle .x(y).0)|z(v).{\overline {v}}\langle v\rangle .0\rightarrow (\nu x)(0|x(y).0|{\overline {x}}\langle x\rangle .0)} Note that since the local name x has been output, the scope of x is extended to cover the third component as well. This was captured using the scope extension axiom. 
Next, using the reduction substitution axiom, we get ( ν x ) ( 0 | 0 | 0 ) {\displaystyle (\nu x)(0|0|0)} Finally, using the axioms for parallel composition and restriction, we get 0 {\displaystyle 0} === Labelled semantics === Alternatively, one may give the pi-calculus a labelled transition semantics (as has been done with the Calculus of Communicating Systems). In this semantics, a transition from a state P {\displaystyle P} to some other state P ′ {\displaystyle P'} after an action α {\displaystyle \alpha } is notated as: P → α P ′ {\displaystyle P\,{\xrightarrow {\overset {}{\alpha }}}P'} Where states P {\displaystyle P} and P ′ {\displaystyle P'} represent processes and α {\displaystyle \alpha } is either an input action a ( x ) {\displaystyle a(x)} , an output action a ¯ ⟨ x ⟩ {\displaystyle {\overline {a}}\langle x\rangle } , or a silent action τ. A standard result about the labelled semantics is that it agrees with the reduction semantics up to structural congruence, in the sense that P → P ′ {\displaystyle P\rightarrow P'} if and only if P → τ ≡ P ′ {\displaystyle P\,\xrightarrow {\overset {}{\tau }} \equiv P'} == Extensions and variants == The syntax given above is a minimal one. However, the syntax may be modified in various ways. A nondeterministic choice operator P + Q {\displaystyle P+Q} can be added to the syntax. A test for name equality [ x = y ] P {\displaystyle [x=y]P} can be added to the syntax. This match operator can proceed as P {\displaystyle P} if and only if x and y {\displaystyle y} are the same name. Similarly, one may add a mismatch operator for name inequality. Practical programs which can pass names (URLs or pointers) often use such functionality: for directly modeling such functionality inside the calculus, this and related extensions are often useful. The asynchronous π-calculus allows only outputs with no continuation, i.e. 
output atoms of the form x ¯ ⟨ y ⟩ {\displaystyle {\overline {x}}\langle y\rangle } , yielding a smaller calculus. However, any process in the original calculus can be represented by the smaller asynchronous π-calculus using an extra channel to simulate explicit acknowledgement from the receiving process. Since a continuation-free output can model a message-in-transit, this fragment shows that the original π-calculus, which is intuitively based on synchronous communication, has an expressive asynchronous communication model inside its syntax. However, the nondeterministic choice operator defined above cannot be expressed in this way, as an unguarded choice would be converted into a guarded one; this fact has been used to demonstrate that the asynchronous calculus is strictly less expressive than the synchronous one (with the choice operator). The polyadic π-calculus allows communicating more than one name in a single action: x ¯ ⟨ z 1 , . . . , z n ⟩ . P {\displaystyle {\overline {x}}\langle z_{1},...,z_{n}\rangle .P} (polyadic output) and x ( z 1 , . . . , z n ) . P {\displaystyle x(z_{1},...,z_{n}).P} (polyadic input). This polyadic extension, which is useful especially when studying types for name passing processes, can be encoded in the monadic calculus by passing the name of a private channel through which the multiple arguments are then passed in sequence. The encoding is defined recursively by the clauses x ¯ ⟨ y 1 , ⋯ , y n ⟩ . P {\displaystyle {\overline {x}}\langle y_{1},\cdots ,y_{n}\rangle .P} is encoded as ( ν w ) x ¯ ⟨ w ⟩ . w ¯ ⟨ y 1 ⟩ . ⋯ . w ¯ ⟨ y n ⟩ . [ P ] {\displaystyle (\nu w){\overline {x}}\langle w\rangle .{\overline {w}}\langle y_{1}\rangle .\cdots .{\overline {w}}\langle y_{n}\rangle .[P]} x ( y 1 , ⋯ , y n ) . P {\displaystyle x(y_{1},\cdots ,y_{n}).P} is encoded as x ( w ) . w ( y 1 ) . ⋯ . w ( y n ) . [ P ] {\displaystyle x(w).w(y_{1}).\cdots .w(y_{n}).[P]} All other process constructs are left unchanged by the encoding. 
In the above, [ P ] {\displaystyle [P]} denotes the encoding of all prefixes in the continuation P {\displaystyle P} in the same way. The full power of replication ! P {\displaystyle !P} is not needed. Often, one only considers replicated input ! x ( y ) . P {\displaystyle !x(y).P} , whose structural congruence axiom is ! x ( y ) . P ≡ x ( y ) . P | ! x ( y ) . P {\displaystyle !x(y).P\equiv x(y).P|!x(y).P} . Replicated input processes such as ! x ( y ) . P {\displaystyle !x(y).P} can be understood as servers, waiting on channel x to be invoked by clients. Invocation of a server spawns a new copy of the process P [ a / y ] {\displaystyle P[a/y]} , where a is the name passed by the client to the server, during the latter's invocation. A higher order π-calculus can be defined where not only names but processes are sent through channels. The key reduction rule for the higher order case is x ¯ ⟨ R ⟩ . P | x ( Y ) . Q → P | Q [ R / Y ] {\displaystyle {\overline {x}}\langle R\rangle .P|x(Y).Q\rightarrow P|Q[R/Y]} Here, Y {\displaystyle Y} denotes a process variable which can be instantiated by a process term. Sangiorgi established that the ability to pass processes does not increase the expressivity of the π-calculus: passing a process P can be simulated by just passing a name that points to P instead. 
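The first-order communication step (of which the higher-order rule above is the process-passing analogue) can be sketched concretely. In this illustrative Python fragment (tuples for prefixes; not code from any π-calculus tool), substitution is naive and assumes all bound names are distinct, sidestepping alpha-conversion:

```python
def subst(proc, old, new):
    """proc[new/old]: replace the name `old` by `new` everywhere.
    A real implementation must avoid capture via alpha-conversion;
    this sketch assumes all bound names are pairwise distinct."""
    return [(kind, new if c == old else c, new if v == old else v)
            for kind, c, v in proc]

def communicate(sender, receiver):
    """One step of  x<z>.P | x(y).Q  ->  P | Q[z/y].
    A process is a list of prefixes ("out"|"in", channel, payload)."""
    (k1, chan, msg), *p = sender
    (k2, chan2, var), *q = receiver
    assert k1 == "out" and k2 == "in" and chan == chan2
    return p, subst(q, var, msg)

# x<z>.0 | x(y).y<w>.0  ->  0 | z<w>.0
p, q = communicate([("out", "x", "z")],
                   [("in", "x", "y"), ("out", "y", "w")])
print(p, q)  # [] [('out', 'z', 'w')]
```

The substitution in the continuation is what lets a received channel name be used for later communication, which is exactly the mobility illustrated in the small example earlier.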
The features of the π-calculus that make these encodings possible are name-passing and replication (or, equivalently, recursively defined agents). In the absence of replication/recursion, the π-calculus ceases to be Turing-complete. This can be seen by the fact that bisimulation equivalence becomes decidable for the recursion-free calculus and even for the finite-control π-calculus where the number of parallel components in any process is bounded by a constant. == Bisimulations in the π-calculus == As with other process calculi, the π-calculus allows for a definition of bisimulation equivalence. In the π-calculus, the definition of bisimulation equivalence (also known as bisimilarity) may be based on either the reduction semantics or on the labelled transition semantics. There are (at least) three different ways of defining labelled bisimulation equivalence in the π-calculus: Early, late and open bisimilarity. This stems from the fact that the π-calculus is a value-passing process calculus. In the remainder of this section, we let p {\displaystyle p} and q {\displaystyle q} denote processes and R {\displaystyle R} denote binary relations over processes. === Early and late bisimilarity === Early and late bisimilarity were both formulated by Milner, Parrow and Walker in their original paper on the π-calculus. 
A binary relation R {\displaystyle R} over processes is an early bisimulation if for every pair of processes ( p , q ) ∈ R {\displaystyle (p,q)\in R} , whenever p → a ( x ) p ′ {\displaystyle p\,{\xrightarrow {a(x)}}\,p'} then for every name y {\displaystyle y} there exists some q ′ {\displaystyle q'} such that q → a ( x ) q ′ {\displaystyle q\,{\xrightarrow {a(x)}}\,q'} and ( p ′ [ y / x ] , q ′ [ y / x ] ) ∈ R {\displaystyle (p'[y/x],q'[y/x])\in R} ; for any non-input action α {\displaystyle \alpha } , if p → α p ′ {\displaystyle {p{\xrightarrow {\overset {}{\alpha }}}p'}} then there exists some q ′ {\displaystyle q'} such that q → α q ′ {\displaystyle q{\xrightarrow {\overset {}{\alpha }}}q'} and ( p ′ , q ′ ) ∈ R {\displaystyle (p',q')\in R} ; and symmetric requirements with p {\displaystyle p} and q {\displaystyle q} interchanged. Processes p {\displaystyle p} and q {\displaystyle q} are said to be early bisimilar, written p ∼ e q {\displaystyle p\sim _{e}q} if the pair ( p , q ) ∈ R {\displaystyle (p,q)\in R} for some early bisimulation R {\displaystyle R} . In late bisimilarity, the transition match must be independent of the name being transmitted. A binary relation R {\displaystyle R} over processes is a late bisimulation if for every pair of processes ( p , q ) ∈ R {\displaystyle (p,q)\in R} , whenever p → a ( x ) p ′ {\displaystyle p{\xrightarrow {a(x)}}p'} then for some q ′ {\displaystyle q'} it holds that q → a ( x ) q ′ {\displaystyle q{\xrightarrow {a(x)}}q'} and ( p ′ [ y / x ] , q ′ [ y / x ] ) ∈ R {\displaystyle (p'[y/x],q'[y/x])\in R} for every name y; for any non-input action α {\displaystyle \alpha } , if p → α p ′ {\displaystyle p{\xrightarrow {\overset {}{\alpha }}}p'} then there exists some q ′ {\displaystyle q'} such that q → α q ′ {\displaystyle q{\xrightarrow {\overset {}{\alpha }}}q'} and ( p ′ , q ′ ) ∈ R {\displaystyle (p',q')\in R} ; and symmetric requirements with p {\displaystyle p} and q {\displaystyle q} interchanged.
Processes p {\displaystyle p} and q {\displaystyle q} are said to be late bisimilar, written p ∼ l q {\displaystyle p\sim _{l}q} if the pair ( p , q ) ∈ R {\displaystyle (p,q)\in R} for some late bisimulation R {\displaystyle R} . Both ∼ e {\displaystyle \sim _{e}} and ∼ l {\displaystyle \sim _{l}} suffer from the problem that they are not congruence relations in the sense that they are not preserved by all process constructs. More precisely, there exist processes p {\displaystyle p} and q {\displaystyle q} such that p ∼ e q {\displaystyle p\sim _{e}q} but a ( x ) . p ≁ e a ( x ) . q {\displaystyle a(x).p\not \sim _{e}a(x).q} . One may remedy this problem by considering the maximal congruence relations included in ∼ e {\displaystyle \sim _{e}} and ∼ l {\displaystyle \sim _{l}} , known as early congruence and late congruence, respectively. === Open bisimilarity === Fortunately, a third definition is possible, which avoids this problem, namely that of open bisimilarity, due to Sangiorgi. A binary relation R {\displaystyle R} over processes is an open bisimulation if for every pair of elements ( p , q ) ∈ R {\displaystyle (p,q)\in R} and for every name substitution σ {\displaystyle \sigma } and every action α {\displaystyle \alpha } , whenever p σ → α p ′ {\displaystyle p\sigma {\xrightarrow {\overset {}{\alpha }}}p'} then there exists some q ′ {\displaystyle q'} such that q σ → α q ′ {\displaystyle q\sigma {\xrightarrow {\overset {}{\alpha }}}q'} and ( p ′ , q ′ ) ∈ R {\displaystyle (p',q')\in R} . Processes p {\displaystyle p} and q {\displaystyle q} are said to be open bisimilar, written p ∼ o q {\displaystyle p\sim _{o}q} if the pair ( p , q ) ∈ R {\displaystyle (p,q)\in R} for some open bisimulation R {\displaystyle R} . ==== Early, late and open bisimilarity are distinct ==== Early, late and open bisimilarity are distinct. The containments are proper, so ∼ o ⊊ ∼ l ⊊ ∼ e {\displaystyle \sim _{o}\subsetneq \sim _{l}\subsetneq \sim _{e}} . 
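For finite-state processes, bisimilarity can be computed as a greatest fixed point. The sketch below is an illustration, not an algorithm from the π-calculus literature: it checks strong bisimilarity over a finite labelled transition system, treating actions as opaque labels, and so ignores the name-instantiation subtleties that separate early, late and open bisimilarity (it corresponds to the clause the three definitions share for non-input actions).

```python
from itertools import product

def bisimilar(states, trans, p, q):
    """Check strong bisimilarity of states p and q in a finite LTS.
    trans is a set of (source, action, target) triples."""
    def succ(s, a):
        return {t for (s0, a0, t) in trans if s0 == s and a0 == a}
    actions = {a for (_, a, _) in trans}
    # Start from the full relation and delete pairs violating the
    # transfer condition until the greatest fixed point is reached.
    rel = set(product(states, states))
    changed = True
    while changed:
        changed = False
        for (s, t) in list(rel):
            ok = all(
                all(any((s2, t2) in rel for t2 in succ(t, a)) for s2 in succ(s, a))
                and
                all(any((s2, t2) in rel for s2 in succ(s, a)) for t2 in succ(t, a))
                for a in actions
            )
            if not ok:
                rel.discard((s, t))
                changed = True
    return (p, q) in rel

# q0 -a-> q1 -b-> q2 and r0 -a-> r1 -c-> r2 differ after the first step.
states = {"p0", "p1", "q0", "q1", "q2", "r0", "r1", "r2"}
lts = {("p0", "a", "p1"),
       ("q0", "a", "q1"), ("q1", "b", "q2"),
       ("r0", "a", "r1"), ("r1", "c", "r2")}
print(bisimilar(states, lts, "q0", "r0"))  # False
print(bisimilar(states, lts, "p1", "q2"))  # True (both are inert)
```

The naive refinement loop runs in polynomial time on finite systems; for the full π-calculus the problem is undecidable, as noted above in connection with Turing completeness.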
In certain subcalculi such as the asynchronous pi-calculus, late, early and open bisimilarity are known to coincide. However, in this setting a more appropriate notion is that of asynchronous bisimilarity. In the literature, the term open bisimulation usually refers to a more sophisticated notion, where processes and relations are indexed by distinction relations; details are in Sangiorgi's paper cited above. === Barbed equivalence === Alternatively, one may define bisimulation equivalence directly from the reduction semantics. We write p ⇓ a {\displaystyle p\Downarrow a} if process p {\displaystyle p} immediately allows an input or an output on name a {\displaystyle a} . A binary relation R {\displaystyle R} over processes is a barbed bisimulation if it is a symmetric relation which satisfies that for every pair of elements ( p , q ) ∈ R {\displaystyle (p,q)\in R} we have that (1) p ⇓ a {\displaystyle p\Downarrow a} if and only if q ⇓ a {\displaystyle q\Downarrow a} for every name a {\displaystyle a} and (2) for every reduction p → p ′ {\displaystyle p\rightarrow p'} there exists a reduction q → q ′ {\displaystyle q\rightarrow q'} such that ( p ′ , q ′ ) ∈ R {\displaystyle (p',q')\in R} . We say that p {\displaystyle p} and q {\displaystyle q} are barbed bisimilar if there exists a barbed bisimulation R {\displaystyle R} where ( p , q ) ∈ R {\displaystyle (p,q)\in R} . Defining a context as a π term with a hole [] we say that two processes P and Q are barbed congruent, written P ∼ b Q {\displaystyle P\sim _{b}Q\,\!} , if for every context C [ ] {\displaystyle C[]} we have that C [ P ] {\displaystyle C[P]} and C [ Q ] {\displaystyle C[Q]} are barbed bisimilar. It turns out that barbed congruence coincides with the congruence induced by early bisimilarity. == Applications == The π-calculus has been used to describe many different kinds of concurrent systems. In fact, some of the most recent applications lie outside the realm of traditional computer science. 
In 1997, Martin Abadi and Andrew Gordon proposed an extension of the π-calculus, the Spi-calculus, as a formal notation for describing and reasoning about cryptographic protocols. The spi-calculus extends the π-calculus with primitives for encryption and decryption. In 2001, Martin Abadi and Cedric Fournet generalised the handling of cryptographic protocols to produce the applied π calculus. There is now a large body of work devoted to variants of the applied π calculus, including a number of experimental verification tools. One example is the tool ProVerif [2] due to Bruno Blanchet, based on a translation of the applied π-calculus into Blanchet's logic programming framework. Another example is Cryptyc [3], due to Andrew Gordon and Alan Jeffrey, which uses Woo and Lam's method of correspondence assertions as the basis for type systems that can check for authentication properties of cryptographic protocols. Around 2002, Howard Smith and Peter Fingar became interested in the idea that the π-calculus could serve as a description tool for modeling business processes. As of July 2006, there was discussion in the community about how useful this would be. Most recently, the π-calculus has formed the theoretical basis of Business Process Modeling Language (BPML), and of Microsoft's XLANG. The π-calculus has also attracted interest in molecular biology. In 1999, Aviv Regev and Ehud Shapiro showed that one can describe a cellular signaling pathway (the so-called RTK/MAPK cascade) and in particular the molecular "lego" which implements these tasks of communication in an extension of the π-calculus. Following this seminal paper, other authors described the whole metabolic network of a minimal cell. In 2009, Anthony Nash and Sara Kalvala proposed a π-calculus framework to model the signal transduction that directs Dictyostelium discoideum aggregation.
== History == The π-calculus was originally developed by Robin Milner, Joachim Parrow and David Walker in 1992, based on ideas by Uffe Engberg and Mogens Nielsen. It can be seen as a continuation of Milner's work on the process calculus CCS (Calculus of Communicating Systems). In his Turing lecture, Milner describes the development of the π-calculus as an attempt to capture the uniformity of values and processes in actors. == Implementations == The following programming languages implement the π-calculus or one of its variants: Business Process Modeling Language (BPML) occam-π Pict JoCaml (based on the Join-calculus) RhoLang == Notes == == References == Milner, Robin (1999). Communicating and Mobile Systems: The π-calculus. Cambridge, UK: Cambridge University Press. ISBN 0-521-65869-1. Milner, Robin (1993). "The Polyadic π-Calculus: A Tutorial". In F. L. Hamer; W. Brauer; H. Schwichtenberg (eds.). Logic and Algebra of Specification. Springer-Verlag. Sangiorgi, Davide; Walker, David (2001). The π-calculus: A Theory of Mobile Processes. Cambridge, UK: Cambridge University Press. ISBN 0-521-78177-9.
Wikipedia/Pi-calculus
API Calculus is a process calculus for modeling agent-based systems. The π-calculus, on which it builds, was created by Robin Milner and his colleagues in the early 1990s and has been very successful over the years. The π-calculus is an extension of the process algebra CCS, a tool with algebraic languages specific to processing and formulating statements. It provides a formal theory for modeling systems and reasoning about their behaviors. In the π-calculus, there are two basic kinds of entities: names and processes. In 2002, Shahram Rahimi proposed an upgraded version of the π-calculus and called it the API Calculus. Its stated characteristics are its "Communication Ability, Capacity for Cooperation, Capacity for Reasoning and Learning, Adaptive Behavior, and Trustworthiness." The main purpose of this mobile extension is to let agents better network and communicate with other operators while completing a task. API Calculus is not perfect, however, and has faced problems with its security system. The language has seven features that the π-calculus does not have. Because of the way the calculus is designed and the different abilities it offers, it requires conversion into other programming languages so that it can be used on various devices. Although API Calculus is currently used with various other programming languages, modifications are still being made, since its security model is causing problems for users. == What Does It Do? == The main uses of API Calculus are modeling migration, intelligence, natural grouping and security in agent-based systems. The calculus is usually used together with other programming languages such as Java.
In Java, a popular programming language used by corporations such as IBM, TCS, and Google, API Calculus is commonly used to model agent programs and the equations they involve. == Features == API Calculus has a wide variety of features, including those similar to the π-calculus, but it also has new and improved features: it accepts processes to be passed over communication links; natural grouping of mobile processes is addressed; the calculus dictionary includes the milieu - a level of abstraction between a single mobile agent (a combination of computer software and data that is able to transfer from one computer to another independently and still work on the most recent computer the data was transferred to) and the device as a whole, a very restricted environment involving zero or more agents or other milieus that work closely together to solve computer-based problems; the ability to group together similar hosts (a physical node - connection point - or software program) and processes (computer programs that are running); support for different programming languages; and knowledge units. == Verification Strategy == The software language used throughout the API Calculus program is translated into two other languages: first from API Calculus syntax to ATEL/ATL, then to MOCHA. The pipeline of the translating module follows these stages: Input Module (receives the API Calculus model); Translating Module (converts API syntax to ATEL/ATL syntax); Model Verification (MOCHA); Display Mode. Translating API syntax to ATEL/ATL requires knowledge of the coding transformation to succeed. == Syntax == The API program has its own syntax that it follows in order to make the program run smoothly. The program is broken down into four main categories: terms, processes, knowledge units, and milieus. The terms can be names, terms, facts, rules or functions that are assigned to variable names of the program.
The process is the list of expressions used within the program to solve a calculus problem or equation. The knowledge units, commonly known as parameters, are the facts and rules that can be used to solve the program. Lastly, the milieu is the ability to transfer computer data and information from one computer to another independently. == Flaws == The only flaw of the API Calculus is that it does not have the ability to support a security system on mobile devices such as laptops. The problem is that if any outside source tries to enter the milieu, it is not allowed to enter, because the API Calculus requires proof that it can be a trusted source. == References ==
Wikipedia/API-Calculus
The calculus of communicating systems (CCS) is a process calculus introduced by Robin Milner around 1980 and the title of a book describing the calculus. Its actions model indivisible communications between exactly two participants. The formal language includes primitives for describing parallel composition, summation between actions and scope restriction. CCS is useful for evaluating the qualitative correctness of properties of a system such as deadlock or livelock. According to Milner, "There is nothing canonical about the choice of the basic combinators, even though they were chosen with great attention to economy. What characterises our calculus is not the exact choice of combinators, but rather the choice of interpretation and of mathematical framework". The expressions of the language are interpreted as a labelled transition system. Between these models, bisimilarity is used as a semantic equivalence. == Syntax == Given a set of action names, the set of CCS processes is defined by the following BNF grammar: P ::= 0 | a . P 1 | A | P 1 + P 2 | P 1 | P 2 | P 1 [ b / a ] | P 1 ∖ a {\displaystyle P::=0\,\,\,|\,\,\,a.P_{1}\,\,\,|\,\,\,A\,\,\,|\,\,\,P_{1}+P_{2}\,\,\,|\,\,\,P_{1}|P_{2}\,\,\,|\,\,\,P_{1}[b/a]\,\,\,|\,\,\,P_{1}{\backslash }a\,\,\,} The parts of the syntax are, in the order given above inactive process the inactive process 0 {\displaystyle 0} is a valid CCS process action the process a . 
P 1 {\displaystyle a.P_{1}} can perform an action a {\displaystyle a} and continue as the process P 1 {\displaystyle P_{1}} process identifier write A = d e f P 1 {\displaystyle A{\overset {\underset {\mathrm {def} }{}}{=}}P_{1}} to use the identifier A {\displaystyle A} to refer to the process P 1 {\displaystyle P_{1}} (which may contain the identifier A {\displaystyle A} itself, i.e., recursive definitions are allowed) summation the process P 1 + P 2 {\displaystyle P_{1}+P_{2}} can proceed either as the process P 1 {\displaystyle P_{1}} or the process P 2 {\displaystyle P_{2}} parallel composition P 1 | P 2 {\displaystyle P_{1}|P_{2}} tells that processes P 1 {\displaystyle P_{1}} and P 2 {\displaystyle P_{2}} exist simultaneously renaming P 1 [ b / a ] {\displaystyle P_{1}[b/a]} is the process P 1 {\displaystyle P_{1}} with all actions named a {\displaystyle a} renamed as b {\displaystyle b} restriction P 1 ∖ a {\displaystyle P_{1}{\backslash }a} is the process P 1 {\displaystyle P_{1}} without action a {\displaystyle a} == Related calculi, models, and languages == Communicating sequential processes (CSP), developed by Tony Hoare, is a formal language that arose at a similar time to CCS. The Algebra of Communicating Processes (ACP) was developed by Jan Bergstra and Jan Willem Klop in 1982, and uses an axiomatic approach (in the style of Universal algebra) to reason about a similar class of processes as CCS. The pi-calculus, developed by Robin Milner, Joachim Parrow, and David Walker in the late 80's extends CCS with mobility of communication links, by allowing processes to communicate the names of communication channels themselves. PEPA, developed by Jane Hillston introduces activity timing in terms of exponentially distributed rates and probabilistic choice, allowing performance metrics to be evaluated. 
Reversible Communicating Concurrent Systems (RCCS) introduced by Vincent Danos, Jean Krivine, and others, introduces (partial) reversibility in the execution of CCS processes. Some other languages based on CCS: Calculus of broadcasting systems Language Of Temporal Ordering Specification (LOTOS) Process Calculus for Spatially-Explicit Ecological Models (PALPS) is an extension of CCS with probabilistic choice, locations and attributes for locations Java Orchestration Language Interpreter Engine (Jolie) Models that have been used in the study of CCS-like systems: History monoid Actor model == References == Robin Milner: A Calculus of Communicating Systems, Springer Verlag, ISBN 0-387-10235-3. 1980. Robin Milner, Communication and Concurrency, Prentice Hall, International Series in Computer Science, ISBN 0-13-115007-3. 1989
Wikipedia/Calculus_of_communicating_systems
In concurrent computing, deadlock is any situation in which no member of some group of entities can proceed because each waits for another member, including itself, to take action, such as sending a message or, more commonly, releasing a lock. Deadlocks are a common problem in multiprocessing systems, parallel computing, and distributed systems, because in these contexts systems often use software or hardware locks to arbitrate shared resources and implement process synchronization. In an operating system, a deadlock occurs when a process or thread enters a waiting state because a requested system resource is held by another waiting process, which in turn is waiting for another resource held by another waiting process. If a process remains indefinitely unable to change its state because resources requested by it are being used by another process that itself is waiting, then the system is said to be in a deadlock. In a communications system, deadlocks occur mainly due to loss or corruption of signals rather than contention for resources. == Individually necessary and jointly sufficient conditions for deadlock == A deadlock situation on a resource can arise only if all of the following conditions occur simultaneously in a system: Mutual exclusion: multiple resources are not shareable; only one process at a time may use each resource. Hold and wait or resource holding: a process is currently holding at least one resource and requesting additional resources which are being held by other processes. No preemption: a resource can be released only voluntarily by the process holding it. Circular wait: each process must be waiting for a resource which is being held by another process, which in turn is waiting for the first process to release the resource. In general, there is a set of waiting processes, P = {P1, P2, ..., PN}, such that P1 is waiting for a resource held by P2, P2 is waiting for a resource held by P3 and so on until PN is waiting for a resource held by P1. 
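The circular-wait condition corresponds exactly to a cycle in the wait-for graph of the processes, so it can be checked mechanically. A minimal sketch (function and process names are illustrative):

```python
def has_deadlock(wait_for):
    """Detect circular wait. wait_for maps each process to the set of
    processes holding resources it is waiting for; a cycle in this
    graph is exactly the circular-wait condition P1 -> P2 -> ... -> P1."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:
                return True  # back edge onto the current path: a cycle
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits on P2, P2 on P3, P3 on P1: deadlocked.
print(has_deadlock({"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}}))  # True
# Break the cycle and the deadlock disappears.
print(has_deadlock({"P1": {"P2"}, "P2": {"P3"}, "P3": set()}))   # False
```

This is the single-instance-resource case; as noted below, with multiple instances per resource a cycle only indicates the possibility of deadlock.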
These four conditions are known as the Coffman conditions from their first description in a 1971 article by Edward G. Coffman, Jr. While these conditions are sufficient to produce a deadlock on single-instance resource systems, they only indicate the possibility of deadlock on systems having multiple instances of resources. == Deadlock handling == Most current operating systems cannot prevent deadlocks. When a deadlock occurs, different operating systems respond to it in different non-standard manners. Most approaches work by preventing one of the four Coffman conditions from occurring, especially the fourth one. Major approaches are as follows. === Ignoring deadlock === In this approach, it is assumed that a deadlock will never occur. This is also an application of the Ostrich algorithm. This approach was initially used by MINIX and UNIX. It is used when the time intervals between occurrences of deadlocks are large and the data loss incurred each time is tolerable. Ignoring deadlocks can be safely done if deadlocks are formally proven to never occur. An example is the RTIC framework. === Detection === Under deadlock detection, deadlocks are allowed to occur. The state of the system is then examined to detect that a deadlock has occurred, and it is subsequently corrected. An algorithm is employed that tracks resource allocation and process states, and rolls back and restarts one or more of the processes in order to remove the detected deadlock. Detecting a deadlock that has already occurred is easily possible, since the resources that each process has locked and/or currently requested are known to the resource scheduler of the operating system. After a deadlock is detected, it can be corrected by using one of the following methods: Process termination: one or more processes involved in the deadlock may be aborted. One could choose to abort all competing processes involved in the deadlock. This ensures that deadlock is resolved with certainty and speed.
But the expense is high, as partial computations will be lost. Or, one could choose to abort one process at a time until the deadlock is resolved. This approach has a high overhead, because after each abort an algorithm must determine whether the system is still in deadlock. Several factors must be considered while choosing a candidate for termination, such as the priority and age of the process. Resource preemption: resources allocated to various processes may be successively preempted and allocated to other processes until the deadlock is broken. === Prevention === Deadlock prevention works by preventing one of the four Coffman conditions from occurring. Removing the mutual exclusion condition means that no process will have exclusive access to a resource. This proves impossible for resources that cannot be spooled. But even with spooled resources, deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms. The hold and wait or resource holding conditions may be removed by requiring processes to request all the resources they will need before starting up (or before embarking upon a particular set of operations). This advance knowledge is frequently difficult to satisfy and, in any case, is an inefficient use of resources. Another way is to require processes to request resources only when they hold none: first, they must release all their currently held resources before requesting all the resources they will need from scratch. This too is often impractical, because resources may be allocated and remain unused for long periods. Also, a process requiring a popular resource may have to wait indefinitely, as such a resource may always be allocated to some process, resulting in resource starvation. (These algorithms, such as serializing tokens, are known as the all-or-none algorithms.)
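The all-or-none idea of acquiring every needed resource up front, or none at all, can be sketched in a few lines (the helper name and the timeout value are illustrative, not a standard API):

```python
import threading

def acquire_all(locks, timeout=0.1):
    """All-or-none acquisition: try to take every lock the task will
    need before starting; on any failure, release what was taken and
    report failure so the caller can retry later. The process never
    holds some resources while waiting for others, so the
    hold-and-wait condition cannot arise."""
    taken = []
    for lock in locks:
        if lock.acquire(timeout=timeout):
            taken.append(lock)
        else:
            for held in reversed(taken):
                held.release()
            return False
    return True

a, b = threading.Lock(), threading.Lock()
if acquire_all([a, b]):
    try:
        pass  # critical section using both resources
    finally:
        b.release()
        a.release()
print(a.locked(), b.locked())  # False False
```

Note the cost the text describes: a failed attempt wastes the acquisitions already made, and under contention a process may retry indefinitely (starvation).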
The no preemption condition may also be difficult or impossible to avoid, as a process has to be able to hold a resource for a certain amount of time, or the processing outcome may be inconsistent or thrashing may occur. However, the inability to enforce preemption may interfere with a priority algorithm. Preemption of a "locked out" resource generally implies a rollback, and is to be avoided, since it is very costly in overhead. Algorithms that allow preemption include lock-free and wait-free algorithms and optimistic concurrency control. If a process holding some resources requests another resource that cannot be immediately allocated to it, the condition may be removed by having that process release all the resources it currently holds. The final condition is the circular wait condition. Approaches that avoid circular waits include disabling interrupts during critical sections and using a hierarchy to determine a partial ordering of resources. If no obvious hierarchy exists, even the memory address of resources has been used to determine ordering, and resources are requested in the increasing order of the enumeration. Dijkstra's solution can also be used. === Deadlock avoidance === Like deadlock prevention, the deadlock avoidance approach ensures that deadlock will not occur in a system. The term "deadlock avoidance" sounds very close to "deadlock prevention", but the two are very different in the context of deadlock handling. Deadlock avoidance does not impose any conditions as seen in prevention; instead, each resource request is carefully analyzed to see whether it could be safely fulfilled without causing deadlock. Deadlock avoidance requires that the operating system be given in advance additional information concerning which resources a process will request and use during its lifetime.
A deadlock avoidance algorithm analyzes each request, checking that there is no possibility of deadlock in the future if the requested resource is allocated. The drawback of this approach is its requirement for advance information about how resources will be requested in the future. One of the most used deadlock avoidance algorithms is the Banker's algorithm. == Livelock == A livelock is similar to a deadlock, except that the states of the processes involved in the livelock constantly change with regard to one another, with none progressing. The term was coined by Edward A. Ashcroft in a 1975 paper in connection with an examination of airline booking systems. Livelock is a special case of resource starvation; the general definition only states that a specific process is not progressing. Livelock is a risk with some algorithms that detect and recover from deadlock. If more than one process takes action, the deadlock detection algorithm can be repeatedly triggered. This can be avoided by ensuring that only one process (chosen arbitrarily or by priority) takes action. == Distributed deadlock == Distributed deadlocks can occur in distributed systems when distributed transactions or concurrency control is being used. Distributed deadlocks can be detected either by constructing a global wait-for graph from local wait-for graphs at a deadlock detector or by a distributed algorithm like edge chasing. Phantom deadlocks are deadlocks that are falsely detected in a distributed system due to internal system delays but do not actually exist. For example, if a process releases a resource R1 and issues a request for R2, and the first message is lost or delayed, a coordinator (detector of deadlocks) could falsely conclude a deadlock (if the request for R2 while having R1 would cause a deadlock). == See also == == References == == Further reading == Kaveh, Nima; Emmerich, Wolfgang. "Deadlock Detection in Distributed Object Systems" (PDF).
London: University College London. {{cite journal}}: Cite journal requires |journal= (help) Bensalem, Saddek; Fernandez, Jean-Claude; Havelund, Klaus; Mounier, Laurent (2006). "Confirmation of deadlock potentials detected by runtime analysis". Proceedings of the 2006 workshop on Parallel and distributed systems: Testing and debugging. ACM. pp. 41–50. CiteSeerX 10.1.1.431.3757. doi:10.1145/1147403.1147412. ISBN 978-1595934147. S2CID 2544690. Coffman, Edward G. Jr.; Elphick, Michael J.; Shoshani, Arie (1971). "System Deadlocks" (PDF). ACM Computing Surveys. 3 (2): 67–78. doi:10.1145/356586.356588. S2CID 15975305. Mogul, Jeffrey C.; Ramakrishnan, K. K. (1997). "Eliminating receive livelock in an interrupt-driven kernel". ACM Transactions on Computer Systems. 15 (3): 217–252. CiteSeerX 10.1.1.156.667. doi:10.1145/263326.263335. ISSN 0734-2071. S2CID 215749380. Havender, James W. (1968). "Avoiding deadlock in multitasking systems". IBM Systems Journal. 7 (2): 74. doi:10.1147/sj.72.0074. Archived from the original on 24 February 2012. Retrieved 27 January 2009. Holliday, JoAnne L.; El Abbadi, Amr. "Distributed Deadlock Detection". Encyclopedia of Distributed Computing. Archived from the original on 2 November 2015. Retrieved 29 December 2004. Knapp, Edgar (1987). "Deadlock detection in distributed databases". ACM Computing Surveys. 19 (4): 303–328. CiteSeerX 10.1.1.137.6874. doi:10.1145/45075.46163. ISSN 0360-0300. S2CID 2353246. Ling, Yibei; Chen, Shigang; Chiang, Jason (2006). "On Optimal Deadlock Detection Scheduling". IEEE Transactions on Computers. 55 (9): 1178–1187. CiteSeerX 10.1.1.259.4311. doi:10.1109/tc.2006.151. S2CID 7813284. == External links == "Advanced Synchronization in Java Threads" by Scott Oaks and Henry Wong Deadlock Detection Agents DeadLock at the Portland Pattern Repository Etymology of "Deadlock"
Wikipedia/Deadlock_(computer_science)
In statistics, an expectation–maximization (EM) algorithm is an iterative method to find (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step. It can be used, for example, to estimate a mixture of Gaussians, or to solve the multiple linear regression problem. == History == The EM algorithm was explained and given its name in a classic 1977 paper by Arthur Dempster, Nan Laird, and Donald Rubin. They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies by Cedric Smith. Another was proposed by H. O. Hartley in 1958, and Hartley and Hocking in 1977, from which many of the ideas in the Dempster–Laird–Rubin paper originated. Yet another was given by S. K. Ng, Thriyambakam Krishnan and G. J. McLachlan in 1977. Hartley's ideas can be broadened to any grouped discrete distribution. A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers, following his collaboration with Per Martin-Löf and Anders Martin-Löf. The Dempster–Laird–Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems, establishing the EM method as an important tool of statistical analysis. See also Meng and van Dyk (1997).
The convergence analysis of the Dempster–Laird–Rubin algorithm was flawed and a correct convergence analysis was published by C. F. Jeff Wu in 1983. Wu's proof established the EM method's convergence also outside of the exponential family, as claimed by Dempster–Laird–Rubin. == Introduction == The EM algorithm is used to find (local) maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly. Typically these models involve latent variables in addition to unknown parameters and known data observations. That is, either missing values exist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, a mixture model can be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which each data point belongs. Finding a maximum likelihood solution typically requires taking the derivatives of the likelihood function with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation. The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically. One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points. It's not obvious that this will work, but it can be proven in this context. 
Additionally, it can be proven that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a local maximum or a saddle point. In general, multiple maxima may occur, with no guarantee that the global maximum will be found. Some likelihoods also have singularities in them, i.e., nonsensical maxima. For example, one of the solutions that may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points. == Description == === The symbols === Given the statistical model which generates a set X {\displaystyle \mathbf {X} } of observed data, a set of unobserved latent data or missing values Z {\displaystyle \mathbf {Z} } , and a vector of unknown parameters θ {\displaystyle {\boldsymbol {\theta }}} , along with a likelihood function L ( θ ; X , Z ) = p ( X , Z ∣ θ ) {\displaystyle L({\boldsymbol {\theta }};\mathbf {X} ,\mathbf {Z} )=p(\mathbf {X} ,\mathbf {Z} \mid {\boldsymbol {\theta }})} , the maximum likelihood estimate (MLE) of the unknown parameters is determined by maximizing the marginal likelihood of the observed data L ( θ ; X ) = p ( X ∣ θ ) = ∫ p ( X , Z ∣ θ ) d Z = ∫ p ( X ∣ Z , θ ) p ( Z ∣ θ ) d Z {\displaystyle L({\boldsymbol {\theta }};\mathbf {X} )=p(\mathbf {X} \mid {\boldsymbol {\theta }})=\int p(\mathbf {X} ,\mathbf {Z} \mid {\boldsymbol {\theta }})\,d\mathbf {Z} =\int p(\mathbf {X} \mid \mathbf {Z} ,{\boldsymbol {\theta }})p(\mathbf {Z} \mid {\boldsymbol {\theta }})\,d\mathbf {Z} } However, this quantity is often intractable since Z {\displaystyle \mathbf {Z} } is unobserved and the distribution of Z {\displaystyle \mathbf {Z} } is unknown before attaining θ {\displaystyle {\boldsymbol {\theta }}} . 
=== The EM algorithm === The EM algorithm seeks to find the maximum likelihood estimate of the marginal likelihood by iteratively applying these two steps: Expectation step (E step): Define Q ( θ ∣ θ ( t ) ) {\displaystyle Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})} as the expected value of the log likelihood function of θ {\displaystyle {\boldsymbol {\theta }}} , with respect to the current conditional distribution of Z {\displaystyle \mathbf {Z} } given X {\displaystyle \mathbf {X} } and the current estimates of the parameters θ ( t ) {\displaystyle {\boldsymbol {\theta }}^{(t)}} : Q ( θ ∣ θ ( t ) ) = E Z ∼ p ( ⋅ | X , θ ( t ) ) ⁡ [ log ⁡ p ( X , Z | θ ) ] {\displaystyle Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})=\operatorname {E} _{\mathbf {Z} \sim p(\cdot |\mathbf {X} ,{\boldsymbol {\theta }}^{(t)})}\left[\log p(\mathbf {X} ,\mathbf {Z} |{\boldsymbol {\theta }})\right]\,} Maximization step (M step): Find the parameters that maximize this quantity: θ ( t + 1 ) = a r g m a x θ Q ( θ ∣ θ ( t ) ) {\displaystyle {\boldsymbol {\theta }}^{(t+1)}={\underset {\boldsymbol {\theta }}{\operatorname {arg\,max} }}\ Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})\,} More succinctly, we can write it as one equation: θ ( t + 1 ) = a r g m a x θ E Z ∼ p ( ⋅ | X , θ ( t ) ) ⁡ [ log ⁡ p ( X , Z | θ ) ] {\displaystyle {\boldsymbol {\theta }}^{(t+1)}={\underset {\boldsymbol {\theta }}{\operatorname {arg\,max} }}\operatorname {E} _{\mathbf {Z} \sim p(\cdot |\mathbf {X} ,{\boldsymbol {\theta }}^{(t)})}\left[\log p(\mathbf {X} ,\mathbf {Z} |{\boldsymbol {\theta }})\right]\,} === Interpretation of the variables === The typical models to which EM is applied use Z {\displaystyle \mathbf {Z} } as a latent variable indicating membership in one of a set of groups: The observed data points X {\displaystyle \mathbf {X} } may be discrete (taking values in a finite or countably infinite set) or continuous (taking values in an uncountably infinite 
set). Associated with each data point may be a vector of observations. The missing values (aka latent variables) Z {\displaystyle \mathbf {Z} } are discrete, drawn from a fixed number of values, and with one latent variable per observed unit. The parameters are continuous, and are of two kinds: Parameters that are associated with all data points, and those associated with a specific value of a latent variable (i.e., associated with all data points whose corresponding latent variable has that value). However, it is possible to apply EM to other sorts of models. The motivation is as follows. If the value of the parameters θ {\displaystyle {\boldsymbol {\theta }}} is known, usually the value of the latent variables Z {\displaystyle \mathbf {Z} } can be found by maximizing the log-likelihood over all possible values of Z {\displaystyle \mathbf {Z} } , either simply by iterating over Z {\displaystyle \mathbf {Z} } or through an algorithm such as the Viterbi algorithm for hidden Markov models. Conversely, if we know the value of the latent variables Z {\displaystyle \mathbf {Z} } , we can find an estimate of the parameters θ {\displaystyle {\boldsymbol {\theta }}} fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both θ {\displaystyle {\boldsymbol {\theta }}} and Z {\displaystyle \mathbf {Z} } are unknown: First, initialize the parameters θ {\displaystyle {\boldsymbol {\theta }}} to some random values. Compute the probability of each possible value of Z {\displaystyle \mathbf {Z} } , given θ {\displaystyle {\boldsymbol {\theta }}} . Then, use the just-computed values of Z {\displaystyle \mathbf {Z} } to compute a better estimate for the parameters θ {\displaystyle {\boldsymbol {\theta }}} . Iterate steps 2 and 3 until convergence. 
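The four steps just described can be sketched for the simplest concrete case, a two-component, one-dimensional Gaussian mixture. This is a minimal illustration, not a reference implementation: the function name `em_gmm_1d`, the initialisation at the data extremes, and the fixed iteration count are all illustrative choices.

```python
import math
import random

def em_gmm_1d(xs, iters=200):
    """EM for a two-component 1-D Gaussian mixture (illustrative sketch)."""
    # Step 1: initialise the parameters to rough values.
    tau = 0.5                      # mixing weight of component 1
    mu1, mu2 = min(xs), max(xs)    # pull the two means apart
    var1 = var2 = (max(xs) - min(xs)) ** 2 / 4 + 1e-6

    def pdf(x, mu, var):
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    for _ in range(iters):
        # Step 2 (E step): posterior probability ("responsibility") that
        # each point came from component 1, under the current parameters.
        r = []
        for x in xs:
            p1 = tau * pdf(x, mu1, var1)
            p2 = (1 - tau) * pdf(x, mu2, var2)
            r.append(p1 / (p1 + p2))
        # Step 3 (M step): re-estimate parameters by responsibility-weighted
        # averaging, exactly as described in the text.
        n1 = sum(r)
        n2 = len(xs) - n1
        tau = n1 / len(xs)
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / n2
        var1 = sum(ri * (x - mu1) ** 2 for ri, x in zip(r, xs)) / n1 + 1e-9
        var2 = sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, xs)) / n2 + 1e-9
        # Step 4: iterate (here for a fixed number of rounds; in practice one
        # would stop when the parameters or the likelihood stop changing).
    return tau, mu1, mu2, var1, var2
```

On well-separated synthetic data (e.g. samples drawn from N(0, 1) and N(5, 1)), the recovered means settle near the true component means.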
The algorithm as just described monotonically increases the observed-data likelihood; equivalently, it monotonically decreases the negative log-likelihood, viewed as a cost function. == Properties == Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to a maximum likelihood estimator. For multimodal distributions, this means that an EM algorithm may converge to a local maximum of the observed data likelihood function, depending on starting values. A variety of heuristic or metaheuristic approaches exist to escape a local maximum, such as random-restart hill climbing (starting with several different random initial estimates θ ( t ) {\displaystyle {\boldsymbol {\theta }}^{(t)}} ), or applying simulated annealing methods. EM is especially useful when the likelihood is an exponential family; see Sundberg (2019, Ch. 8) for a comprehensive treatment: the E step becomes the sum of expectations of sufficient statistics, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive closed-form update expressions for each step, using the Sundberg formula (proved and published by Rolf Sundberg, based on unpublished results of Per Martin-Löf and Anders Martin-Löf). The EM method was modified to compute maximum a posteriori (MAP) estimates for Bayesian inference in the original paper by Dempster, Laird, and Rubin. Other methods exist to find maximum likelihood estimates, such as gradient descent, conjugate gradient, or variants of the Gauss–Newton algorithm. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function. == Proof of correctness == Expectation-Maximization works to improve Q ( θ ∣ θ ( t ) ) {\displaystyle Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})} rather than directly improving log ⁡ p ( X ∣ θ ) {\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }})} . Here it is shown that improvements to the former imply improvements to the latter. 
For any Z {\displaystyle \mathbf {Z} } with non-zero probability p ( Z ∣ X , θ ) {\displaystyle p(\mathbf {Z} \mid \mathbf {X} ,{\boldsymbol {\theta }})} , we can write log ⁡ p ( X ∣ θ ) = log ⁡ p ( X , Z ∣ θ ) − log ⁡ p ( Z ∣ X , θ ) . {\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }})=\log p(\mathbf {X} ,\mathbf {Z} \mid {\boldsymbol {\theta }})-\log p(\mathbf {Z} \mid \mathbf {X} ,{\boldsymbol {\theta }}).} We take the expectation over possible values of the unknown data Z {\displaystyle \mathbf {Z} } under the current parameter estimate θ ( t ) {\displaystyle \theta ^{(t)}} by multiplying both sides by p ( Z ∣ X , θ ( t ) ) {\displaystyle p(\mathbf {Z} \mid \mathbf {X} ,{\boldsymbol {\theta }}^{(t)})} and summing (or integrating) over Z {\displaystyle \mathbf {Z} } . The left-hand side is the expectation of a constant, so we get: log ⁡ p ( X ∣ θ ) = ∑ Z p ( Z ∣ X , θ ( t ) ) log ⁡ p ( X , Z ∣ θ ) − ∑ Z p ( Z ∣ X , θ ( t ) ) log ⁡ p ( Z ∣ X , θ ) = Q ( θ ∣ θ ( t ) ) + H ( θ ∣ θ ( t ) ) , {\displaystyle {\begin{aligned}\log p(\mathbf {X} \mid {\boldsymbol {\theta }})&=\sum _{\mathbf {Z} }p(\mathbf {Z} \mid \mathbf {X} ,{\boldsymbol {\theta }}^{(t)})\log p(\mathbf {X} ,\mathbf {Z} \mid {\boldsymbol {\theta }})-\sum _{\mathbf {Z} }p(\mathbf {Z} \mid \mathbf {X} ,{\boldsymbol {\theta }}^{(t)})\log p(\mathbf {Z} \mid \mathbf {X} ,{\boldsymbol {\theta }})\\&=Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})+H({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)}),\end{aligned}}} where H ( θ ∣ θ ( t ) ) {\displaystyle H({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})} is defined by the negated sum it is replacing. 
This last equation holds for every value of θ {\displaystyle {\boldsymbol {\theta }}} including θ = θ ( t ) {\displaystyle {\boldsymbol {\theta }}={\boldsymbol {\theta }}^{(t)}} , log ⁡ p ( X ∣ θ ( t ) ) = Q ( θ ( t ) ∣ θ ( t ) ) + H ( θ ( t ) ∣ θ ( t ) ) , {\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }}^{(t)})=Q({\boldsymbol {\theta }}^{(t)}\mid {\boldsymbol {\theta }}^{(t)})+H({\boldsymbol {\theta }}^{(t)}\mid {\boldsymbol {\theta }}^{(t)}),} and subtracting this last equation from the previous equation gives log ⁡ p ( X ∣ θ ) − log ⁡ p ( X ∣ θ ( t ) ) = Q ( θ ∣ θ ( t ) ) − Q ( θ ( t ) ∣ θ ( t ) ) + H ( θ ∣ θ ( t ) ) − H ( θ ( t ) ∣ θ ( t ) ) . {\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }})-\log p(\mathbf {X} \mid {\boldsymbol {\theta }}^{(t)})=Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})-Q({\boldsymbol {\theta }}^{(t)}\mid {\boldsymbol {\theta }}^{(t)})+H({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})-H({\boldsymbol {\theta }}^{(t)}\mid {\boldsymbol {\theta }}^{(t)}).} However, Gibbs' inequality tells us that H ( θ ∣ θ ( t ) ) ≥ H ( θ ( t ) ∣ θ ( t ) ) {\displaystyle H({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})\geq H({\boldsymbol {\theta }}^{(t)}\mid {\boldsymbol {\theta }}^{(t)})} , so we can conclude that log ⁡ p ( X ∣ θ ) − log ⁡ p ( X ∣ θ ( t ) ) ≥ Q ( θ ∣ θ ( t ) ) − Q ( θ ( t ) ∣ θ ( t ) ) . {\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }})-\log p(\mathbf {X} \mid {\boldsymbol {\theta }}^{(t)})\geq Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})-Q({\boldsymbol {\theta }}^{(t)}\mid {\boldsymbol {\theta }}^{(t)}).} In words, choosing θ {\displaystyle {\boldsymbol {\theta }}} to improve Q ( θ ∣ θ ( t ) ) {\displaystyle Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})} causes log ⁡ p ( X ∣ θ ) {\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }})} to improve at least as much. 
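The monotone-improvement property just proved can be checked numerically. The sketch below runs EM on a mixture of two biased coins with known, equal mixing weights (an illustrative model; the name `em_two_coins` and the starting values are hypothetical choices, not from the source) and records the observed-data log-likelihood at each iteration, which should never decrease:

```python
import math
import random

def em_two_coins(counts, n_flips, p1=0.3, p2=0.6, iters=50):
    """EM for a 50/50 mixture of two biased coins.

    Each trial: pick one of the coins uniformly, flip it n_flips times,
    and observe only the heads count. Returns the trajectory of the
    observed-data log-likelihood plus the final bias estimates.
    """
    def binom_pmf(k, n, p):
        return math.comb(n, k) * p**k * (1 - p) ** (n - k)

    log_liks = []
    for _ in range(iters):
        # Observed-data log-likelihood log p(X | theta) at the current theta.
        log_liks.append(sum(
            math.log(0.5 * binom_pmf(k, n_flips, p1)
                     + 0.5 * binom_pmf(k, n_flips, p2))
            for k in counts))
        # E step: responsibility of coin 1 for each trial.
        r = [0.5 * binom_pmf(k, n_flips, p1)
             / (0.5 * binom_pmf(k, n_flips, p1) + 0.5 * binom_pmf(k, n_flips, p2))
             for k in counts]
        # M step: the responsibility-weighted heads fraction maximises Q
        # with respect to each coin's bias.
        p1 = sum(ri * k for ri, k in zip(r, counts)) / (n_flips * sum(r))
        p2 = sum((1 - ri) * k for ri, k in zip(r, counts)) / (
            n_flips * (len(counts) - sum(r)))
    return log_liks, p1, p2
```

Plotting or inspecting `log_liks` for data generated from two distinct biases shows a non-decreasing sequence, as the inequality above guarantees (up to floating-point error).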
== As a maximization–maximization procedure == The EM algorithm can be viewed as two alternating maximization steps, that is, as an example of coordinate descent. Consider the function: F ( q , θ ) := E q ⁡ [ log ⁡ L ( θ ; x , Z ) ] + H ( q ) , {\displaystyle F(q,\theta ):=\operatorname {E} _{q}[\log L(\theta ;x,Z)]+H(q),} where q is an arbitrary probability distribution over the unobserved data z and H(q) is the entropy of the distribution q. This function can be written as F ( q , θ ) = − D K L ( q ∥ p Z ∣ X ( ⋅ ∣ x ; θ ) ) + log ⁡ L ( θ ; x ) , {\displaystyle F(q,\theta )=-D_{\mathrm {KL} }{\big (}q\parallel p_{Z\mid X}(\cdot \mid x;\theta ){\big )}+\log L(\theta ;x),} where p Z ∣ X ( ⋅ ∣ x ; θ ) {\displaystyle p_{Z\mid X}(\cdot \mid x;\theta )} is the conditional distribution of the unobserved data given the observed data x {\displaystyle x} and D K L {\displaystyle D_{KL}} is the Kullback–Leibler divergence. Then the steps in the EM algorithm may be viewed as: Expectation step: Choose q {\displaystyle q} to maximize F {\displaystyle F} : q ( t ) = a r g m a x q ⁡ F ( q , θ ( t ) ) {\displaystyle q^{(t)}=\operatorname {arg\,max} _{q}\ F(q,\theta ^{(t)})} Maximization step: Choose θ {\displaystyle \theta } to maximize F {\displaystyle F} : θ ( t + 1 ) = a r g m a x θ ⁡ F ( q ( t ) , θ ) {\displaystyle \theta ^{(t+1)}=\operatorname {arg\,max} _{\theta }\ F(q^{(t)},\theta )} == Applications == EM is frequently used for parameter estimation of mixed models, notably in quantitative genetics. In psychometrics, EM is an important tool for estimating item parameters and latent abilities of item response theory models. With the ability to deal with missing data and observe unidentified variables, EM is becoming a useful tool to price and manage risk of a portfolio. 
The EM algorithm (and its faster variant ordered subset expectation maximization) is also widely used in medical image reconstruction, especially in positron emission tomography, single-photon emission computed tomography, and X-ray computed tomography. See below for other faster variants of EM. In structural engineering, the Structural Identification using Expectation Maximization (STRIDE) algorithm is an output-only method for identifying natural vibration properties of a structural system using sensor data (see Operational Modal Analysis). EM is also used for data clustering. In natural language processing, two prominent instances of the algorithm are the Baum–Welch algorithm for hidden Markov models, and the inside-outside algorithm for unsupervised induction of probabilistic context-free grammars. The EM algorithm has also proved very useful in the analysis of intertrade waiting times, i.e., the times between subsequent trades of shares of stock on a stock exchange. == Filtering and smoothing EM algorithms == A Kalman filter is typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems. Filtering and smoothing EM algorithms arise by repeating this two-step procedure: E-step Operate a Kalman filter or a minimum-variance smoother designed with current parameter estimates to obtain updated state estimates. M-step Use the filtered or smoothed state estimates within maximum-likelihood calculations to obtain updated parameter estimates. Suppose that a Kalman filter or minimum-variance smoother operates on measurements of a single-input-single-output system that possesses additive white noise. 
An updated measurement noise variance estimate can be obtained from the maximum likelihood calculation σ ^ v 2 = 1 N ∑ k = 1 N ( z k − x ^ k ) 2 , {\displaystyle {\widehat {\sigma }}_{v}^{2}={\frac {1}{N}}\sum _{k=1}^{N}{(z_{k}-{\widehat {x}}_{k})}^{2},} where x ^ k {\displaystyle {\widehat {x}}_{k}} are scalar output estimates calculated by a filter or a smoother from N scalar measurements z k {\displaystyle z_{k}} . The above update can also be applied to updating a Poisson measurement noise intensity. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by σ ^ w 2 = 1 N ∑ k = 1 N ( x ^ k + 1 − F ^ x ^ k ) 2 , {\displaystyle {\widehat {\sigma }}_{w}^{2}={\frac {1}{N}}\sum _{k=1}^{N}{({\widehat {x}}_{k+1}-{\widehat {F}}{\widehat {x}}_{k})}^{2},} where x ^ k {\displaystyle {\widehat {x}}_{k}} and x ^ k + 1 {\displaystyle {\widehat {x}}_{k+1}} are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via F ^ = ∑ k = 1 N x ^ k + 1 x ^ k ∑ k = 1 N x ^ k 2 . {\displaystyle {\widehat {F}}={\frac {\sum _{k=1}^{N}{\widehat {x}}_{k+1}{\widehat {x}}_{k}}{\sum _{k=1}^{N}{\widehat {x}}_{k}^{2}}}.} The convergence of parameter estimates such as those above is well studied. == Variants == A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those using conjugate gradient and modified Newton's methods (Newton–Raphson). Also, EM can be used with constrained estimation methods. The parameter-expanded expectation maximization (PX-EM) algorithm often provides speed up by "us[ing] a `covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data". 
Expectation conditional maximization (ECM) replaces each M step with a sequence of conditional maximization (CM) steps in which each parameter θi is maximized individually, conditionally on the other parameters remaining fixed. ECM can itself be extended into the expectation conditional maximization either (ECME) algorithm. This idea is further extended in the generalized expectation maximization (GEM) algorithm, in which only an increase in the objective function F is sought for both the E step and M step, as described in the As a maximization–maximization procedure section. GEM has also been developed for distributed environments, with promising results. It is also possible to consider the EM algorithm as a subclass of the MM (Majorize/Minimize or Minorize/Maximize, depending on context) algorithm, and therefore use any machinery developed in the more general case. === α-EM algorithm === The Q-function used in the EM algorithm is based on the log likelihood. Therefore, it is regarded as the log-EM algorithm. The use of the log likelihood can be generalized to that of the α-log likelihood ratio. Then, the α-log likelihood ratio of the observed data can be exactly expressed as an equality by using the Q-function of the α-log likelihood ratio and the α-divergence. Obtaining this Q-function is a generalized E step, and its maximization is a generalized M step. This pair is called the α-EM algorithm, which contains the log-EM algorithm as its subclass. Thus, the α-EM algorithm by Yasuo Matsuyama is an exact generalization of the log-EM algorithm. No computation of gradient or Hessian matrix is needed. The α-EM shows faster convergence than the log-EM algorithm by choosing an appropriate α. The α-EM algorithm leads to a faster version of the hidden Markov model estimation algorithm α-HMM. == Relation to variational Bayes methods == EM is a partially non-Bayesian, maximum likelihood method. 
Its final result gives a probability distribution over the latent variables (in the Bayesian style) together with a point estimate for θ (either a maximum likelihood estimate or a posterior mode). A fully Bayesian version of this may be wanted, giving a probability distribution over θ and the latent variables. The Bayesian approach to inference is simply to treat θ as another latent variable. In this paradigm, the distinction between the E and M steps disappears. If using the factorized Q approximation as described above (variational Bayes), solving can iterate over each latent variable (now including θ) and optimize them one at a time. Now, k steps per iteration are needed, where k is the number of latent variables. For graphical models this is easy to do as each variable's new Q depends only on its Markov blanket, so local message passing can be used for efficient inference. == Geometric interpretation == In information geometry, the E step and the M step are interpreted as projections under dual affine connections, called the e-connection and the m-connection; the Kullback–Leibler divergence can also be understood in these terms. == Examples == === Gaussian mixture === Let x = ( x 1 , x 2 , … , x n ) {\displaystyle \mathbf {x} =(\mathbf {x} _{1},\mathbf {x} _{2},\ldots ,\mathbf {x} _{n})} be a sample of n {\displaystyle n} independent observations from a mixture of two multivariate normal distributions of dimension d {\displaystyle d} , and let z = ( z 1 , z 2 , … , z n ) {\displaystyle \mathbf {z} =(z_{1},z_{2},\ldots ,z_{n})} be the latent variables that determine the component from which the observation originates. 
X i ∣ ( Z i = 1 ) ∼ N d ( μ 1 , Σ 1 ) {\displaystyle X_{i}\mid (Z_{i}=1)\sim {\mathcal {N}}_{d}({\boldsymbol {\mu }}_{1},\Sigma _{1})} and X i ∣ ( Z i = 2 ) ∼ N d ( μ 2 , Σ 2 ) , {\displaystyle X_{i}\mid (Z_{i}=2)\sim {\mathcal {N}}_{d}({\boldsymbol {\mu }}_{2},\Sigma _{2}),} where P ⁡ ( Z i = 1 ) = τ 1 {\displaystyle \operatorname {P} (Z_{i}=1)=\tau _{1}\,} and P ⁡ ( Z i = 2 ) = τ 2 = 1 − τ 1 . {\displaystyle \operatorname {P} (Z_{i}=2)=\tau _{2}=1-\tau _{1}.} The aim is to estimate the unknown parameters representing the mixing value between the Gaussians and the means and covariances of each: θ = ( τ , μ 1 , μ 2 , Σ 1 , Σ 2 ) , {\displaystyle \theta ={\big (}{\boldsymbol {\tau }},{\boldsymbol {\mu }}_{1},{\boldsymbol {\mu }}_{2},\Sigma _{1},\Sigma _{2}{\big )},} where the incomplete-data likelihood function is L ( θ ; x ) = ∏ i = 1 n ∑ j = 1 2 τ j f ( x i ; μ j , Σ j ) , {\displaystyle L(\theta ;\mathbf {x} )=\prod _{i=1}^{n}\sum _{j=1}^{2}\tau _{j}\ f(\mathbf {x} _{i};{\boldsymbol {\mu }}_{j},\Sigma _{j}),} and the complete-data likelihood function is L ( θ ; x , z ) = p ( x , z ∣ θ ) = ∏ i = 1 n ∏ j = 1 2 [ f ( x i ; μ j , Σ j ) τ j ] I ( z i = j ) , {\displaystyle L(\theta ;\mathbf {x} ,\mathbf {z} )=p(\mathbf {x} ,\mathbf {z} \mid \theta )=\prod _{i=1}^{n}\prod _{j=1}^{2}\ [f(\mathbf {x} _{i};{\boldsymbol {\mu }}_{j},\Sigma _{j})\tau _{j}]^{\mathbb {I} (z_{i}=j)},} or L ( θ ; x , z ) = exp ⁡ { ∑ i = 1 n ∑ j = 1 2 I ( z i = j ) [ log ⁡ τ j − 1 2 log ⁡ | Σ j | − 1 2 ( x i − μ j ) ⊤ Σ j − 1 ( x i − μ j ) − d 2 log ⁡ ( 2 π ) ] } , {\displaystyle L(\theta ;\mathbf {x} ,\mathbf {z} )=\exp \left\{\sum _{i=1}^{n}\sum _{j=1}^{2}\mathbb {I} (z_{i}=j){\big [}\log \tau _{j}-{\tfrac {1}{2}}\log |\Sigma _{j}|-{\tfrac {1}{2}}(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{j})^{\top }\Sigma _{j}^{-1}(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{j})-{\tfrac {d}{2}}\log(2\pi ){\big ]}\right\},} where I {\displaystyle \mathbb {I} } is an indicator function and f {\displaystyle f} is the 
probability density function of a multivariate normal. In the last equality, for each i, one indicator I ( z i = j ) {\displaystyle \mathbb {I} (z_{i}=j)} is equal to zero, and one indicator is equal to one. The inner sum thus reduces to one term. ==== E step ==== Given our current estimate of the parameters θ(t), the conditional distribution of the Zi is determined by Bayes' theorem to be the proportional height of the normal density weighted by τ: T j , i ( t ) := P ⁡ ( Z i = j ∣ X i = x i ; θ ( t ) ) = τ j ( t ) f ( x i ; μ j ( t ) , Σ j ( t ) ) τ 1 ( t ) f ( x i ; μ 1 ( t ) , Σ 1 ( t ) ) + τ 2 ( t ) f ( x i ; μ 2 ( t ) , Σ 2 ( t ) ) . {\displaystyle T_{j,i}^{(t)}:=\operatorname {P} (Z_{i}=j\mid X_{i}=\mathbf {x} _{i};\theta ^{(t)})={\frac {\tau _{j}^{(t)}\ f(\mathbf {x} _{i};{\boldsymbol {\mu }}_{j}^{(t)},\Sigma _{j}^{(t)})}{\tau _{1}^{(t)}\ f(\mathbf {x} _{i};{\boldsymbol {\mu }}_{1}^{(t)},\Sigma _{1}^{(t)})+\tau _{2}^{(t)}\ f(\mathbf {x} _{i};{\boldsymbol {\mu }}_{2}^{(t)},\Sigma _{2}^{(t)})}}.} These are called the "membership probabilities", which are normally considered the output of the E step (although this is not the Q function of below). This E step corresponds with setting up this function for Q: Q ( θ ∣ θ ( t ) ) = E Z ∣ X = x ; θ ( t ) ⁡ [ log ⁡ L ( θ ; x , Z ) ] = E Z ∣ X = x ; θ ( t ) ⁡ [ log ⁡ ∏ i = 1 n L ( θ ; x i , Z i ) ] = E Z ∣ X = x ; θ ( t ) ⁡ [ ∑ i = 1 n log ⁡ L ( θ ; x i , Z i ) ] = ∑ i = 1 n E Z i ∣ X i = x i ; θ ( t ) ⁡ [ log ⁡ L ( θ ; x i , Z i ) ] = ∑ i = 1 n ∑ j = 1 2 P ( Z i = j ∣ X i = x i ; θ ( t ) ) log ⁡ L ( θ j ; x i , j ) = ∑ i = 1 n ∑ j = 1 2 T j , i ( t ) [ log ⁡ τ j − 1 2 log ⁡ | Σ j | − 1 2 ( x i − μ j ) ⊤ Σ j − 1 ( x i − μ j ) − d 2 log ⁡ ( 2 π ) ] . 
{\displaystyle {\begin{aligned}Q(\theta \mid \theta ^{(t)})&=\operatorname {E} _{\mathbf {Z} \mid \mathbf {X} =\mathbf {x} ;\mathbf {\theta } ^{(t)}}[\log L(\theta ;\mathbf {x} ,\mathbf {Z} )]\\&=\operatorname {E} _{\mathbf {Z} \mid \mathbf {X} =\mathbf {x} ;\mathbf {\theta } ^{(t)}}[\log \prod _{i=1}^{n}L(\theta ;\mathbf {x} _{i},Z_{i})]\\&=\operatorname {E} _{\mathbf {Z} \mid \mathbf {X} =\mathbf {x} ;\mathbf {\theta } ^{(t)}}[\sum _{i=1}^{n}\log L(\theta ;\mathbf {x} _{i},Z_{i})]\\&=\sum _{i=1}^{n}\operatorname {E} _{Z_{i}\mid X_{i}=x_{i};\mathbf {\theta } ^{(t)}}[\log L(\theta ;\mathbf {x} _{i},Z_{i})]\\&=\sum _{i=1}^{n}\sum _{j=1}^{2}P(Z_{i}=j\mid X_{i}=\mathbf {x} _{i};\theta ^{(t)})\log L(\theta _{j};\mathbf {x} _{i},j)\\&=\sum _{i=1}^{n}\sum _{j=1}^{2}T_{j,i}^{(t)}{\big [}\log \tau _{j}-{\tfrac {1}{2}}\log |\Sigma _{j}|-{\tfrac {1}{2}}(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{j})^{\top }\Sigma _{j}^{-1}(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{j})-{\tfrac {d}{2}}\log(2\pi ){\big ]}.\end{aligned}}} The expectation of log ⁡ L ( θ ; x i , Z i ) {\displaystyle \log L(\theta ;\mathbf {x} _{i},Z_{i})} inside the sum is taken with respect to the probability density function P ( Z i ∣ X i = x i ; θ ( t ) ) {\displaystyle P(Z_{i}\mid X_{i}=\mathbf {x} _{i};\theta ^{(t)})} , which might be different for each x i {\displaystyle \mathbf {x} _{i}} of the training set. Everything in the E step is known before the step is taken except T j , i {\displaystyle T_{j,i}} , which is computed according to the equation at the beginning of the E step section. This full conditional expectation does not need to be calculated in one step, because τ and μ/Σ appear in separate linear terms and can thus be maximized independently. ==== M step ==== Q ( θ ∣ θ ( t ) ) {\displaystyle Q(\theta \mid \theta ^{(t)})} being quadratic in form means that determining the maximizing values of θ {\displaystyle \theta } is relatively straightforward. 
Also, τ {\displaystyle \tau } , ( μ 1 , Σ 1 ) {\displaystyle ({\boldsymbol {\mu }}_{1},\Sigma _{1})} and ( μ 2 , Σ 2 ) {\displaystyle ({\boldsymbol {\mu }}_{2},\Sigma _{2})} may all be maximized independently since they all appear in separate linear terms. To begin, consider τ {\displaystyle \tau } , which has the constraint τ 1 + τ 2 = 1 {\displaystyle \tau _{1}+\tau _{2}=1} : τ ( t + 1 ) = a r g m a x τ Q ( θ ∣ θ ( t ) ) = a r g m a x τ { [ ∑ i = 1 n T 1 , i ( t ) ] log ⁡ τ 1 + [ ∑ i = 1 n T 2 , i ( t ) ] log ⁡ τ 2 } . {\displaystyle {\begin{aligned}{\boldsymbol {\tau }}^{(t+1)}&={\underset {\boldsymbol {\tau }}{\operatorname {arg\,max} }}\ Q(\theta \mid \theta ^{(t)})\\&={\underset {\boldsymbol {\tau }}{\operatorname {arg\,max} }}\ \left\{\left[\sum _{i=1}^{n}T_{1,i}^{(t)}\right]\log \tau _{1}+\left[\sum _{i=1}^{n}T_{2,i}^{(t)}\right]\log \tau _{2}\right\}.\end{aligned}}} This has the same form as the maximum likelihood estimate for the binomial distribution, so τ j ( t + 1 ) = ∑ i = 1 n T j , i ( t ) ∑ i = 1 n ( T 1 , i ( t ) + T 2 , i ( t ) ) = 1 n ∑ i = 1 n T j , i ( t ) . {\displaystyle \tau _{j}^{(t+1)}={\frac {\sum _{i=1}^{n}T_{j,i}^{(t)}}{\sum _{i=1}^{n}(T_{1,i}^{(t)}+T_{2,i}^{(t)})}}={\frac {1}{n}}\sum _{i=1}^{n}T_{j,i}^{(t)}.} For the next estimates of ( μ 1 , Σ 1 ) {\displaystyle ({\boldsymbol {\mu }}_{1},\Sigma _{1})} : ( μ 1 ( t + 1 ) , Σ 1 ( t + 1 ) ) = a r g m a x μ 1 , Σ 1 Q ( θ ∣ θ ( t ) ) = a r g m a x μ 1 , Σ 1 ∑ i = 1 n T 1 , i ( t ) { − 1 2 log ⁡ | Σ 1 | − 1 2 ( x i − μ 1 ) ⊤ Σ 1 − 1 ( x i − μ 1 ) } . 
{\displaystyle {\begin{aligned}({\boldsymbol {\mu }}_{1}^{(t+1)},\Sigma _{1}^{(t+1)})&={\underset {{\boldsymbol {\mu }}_{1},\Sigma _{1}}{\operatorname {arg\,max} }}\ Q(\theta \mid \theta ^{(t)})\\&={\underset {{\boldsymbol {\mu }}_{1},\Sigma _{1}}{\operatorname {arg\,max} }}\ \sum _{i=1}^{n}T_{1,i}^{(t)}\left\{-{\tfrac {1}{2}}\log |\Sigma _{1}|-{\tfrac {1}{2}}(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{1})^{\top }\Sigma _{1}^{-1}(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{1})\right\}\end{aligned}}.} This has the same form as a weighted maximum likelihood estimate for a normal distribution, so μ 1 ( t + 1 ) = ∑ i = 1 n T 1 , i ( t ) x i ∑ i = 1 n T 1 , i ( t ) {\displaystyle {\boldsymbol {\mu }}_{1}^{(t+1)}={\frac {\sum _{i=1}^{n}T_{1,i}^{(t)}\mathbf {x} _{i}}{\sum _{i=1}^{n}T_{1,i}^{(t)}}}} and Σ 1 ( t + 1 ) = ∑ i = 1 n T 1 , i ( t ) ( x i − μ 1 ( t + 1 ) ) ( x i − μ 1 ( t + 1 ) ) ⊤ ∑ i = 1 n T 1 , i ( t ) {\displaystyle \Sigma _{1}^{(t+1)}={\frac {\sum _{i=1}^{n}T_{1,i}^{(t)}(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{1}^{(t+1)})(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{1}^{(t+1)})^{\top }}{\sum _{i=1}^{n}T_{1,i}^{(t)}}}} and, by symmetry, μ 2 ( t + 1 ) = ∑ i = 1 n T 2 , i ( t ) x i ∑ i = 1 n T 2 , i ( t ) {\displaystyle {\boldsymbol {\mu }}_{2}^{(t+1)}={\frac {\sum _{i=1}^{n}T_{2,i}^{(t)}\mathbf {x} _{i}}{\sum _{i=1}^{n}T_{2,i}^{(t)}}}} and Σ 2 ( t + 1 ) = ∑ i = 1 n T 2 , i ( t ) ( x i − μ 2 ( t + 1 ) ) ( x i − μ 2 ( t + 1 ) ) ⊤ ∑ i = 1 n T 2 , i ( t ) . 
{\displaystyle \Sigma _{2}^{(t+1)}={\frac {\sum _{i=1}^{n}T_{2,i}^{(t)}(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{2}^{(t+1)})(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{2}^{(t+1)})^{\top }}{\sum _{i=1}^{n}T_{2,i}^{(t)}}}.} ==== Termination ==== Conclude the iterative process if E Z ∣ θ ( t ) , x [ log ⁡ L ( θ ( t ) ; x , Z ) ] ≤ E Z ∣ θ ( t − 1 ) , x [ log ⁡ L ( θ ( t − 1 ) ; x , Z ) ] + ε {\displaystyle E_{Z\mid \theta ^{(t)},\mathbf {x} }[\log L(\theta ^{(t)};\mathbf {x} ,\mathbf {Z} )]\leq E_{Z\mid \theta ^{(t-1)},\mathbf {x} }[\log L(\theta ^{(t-1)};\mathbf {x} ,\mathbf {Z} )]+\varepsilon } for ε {\displaystyle \varepsilon } below some preset threshold. ==== Generalization ==== The algorithm illustrated above can be generalized for mixtures of more than two multivariate normal distributions. === Truncated and censored regression === The EM algorithm has been implemented in the case where an underlying linear regression model exists explaining the variation of some quantity, but where the values actually observed are censored or truncated versions of those represented in the model. Special cases of this model include censored or truncated observations from one normal distribution. == Alternatives == EM typically converges to a local optimum, not necessarily the global optimum, with no bound on the convergence rate in general. It is possible that it can be arbitrarily poor in high dimensions and there can be an exponential number of local optima. Hence, a need exists for alternative methods for guaranteed learning, especially in the high-dimensional setting. Alternatives to EM exist with better guarantees for consistency, which are termed moment-based approaches or the so-called spectral techniques. Moment-based approaches to learning the parameters of a probabilistic model enjoy guarantees such as global convergence under certain conditions unlike EM which is often plagued by the issue of getting stuck in local optima. 
Algorithms with guarantees for learning can be derived for a number of important models such as mixture models, HMMs etc. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions. == See also == mixture distribution compound distribution density estimation Principal component analysis total absorption spectroscopy The EM algorithm can be viewed as a special case of the majorize-minimization (MM) algorithm. == References == == Further reading == Hogg, Robert; McKean, Joseph; Craig, Allen (2005). Introduction to Mathematical Statistics. Upper Saddle River, NJ: Pearson Prentice Hall. pp. 359–364. Dellaert, Frank (February 2002). The Expectation Maximization Algorithm (PDF) (Technical Report number GIT-GVU-02-20). Georgia Tech College of Computing. gives an easier explanation of EM algorithm as to lowerbound maximization. Bishop, Christopher M. (2006). Pattern Recognition and Machine Learning. Springer. ISBN 978-0-387-31073-2. Gupta, M. R.; Chen, Y. (2010). "Theory and Use of the EM Algorithm". Foundations and Trends in Signal Processing. 4 (3): 223–296. CiteSeerX 10.1.1.219.6830. doi:10.1561/2000000034. A well-written short book on EM, including detailed derivation of EM for GMMs, HMMs, and Dirichlet. Bilmes, Jeff (1997). A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models (Technical Report TR-97-021). International Computer Science Institute. includes a simplified derivation of the EM equations for Gaussian Mixtures and Gaussian Mixture Hidden Markov Models. McLachlan, Geoffrey J.; Krishnan, Thriyambakam (2008). The EM Algorithm and Extensions (2nd ed.). Hoboken: Wiley. ISBN 978-0-471-20170-0. == External links == Various 1D, 2D and 3D demonstrations of EM together with Mixture Modeling are provided as part of the paired SOCR activities and applets. 
These applets and activities show empirically the properties of the EM algorithm for parameter estimation in diverse settings. Class hierarchy in C++ (GPL) including Gaussian Mixtures The on-line textbook: Information Theory, Inference, and Learning Algorithms, by David J.C. MacKay includes simple examples of the EM algorithm such as clustering using the soft k-means algorithm, and emphasizes the variational view of the EM algorithm, as described in Chapter 33.7 of version 7.2 (fourth edition). Variational Algorithms for Approximate Bayesian Inference, by M. J. Beal includes comparisons of EM to Variational Bayesian EM and derivations of several models including Variational Bayesian HMMs (chapters). The Expectation Maximization Algorithm: A short tutorial, A self-contained derivation of the EM Algorithm by Sean Borman. The EM Algorithm, by Xiaojin Zhu. EM algorithm and variants: an informal tutorial by Alexis Roche. A concise and very clear description of EM and many interesting variants.
Wikipedia/Expectation-maximization_algorithm
A vector-valued function, also referred to as a vector function, is a mathematical function of one or more variables whose range is a set of multidimensional vectors or infinite-dimensional vectors. The input of a vector-valued function could be a scalar or a vector (that is, the dimension of the domain could be 1 or greater than 1); the dimension of the function's domain has no relation to the dimension of its range. == Example: Helix == A common example of a vector-valued function is one that depends on a single real parameter t, often representing time, producing a vector v(t) as the result. In terms of the standard unit vectors i, j, k of Cartesian 3-space, these specific types of vector-valued functions are given by expressions such as r ( t ) = f ( t ) i + g ( t ) j + h ( t ) k {\displaystyle \mathbf {r} (t)=f(t)\mathbf {i} +g(t)\mathbf {j} +h(t)\mathbf {k} } where f(t), g(t) and h(t) are the coordinate functions of the parameter t, and the domain of this vector-valued function is the intersection of the domains of the functions f, g, and h. The same function can also be written in angle-bracket notation: r ( t ) = ⟨ f ( t ) , g ( t ) , h ( t ) ⟩ {\displaystyle \mathbf {r} (t)=\langle f(t),g(t),h(t)\rangle } The vector r(t) has its tail at the origin and its head at the coordinates evaluated by the function. The vector shown in the graph to the right is the evaluation of the function ⟨ 2 cos ⁡ t , 4 sin ⁡ t , t ⟩ {\displaystyle \langle 2\cos t,\,4\sin t,\,t\rangle } near t = 19.5 (between 6π and 6.5π; i.e., somewhat more than 3 rotations). The helix is the path traced by the tip of the vector as t increases from zero through 8π.
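The helix example above can be evaluated directly. A short sketch (pure Python; the function name is ours) computing r(t) = ⟨2 cos t, 4 sin t, t⟩ at the value t = 19.5 mentioned in the text:

```python
import math

def r(t):
    """Helix r(t) = <2 cos t, 4 sin t, t> from the example above."""
    return (2.0 * math.cos(t), 4.0 * math.sin(t), t)

# t = 19.5 lies between 6*pi (~18.85) and 6.5*pi (~20.42),
# i.e., somewhat more than 3 full turns about the z-axis.
point = r(19.5)
print(point)
```

The third component grows linearly with t, which is what pulls the circular motion in the first two components into a helix.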
In 2D, vector-valued functions can analogously be written as: r ( t ) = f ( t ) i + g ( t ) j {\displaystyle \mathbf {r} (t)=f(t)\mathbf {i} +g(t)\mathbf {j} } or r ( t ) = ⟨ f ( t ) , g ( t ) ⟩ {\displaystyle \mathbf {r} (t)=\langle f(t),g(t)\rangle } == Linear case == In the linear case the function can be expressed in terms of matrices: y = A x , {\displaystyle \mathbf {y} =A\mathbf {x} ,} where y is an n × 1 output vector, x is a k × 1 vector of inputs, and A is an n × k matrix of parameters. Closely related is the affine case (linear up to a translation) where the function takes the form y = A x + b , {\displaystyle \mathbf {y} =A\mathbf {x} +\mathbf {b} ,} where in addition b is an n × 1 vector of parameters. The linear case arises often, for example in multiple regression, where for instance the n × 1 vector y ^ {\displaystyle {\hat {y}}} of predicted values of a dependent variable is expressed linearly in terms of a k × 1 vector β ^ {\displaystyle {\hat {\boldsymbol {\beta }}}} (k < n) of estimated values of model parameters: y ^ = X β ^ , {\displaystyle {\hat {\mathbf {y} }}=X{\hat {\boldsymbol {\beta }}},} in which X (playing the role of A in the previous generic form) is an n × k matrix of fixed (empirically based) numbers. == Parametric representation of a surface == A surface is a 2-dimensional set of points embedded in (most commonly) 3-dimensional space. One way to represent a surface is with parametric equations, in which two parameters s and t determine the three Cartesian coordinates of any point on the surface: ( x , y , z ) = ( f ( s , t ) , g ( s , t ) , h ( s , t ) ) ≡ F ( s , t ) . {\displaystyle (x,y,z)=(f(s,t),g(s,t),h(s,t))\equiv \mathbf {F} (s,t).} Here F is a vector-valued function. For a surface embedded in n-dimensional space, one similarly has the representation ( x 1 , x 2 , … , x n ) = ( f 1 ( s , t ) , f 2 ( s , t ) , … , f n ( s , t ) ) ≡ F ( s , t ) .
{\displaystyle (x_{1},x_{2},\dots ,x_{n})=(f_{1}(s,t),f_{2}(s,t),\dots ,f_{n}(s,t))\equiv \mathbf {F} (s,t).} == Derivative of a three-dimensional vector function == Many vector-valued functions, like scalar-valued functions, can be differentiated by simply differentiating the components in the Cartesian coordinate system. Thus, if r ( t ) = f ( t ) i + g ( t ) j + h ( t ) k {\displaystyle \mathbf {r} (t)=f(t)\mathbf {i} +g(t)\mathbf {j} +h(t)\mathbf {k} } is a vector-valued function, then d r d t = f ′ ( t ) i + g ′ ( t ) j + h ′ ( t ) k . {\displaystyle {\frac {d\mathbf {r} }{dt}}=f'(t)\mathbf {i} +g'(t)\mathbf {j} +h'(t)\mathbf {k} .} The vector derivative admits the following physical interpretation: if r(t) represents the position of a particle, then the derivative is the velocity of the particle v ( t ) = d r d t . {\displaystyle \mathbf {v} (t)={\frac {d\mathbf {r} }{dt}}.} Likewise, the derivative of the velocity is the acceleration d v d t = a ( t ) . {\displaystyle {\frac {d\mathbf {v} }{dt}}=\mathbf {a} (t).} === Partial derivative === The partial derivative of a vector function a with respect to a scalar variable q is defined as ∂ a ∂ q = ∑ i = 1 n ∂ a i ∂ q e i {\displaystyle {\frac {\partial \mathbf {a} }{\partial q}}=\sum _{i=1}^{n}{\frac {\partial a_{i}}{\partial q}}\mathbf {e} _{i}} where ai is the scalar component of a in the direction of ei. Equivalently, ai is the dot product of a and ei; when a is a unit vector, this equals the direction cosine between a and ei. The vectors e1, e2, e3 form an orthonormal basis fixed in the reference frame in which the derivative is being taken. === Ordinary derivative === If a is regarded as a vector function of a single scalar variable, such as time t, then the equation above reduces to the first ordinary time derivative of a with respect to t, d a d t = ∑ i = 1 n d a i d t e i .
{\displaystyle {\frac {d\mathbf {a} }{dt}}=\sum _{i=1}^{n}{\frac {da_{i}}{dt}}\mathbf {e} _{i}.} === Total derivative === If the vector a is a function of a number n of scalar variables qr (r = 1, ..., n), and each qr is only a function of time t, then the ordinary derivative of a with respect to t can be expressed, in a form known as the total derivative, as d a d t = ∑ r = 1 n ∂ a ∂ q r d q r d t + ∂ a ∂ t . {\displaystyle {\frac {d\mathbf {a} }{dt}}=\sum _{r=1}^{n}{\frac {\partial \mathbf {a} }{\partial q_{r}}}{\frac {dq_{r}}{dt}}+{\frac {\partial \mathbf {a} }{\partial t}}.} Some authors prefer to use capital D to indicate the total derivative operator, as in D/Dt. The total derivative differs from the partial time derivative in that the total derivative accounts for changes in a due to the time variance of the variables qr. === Reference frames === Whereas for scalar-valued functions there is only a single possible reference frame, to take the derivative of a vector-valued function requires the choice of a reference frame (at least when a fixed Cartesian coordinate system is not implied as such). Once a reference frame has been chosen, the derivative of a vector-valued function can be computed using techniques similar to those for computing derivatives of scalar-valued functions. A different choice of reference frame will, in general, produce a different derivative function. The derivative functions in different reference frames have a specific kinematical relationship. === Derivative of a vector function with nonfixed bases === The above formulas for the derivative of a vector function rely on the assumption that the basis vectors e1, e2, e3 are constant, that is, fixed in the reference frame in which the derivative of a is being taken, and therefore the e1, e2, e3 each has a derivative of identically zero. This often holds true for problems dealing with vector fields in a fixed coordinate system, or for simple problems in physics. 
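The total derivative formula above can be checked numerically with finite differences. In this sketch the particular functions a, q1, q2 are our own illustrative choices; since this a has no explicit t-dependence, the ∂a/∂t term vanishes:

```python
import math

# Illustrative vector function a(q1, q2), components in a fixed basis,
# with scalar variables q1(t) = sin t and q2(t) = t^2.
def a(q1, q2):
    return (q1 * q2, q1 + q2, q1 ** 2)

q1 = math.sin
q2 = lambda t: t ** 2

t0, h = 0.7, 1e-6

def central(f, s):
    """Componentwise central difference of a tuple-valued function at s."""
    return [(y2 - y1) / (2.0 * h) for y1, y2 in zip(f(s - h), f(s + h))]

# Left side: ordinary derivative of the composite t -> a(q1(t), q2(t)).
lhs = central(lambda t: a(q1(t), q2(t)), t0)

# Right side: sum over r of (partial a / partial q_r) * (dq_r/dt).
dq1 = (q1(t0 + h) - q1(t0 - h)) / (2.0 * h)
dq2 = (q2(t0 + h) - q2(t0 - h)) / (2.0 * h)
da_dq1 = central(lambda s: a(s, q2(t0)), q1(t0))
da_dq2 = central(lambda s: a(q1(t0), s), q2(t0))
rhs = [p1 * dq1 + p2 * dq2 for p1, p2 in zip(da_dq1, da_dq2)]

print(lhs, rhs)  # the two sides agree to finite-difference accuracy
```

For the first component, for example, both sides approximate d/dt (sin t · t²) = t² cos t + 2t sin t at t = 0.7.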
However, many complex problems involve the derivative of a vector function in multiple moving reference frames, which means that the basis vectors will not necessarily be constant. In such a case where the basis vectors e1, e2, e3 are fixed in reference frame E, but not in reference frame N, the more general formula for the ordinary time derivative of a vector in reference frame N is N d a d t = ∑ i = 1 3 d a i d t e i + ∑ i = 1 3 a i N d e i d t {\displaystyle {\frac {{}^{\mathrm {N} }d\mathbf {a} }{dt}}=\sum _{i=1}^{3}{\frac {da_{i}}{dt}}\mathbf {e} _{i}+\sum _{i=1}^{3}a_{i}{\frac {{}^{\mathrm {N} }d\mathbf {e} _{i}}{dt}}} where the superscript N to the left of the derivative operator indicates the reference frame in which the derivative is taken. As shown previously, the first term on the right hand side is equal to the derivative of a in the reference frame where e1, e2, e3 are constant, reference frame E. It also can be shown that the second term on the right hand side is equal to the relative angular velocity of the two reference frames cross multiplied with the vector a itself. Thus, after substitution, the formula relating the derivative of a vector function in two reference frames is N d a d t = E d a d t + N ω E × a {\displaystyle {\frac {{}^{\mathrm {N} }d\mathbf {a} }{dt}}={\frac {{}^{\mathrm {E} }d\mathbf {a} }{dt}}+{}^{\mathrm {N} }\mathbf {\omega } ^{\mathrm {E} }\times \mathbf {a} } where NωE is the angular velocity of the reference frame E relative to the reference frame N. One common example where this formula is used is to find the velocity of a space-borne object, such as a rocket, in the inertial reference frame using measurements of the rocket's velocity relative to the ground. The velocity NvR in inertial reference frame N of a rocket R located at position rR can be found using the formula N d d t ( r R ) = E d d t ( r R ) + N ω E × r R . 
{\displaystyle {\frac {{}^{\mathrm {N} }d}{dt}}(\mathbf {r} ^{\mathrm {R} })={\frac {{}^{\mathrm {E} }d}{dt}}(\mathbf {r} ^{\mathrm {R} })+{}^{\mathrm {N} }\mathbf {\omega } ^{\mathrm {E} }\times \mathbf {r} ^{\mathrm {R} }.} where NωE is the angular velocity of the Earth relative to the inertial frame N. Since velocity is the derivative of position, NvR and EvR are the derivatives of rR in reference frames N and E, respectively. By substitution, N v R = E v R + N ω E × r R {\displaystyle {}^{\mathrm {N} }\mathbf {v} ^{\mathrm {R} }={}^{\mathrm {E} }\mathbf {v} ^{\mathrm {R} }+{}^{\mathrm {N} }\mathbf {\omega } ^{\mathrm {E} }\times \mathbf {r} ^{\mathrm {R} }} where EvR is the velocity vector of the rocket as measured from a reference frame E that is fixed to the Earth. === Derivative and vector multiplication === The derivative of a product of vector functions behaves similarly to the derivative of a product of scalar functions. Specifically, in the case of scalar multiplication of a vector, if p is a scalar variable function of q, ∂ ∂ q ( p a ) = ∂ p ∂ q a + p ∂ a ∂ q . {\displaystyle {\frac {\partial }{\partial q}}(p\mathbf {a} )={\frac {\partial p}{\partial q}}\mathbf {a} +p{\frac {\partial \mathbf {a} }{\partial q}}.} In the case of dot multiplication, for two vectors a and b that are both functions of q, ∂ ∂ q ( a ⋅ b ) = ∂ a ∂ q ⋅ b + a ⋅ ∂ b ∂ q . {\displaystyle {\frac {\partial }{\partial q}}(\mathbf {a} \cdot \mathbf {b} )={\frac {\partial \mathbf {a} }{\partial q}}\cdot \mathbf {b} +\mathbf {a} \cdot {\frac {\partial \mathbf {b} }{\partial q}}.} Similarly, the derivative of the cross product of two vector functions is ∂ ∂ q ( a × b ) = ∂ a ∂ q × b + a × ∂ b ∂ q . 
{\displaystyle {\frac {\partial }{\partial q}}(\mathbf {a} \times \mathbf {b} )={\frac {\partial \mathbf {a} }{\partial q}}\times \mathbf {b} +\mathbf {a} \times {\frac {\partial \mathbf {b} }{\partial q}}.} === Derivative of an n-dimensional vector function === A function f of a real number t with values in the space R n {\displaystyle \mathbb {R} ^{n}} can be written as f ( t ) = ( f 1 ( t ) , f 2 ( t ) , … , f n ( t ) ) {\displaystyle \mathbf {f} (t)=(f_{1}(t),f_{2}(t),\ldots ,f_{n}(t))} . Its derivative equals f ′ ( t ) = ( f 1 ′ ( t ) , f 2 ′ ( t ) , … , f n ′ ( t ) ) . {\displaystyle \mathbf {f} '(t)=(f_{1}'(t),f_{2}'(t),\ldots ,f_{n}'(t)).} If f is a function of several variables, say of t ∈ R m {\displaystyle t\in \mathbb {R} ^{m}} , then the partial derivatives of the components of f form an n × m {\displaystyle n\times m} matrix called the Jacobian matrix of f. == Infinite-dimensional vector functions == If the values of a function f lie in an infinite-dimensional vector space X, such as a Hilbert space, then f may be called an infinite-dimensional vector function. === Functions with values in a Hilbert space === If the argument of f is a real number and X is a Hilbert space, then the derivative of f at a point t can be defined as in the finite-dimensional case: f ′ ( t ) = lim h → 0 f ( t + h ) − f ( t ) h . {\displaystyle \mathbf {f} '(t)=\lim _{h\to 0}{\frac {\mathbf {f} (t+h)-\mathbf {f} (t)}{h}}.} Most results of the finite-dimensional case also hold in the infinite-dimensional case, mutatis mutandis. Differentiation can also be defined for functions of several variables (e.g., t ∈ R n {\displaystyle t\in \mathbb {R} ^{n}} or even t ∈ Y {\displaystyle t\in Y} , where Y is an infinite-dimensional vector space). N.B.
If X is a Hilbert space, then one can easily show that any derivative (and any other limit) can be computed componentwise: if f = ( f 1 , f 2 , f 3 , … ) {\displaystyle \mathbf {f} =(f_{1},f_{2},f_{3},\ldots )} (i.e., f = f 1 e 1 + f 2 e 2 + f 3 e 3 + ⋯ {\displaystyle \mathbf {f} =f_{1}\mathbf {e} _{1}+f_{2}\mathbf {e} _{2}+f_{3}\mathbf {e} _{3}+\cdots } , where e 1 , e 2 , e 3 , … {\displaystyle \mathbf {e} _{1},\mathbf {e} _{2},\mathbf {e} _{3},\ldots } is an orthonormal basis of the space X ), and f ′ ( t ) {\displaystyle f'(t)} exists, then f ′ ( t ) = ( f 1 ′ ( t ) , f 2 ′ ( t ) , f 3 ′ ( t ) , … ) . {\displaystyle \mathbf {f} '(t)=(f_{1}'(t),f_{2}'(t),f_{3}'(t),\ldots ).} However, the existence of a componentwise derivative does not guarantee the existence of a derivative, as componentwise convergence in a Hilbert space does not guarantee convergence with respect to the actual topology of the Hilbert space. === Other infinite-dimensional vector spaces === Most of the above holds for other topological vector spaces X too. However, not as many classical results hold in the Banach space setting, e.g., an absolutely continuous function with values in a suitable Banach space need not have a derivative anywhere. Moreover, in the general Banach space setting there is no orthonormal basis. == Vector field == == See also == Coordinate vector Curve Multivalued function Parametric surface Position vector Parametrization == Notes == == References == == External links == Vector-valued functions and their properties (from Lake Tahoe Community College) Weisstein, Eric W. "Vector Function". MathWorld. Everything2 article 3 Dimensional vector-valued functions (from East Tennessee State University) "Position Vector Valued Functions" Khan Academy module
Wikipedia/Vector_function
Advanced Placement (AP) Calculus (also known as AP Calc, Calc AB / BC, AB / BC Calc or simply AB / BC) is a set of two distinct Advanced Placement calculus courses and exams offered by the American nonprofit organization College Board. AP Calculus AB covers basic introductions to limits, derivatives, and integrals. AP Calculus BC covers all AP Calculus AB topics plus additional topics (including integration by parts, Taylor series, parametric equations, vector calculus, and polar coordinate functions). == AP Calculus AB == AP Calculus AB is an Advanced Placement calculus course. It is traditionally taken after precalculus and is the first calculus course offered at most schools, except possibly for a regular or honors calculus class. The Pre-Advanced Placement pathway for math helps prepare students for further Advanced Placement classes and exams. === Purpose === According to the College Board: An AP course in calculus consists of a full high school academic year of work that is comparable to calculus courses in colleges and universities. It is expected that students who take an AP course in calculus will seek college credit, college placement, or both, from institutions of higher learning. The AP Program includes specifications for two calculus courses and the exam for each course. The two courses and the two corresponding exams are designated as Calculus AB and Calculus BC. Calculus AB can be offered as an AP course by any school that can organize a curriculum for students with advanced mathematical ability. === Topic outline === The material includes the study and application of differentiation and integration, and graphical analysis including limits, asymptotes, and continuity. An AP Calculus AB course is typically equivalent to one semester of college calculus.
Analysis of graphs (predicting and explaining behavior) Limits of functions (one and two sided) Asymptotic and unbounded behavior Continuity Derivatives Concept At a point As a function Applications Higher order derivatives Techniques Integrals Interpretations Properties Applications Techniques Numerical approximations Fundamental theorem of calculus Antidifferentiation L'Hôpital's rule Separable differential equations == AP Calculus BC == AP Calculus BC is equivalent to a full year regular college course, covering both Calculus I and II. After passing the exam, students may move on to Calculus III (Multivariable Calculus). === Purpose === According to the College Board, Calculus BC is a full-year course in the calculus of functions of a single variable. It includes all topics covered in Calculus AB plus additional topics... Students who take an AP Calculus course should do so with the intention of placing out of a comparable college calculus course. === Topic outline === AP Calculus BC includes all of the topics covered in AP Calculus AB, as well as the following: Convergence tests for series Taylor series Parametric equations Polar functions (including arc length in polar coordinates and calculating area) Arc length calculations using integration Integration by parts Improper integrals Differential equations for logistic growth Using partial fractions to integrate rational functions It can be seen from the tables that the pass rate (score of 3 or higher) of AP Calculus BC is higher than AP Calculus AB. It can also be noted that about 1/3 as many take the BC exam as take the AB exam. A possible explanation for the higher scores on BC is that students who take AP Calculus BC are more prepared and advanced in math. The 5-rate is consistently over 40% (much higher than almost all the other AP exams). 
==== AB sub-score distribution ==== == AP Exam == The College Board intentionally schedules the AP Calculus AB exam at the same time as the AP Calculus BC exam to make it impossible for a student to take both tests in the same academic year, though the College Board does not make Calculus AB a prerequisite class for Calculus BC. Some schools do impose that prerequisite, though many others only require precalculus as a prerequisite for Calculus BC. The AP awards given by College Board count both exams. However, they do not count the AB sub-score piece of the BC exam. === Format === The structures of the AB and BC exams are identical. Both exams are three hours and fifteen minutes long, comprising a total of 45 multiple choice questions and six free response questions. They are usually administered on a Monday or Tuesday morning in May. The two parts of the multiple choice section are timed and taken independently. Students are required to put away their calculators after 30 minutes have passed during the Free-Response section, and only at that point may begin Section II Part B. However, students may continue to work on Section II Part A during the entire Free-Response time, although without a calculator during the later two thirds. === Scoring === The multiple choice section is scored by computer, with each correct answer receiving 1 point; omitted and incorrect answers do not affect the raw score. This total is multiplied by 1.2 to calculate the adjusted multiple-choice score. The free response section is hand-graded by hundreds of AP teachers and professors each June. The raw score is then added to the adjusted multiple choice score to obtain a composite score. This total is compared to a composite-score scale for that year's exam and converted into an AP score of 1 to 5. For the Calculus BC exam, an AB sub-score is included in the score report to reflect the student's proficiency in the fundamental topics of introductory calculus.
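The composite-score arithmetic described under Scoring can be sketched as follows. The 1.2 multiplier comes from the text; the cut scores that map a composite to a 1–5 AP score vary by year and are not reproduced here:

```python
def composite_score(mc_correct, free_response_raw):
    """Composite = adjusted multiple-choice score + free-response raw score.

    The multiple-choice raw score (number of correct answers out of 45;
    omitted and incorrect answers do not affect it) is multiplied by 1.2.
    """
    adjusted_mc = 1.2 * mc_correct
    return adjusted_mc + free_response_raw

# Example: 30 multiple-choice correct and 40 free-response points
# give a composite of 1.2 * 30 + 40 = 76.
print(composite_score(30, 40))
```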
The AB sub-score is based on the number of correct answers on questions pertaining to AB material only. == See also == AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism AP Precalculus Glossary of calculus Mathematics education in the United States Stand and Deliver (1988 film) == References == == External links == AP Calculus AB College Board description of the AP Calculus AB course content College Board description of the AP Calculus AB examination AP Calculus BC College Board description of the AP Calculus BC course content College Board description of the AP Calculus BC examination == Further reading == Nahin, Paul (2014). Inside Interesting Integrals. Springer. ISBN 9781493912766.
Wikipedia/AP_Calculus
Advanced Placement (AP) Precalculus (also known as AP Precalc) is an Advanced Placement precalculus course and examination, offered by the College Board, in development since 2021 and announced in May 2022. The course debuted in the fall of 2023, with the first exam session taking place in May 2024. The course and examination are designed to teach and assess precalculus concepts, as a foundation for a wide variety of STEM fields and careers, and are not solely designed as preparation for future mathematics courses such as AP Calculus AB/BC. == Purpose == According to the College Board, Offering a college-level precalculus course in high school will give students a new and valuable option for improving math readiness and staying on track for college. AP Precalculus centers on functions modeling dynamic phenomena. This research-based exploration of functions is designed to better prepare students for college-level calculus and provide grounding for other mathematics and science courses. In this course, students study a broad spectrum of function types that are foundational for careers in mathematics, physics, biology, health science, social science, and data science. Furthermore, as AP Precalculus may be the last mathematics course of a student's secondary education, the course is structured to provide a coherent capstone experience and is not exclusively focused on preparation for future courses. == Topic outline == === Unit 1: Polynomial and Rational Functions (6–6.5 weeks) === === Unit 2: Exponential and Logarithmic Functions (6–6.5 weeks) === === Unit 3: Trigonometric and Polar Functions (7–7.5 weeks) === === Unit 4: Functions Involving Parameters, Vectors, and Matrices (7–7.5 weeks) === Note that Unit 4 will not be tested on the AP Precalculus exam. == Exam == The exam is composed of 2 sections, each with 2 different types of questions. Section I consists of 40 multiple choice questions. 
28 do not allow the use of a calculator, while the last 12 do allow a calculator. The non-calculator section is worth 43.75% of the exam score, while the calculator section is worth 18.75%. Section II of the Exam includes 4 free response questions, with 2 not allowing a calculator and 2 allowing use of a calculator. Section II is worth 37.5% of the exam score, with the non-calculator and calculator sections weighed equally. AP Precalculus exams will be scored on the standard 1–5 AP scale, with 5 signifying that the student is "extremely well qualified" for equivalent college credit and 1 signifying "no recommendation." The 2025 AP Precalculus exam is set to take place on Tuesday, May 13, 2025 at 8AM local time. === Score distribution === == See also == Mathematics education in the United States == References ==
Wikipedia/AP_Precalculus
Pre-algebra is a common name for a course taught in middle school mathematics in the United States, usually taught in the 6th, 7th, 8th, or 9th grade. Its main objective is to prepare students for the study of algebra. Usually, Algebra I is taught in the 8th or 9th grade. As an intermediate stage after arithmetic, pre-algebra helps students pass specific conceptual barriers. Students are introduced to the idea that an equals sign, rather than just signaling the answer to a question as in basic arithmetic, means that two sides are equivalent and can be manipulated together. They may also learn how numbers, variables, and words can be used in the same ways. == Subjects == Subjects taught in a pre-algebra course may include: Review of natural number arithmetic Types of numbers such as integers, fractions, decimals and negative numbers Ratios and percents Factorization of natural numbers Properties of operations such as associativity and distributivity Simple (integer) roots and powers Rules of evaluation of expressions, such as operator precedence and use of parentheses Basics of equations, including rules for invariant manipulation of equations Understanding of variable manipulation Manipulation and plotting in the standard 4-quadrant Cartesian coordinate plane Powers in scientific notation (example: 340,000,000 in scientific notation is 3.4 × 10⁸) Identifying Probability Solving Square roots Pythagorean Theorem Pre-algebra may include subjects from geometry, especially to further the understanding of algebra in applications to area and volume. Pre-algebra may also include subjects from statistics to identify probability and interpret data. Proficiency in pre-algebra is an indicator of college success. It can also be taught as a remedial course for college students. == See also == Precalculus Mathematics education in the United States Pre-algebra Tests == References == Szczepanski, Amy F.; Kositsky, Andrew P.
(2008), The Complete Idiot's Guide to Pre-algebra, Penguin, ISBN 9781592577729
Wikipedia/Pre-algebra
In mathematics, in the area of harmonic analysis, the fractional Fourier transform (FRFT) is a family of linear transformations generalizing the Fourier transform. It can be thought of as the Fourier transform to the n-th power, where n need not be an integer; thus, it can transform a function to any intermediate domain between time and frequency. Its applications range from filter design and signal analysis to phase retrieval and pattern recognition. The FRFT can be used to define fractional convolution, correlation, and other operations, and can also be further generalized into the linear canonical transformation (LCT). An early definition of the FRFT was introduced by Condon, by solving for the Green's function for phase-space rotations, and also by Namias, generalizing work of Wiener on Hermite polynomials. However, it was not widely recognized in signal processing until it was independently reintroduced around 1993 by several groups. Since then, there has been a surge of interest in extending Shannon's sampling theorem for signals which are band-limited in the fractional Fourier domain. A completely different meaning for "fractional Fourier transform" was introduced by Bailey and Swartztrauber as essentially another name for a z-transform, and in particular for the case that corresponds to a discrete Fourier transform shifted by a fractional amount in frequency space (multiplying the input by a linear chirp) and evaluating at a fractional set of frequency points (e.g. considering only a small portion of the spectrum). (Such transforms can be evaluated efficiently by Bluestein's FFT algorithm.) However, this terminology has fallen out of use in most of the technical literature, in preference to the FRFT. The remainder of this article describes the FRFT.
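The integer powers of the Fourier transform behave simply: the unitary transform is 4-periodic, a fact developed in the Introduction that follows. This can be illustrated numerically with the unitary discrete Fourier transform as a finite stand-in for the continuous operator (a NumPy sketch; the discretization is our assumption):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)

F1 = np.fft.fft(x, norm="ortho")   # F[x], unitary normalization
F2 = np.fft.fft(F1, norm="ortho")  # F^2[x]: the parity operator, x[(-n) mod N]
F4 = np.fft.fft(np.fft.fft(F2, norm="ortho"), norm="ortho")  # F^4[x]

assert np.allclose(F2, x[(-np.arange(64)) % 64])  # F^2 = P (index reversal)
assert np.allclose(F4, x)                         # F^4 = Id
```

The FRFT interpolates between these integer powers, assigning a meaningful operator to every angle α rather than only to multiples of π/2.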
== Introduction == The continuous Fourier transform F {\displaystyle {\mathcal {F}}} of a function f : R ↦ C {\displaystyle f:\mathbb {R} \mapsto \mathbb {C} } is a unitary operator on the space L 2 {\displaystyle L^{2}} that maps the function f {\displaystyle f} to its frequential version f ^ {\displaystyle {\hat {f}}} (all expressions are taken in the L 2 {\displaystyle L^{2}} sense, rather than pointwise): f ^ ( ξ ) = ∫ − ∞ ∞ f ( x ) e − 2 π i x ξ d x {\displaystyle {\hat {f}}(\xi )=\int _{-\infty }^{\infty }f(x)\ e^{-2\pi ix\xi }\,\mathrm {d} x} and f {\displaystyle f} is determined by f ^ {\displaystyle {\hat {f}}} via the inverse transform F − 1 , {\displaystyle {\mathcal {F}}^{-1}\,,} f ( x ) = ∫ − ∞ ∞ f ^ ( ξ ) e 2 π i ξ x d ξ . {\displaystyle f(x)=\int _{-\infty }^{\infty }{\hat {f}}(\xi )\ e^{2\pi i\xi x}\,\mathrm {d} \xi \,.} Let us study its n-th iterate F n {\displaystyle {\mathcal {F}}^{n}} , defined by F n [ f ] = F [ F n − 1 [ f ] ] {\displaystyle {\mathcal {F}}^{n}[f]={\mathcal {F}}[{\mathcal {F}}^{n-1}[f]]} and F − n = ( F − 1 ) n {\displaystyle {\mathcal {F}}^{-n}=({\mathcal {F}}^{-1})^{n}} when n is a non-negative integer, and F 0 [ f ] = f {\displaystyle {\mathcal {F}}^{0}[f]=f} . The iterates yield only finitely many distinct operators, since F {\displaystyle {\mathcal {F}}} is a 4-periodic automorphism: for every function f {\displaystyle f} , F 4 [ f ] = f {\displaystyle {\mathcal {F}}^{4}[f]=f} . More precisely, let us introduce the parity operator P {\displaystyle {\mathcal {P}}} that inverts x {\displaystyle x} , P [ f ] : x ↦ f ( − x ) {\displaystyle {\mathcal {P}}[f]\colon x\mapsto f(-x)} . Then the following properties hold: F 0 = I d , F 1 = F , F 2 = P , F 4 = I d {\displaystyle {\mathcal {F}}^{0}=\mathrm {Id} ,\qquad {\mathcal {F}}^{1}={\mathcal {F}},\qquad {\mathcal {F}}^{2}={\mathcal {P}},\qquad {\mathcal {F}}^{4}=\mathrm {Id} } F 3 = F − 1 = P ∘ F = F ∘ P .
{\displaystyle {\mathcal {F}}^{3}={\mathcal {F}}^{-1}={\mathcal {P}}\circ {\mathcal {F}}={\mathcal {F}}\circ {\mathcal {P}}.} The FRFT provides a family of linear transforms that further extends this definition to handle non-integer powers n = 2 α / π {\displaystyle n=2\alpha /\pi } of the FT. == Definition == Note: some authors write the transform in terms of the "order a" instead of the "angle α", in which case the α is usually a times π/2. Although these two forms are equivalent, one must be careful about which definition the author uses. For any real α, the α-angle fractional Fourier transform of a function ƒ is denoted by F α ( u ) {\displaystyle {\mathcal {F}}_{\alpha }(u)} and defined, for α not a multiple of π, by F α [ f ] ( u ) = 1 − i cot ⁡ ( α ) e i π cot ⁡ ( α ) u 2 ∫ − ∞ ∞ e − 2 π i ( csc ⁡ ( α ) u x − cot ⁡ ( α ) 2 x 2 ) f ( x ) d x {\displaystyle {\mathcal {F}}_{\alpha }[f](u)={\sqrt {1-i\cot(\alpha )}}\,e^{i\pi \cot(\alpha )u^{2}}\int _{-\infty }^{\infty }e^{-2\pi i\left(\csc(\alpha )ux-{\frac {\cot(\alpha )}{2}}x^{2}\right)}f(x)\,\mathrm {d} x} (at multiples of π the kernel degenerates, and the transform is defined by continuity, reducing to the identity and parity operators). For α = π/2, this becomes precisely the definition of the continuous Fourier transform, and for α = −π/2 it is the definition of the inverse continuous Fourier transform. The FRFT argument u is neither a spatial one x nor a frequency ξ. We will see why it can be interpreted as a linear combination of both coordinates (x,ξ). When we want to distinguish the α-angular fractional domain, we will let x a {\displaystyle x_{a}} denote the argument of F α {\displaystyle {\mathcal {F}}_{\alpha }} . Remark: with the angular frequency ω convention instead of the frequency one, the FRFT formula is the Mehler kernel, F α ( f ) ( ω ) = 1 − i cot ⁡ ( α ) 2 π e i cot ⁡ ( α ) ω 2 / 2 ∫ − ∞ ∞ e − i csc ⁡ ( α ) ω t + i cot ⁡ ( α ) t 2 / 2 f ( t ) d t . {\displaystyle {\mathcal {F}}_{\alpha }(f)(\omega )={\sqrt {\frac {1-i\cot(\alpha )}{2\pi }}}e^{i\cot(\alpha )\omega ^{2}/2}\int _{-\infty }^{\infty }e^{-i\csc(\alpha )\omega t+i\cot(\alpha )t^{2}/2}f(t)\,dt~.} === Properties === The α-th order fractional Fourier transform operator, F α {\displaystyle {\mathcal {F}}_{\alpha }} , has the properties: ==== Additivity ==== For any real angles α, β, F α + β = F α ∘ F β = F β ∘ F α .
{\displaystyle {\mathcal {F}}_{\alpha +\beta }={\mathcal {F}}_{\alpha }\circ {\mathcal {F}}_{\beta }={\mathcal {F}}_{\beta }\circ {\mathcal {F}}_{\alpha }.} ==== Linearity ==== F α [ ∑ k b k f k ( u ) ] = ∑ k b k F α [ f k ( u ) ] {\displaystyle {\mathcal {F}}_{\alpha }\left[\sum \nolimits _{k}b_{k}f_{k}(u)\right]=\sum \nolimits _{k}b_{k}{\mathcal {F}}_{\alpha }\left[f_{k}(u)\right]} ==== Integer Orders ==== If α is an integer multiple of π / 2 {\displaystyle \pi /2} , then: F α = F k π / 2 = F k = ( F ) k {\displaystyle {\mathcal {F}}_{\alpha }={\mathcal {F}}_{k\pi /2}={\mathcal {F}}^{k}=({\mathcal {F}})^{k}} Moreover, the following relations hold: F 2 = P P [ f ( u ) ] = f ( − u ) F 3 = F − 1 = ( F ) − 1 F 4 = F 0 = I F i = F j i ≡ j mod 4 {\displaystyle {\begin{aligned}{\mathcal {F}}^{2}&={\mathcal {P}}&&{\mathcal {P}}[f(u)]=f(-u)\\{\mathcal {F}}^{3}&={\mathcal {F}}^{-1}=({\mathcal {F}})^{-1}\\{\mathcal {F}}^{4}&={\mathcal {F}}^{0}={\mathcal {I}}\\{\mathcal {F}}^{i}&={\mathcal {F}}^{j}&&i\equiv j\mod 4\end{aligned}}} ==== Inverse ==== ( F α ) − 1 = F − α {\displaystyle ({\mathcal {F}}_{\alpha })^{-1}={\mathcal {F}}_{-\alpha }} ==== Commutativity ==== F α 1 F α 2 = F α 2 F α 1 {\displaystyle {\mathcal {F}}_{\alpha _{1}}{\mathcal {F}}_{\alpha _{2}}={\mathcal {F}}_{\alpha _{2}}{\mathcal {F}}_{\alpha _{1}}} ==== Associativity ==== ( F α 1 F α 2 ) F α 3 = F α 1 ( F α 2 F α 3 ) {\displaystyle \left({\mathcal {F}}_{\alpha _{1}}{\mathcal {F}}_{\alpha _{2}}\right){\mathcal {F}}_{\alpha _{3}}={\mathcal {F}}_{\alpha _{1}}\left({\mathcal {F}}_{\alpha _{2}}{\mathcal {F}}_{\alpha _{3}}\right)} ==== Unitarity ==== ∫ f ( t ) g ∗ ( t ) d t = ∫ f α ( u ) g α ∗ ( u ) d u {\displaystyle \int f(t)g^{*}(t)dt=\int f_{\alpha }(u)g_{\alpha }^{*}(u)du} ==== Time Reversal ==== F α P = P F α {\displaystyle {\mathcal {F}}_{\alpha }{\mathcal {P}}={\mathcal {P}}{\mathcal {F}}_{\alpha }} F α [ f ( − u ) ] = f α ( − u ) {\displaystyle {\mathcal {F}}_{\alpha }[f(-u)]=f_{\alpha }(-u)} ==== Transform 
of a shifted function ==== Define the shift and the phase shift operators as follows: S H ( u 0 ) [ f ( u ) ] = f ( u + u 0 ) P H ( v 0 ) [ f ( u ) ] = e j 2 π v 0 u f ( u ) {\displaystyle {\begin{aligned}{\mathcal {SH}}(u_{0})[f(u)]&=f(u+u_{0})\\{\mathcal {PH}}(v_{0})[f(u)]&=e^{j2\pi v_{0}u}f(u)\end{aligned}}} Then F α S H ( u 0 ) = e j π u 0 2 sin ⁡ α cos ⁡ α P H ( u 0 sin ⁡ α ) S H ( u 0 cos ⁡ α ) F α , {\displaystyle {\begin{aligned}{\mathcal {F}}_{\alpha }{\mathcal {SH}}(u_{0})&=e^{j\pi u_{0}^{2}\sin \alpha \cos \alpha }{\mathcal {PH}}(u_{0}\sin \alpha ){\mathcal {SH}}(u_{0}\cos \alpha ){\mathcal {F}}_{\alpha },\end{aligned}}} that is, F α [ f ( u + u 0 ) ] = e j π u 0 2 sin ⁡ α cos ⁡ α e j 2 π u u 0 sin ⁡ α f α ( u + u 0 cos ⁡ α ) {\displaystyle {\begin{aligned}{\mathcal {F}}_{\alpha }[f(u+u_{0})]&=e^{j\pi u_{0}^{2}\sin \alpha \cos \alpha }e^{j2\pi uu_{0}\sin \alpha }f_{\alpha }(u+u_{0}\cos \alpha )\end{aligned}}} ==== Transform of a scaled function ==== Define the scaling and chirp multiplication operators as follows: M ( M ) [ f ( u ) ] = | M | − 1 2 f ( u M ) Q ( q ) [ f ( u ) ] = e − j π q u 2 f ( u ) {\displaystyle {\begin{aligned}M(M)[f(u)]&=|M|^{-{\frac {1}{2}}}f\left({\tfrac {u}{M}}\right)\\Q(q)[f(u)]&=e^{-j\pi qu^{2}}f(u)\end{aligned}}} Then, F α M ( M ) = Q ( − cot ⁡ ( 1 − cos 2 ⁡ α ′ cos 2 ⁡ α α ) ) × M ( sin ⁡ α M sin ⁡ α ′ ) F α ′ F α [ | M | − 1 2 f ( u M ) ] = 1 − j cot ⁡ α 1 − j M 2 cot ⁡ α e j π u 2 cot ⁡ ( 1 − cos 2 ⁡ α ′ cos 2 ⁡ α α ) × f a ( M u sin ⁡ α ′ sin ⁡ α ) {\displaystyle {\begin{aligned}{\mathcal {F}}_{\alpha }M(M)&=Q\left(-\cot \left({\frac {1-\cos ^{2}\alpha '}{\cos ^{2}\alpha }}\alpha \right)\right)\times M\left({\frac {\sin \alpha }{M\sin \alpha '}}\right){\mathcal {F}}_{\alpha '}\\[6pt]{\mathcal {F}}_{\alpha }\left[|M|^{-{\frac {1}{2}}}f\left({\tfrac {u}{M}}\right)\right]&={\sqrt {\frac {1-j\cot \alpha }{1-jM^{2}\cot \alpha }}}e^{j\pi u^{2}\cot \left({\frac {1-\cos ^{2}\alpha '}{\cos ^{2}\alpha }}\alpha \right)}\times 
f_{a}\left({\frac {Mu\sin \alpha '}{\sin \alpha }}\right)\end{aligned}}} Notice that the fractional Fourier transform of f ( u / M ) {\displaystyle f(u/M)} cannot be expressed as a scaled version of f α ( u ) {\displaystyle f_{\alpha }(u)} . Rather, the fractional Fourier transform of f ( u / M ) {\displaystyle f(u/M)} turns out to be a scaled and chirp-modulated version of f α ′ ( u ) {\displaystyle f_{\alpha '}(u)} where α ≠ α ′ {\displaystyle \alpha \neq \alpha '} is a different order. === Fractional kernel === The FRFT is an integral transform F α f ( u ) = ∫ K α ( u , x ) f ( x ) d x {\displaystyle {\mathcal {F}}_{\alpha }f(u)=\int K_{\alpha }(u,x)f(x)\,\mathrm {d} x} where the α-angle kernel is K α ( u , x ) = { 1 − i cot ⁡ ( α ) exp ⁡ ( i π ( cot ⁡ ( α ) ( x 2 + u 2 ) − 2 csc ⁡ ( α ) u x ) ) if α is not a multiple of π , δ ( u − x ) if α is a multiple of 2 π , δ ( u + x ) if α + π is a multiple of 2 π , {\displaystyle K_{\alpha }(u,x)={\begin{cases}{\sqrt {1-i\cot(\alpha )}}\exp \left(i\pi (\cot(\alpha )(x^{2}+u^{2})-2\csc(\alpha )ux)\right)&{\mbox{if }}\alpha {\mbox{ is not a multiple of }}\pi ,\\\delta (u-x)&{\mbox{if }}\alpha {\mbox{ is a multiple of }}2\pi ,\\\delta (u+x)&{\mbox{if }}\alpha +\pi {\mbox{ is a multiple of }}2\pi ,\\\end{cases}}} Here again the special cases are consistent with the limit behavior when α approaches a multiple of π. The FRFT has the same properties as its kernel: symmetry: K α ( u , u ′ ) = K α ( u ′ , u ) {\displaystyle K_{\alpha }~(u,u')=K_{\alpha }~(u',u)} inverse: K α − 1 ( u , u ′ ) = K α ∗ ( u , u ′ ) = K − α ( u ′ , u ) {\displaystyle K_{\alpha }^{-1}(u,u')=K_{\alpha }^{*}(u,u')=K_{-\alpha }(u',u)} additivity: K α + β ( u , u ′ ) = ∫ K α ( u , u ″ ) K β ( u ″ , u ′ ) d u ″ . {\displaystyle K_{\alpha +\beta }(u,u')=\int K_{\alpha }(u,u'')K_{\beta }(u'',u')\,\mathrm {d} u''.} === Related transforms === There also exist related fractional generalizations of similar transforms such as the discrete Fourier transform. 
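As a numerical aside, the kernel definition above lends itself to a direct check by quadrature. The sketch below (the grid spacing, extent, and the test angle α = π/4 are ad hoc choices) uses the fact that the Gaussian e^(−πx²), the zeroth Hermite–Gauss mode, is an eigenfunction of the FRFT with eigenvalue 1, so the transform should reproduce it unchanged:

```python
import numpy as np

def frft_kernel(alpha, u, x):
    """The alpha-angle fractional kernel (alpha not a multiple of pi)."""
    cot, csc = 1.0 / np.tan(alpha), 1.0 / np.sin(alpha)
    return np.sqrt(1 - 1j * cot) * np.exp(
        1j * np.pi * (cot * (x**2 + u**2) - 2 * csc * u * x))

dx = 0.01
x = np.arange(-6.0, 6.0, dx)          # quadrature grid
f = np.exp(-np.pi * x**2)             # zeroth Hermite-Gauss mode
K = frft_kernel(np.pi / 4, x[:, None], x[None, :])
f_alpha = K @ f * dx                  # Riemann-sum approximation of the integral
err = np.max(np.abs(f_alpha - f))
print(err < 1e-6)                     # True: the Gaussian is left invariant
```

The same matrix machinery can be used to spot-check the additivity property of the kernel by composing two such matrices for angles α and β.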
The discrete fractional Fourier transform was defined by Zeev Zalevsky. A quantum algorithm to implement a version of the discrete fractional Fourier transform in sub-polynomial time is described by Somma. The fractional wavelet transform (FRWT) is a generalization of the classical wavelet transform in the fractional Fourier transform domains. See the chirplet transform for a related generalization of the wavelet transform. === Generalizations === The Fourier transform is essentially bosonic; it works because it is consistent with the superposition principle and related interference patterns. There is also a fermionic Fourier transform. These have been generalized into a supersymmetric FRFT, and a supersymmetric Radon transform. There is also a fractional Radon transform, a symplectic FRFT, and a symplectic wavelet transform. Because quantum circuits are based on unitary operations, they are useful for computing integral transforms, as the latter are unitary operators on a function space. A quantum circuit has been designed which implements the FRFT. == Interpretation == === Time-frequency analysis === The usual interpretation of the Fourier transform is as a transformation of a time domain signal into a frequency domain signal. Conversely, the interpretation of the inverse Fourier transform is as a transformation of a frequency domain signal into a time domain signal. Fractional Fourier transforms map a signal (either in the time domain or frequency domain) into the domain between time and frequency: the transform is a rotation in the time–frequency domain. This perspective is generalized by the linear canonical transformation, which generalizes the fractional Fourier transform and allows linear transforms of the time–frequency domain other than rotation. For example, if a signal is rectangular in the time domain, it becomes a sinc function in the frequency domain. 
But if one applies the fractional Fourier transform to the rectangular signal, the transformation output will be in the domain between time and frequency. The fractional Fourier transform is a rotation operation on a time–frequency distribution. From the definition above, for α = 0 there is no change after applying the fractional Fourier transform, while for α = π/2 the fractional Fourier transform becomes a plain Fourier transform, which rotates the time–frequency distribution by π/2. For other values of α, the fractional Fourier transform rotates the time–frequency distribution according to α. === Fresnel and Fraunhofer diffraction === The diffraction of light can be calculated using integral transforms. The Fresnel diffraction integral is used to find the near field diffraction pattern. In the far-field limit this equation becomes a Fourier transform to give the equation for Fraunhofer diffraction. The fractional Fourier transform is equivalent to the Fresnel diffraction equation. When the angle α {\displaystyle \alpha } becomes π / 2 {\displaystyle \pi /2} , the fractional Fourier transform is the standard Fourier transform and gives the far-field diffraction pattern. The near-field diffraction maps to values of α {\displaystyle \alpha } between 0 and π / 2 {\displaystyle \pi /2} . == Application == The fractional Fourier transform can be used in time–frequency analysis and digital signal processing (DSP). It is useful for filtering noise, provided that the noise does not overlap with the desired signal in the time–frequency domain. Consider the following example. We cannot apply a filter directly to eliminate the noise, but with the help of the fractional Fourier transform we can first rotate the signal (including the desired signal and the noise). We then apply a specific filter, which will allow only the desired signal to pass. Thus the noise will be removed completely. 
Then we apply the fractional Fourier transform again to rotate the signal back and recover the desired signal. Thus, using just truncation in the time domain, or equivalently low-pass filters in the frequency domain, one can cut out any convex set in time–frequency space. In contrast, using time domain or frequency domain tools without a fractional Fourier transform would only allow cutting out rectangles parallel to the axes. Fractional Fourier transforms also have applications in quantum physics. For example, they are used to formulate entropic uncertainty relations, in high-dimensional quantum key distribution schemes with single photons, and in observing spatial entanglement of photon pairs. They are also useful in the design of optical systems and for optimizing holographic storage efficiency. == See also == Least-squares spectral analysis Fractional calculus Mehler kernel Other time–frequency transforms: Linear canonical transformation Short-time Fourier transform Wavelet transform Chirplet transform Cone-shape distribution function Quadratic Fourier transform Chirp Z-transform == References == == External links == DiscreteTFDs – software for computing the fractional Fourier transform and time–frequency distributions "Fractional Fourier Transform" by Enrique Zeleny, The Wolfram Demonstrations Project Dr YangQuan Chen's FRFT (Fractional Fourier Transform) Webpages LTFAT – a free (GPL) Matlab/Octave toolbox containing several versions of the fractional Fourier transform
Wikipedia/Fractional_Fourier_transform
In statistics, autoregressive fractionally integrated moving average models are time series models that generalize ARIMA (autoregressive integrated moving average) models by allowing non-integer values of the differencing parameter. These models are useful in modeling time series with long memory, that is, in which deviations from the long-run mean decay more slowly than exponentially. The acronyms "ARFIMA" or "FARIMA" are often used, although it is also conventional to simply extend the "ARIMA(p, d, q)" notation by allowing the order of differencing, d, to take fractional values. Fractional differencing and the ARFIMA model were introduced in the early 1980s by Clive Granger, Roselyne Joyeux, and Jonathan Hosking. == Basics == In an ARIMA model, the integrated part of the model includes the differencing operator (1 − B) (where B is the backshift operator) raised to an integer power. For example, ( 1 − B ) 2 = 1 − 2 B + B 2 , {\displaystyle (1-B)^{2}=1-2B+B^{2}\,,} where B 2 X t = X t − 2 , {\displaystyle B^{2}X_{t}=X_{t-2}\,,} so that ( 1 − B ) 2 X t = X t − 2 X t − 1 + X t − 2 . {\displaystyle (1-B)^{2}X_{t}=X_{t}-2X_{t-1}+X_{t-2}.} In a fractional model, the power is allowed to be fractional, with the meaning of the term identified using the following formal binomial series expansion ( 1 − B ) d = ∑ k = 0 ∞ ( d k ) ( − B ) k = ∑ k = 0 ∞ ∏ a = 0 k − 1 ( d − a ) ( − B ) k k ! = 1 − d B + d ( d − 1 ) 2 ! B 2 − ⋯ . {\displaystyle {\begin{aligned}(1-B)^{d}&=\sum _{k=0}^{\infty }\;{d \choose k}\;(-B)^{k}\\&=\sum _{k=0}^{\infty }\;{\frac {\prod _{a=0}^{k-1}(d-a)\ (-B)^{k}}{k!}}\\&=1-dB+{\frac {d(d-1)}{2!}}B^{2}-\cdots \,.\end{aligned}}} == ARFIMA(0, d, 0) == The simplest autoregressive fractionally integrated model, ARFIMA(0, d, 0), is, in standard notation, ( 1 − B ) d X t = ε t , {\displaystyle (1-B)^{d}X_{t}=\varepsilon _{t},} where this has the interpretation X t − d X t − 1 + d ( d − 1 ) 2 ! X t − 2 − ⋯ = ε t . 
{\displaystyle X_{t}-dX_{t-1}+{\frac {d(d-1)}{2!}}X_{t-2}-\cdots =\varepsilon _{t}.} ARFIMA(0, d, 0) is similar to fractional Gaussian noise (fGn): with d = H−1⁄2, their covariances have the same power-law decay. The advantage of fGn over ARFIMA(0,d,0) is that many asymptotic relations hold for finite samples. The advantage of ARFIMA(0,d,0) over fGn is that it has an especially simple spectral density, f ( λ ) = 1 2 π ( 2 sin ⁡ ( λ 2 ) ) − 2 d {\displaystyle f(\lambda )={\frac {1}{2\pi }}\left(2\sin \left({\frac {\lambda }{2}}\right)\right)^{-2d}} , and it is a particular case of ARFIMA(p, d, q), which is a versatile family of models. == General form: ARFIMA(p, d, q) == An ARFIMA model shares the same form of representation as the ARIMA(p, d, q) process, specifically: ( 1 − ∑ i = 1 p ϕ i B i ) ( 1 − B ) d X t = ( 1 + ∑ i = 1 q θ i B i ) ε t . {\displaystyle \left(1-\sum _{i=1}^{p}\phi _{i}B^{i}\right)\left(1-B\right)^{d}X_{t}=\left(1+\sum _{i=1}^{q}\theta _{i}B^{i}\right)\varepsilon _{t}\,.} In contrast to the ordinary ARIMA process, the "difference parameter", d, is allowed to take non-integer values. == Enhancement to ordinary ARMA models == The enhancement to ordinary ARMA models is as follows: Take the original data series and high-pass filter it with fractional differencing enough to make the result stationary, remembering the order d of this fractional difference; d is usually between 0 and 1, possibly up to 2 or more in extreme cases (a fractional difference of order 2 is the second difference). Note: applying fractional differencing changes the units of the problem. If we start with prices and take fractional differences, we are no longer in price units. Determining the order of differencing to make a time series stationary may be an iterative, exploratory process. Fit plain ARMA terms via the usual methods to this stationary temporary data set, which is in ersatz units. 
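The fractional differencing filter in the first step (and the fractional integration that later undoes it) can be sketched via the binomial-series weights of (1 − B)^d given in the Basics section. This is a minimal illustration, not production code; the toy series and the order d = 0.4 are arbitrary choices:

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n coefficients of (1 - B)^d: w_0 = 1, w_k = w_{k-1} (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply (1 - B)^d causally, truncating the filter at the start of the sample."""
    w = frac_diff_weights(d, len(x))
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=200))   # a nonstationary toy series
d = 0.4
y = frac_diff(prices, d)                   # the series in "ersatz units"
back = frac_diff(y, -d)                    # fractional integration with -d
print(np.allclose(back, prices))           # True: exact inversion in-sample
```

Within the sample the two truncated filters invert each other exactly, because the product of the (1 − B)^d and (1 − B)^(−d) weight series is 1 up to terms beyond the sample length.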
Forecast either to existing data (static forecast) or "ahead" (dynamic forecast, forward in time) with these ARMA terms. Apply the reverse filter operation (fractional integration to the same level d as in step 1) to the forecasted series, to return the forecast to the original problem units (e.g. turn the ersatz units back into prices). Fractional differencing and fractional integration are the same operation with opposite values of d: e.g. the fractional difference of a time series with d = 0.5 can be inverted (integrated) by applying the same fractional differencing operation again but with d = −0.5. See the GRETL fracdiff function. The point of the pre-filtering is to reduce the low frequencies in the data set that can cause non-stationarities, which ARMA models cannot handle well (or at all), but only enough so that the reductions can be recovered after the model is built. Fractional differencing and the inverse operation fractional integration (both directions being used in the ARFIMA modeling and forecasting process) can be thought of as digital filtering and "unfiltering" operations. As such, it is useful to study the frequency response of such filters to know which frequencies are kept and which are attenuated or discarded. Note that any filtering that would substitute for fractional differencing and integration in this AR(FI)MA model should be similarly invertible as differencing and integration (summing) to avoid information loss. E.g. 
a high-pass filter which completely discards many low frequencies (unlike the fractional differencing high-pass filter, which completely discards only frequency 0 [constant behavior in the input signal] and merely attenuates other low frequencies) may not work so well, because after fitting ARMA terms to the filtered series, the reverse operation to return the ARMA forecast to its original units would not be able to re-boost those attenuated low frequencies, since the low frequencies were cut to zero. Such frequency response studies may suggest other similar families of (reversible) filters that might be useful replacements for the "FI" part of the ARFIMA modeling flow, such as the well-known, easy to implement, and minimal-distortion high-pass Butterworth filter or similar. == See also == Fractional calculus — fractional differentiation Differintegral — fractional integration and differentiation Fractional Brownian motion — a continuous-time stochastic process with a similar basis Long-range dependency == References ==
Wikipedia/Autoregressive_fractionally_integrated_moving_average
In mathematics, the beta function, also called the Euler integral of the first kind, is a special function that is closely related to the gamma function and to binomial coefficients. It is defined by the integral B ( z 1 , z 2 ) = ∫ 0 1 t z 1 − 1 ( 1 − t ) z 2 − 1 d t {\displaystyle \mathrm {B} (z_{1},z_{2})=\int _{0}^{1}t^{z_{1}-1}(1-t)^{z_{2}-1}\,dt} for complex number inputs z 1 , z 2 {\displaystyle z_{1},z_{2}} such that Re ⁡ ( z 1 ) , Re ⁡ ( z 2 ) > 0 {\displaystyle \operatorname {Re} (z_{1}),\operatorname {Re} (z_{2})>0} . The beta function was studied by Leonhard Euler and Adrien-Marie Legendre and was given its name by Jacques Binet; its symbol Β is a Greek capital beta. == Properties == The beta function is symmetric, meaning that B ( z 1 , z 2 ) = B ( z 2 , z 1 ) {\displaystyle \mathrm {B} (z_{1},z_{2})=\mathrm {B} (z_{2},z_{1})} for all inputs z 1 {\displaystyle z_{1}} and z 2 {\displaystyle z_{2}} . A key property of the beta function is its close relationship to the gamma function: B ( z 1 , z 2 ) = Γ ( z 1 ) Γ ( z 2 ) Γ ( z 1 + z 2 ) {\displaystyle \mathrm {B} (z_{1},z_{2})={\frac {\Gamma (z_{1})\,\Gamma (z_{2})}{\Gamma (z_{1}+z_{2})}}} A proof is given below in § Relationship to the gamma function. The beta function is also closely related to binomial coefficients. When m (or n, by symmetry) is a positive integer, it follows from the definition of the gamma function Γ that B ( m , n ) = ( m − 1 ) ! ( n − 1 ) ! ( m + n − 1 ) ! = m + n m n / ( m + n m ) {\displaystyle \mathrm {B} (m,n)={\frac {(m-1)!\,(n-1)!}{(m+n-1)!}}={\frac {m+n}{mn}}{\Bigg /}{\binom {m+n}{m}}} == Relationship to the gamma function == To derive this relation, write the product of two factorials as integrals. Since they are integrals in two separate variables, we can combine them into an iterated integral: Γ ( z 1 ) Γ ( z 2 ) = ∫ u = 0 ∞ e − u u z 1 − 1 d u ⋅ ∫ v = 0 ∞ e − v v z 2 − 1 d v = ∫ v = 0 ∞ ∫ u = 0 ∞ e − u − v u z 1 − 1 v z 2 − 1 d u d v . 
{\displaystyle {\begin{aligned}\Gamma (z_{1})\Gamma (z_{2})&=\int _{u=0}^{\infty }\ e^{-u}u^{z_{1}-1}\,du\cdot \int _{v=0}^{\infty }\ e^{-v}v^{z_{2}-1}\,dv\\[6pt]&=\int _{v=0}^{\infty }\int _{u=0}^{\infty }\ e^{-u-v}u^{z_{1}-1}v^{z_{2}-1}\,du\,dv.\end{aligned}}} Changing variables by u = st and v = s(1 − t) (so that u + v = s and u / (u + v) = t), the limits of integration for s become 0 to ∞ and the limits of integration for t become 0 to 1. This produces Γ ( z 1 ) Γ ( z 2 ) = ∫ s = 0 ∞ ∫ t = 0 1 e − s ( s t ) z 1 − 1 ( s ( 1 − t ) ) z 2 − 1 s d t d s = ∫ s = 0 ∞ e − s s z 1 + z 2 − 1 d s ⋅ ∫ t = 0 1 t z 1 − 1 ( 1 − t ) z 2 − 1 d t = Γ ( z 1 + z 2 ) ⋅ B ( z 1 , z 2 ) . {\displaystyle {\begin{aligned}\Gamma (z_{1})\Gamma (z_{2})&=\int _{s=0}^{\infty }\int _{t=0}^{1}e^{-s}(st)^{z_{1}-1}(s(1-t))^{z_{2}-1}s\,dt\,ds\\[6pt]&=\int _{s=0}^{\infty }e^{-s}s^{z_{1}+z_{2}-1}\,ds\cdot \int _{t=0}^{1}t^{z_{1}-1}(1-t)^{z_{2}-1}\,dt\\&=\Gamma (z_{1}+z_{2})\cdot \mathrm {B} (z_{1},z_{2}).\end{aligned}}} Dividing both sides by Γ ( z 1 + z 2 ) {\displaystyle \Gamma (z_{1}+z_{2})} gives the desired result. The stated identity may be seen as a particular case of the identity for the integral of a convolution. Taking f ( u ) := e − u u z 1 − 1 1 R + g ( u ) := e − u u z 2 − 1 1 R + , {\displaystyle {\begin{aligned}f(u)&:=e^{-u}u^{z_{1}-1}1_{\mathbb {R} _{+}}\\g(u)&:=e^{-u}u^{z_{2}-1}1_{\mathbb {R} _{+}},\end{aligned}}} one has: Γ ( z 1 ) Γ ( z 2 ) = ∫ R f ( u ) d u ⋅ ∫ R g ( u ) d u = ∫ R ( f ∗ g ) ( u ) d u = B ( z 1 , z 2 ) Γ ( z 1 + z 2 ) . {\displaystyle \Gamma (z_{1})\Gamma (z_{2})=\int _{\mathbb {R} }f(u)\,du\cdot \int _{\mathbb {R} }g(u)\,du=\int _{\mathbb {R} }(f*g)(u)\,du=\mathrm {B} (z_{1},z_{2})\,\Gamma (z_{1}+z_{2}).} See The Gamma Function, page 18–19 for a derivation of this relation. 
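For positive integer arguments the relation just derived reduces to the factorial formula quoted in the introduction; a quick sanity check with the Python standard library (the ranges are arbitrary):

```python
import math

def beta(a, b):
    """B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

# For integers m, n >= 1 this must equal (m-1)! (n-1)! / (m+n-1)!
ok = all(math.isclose(beta(m, n),
                      math.factorial(m - 1) * math.factorial(n - 1)
                      / math.factorial(m + n - 1))
         for m in range(1, 10) for n in range(1, 10))
print(ok)   # True
```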
== Differentiation of the beta function == We have ∂ ∂ z 1 B ( z 1 , z 2 ) = B ( z 1 , z 2 ) ( Γ ′ ( z 1 ) Γ ( z 1 ) − Γ ′ ( z 1 + z 2 ) Γ ( z 1 + z 2 ) ) = B ( z 1 , z 2 ) ( ψ ( z 1 ) − ψ ( z 1 + z 2 ) ) , {\displaystyle {\frac {\partial }{\partial z_{1}}}\mathrm {B} (z_{1},z_{2})=\mathrm {B} (z_{1},z_{2})\left({\frac {\Gamma '(z_{1})}{\Gamma (z_{1})}}-{\frac {\Gamma '(z_{1}+z_{2})}{\Gamma (z_{1}+z_{2})}}\right)=\mathrm {B} (z_{1},z_{2}){\big (}\psi (z_{1})-\psi (z_{1}+z_{2}){\big )},} ∂ ∂ z m B ( z 1 , z 2 , … , z n ) = B ( z 1 , z 2 , … , z n ) ( ψ ( z m ) − ψ ( ∑ k = 1 n z k ) ) , 1 ≤ m ≤ n , {\displaystyle {\frac {\partial }{\partial z_{m}}}\mathrm {B} (z_{1},z_{2},\dots ,z_{n})=\mathrm {B} (z_{1},z_{2},\dots ,z_{n})\left(\psi (z_{m})-\psi \left(\sum _{k=1}^{n}z_{k}\right)\right),\quad 1\leq m\leq n,} where ψ ( z ) {\displaystyle \psi (z)} denotes the digamma function. == Approximation == Stirling's approximation gives the asymptotic formula B ( x , y ) ∼ 2 π x x − 1 / 2 y y − 1 / 2 ( x + y ) x + y − 1 / 2 {\displaystyle \mathrm {B} (x,y)\sim {\sqrt {2\pi }}{\frac {x^{x-1/2}y^{y-1/2}}{({x+y})^{x+y-1/2}}}} for large x and large y. If on the other hand x is large and y is fixed, then B ( x , y ) ∼ Γ ( y ) x − y . 
{\displaystyle \mathrm {B} (x,y)\sim \Gamma (y)\,x^{-y}.} == Other identities and formulas == The integral defining the beta function may be rewritten in a variety of ways, including the following: B ( z 1 , z 2 ) = 2 ∫ 0 π / 2 ( sin ⁡ θ ) 2 z 1 − 1 ( cos ⁡ θ ) 2 z 2 − 1 d θ , = ∫ 0 ∞ t z 1 − 1 ( 1 + t ) z 1 + z 2 d t , = n ∫ 0 1 t n z 1 − 1 ( 1 − t n ) z 2 − 1 d t , = ( 1 − a ) z 2 ∫ 0 1 ( 1 − t ) z 1 − 1 t z 2 − 1 ( 1 − a t ) z 1 + z 2 d t for any a ∈ R ≤ 1 , {\displaystyle {\begin{aligned}\mathrm {B} (z_{1},z_{2})&=2\int _{0}^{\pi /2}(\sin \theta )^{2z_{1}-1}(\cos \theta )^{2z_{2}-1}\,d\theta ,\\[6pt]&=\int _{0}^{\infty }{\frac {t^{z_{1}-1}}{(1+t)^{z_{1}+z_{2}}}}\,dt,\\[6pt]&=n\int _{0}^{1}t^{nz_{1}-1}(1-t^{n})^{z_{2}-1}\,dt,\\&=(1-a)^{z_{2}}\int _{0}^{1}{\frac {(1-t)^{z_{1}-1}t^{z_{2}-1}}{(1-at)^{z_{1}+z_{2}}}}dt\qquad {\text{for any }}a\in \mathbb {R} _{\leq 1},\end{aligned}}} where in the second-to-last identity n is any positive real number. One may move from the first integral to the second one by substituting t = tan 2 ⁡ ( θ ) {\displaystyle t=\tan ^{2}(\theta )} . For values z = z 1 = z 2 ≠ 1 {\displaystyle z=z_{1}=z_{2}\neq 1} we have: B ( z , z ) = 1 z ∫ 0 π / 2 1 ( sin ⁡ θ z + cos ⁡ θ z ) 2 z d θ {\displaystyle \mathrm {B} (z,z)={\frac {1}{z}}\int _{0}^{\pi /2}{\frac {1}{({\sqrt[{z}]{\sin \theta }}+{\sqrt[{z}]{\cos \theta }})^{2z}}}\,d\theta } The beta function can be written as an infinite sum B ( x , y ) = ∑ n = 0 ∞ ( 1 − x ) n ( y + n ) n ! {\displaystyle \mathrm {B} (x,y)=\sum _{n=0}^{\infty }{\frac {(1-x)_{n}}{(y+n)\,n!}}} If x {\displaystyle x} and y {\displaystyle y} are equal to a number z {\displaystyle z} we get: B ( z , z ) = 2 ∑ n = 0 ∞ ( 2 z + n − 1 ) n ( − 1 ) n ( z + n ) n ! = lim x → 1 − 2 ∑ n = 0 ∞ ( − 2 z ) n x n ( z + n ) n ! 
{\displaystyle \mathrm {B} (z,z)=2\sum _{n=0}^{\infty }{\frac {(2z+n-1)_{n}(-1)^{n}}{(z+n)n!}}=\lim _{x\to 1^{-}}2\sum _{n=0}^{\infty }{\frac {(-2z)_{n}x^{n}}{(z+n)n!}}} (where ( x ) n {\displaystyle (x)_{n}} is the rising factorial) and as an infinite product B ( x , y ) = x + y x y ∏ n = 1 ∞ ( 1 + x y n ( x + y + n ) ) − 1 . {\displaystyle \mathrm {B} (x,y)={\frac {x+y}{xy}}\prod _{n=1}^{\infty }\left(1+{\dfrac {xy}{n(x+y+n)}}\right)^{-1}.} The beta function satisfies several identities analogous to corresponding identities for binomial coefficients, including a version of Pascal's identity B ( x , y ) = B ( x , y + 1 ) + B ( x + 1 , y ) {\displaystyle \mathrm {B} (x,y)=\mathrm {B} (x,y+1)+\mathrm {B} (x+1,y)} and a simple recurrence on one coordinate: B ( x + 1 , y ) = B ( x , y ) ⋅ x x + y , B ( x , y + 1 ) = B ( x , y ) ⋅ y x + y . {\displaystyle \mathrm {B} (x+1,y)=\mathrm {B} (x,y)\cdot {\dfrac {x}{x+y}},\quad \mathrm {B} (x,y+1)=\mathrm {B} (x,y)\cdot {\dfrac {y}{x+y}}.} The positive integer values of the beta function are also the partial derivatives of a 2D function: for all nonnegative integers m {\displaystyle m} and n {\displaystyle n} , B ( m + 1 , n + 1 ) = ∂ m + n h ∂ a m ∂ b n ( 0 , 0 ) , {\displaystyle \mathrm {B} (m+1,n+1)={\frac {\partial ^{m+n}h}{\partial a^{m}\,\partial b^{n}}}(0,0),} where h ( a , b ) = e a − e b a − b . {\displaystyle h(a,b)={\frac {e^{a}-e^{b}}{a-b}}.} The Pascal-like identity above implies that this function is a solution to the first-order partial differential equation h = h a + h b . 
{\displaystyle h=h_{a}+h_{b}.} For x , y ≥ 1 {\displaystyle x,y\geq 1} , the beta function may be written in terms of a convolution involving the truncated power function t ↦ t + x {\displaystyle t\mapsto t_{+}^{x}} : B ( x , y ) ⋅ ( t ↦ t + x + y − 1 ) = ( t ↦ t + x − 1 ) ∗ ( t ↦ t + y − 1 ) {\displaystyle \mathrm {B} (x,y)\cdot \left(t\mapsto t_{+}^{x+y-1}\right)={\Big (}t\mapsto t_{+}^{x-1}{\Big )}*{\Big (}t\mapsto t_{+}^{y-1}{\Big )}} Evaluations at particular points may simplify significantly; for example, B ( 1 , x ) = 1 x {\displaystyle \mathrm {B} (1,x)={\dfrac {1}{x}}} and B ( x , 1 − x ) = π sin ⁡ ( π x ) , x ∉ Z {\displaystyle \mathrm {B} (x,1-x)={\dfrac {\pi }{\sin(\pi x)}},\qquad x\not \in \mathbb {Z} } By taking x = 1 2 {\displaystyle x={\frac {1}{2}}} in this last formula, it follows that Γ ( 1 / 2 ) = π {\displaystyle \Gamma (1/2)={\sqrt {\pi }}} . Generalizing this into a bivariate identity for a product of beta functions leads to: B ( x , y ) ⋅ B ( x + y , 1 − y ) = π x sin ⁡ ( π y ) . {\displaystyle \mathrm {B} (x,y)\cdot \mathrm {B} (x+y,1-y)={\frac {\pi }{x\sin(\pi y)}}.} Euler's integral for the beta function may be converted into an integral over the Pochhammer contour C as ( 1 − e 2 π i α ) ( 1 − e 2 π i β ) B ( α , β ) = ∫ C t α − 1 ( 1 − t ) β − 1 d t . {\displaystyle \left(1-e^{2\pi i\alpha }\right)\left(1-e^{2\pi i\beta }\right)\mathrm {B} (\alpha ,\beta )=\int _{C}t^{\alpha -1}(1-t)^{\beta -1}\,dt.} This Pochhammer contour integral converges for all values of α and β and so gives the analytic continuation of the beta function. Just as the gamma function for integers describes factorials, the beta function can define a binomial coefficient after adjusting indices: ( n k ) = 1 ( n + 1 ) B ( n − k + 1 , k + 1 ) . {\displaystyle {\binom {n}{k}}={\frac {1}{(n+1)\,\mathrm {B} (n-k+1,k+1)}}.} Moreover, for integer n, Β can be factored to give a closed form interpolation function for continuous values of k: ( n k ) = ( − 1 ) n n ! 
⋅ sin ⁡ ( π k ) π ∏ i = 0 n ( k − i ) . {\displaystyle {\binom {n}{k}}=(-1)^{n}\,n!\cdot {\frac {\sin(\pi k)}{\pi \displaystyle \prod _{i=0}^{n}(k-i)}}.} == Reciprocal beta function == The reciprocal beta function is the function of the form f ( x , y ) = 1 B ( x , y ) {\displaystyle f(x,y)={\frac {1}{\mathrm {B} (x,y)}}} Its integral representations relate closely to definite integrals of trigonometric functions involving a product of a power and a multiple angle: ∫ 0 π sin x − 1 ⁡ θ sin ⁡ y θ d θ = π sin ⁡ y π 2 2 x − 1 x B ( x + y + 1 2 , x − y + 1 2 ) {\displaystyle \int _{0}^{\pi }\sin ^{x-1}\theta \sin y\theta ~d\theta ={\frac {\pi \sin {\frac {y\pi }{2}}}{2^{x-1}x\mathrm {B} \left({\frac {x+y+1}{2}},{\frac {x-y+1}{2}}\right)}}} ∫ 0 π sin x − 1 ⁡ θ cos ⁡ y θ d θ = π cos ⁡ y π 2 2 x − 1 x B ( x + y + 1 2 , x − y + 1 2 ) {\displaystyle \int _{0}^{\pi }\sin ^{x-1}\theta \cos y\theta ~d\theta ={\frac {\pi \cos {\frac {y\pi }{2}}}{2^{x-1}x\mathrm {B} \left({\frac {x+y+1}{2}},{\frac {x-y+1}{2}}\right)}}} ∫ 0 π cos x − 1 ⁡ θ sin ⁡ y θ d θ = π cos ⁡ y π 2 2 x − 1 x B ( x + y + 1 2 , x − y + 1 2 ) {\displaystyle \int _{0}^{\pi }\cos ^{x-1}\theta \sin y\theta ~d\theta ={\frac {\pi \cos {\frac {y\pi }{2}}}{2^{x-1}x\mathrm {B} \left({\frac {x+y+1}{2}},{\frac {x-y+1}{2}}\right)}}} ∫ 0 π 2 cos x − 1 ⁡ θ cos ⁡ y θ d θ = π 2 x x B ( x + y + 1 2 , x − y + 1 2 ) {\displaystyle \int _{0}^{\frac {\pi }{2}}\cos ^{x-1}\theta \cos y\theta ~d\theta ={\frac {\pi }{2^{x}x\mathrm {B} \left({\frac {x+y+1}{2}},{\frac {x-y+1}{2}}\right)}}} == Incomplete beta function == The incomplete beta function, a generalization of the beta function, is defined as B ( x ; a , b ) = ∫ 0 x t a − 1 ( 1 − t ) b − 1 d t . {\displaystyle \mathrm {B} (x;\,a,b)=\int _{0}^{x}t^{a-1}\,(1-t)^{b-1}\,dt.} For x = 1, the incomplete beta function coincides with the complete beta function. 
For positive integers a and b, the incomplete beta function will be a polynomial of degree a + b - 1 with rational coefficients. By the substitution t = sin 2 ⁡ θ {\displaystyle t=\sin ^{2}\theta } and t = 1 1 + s {\displaystyle t={\frac {1}{1+s}}} , we can show that B ( x ; a , b ) = 2 ∫ 0 arcsin ⁡ x sin 2 a − 1 ⁡ θ cos 2 b − 1 ⁡ θ d θ = ∫ 1 − x x ∞ s b − 1 ( 1 + s ) a + b d s {\displaystyle \mathrm {B} (x;\,a,b)=2\int _{0}^{\arcsin {\sqrt {x}}}\sin ^{2a-1\!}\theta \cos ^{2b-1\!}\theta \,\mathrm {d} \theta =\int _{\frac {1-x}{x}}^{\infty }{\frac {s^{b-1}}{(1+s)^{a+b}}}\,\mathrm {d} s} The regularized incomplete beta function (or regularized beta function for short) is defined in terms of the incomplete beta function and the complete beta function: I x ( a , b ) = B ( x ; a , b ) B ( a , b ) . {\displaystyle I_{x}(a,b)={\frac {\mathrm {B} (x;\,a,b)}{\mathrm {B} (a,b)}}.} The regularized incomplete beta function is the cumulative distribution function of the beta distribution, and is related to the cumulative distribution function F ( k ; n , p ) {\displaystyle F(k;\,n,p)} of a random variable X following a binomial distribution with probability of single success p and number of Bernoulli trials n: F ( k ; n , p ) = Pr ( X ≤ k ) = I 1 − p ( n − k , k + 1 ) = 1 − I p ( k + 1 , n − k ) . 
{\displaystyle F(k;\,n,p)=\Pr \left(X\leq k\right)=I_{1-p}(n-k,k+1)=1-I_{p}(k+1,n-k).} === Properties === I 0 ( a , b ) = 0 I 1 ( a , b ) = 1 I x ( a , 1 ) = x a I x ( 1 , b ) = 1 − ( 1 − x ) b I x ( a , b ) = 1 − I 1 − x ( b , a ) I x ( a + 1 , b ) = I x ( a , b ) − x a ( 1 − x ) b a B ( a , b ) I x ( a , b + 1 ) = I x ( a , b ) + x a ( 1 − x ) b b B ( a , b ) ∫ B ( x ; a , b ) d x = x B ( x ; a , b ) − B ( x ; a + 1 , b ) B ( x ; a , b ) = ( − 1 ) a B ( x x − 1 ; a , 1 − a − b ) {\displaystyle {\begin{aligned}I_{0}(a,b)&=0\\I_{1}(a,b)&=1\\I_{x}(a,1)&=x^{a}\\I_{x}(1,b)&=1-(1-x)^{b}\\I_{x}(a,b)&=1-I_{1-x}(b,a)\\I_{x}(a+1,b)&=I_{x}(a,b)-{\frac {x^{a}(1-x)^{b}}{a\mathrm {B} (a,b)}}\\I_{x}(a,b+1)&=I_{x}(a,b)+{\frac {x^{a}(1-x)^{b}}{b\mathrm {B} (a,b)}}\\\int \mathrm {B} (x;a,b)\mathrm {d} x&=x\mathrm {B} (x;a,b)-\mathrm {B} (x;a+1,b)\\\mathrm {B} (x;a,b)&=(-1)^{a}\mathrm {B} \left({\frac {x}{x-1}};a,1-a-b\right)\end{aligned}}} === Continued fraction expansion === The continued fraction expansion B ( x ; a , b ) = x a ( 1 − x ) b a ( 1 + d 1 1 + d 2 1 + d 3 1 + d 4 1 + ⋯ ) {\displaystyle \mathrm {B} (x;\,a,b)={\frac {x^{a}(1-x)^{b}}{a\left(1+{\frac {{d}_{1}}{1+}}{\frac {{d}_{2}}{1+}}{\frac {{d}_{3}}{1+}}{\frac {{d}_{4}}{1+}}\cdots \right)}}} with odd and even coefficients respectively d 2 m + 1 = − ( a + m ) ( a + b + m ) x ( a + 2 m ) ( a + 2 m + 1 ) {\displaystyle {d}_{2m+1}=-{\frac {(a+m)(a+b+m)x}{(a+2m)(a+2m+1)}}} d 2 m = m ( b − m ) x ( a + 2 m − 1 ) ( a + 2 m ) {\displaystyle {d}_{2m}={\frac {m(b-m)x}{(a+2m-1)(a+2m)}}} converges rapidly when x {\displaystyle x} is not close to 1. The 4 m {\displaystyle 4m} and 4 m + 1 {\displaystyle 4m+1} convergents are less than B ( x ; a , b ) {\displaystyle \mathrm {B} (x;\,a,b)} , while the 4 m + 2 {\displaystyle 4m+2} and 4 m + 3 {\displaystyle 4m+3} convergents are greater than B ( x ; a , b ) {\displaystyle \mathrm {B} (x;\,a,b)} . 
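The continued fraction above is easy to evaluate numerically from the bottom up. The following Python sketch (standard library only; the function names and the truncation depth are my own choices, not from any particular package) implements the stated coefficients d_k and checks the result against an exactly integrable case:

```python
import math

def d(k, a, b, x):
    """Coefficient d_k of the continued fraction for B(x; a, b)."""
    if k % 2:                      # odd index: k = 2m + 1
        m = (k - 1) // 2
        return -(a + m) * (a + b + m) * x / ((a + 2 * m) * (a + 2 * m + 1))
    m = k // 2                     # even index: k = 2m
    return m * (b - m) * x / ((a + 2 * m - 1) * (a + 2 * m))

def incbeta_cf(x, a, b, depth=60):
    """B(x; a, b) via the continued fraction, evaluated bottom-up.

    The tail of the fraction is truncated to 1 at the chosen depth,
    which is adequate when x is not close to 1."""
    t = 1.0
    for k in range(depth, 0, -1):
        t = 1.0 + d(k, a, b, x) / t
    return x ** a * (1 - x) ** b / (a * t)
```

For a = 2, b = 3 the integrand t(1 − t)² is a polynomial, so B(0.3; 2, 3) = 0.3²/2 − (2/3)·0.3³ + 0.3⁴/4 = 0.029025 exactly, which the fraction reproduces after only a handful of levels.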
For x > a + 1 a + b + 2 {\displaystyle x>{\frac {a+1}{a+b+2}}} , the function may be evaluated more efficiently using B ( x ; a , b ) = B ( a , b ) − B ( 1 − x ; b , a ) {\displaystyle \mathrm {B} (x;\,a,b)=\mathrm {B} (a,b)-\mathrm {B} (1-x;\,b,a)} . == Multivariate beta function == The beta function can be extended to a function with more than two arguments: B ( α 1 , α 2 , … α n ) = Γ ( α 1 ) Γ ( α 2 ) ⋯ Γ ( α n ) Γ ( α 1 + α 2 + ⋯ + α n ) . {\displaystyle \mathrm {B} (\alpha _{1},\alpha _{2},\ldots \alpha _{n})={\frac {\Gamma (\alpha _{1})\,\Gamma (\alpha _{2})\cdots \Gamma (\alpha _{n})}{\Gamma (\alpha _{1}+\alpha _{2}+\cdots +\alpha _{n})}}.} This multivariate beta function is used in the definition of the Dirichlet distribution. Its relationship to the beta function is analogous to the relationship between multinomial coefficients and binomial coefficients. For example, it satisfies a similar version of Pascal's identity: B ( α 1 , α 2 , … α n ) = B ( α 1 + 1 , α 2 , … α n ) + B ( α 1 , α 2 + 1 , … α n ) + ⋯ + B ( α 1 , α 2 , … α n + 1 ) . {\displaystyle \mathrm {B} (\alpha _{1},\alpha _{2},\ldots \alpha _{n})=\mathrm {B} (\alpha _{1}+1,\alpha _{2},\ldots \alpha _{n})+\mathrm {B} (\alpha _{1},\alpha _{2}+1,\ldots \alpha _{n})+\cdots +\mathrm {B} (\alpha _{1},\alpha _{2},\ldots \alpha _{n}+1).} == Applications == The beta function is useful in computing and representing the scattering amplitude for Regge trajectories. Furthermore, it was the first known scattering amplitude in string theory, first conjectured by Gabriele Veneziano. It also occurs in the theory of the preferential attachment process, a type of stochastic urn process. The beta function is also important in statistics, e.g. for the beta distribution and beta prime distribution. As briefly alluded to previously, the beta function is closely tied with the gamma function and plays an important role in calculus. 
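The multivariate beta function above, and its Pascal-like identity, can be checked numerically via log-gamma. A minimal Python sketch (standard library only; `mbeta` and `pascal_sum` are illustrative names of my own):

```python
import math

def mbeta(*alphas):
    """Multivariate beta: prod Gamma(a_i) / Gamma(sum a_i), via lgamma."""
    return math.exp(sum(math.lgamma(a) for a in alphas)
                    - math.lgamma(sum(alphas)))

def pascal_sum(alphas):
    """Right-hand side of the Pascal-like identity:
    sum over i of B(..., alpha_i + 1, ...)."""
    total = 0.0
    for i in range(len(alphas)):
        bumped = list(alphas)
        bumped[i] += 1
        total += mbeta(*bumped)
    return total
```

For example, B(1, 2, 3) = Γ(1)Γ(2)Γ(3)/Γ(6) = 2/120 = 1/60, and the three bumped terms 1/360 + 1/180 + 1/120 sum back to 1/60.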
== Software implementation == Even if unavailable directly, the complete and incomplete beta function values can be calculated using functions commonly included in spreadsheet or computer algebra systems. In Microsoft Excel, for example, the complete beta function can be computed with the GammaLn function (or special.gammaln in Python's SciPy package): Value = Exp(GammaLn(a) + GammaLn(b) − GammaLn(a + b)) This result follows from the properties listed above. The incomplete beta function cannot be directly computed using such relations, and other methods must be used. In GNU Octave, it is computed using a continued fraction expansion. The incomplete beta function has existing implementations in common languages. For instance, betainc (incomplete beta function) in MATLAB and GNU Octave, pbeta (probability of beta distribution) in R, and betainc in SymPy. In SciPy, special.betainc computes the regularized incomplete beta function—which is, in fact, the cumulative beta distribution. To get the actual incomplete beta function, one can multiply the result of special.betainc by the result returned by the corresponding beta function. In Mathematica, Beta[x, a, b] and BetaRegularized[x, a, b] give B ( x ; a , b ) {\displaystyle \mathrm {B} (x;\,a,b)} and I x ( a , b ) {\displaystyle I_{x}(a,b)} , respectively. == See also == Beta distribution and Beta prime distribution, two probability distributions related to the beta function Jacobi sum, the analogue of the beta function over finite fields. Nørlund–Rice integral Yule–Simon distribution == References == Askey, R. A.; Roy, R. (2010), "Beta function", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248. Press, W.
H.; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 6.1 Gamma Function, Beta Function, Factorials", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8, archived from the original on 2021-10-27, retrieved 2011-08-09 == External links == "Beta-function", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Evaluation of beta function using Laplace transform at PlanetMath. Arbitrarily accurate values can be obtained from: The Wolfram functions site: Evaluate Beta Regularized incomplete beta danielsoper.com: Incomplete beta function calculator, Regularized incomplete beta function calculator
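To make the implementation notes above concrete, here is a minimal Python sketch using only the standard library (the function names are mine; a, b ≥ 1 is assumed so the quadrature integrand stays finite at t = 0). It mirrors the spreadsheet log-gamma trick for the complete beta function and computes the incomplete beta function by direct numerical integration, then checks the binomial-CDF identity quoted in the incomplete-beta section:

```python
import math

def complete_beta(a, b):
    """B(a, b) via log-gamma, the same identity as the spreadsheet
    formula Exp(GammaLn(a) + GammaLn(b) - GammaLn(a + b))."""
    return math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))

def incomplete_beta(x, a, b, n=2000):
    """B(x; a, b) by composite Simpson integration of t^(a-1)(1-t)^(b-1).
    Assumes a, b >= 1 (integrand finite at the endpoints); n must be even."""
    f = lambda t: t ** (a - 1) * (1 - t) ** (b - 1)
    h = x / n
    s = f(0.0) + f(x)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3

def regularized_beta(x, a, b):
    """I_x(a, b) = B(x; a, b) / B(a, b)."""
    return incomplete_beta(x, a, b) / complete_beta(a, b)
```

As sanity checks: B(2, 3) = 1/12, B(1/2, 1/2) = π, I_x(a, 1) = x^a, and the binomial cumulative distribution satisfies F(k; n, p) = I_{1−p}(n − k, k + 1).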
Wikipedia/Beta_function
In mathematical analysis, initialization of the differintegrals is a topic in fractional calculus, a branch of mathematics dealing with derivatives of non-integer order. == Composition rule of Differintegrals == The composition law of the differintegral operator states that although: D q D − q = I {\displaystyle \mathbb {D} ^{q}\mathbb {D} ^{-q}=\mathbb {I} } wherein D−q is the left inverse of Dq, the converse is not necessarily true: D − q D q ≠ I {\displaystyle \mathbb {D} ^{-q}\mathbb {D} ^{q}\neq \mathbb {I} } === Example === Consider elementary integer-order calculus. Below is an integration and differentiation using the example function 3 x 2 + 1 {\displaystyle 3x^{2}+1} : d d x [ ∫ ( 3 x 2 + 1 ) d x ] = d d x [ x 3 + x + C ] = 3 x 2 + 1 , {\displaystyle {\frac {d}{dx}}\left[\int (3x^{2}+1)dx\right]={\frac {d}{dx}}[x^{3}+x+C]=3x^{2}+1\,,} Now, on exchanging the order of composition: ∫ [ d d x ( 3 x 2 + 1 ) ] d x = ∫ 6 x d x = 3 x 2 + C , {\displaystyle \int \left[{\frac {d}{dx}}(3x^{2}+1)\right]dx=\int 6x\,dx=3x^{2}+C\,,} where C is the constant of integration. Even if it is not obvious, the initialized conditions ƒ′(0) = C, ƒ″(0) = D, etc. could be used. If those initialization terms were neglected, the last equation would show that the composition of integration and differentiation (and vice versa) does not hold. == Description of initialization == Working with a properly initialized differintegral is the subject of initialized fractional calculus. If the differintegral is initialized properly, then the hoped-for composition law holds. The problem is that in differentiation, information is lost, as with C in the first equation. However, in fractional calculus, given that the operator has been fractionalized and is thus continuous, an entire complementary function is needed. This is called the complementary function Ψ {\displaystyle \Psi } .
D t q f ( t ) = 1 Γ ( n − q ) d n d t n ∫ 0 t ( t − τ ) n − q − 1 f ( τ ) d τ + Ψ ( x ) {\displaystyle \mathbb {D} _{t}^{q}f(t)={\frac {1}{\Gamma (n-q)}}{\frac {d^{n}}{dt^{n}}}\int _{0}^{t}(t-\tau )^{n-q-1}f(\tau )\,d\tau +\Psi (x)} == See also == Initial conditions Dynamical systems == References == Lorenzo, Carl F.; Hartley, Tom T. (2000), Initialized Fractional Calculus (PDF), NASA (technical report).
Wikipedia/Initialized_fractional_calculus
In continuum mechanics and thermodynamics, a control volume (CV) is a mathematical abstraction employed in the process of creating mathematical models of physical processes. In an inertial frame of reference, it is a fictitious region of a given volume fixed in space or moving with constant flow velocity through which the continuum (a continuous medium such as gas, liquid or solid) flows. The closed surface enclosing the region is referred to as the control surface. At steady state, a control volume can be thought of as an arbitrary volume in which the mass of the continuum remains constant. As a continuum moves through the control volume, the mass entering the control volume is equal to the mass leaving the control volume. At steady state, and in the absence of work and heat transfer, the energy within the control volume remains constant. It is analogous to the classical mechanics concept of the free body diagram. == Overview == Typically, to understand how a given physical law applies to the system under consideration, one begins by considering how it applies to a small control volume, or "representative volume". There is nothing special about a particular control volume; it simply represents a small part of the system to which physical laws can be easily applied. This gives rise to what is termed a volumetric, or volume-wise, formulation of the mathematical model. One can then argue that since the physical laws behave in a certain way on a particular control volume, they behave the same way on all such volumes, since that particular control volume was not special in any way. In this way, the corresponding point-wise formulation of the mathematical model can be developed so it can describe the physical behaviour of an entire (and maybe more complex) system. In continuum mechanics the conservation equations (for instance, the Navier-Stokes equations) are in integral form. They therefore apply on volumes.
Finding forms of the equation that are independent of the control volumes allows simplification of the integral signs. The control volumes can be stationary or they can move with an arbitrary velocity. == Substantive derivative == Computations in continuum mechanics often require that the regular time derivation operator d / d t {\displaystyle d/dt\;} is replaced by the substantive derivative operator D / D t {\displaystyle D/Dt} . This can be seen as follows. Consider a bug that is moving through a volume where there is some scalar, e.g. pressure, that varies with time and position: p = p ( t , x , y , z ) {\displaystyle p=p(t,x,y,z)\;} . If the bug during the time interval from t {\displaystyle t\;} to t + d t {\displaystyle t+dt\;} moves from ( x , y , z ) {\displaystyle (x,y,z)\;} to ( x + d x , y + d y , z + d z ) , {\displaystyle (x+dx,y+dy,z+dz),\;} then the bug experiences a change d p {\displaystyle dp\;} in the scalar value, d p = ∂ p ∂ t d t + ∂ p ∂ x d x + ∂ p ∂ y d y + ∂ p ∂ z d z {\displaystyle dp={\frac {\partial p}{\partial t}}dt+{\frac {\partial p}{\partial x}}dx+{\frac {\partial p}{\partial y}}dy+{\frac {\partial p}{\partial z}}dz} (the total differential). If the bug is moving with a velocity v = ( v x , v y , v z ) , {\displaystyle \mathbf {v} =(v_{x},v_{y},v_{z}),} the change in particle position is v d t = ( v x d t , v y d t , v z d t ) , {\displaystyle \mathbf {v} dt=(v_{x}dt,v_{y}dt,v_{z}dt),} and we may write d p = ∂ p ∂ t d t + ∂ p ∂ x v x d t + ∂ p ∂ y v y d t + ∂ p ∂ z v z d t = ( ∂ p ∂ t + ∂ p ∂ x v x + ∂ p ∂ y v y + ∂ p ∂ z v z ) d t = ( ∂ p ∂ t + v ⋅ ∇ p ) d t . 
{\displaystyle {\begin{alignedat}{2}dp&={\frac {\partial p}{\partial t}}dt+{\frac {\partial p}{\partial x}}v_{x}dt+{\frac {\partial p}{\partial y}}v_{y}dt+{\frac {\partial p}{\partial z}}v_{z}dt\\&=\left({\frac {\partial p}{\partial t}}+{\frac {\partial p}{\partial x}}v_{x}+{\frac {\partial p}{\partial y}}v_{y}+{\frac {\partial p}{\partial z}}v_{z}\right)dt\\&=\left({\frac {\partial p}{\partial t}}+\mathbf {v} \cdot \nabla p\right)dt.\\\end{alignedat}}} where ∇ p {\displaystyle \nabla p} is the gradient of the scalar field p. So: d d t = ∂ ∂ t + v ⋅ ∇ . {\displaystyle {\frac {d}{dt}}={\frac {\partial }{\partial t}}+\mathbf {v} \cdot \nabla .} If the bug is just moving with the flow, the same formula applies, but now the velocity vector, v, is that of the flow, u. The last parenthesized expression is the substantive derivative of the scalar pressure. Since the pressure p in this computation is an arbitrary scalar field, we may abstract it and write the substantive derivative operator as D D t = ∂ ∂ t + u ⋅ ∇ . {\displaystyle {\frac {D}{Dt}}={\frac {\partial }{\partial t}}+\mathbf {u} \cdot \nabla .} == See also == Continuum mechanics Cauchy momentum equation Special relativity Substantive derivative == References == James R. Welty, Charles E. Wicks, Robert E. Wilson & Gregory Rorrer Fundamentals of Momentum, Heat, and Mass Transfer ISBN 0-471-38149-7 === Notes === == External links == === PDFs === Integral Approach to the Control Volume analysis of Fluid Flow
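The identity dp/dt = ∂p/∂t + v·∇p derived above can be verified numerically for a concrete field. The Python sketch below (standard library only; the scalar field and velocity are arbitrary choices for the demonstration) computes the rate of change seen by the moving "bug" in two ways, once by differencing along its trajectory and once from the formula:

```python
def p(t, x, y, z):
    # an arbitrary smooth scalar field (an assumption for this demo)
    return t + x * y

v = (1.0, 2.0, 0.0)   # constant velocity of the bug / fluid parcel

def along_trajectory(t, x, y, z, h=1e-6):
    """dp/dt observed while moving with velocity v: central difference
    of p evaluated along the straight-line trajectory."""
    ahead = p(t + h, x + v[0] * h, y + v[1] * h, z + v[2] * h)
    behind = p(t - h, x - v[0] * h, y - v[1] * h, z - v[2] * h)
    return (ahead - behind) / (2 * h)

def from_formula(t, x, y, z, h=1e-6):
    """The same quantity from dp/dt = dp/dt|_fixed + v . grad p."""
    dpdt = (p(t + h, x, y, z) - p(t - h, x, y, z)) / (2 * h)
    grad = ((p(t, x + h, y, z) - p(t, x - h, y, z)) / (2 * h),
            (p(t, x, y + h, z) - p(t, x, y - h, z)) / (2 * h),
            (p(t, x, y, z + h) - p(t, x, y, z - h)) / (2 * h))
    return dpdt + sum(vi * gi for vi, gi in zip(v, grad))
```

For p = t + xy at (x, y) = (0.5, 0.3) the formula gives 1 + v·(y, x, 0) = 1 + 0.3 + 1.0 = 2.3, and the trajectory difference agrees.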
Wikipedia/Control_volume
The Prabhakar function is a special function in mathematics, introduced by the Indian mathematician Tilak Raj Prabhakar in a paper published in 1971. The function is a three-parameter generalization of the well-known two-parameter Mittag-Leffler function. The function was originally introduced to solve certain classes of integral equations. Later the function was found to have applications in the theory of fractional calculus and also in certain areas of physics. == Definition == The one-parameter and two-parameter Mittag-Leffler functions are defined first. Then the definition of the three-parameter Mittag-Leffler function, the Prabhakar function, is presented. In the following definitions, Γ ( z ) {\displaystyle \Gamma (z)} is the well-known gamma function defined by Γ ( z ) = ∫ 0 ∞ t z − 1 e − t d t , ℜ ( z ) > 0 {\displaystyle \Gamma (z)=\int _{0}^{\infty }t^{z-1}e^{-t}\,dt,\quad \Re (z)>0} . In the following it will be assumed that α {\displaystyle \alpha } , β {\displaystyle \beta } and γ {\displaystyle \gamma } are all complex numbers. === One-parameter Mittag-Leffler function === The one-parameter Mittag-Leffler function is defined as E α ( z ) = ∑ n = 0 ∞ z n Γ ( α n + 1 ) . {\displaystyle E_{\alpha }(z)=\sum _{n=0}^{\infty }{\dfrac {z^{n}}{\Gamma (\alpha n+1)}}.} === Two-parameter Mittag-Leffler function === The two-parameter Mittag-Leffler function is defined as E α , β ( z ) = ∑ n = 0 ∞ z n Γ ( α n + β ) , ℜ ( α ) > 0. {\displaystyle E_{\alpha ,\beta }(z)=\sum _{n=0}^{\infty }{\dfrac {z^{n}}{\Gamma (\alpha n+\beta )}},\quad \Re (\alpha )>0.} === Three-parameter Mittag-Leffler function (Prabhakar function) === The three-parameter Mittag-Leffler function (Prabhakar function) is defined by E α , β γ ( z ) = ∑ n = 0 ∞ ( γ ) n n !
Γ ( α n + β ) z n , ℜ ( α ) > 0 {\displaystyle E_{\alpha ,\beta }^{\gamma }(z)=\sum _{n=0}^{\infty }{\dfrac {(\gamma )_{n}}{n!\Gamma (\alpha n+\beta )}}z^{n},\quad \Re (\alpha )>0} where ( γ ) n = γ ( γ + 1 ) … ( γ + n − 1 ) {\displaystyle (\gamma )_{n}=\gamma (\gamma +1)\ldots (\gamma +n-1)} . == Elementary special cases == The following special cases immediately follow from the definition. E α , β 0 ( z ) = 1 Γ ( β ) {\displaystyle E_{\alpha ,\beta }^{0}(z)={\frac {1}{\Gamma (\beta )}}} E α , β 1 ( z ) = E α , β ( z ) {\displaystyle E_{\alpha ,\beta }^{1}(z)=E_{\alpha ,\beta }(z)} , the two-parameter Mittag-Leffler function. E α , 1 1 ( z ) = E α ( z ) {\displaystyle E_{\alpha ,1}^{1}(z)=E_{\alpha }(z)} , the one-parameter Mittag-Leffler function. E 1 , 1 1 ( z ) = e z {\displaystyle E_{1,1}^{1}(z)=e^{z}} , the classical exponential function. == Properties == === Reduction formula === The following formula can be used to lower the value of the third parameter γ {\displaystyle \gamma } .
E α , β γ + 1 ( z ) = 1 α γ [ E α , β − 1 γ ( z ) + ( 1 − β + α γ ) E α , β γ ( z ) ] {\displaystyle E_{\alpha ,\beta }^{\gamma +1}(z)={\frac {1}{\alpha \gamma }}{\big [}E_{\alpha ,\beta -1}^{\gamma }(z)+(1-\beta +\alpha \gamma )E_{\alpha ,\beta }^{\gamma }(z){\big ]}} === Relation with Fox–Wright function === The Prabhakar function is related to the Fox–Wright function by the following relation: E α , β γ ( z ) = 1 Γ ( γ ) 1 Ψ 1 ( ( γ , 1 ) ( β , α ) ; z ) {\displaystyle E_{\alpha ,\beta }^{\gamma }(z)={\frac {1}{\Gamma (\gamma )}}{}_{1}\Psi _{1}\left({\begin{matrix}\left(\gamma ,1\right)\\(\beta ,\alpha )\end{matrix}};z\right)} === Derivatives === The derivative of the Prabhakar function is given by d d z ( E α , β γ ( z ) ) = 1 α z [ E α , β − 1 γ ( z ) + ( 1 − β ) E α , β γ ( z ) ] {\displaystyle {\frac {d}{dz}}\left(E_{\alpha ,\beta }^{\gamma }(z)\right)={\frac {1}{\alpha z}}{\big [}E_{\alpha ,\beta -1}^{\gamma }(z)+(1-\beta )E_{\alpha ,\beta }^{\gamma }(z){\big ]}} There is a general expression for higher order derivatives. Let m {\displaystyle m} be a positive integer. The m {\displaystyle m} -th derivative of the Prabhakar function is given by d m d z m ( E α , β γ ( z ) ) = Γ ( γ + m ) Γ ( γ ) E α , m α + β γ + m ( z ) {\displaystyle {\frac {d^{m}}{dz^{m}}}\left(E_{\alpha ,\beta }^{\gamma }(z)\right)={\frac {\Gamma (\gamma +m)}{\Gamma (\gamma )}}E_{\alpha ,m\alpha +\beta }^{\gamma +m}(z)} The following result is useful in applications. d m d t m ( t β − 1 E α , β γ ( t α z ) ) = t β − m − 1 E α , β − m γ ( t α z ) {\displaystyle {\frac {d^{m}}{dt^{m}}}\left(t^{\beta -1}E_{\alpha ,\beta }^{\gamma }(t^{\alpha }z)\right)=t^{\beta -m-1}E_{\alpha ,\beta -m}^{\gamma }(t^{\alpha }z)} === Integrals === The following result involving the Prabhakar function is known.
∫ 0 t τ β − 1 E α , β γ ( τ α z ) d τ = t β E α , β + 1 γ ( t α z ) {\displaystyle \int _{0}^{t}\tau ^{\beta -1}E_{\alpha ,\beta }^{\gamma }(\tau ^{\alpha }z)\,d\tau =t^{\beta }E_{\alpha ,\beta +1}^{\gamma }(t^{\alpha }z)} === Laplace transforms === The following result involving Laplace transforms plays an important role in both physical applications and numerical computations of the Prabhakar function. L [ t β − 1 E α , β γ ( t α z ) ; s ] = s α γ − β ( s α − z ) γ , ℜ ( s ) > 0 , | s | > | z | 1 / α {\displaystyle L\left[t^{\beta -1}E_{\alpha ,\beta }^{\gamma }(t^{\alpha }z)\,;\,s\right]={\frac {s^{\alpha \gamma -\beta }}{(s^{\alpha }-z)^{\gamma }}},\quad \Re (s)>0,\quad |s|>|z|^{1/\alpha }} == Prabhakar fractional calculus == The following function is known as the Prabhakar kernel in the literature. e α , β γ ( t ; λ ) = t β − 1 E α , β γ ( λ t α ) {\displaystyle e_{\alpha ,\beta }^{\gamma }(t;\lambda )=t^{\beta -1}E_{\alpha ,\beta }^{\gamma }(\lambda t^{\alpha })} Given any function f ( t ) {\displaystyle f(t)} , the convolution of the Prabhakar kernel and f ( t ) {\displaystyle f(t)} is called the Prabhakar fractional integral: ∫ t 0 t ( t − u ) β − 1 E α , β γ ( λ ( t − u ) α ) f ( u ) d u {\displaystyle \int _{t_{0}}^{t}(t-u)^{\beta -1}E_{\alpha ,\beta }^{\gamma }\left(\lambda (t-u)^{\alpha }\right)f(u)\,du} Properties of the Prabhakar fractional integral have been extensively studied in the literature. == References ==
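For real parameters and moderate |z|, the defining series of the Prabhakar function can be summed directly. A minimal Python sketch (standard library only; the truncation depth is an arbitrary choice, and the direct summation is only adequate away from large |z|):

```python
import math

def prabhakar(alpha, beta, gamma, z, terms=60):
    """Three-parameter Mittag-Leffler (Prabhakar) function by direct
    summation of sum_n (gamma)_n / (n! Gamma(alpha*n + beta)) z^n."""
    total = 0.0
    poch = 1.0          # Pochhammer (gamma)_0 = 1 (empty product)
    fact = 1.0          # 0! = 1
    for n in range(terms):
        total += poch / (fact * math.gamma(alpha * n + beta)) * z ** n
        poch *= gamma + n        # (gamma)_{n+1} = (gamma)_n * (gamma + n)
        fact *= n + 1
    return total
```

The elementary special cases listed earlier serve as checks: γ = 0 gives 1/Γ(β), (α, β, γ) = (1, 1, 1) gives e^z, and E_{2,1}(z) = cosh(√z).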
Wikipedia/Prabhakar_function
Proportional control, in engineering and process control, is a type of linear feedback control system in which a correction is applied to the controlled variable, and the size of the correction is proportional to the difference between the desired value (setpoint, SP) and the measured value (process variable, PV). Two classic mechanical examples are the toilet bowl float proportioning valve and the fly-ball governor. The proportional control concept is more complex than an on–off control system such as a bi-metallic domestic thermostat, but simpler than a proportional–integral–derivative (PID) control system used in something like an automobile cruise control. On–off control will work where the overall system has a relatively long response time, but can result in instability if the system being controlled has a rapid response time. Proportional control overcomes this by modulating the output to the controlling device, such as a control valve at a level which avoids instability, but applies correction as fast as practicable by applying the optimum quantity of proportional gain. A drawback of proportional control is that it cannot eliminate the residual SP − PV error in processes with compensation e.g. temperature control, as it requires an error to generate a proportional output. To overcome this the PI controller was devised, which uses a proportional term (P) to remove the gross error, and an integral term (I) to eliminate the residual offset error by integrating the error over time to produce an "I" component for the controller output. == Theory == In the proportional control algorithm, the controller output is proportional to the error signal, which is the difference between the setpoint and the process variable. In other words, the output of a proportional controller is the multiplication product of the error signal and the proportional gain. 
This can be mathematically expressed as P o u t = K p e ( t ) + p 0 {\displaystyle P_{\mathrm {out} }=K_{p}\,{e(t)+p0}} where p 0 {\displaystyle p0} : Controller output with zero error. P o u t {\displaystyle P_{\mathrm {out} }} : Output of the proportional controller K p {\displaystyle K_{p}} : Proportional gain e ( t ) {\displaystyle e(t)} : Instantaneous process error at time t. e ( t ) = S P − P V {\displaystyle e(t)=SP-PV} S P {\displaystyle SP} : Set point P V {\displaystyle PV} : Process variable Constraints: In a real plant, actuators have physical limitations that can be expressed as constraints on P o u t {\displaystyle P_{\mathrm {out} }} . For example, P o u t {\displaystyle P_{\mathrm {out} }} may be bounded between −1 and +1 if those are the maximum output limits. Qualifications: It is preferable to express K p {\displaystyle K_{p}} as a unitless number. To do this, we can express e ( t ) {\displaystyle e(t)} as a ratio with the span of the instrument. This span is in the same units as error (e.g. C degrees) so the ratio has no units. == Development of control block diagrams == Proportional control dictates g c = k c {\displaystyle {\mathit {g_{c}=k_{c}}}} . From the block diagram shown, assume that r, the setpoint, is the flowrate into a tank and e is error, which is the difference between setpoint and measured process output. g p , {\displaystyle {\mathit {g_{p}}},} is process transfer function; the input into the block is flow rate and output is tank level. The output as a function of the setpoint, r, is known as the closed-loop transfer function. g c l = g p g c 1 + g p g c , {\displaystyle {\mathit {g_{cl}}}={\frac {\mathit {g_{p}g_{c}}}{1+g_{p}g_{c}}},} If the poles of g c l , {\displaystyle {\mathit {g_{cl}}},} are stable, then the closed-loop system is stable. === First-order process === For a first-order process, a general transfer function is g p = k p τ p s + 1 {\displaystyle g_{p}={\frac {k_{p}}{\tau _{p}s+1}}} . 
Combining this with the closed-loop transfer function above returns g C L = k p k c τ p s + 1 1 + k p k c τ p s + 1 {\displaystyle g_{CL}={\frac {\frac {k_{p}k_{c}}{\tau _{p}s+1}}{1+{\frac {k_{p}k_{c}}{\tau _{p}s+1}}}}} . Simplifying this equation results in g C L = k C L τ C L s + 1 {\displaystyle g_{CL}={\frac {k_{CL}}{\tau _{CL}s+1}}} where k C L = k p k c 1 + k p k c {\displaystyle k_{CL}={\frac {k_{p}k_{c}}{1+k_{p}k_{c}}}} and τ C L = τ p 1 + k p k c {\displaystyle \tau _{CL}={\frac {\tau _{p}}{1+k_{p}k_{c}}}} . For stability in this system, τ C L > 0 {\displaystyle \tau _{CL}>0} ; therefore, τ p {\displaystyle \tau _{p}} must be a positive number, and k p k c > − 1 {\displaystyle k_{p}k_{c}>-1} (standard practice is to make sure that k p k c > 0 {\displaystyle k_{p}k_{c}>0} ). Introducing a step change to the system gives the output response of y ( s ) = g C L × Δ R s {\displaystyle y(s)=g_{CL}\times {\frac {\Delta R}{s}}} . Using the final-value theorem, lim t → ∞ y ( t ) = lim s ↘ 0 ( s × k C L τ C L s + 1 × Δ R s ) = k C L × Δ R = y ( t ) | t = ∞ {\displaystyle \lim _{t\to \infty }y(t)=\lim _{s\,\searrow \,0}\left(s\times {\frac {k_{CL}}{\tau _{CL}s+1}}\times {\frac {\Delta R}{s}}\right)=k_{CL}\times \Delta R=y(t)|_{t=\infty }} which shows that there will always be an offset in the system. === Integrating process === For an integrating process, a general transfer function is g p = 1 s ( s + 1 ) {\displaystyle g_{p}={\frac {1}{s(s+1)}}} , which, when combined with the closed-loop transfer function, becomes g C L = k c s ( s + 1 ) + k c {\displaystyle g_{CL}={\frac {k_{c}}{s(s+1)+k_{c}}}} . Introducing a step change to the system gives the output response of y ( s ) = g C L × Δ R s {\displaystyle y(s)=g_{CL}\times {\frac {\Delta R}{s}}} . 
Using the final-value theorem, lim t → ∞ y ( t ) = lim s ↘ 0 ( s × k c s ( s + 1 ) + k c × Δ R s ) = Δ R = y ( t ) | t = ∞ {\displaystyle \lim _{t\to \infty }y(t)=\lim _{s\,\searrow \,0}\left(s\times {\frac {k_{c}}{s(s+1)+k_{c}}}\times {\frac {\Delta R}{s}}\right)=\Delta R=y(t)|_{t=\infty }} meaning there is no offset in this system. This is the only process that will not have any offset when using a proportional controller. == Offset error == Offset error is the difference between the desired value and the actual value, SP − PV error. Over a range of operating conditions, proportional control alone is unable to eliminate offset error, as it requires an error to generate an output adjustment. While a proportional controller may be tuned (via p0 adjustment, if possible) to eliminate offset error for expected conditions, when a disturbance (deviation from existing state or setpoint adjustment) occurs in the process, corrective control action, based purely on proportional control, will result in an offset error. Consider an object suspended by a spring as a simple proportional control. The spring will attempt to maintain the object in a certain location despite disturbances that may temporarily displace it. Hooke's law tells us that the spring applies a corrective force that is proportional to the object's displacement. While this will tend to hold the object in a particular location, the absolute resting location of the object will vary if its mass is changed. This difference in resting location is the offset error. == Proportional band == The proportional band is the band of controller output over which the final control element (a control valve, for instance) will move from one extreme to another. 
Mathematically, it can be expressed as: P B = 100 K p {\displaystyle PB={\frac {100}{K_{p}}}\ } So if K p {\displaystyle K_{p}} , the proportional gain, is very high, the proportional band is very small, which means that the band of controller output over which the final control element will go from minimum to maximum (or vice versa) is very small. This is the case with on–off controllers, where K p {\displaystyle K_{p}} is very high and hence, for even a small error, the controller output is driven from one extreme to another. == Advantages == The clear advantage of proportional over on–off control can be demonstrated by car speed control. An analogy to on–off control is driving a car by applying either full power or no power and varying the duty cycle, to control speed. The power would be on until the target speed is reached, and then the power would be removed, so the car reduces speed. When the speed falls below the target, with a certain hysteresis, full power would again be applied. It can be seen that this would obviously result in poor control and large variations in speed. The more powerful the engine, the greater the instability; the heavier the car, the greater the stability. Stability may be expressed as correlating to the power-to-weight ratio of the vehicle. In proportional control, the power output is always proportional to the (actual versus target speed) error. If the car is at target speed and the speed increases slightly due to a falling gradient, the power is reduced slightly, or in proportion to the change in error, so that the car reduces speed gradually and reaches the new target point with very little, if any, "overshoot", which is much smoother control than on–off control. In practice, PID controllers are used for this and the large number of other control processes that require more responsive control than using proportional alone. == References == == External links == Proportional control compared to on–off or bang–bang control
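The persistent offset of pure proportional control, and its shrinkage with increasing gain, can be seen directly in a time-domain simulation. Below is a minimal Python sketch (standard library only; the plant parameters, gains, and step sizes are arbitrary illustrative choices) that Euler-integrates a first-order plant τ·dy/dt = −y + k_p·u under the law u = k_c·(SP − y); the settled value approaches k_p·k_c/(1 + k_p·k_c) times the setpoint, matching the final-value-theorem result above:

```python
def simulate_p_control(kc, kp=1.0, tau=1.0, setpoint=1.0,
                       dt=1e-3, t_end=20.0):
    """Euler integration of tau*dy/dt = -y + kp*u with the
    proportional law u = kc*(setpoint - y); returns the final,
    near steady-state process value."""
    y = 0.0
    for _ in range(int(t_end / dt)):
        u = kc * (setpoint - y)            # proportional control action
        y += dt * (-y + kp * u) / tau      # first-order plant dynamics
    return y
```

With k_p = 1 and k_c = 4 the process settles at 0.8 of the setpoint (offset 0.2); raising k_c to 40 shrinks the offset to about 1/41, but never to zero.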
Wikipedia/Proportional_control
In mathematics, the explicit formulae for L-functions are relations between sums over the complex number zeroes of an L-function and sums over prime powers, introduced by Riemann (1859) for the Riemann zeta function. Such explicit formulae have been applied also to questions on bounding the discriminant of an algebraic number field, and the conductor of a number field. == Riemann's explicit formula == In his 1859 paper "On the Number of Primes Less Than a Given Magnitude" Riemann sketched an explicit formula (it was not fully proven until 1895 by von Mangoldt, see below) for the normalized prime-counting function π0(x) which is related to the prime-counting function π(x) by π 0 ( x ) = 1 2 lim h → 0 [ π ( x + h ) + π ( x − h ) ] , {\displaystyle \pi _{0}(x)={\frac {1}{2}}\lim _{h\to 0}\left[\,\pi (x+h)+\pi (x-h)\,\right]\,,} which takes the arithmetic mean of the limit from the left and the limit from the right at discontinuities. His formula was given in terms of the related function f ( x ) = π 0 ( x ) + 1 2 π 0 ( x 1 / 2 ) + 1 3 π 0 ( x 1 / 3 ) + ⋯ {\displaystyle f(x)=\pi _{0}(x)+{\frac {1}{2}}\,\pi _{0}(x^{1/2})+{\frac {1}{3}}\,\pi _{0}(x^{1/3})+\cdots } in which a prime power pn counts as 1⁄n of a prime. The normalized prime-counting function can be recovered from this function by π 0 ( x ) = ∑ n 1 n μ ( n ) f ( x 1 / n ) = f ( x ) − 1 2 f ( x 1 / 2 ) − 1 3 f ( x 1 / 3 ) − 1 5 f ( x 1 / 5 ) + 1 6 f ( x 1 / 6 ) − ⋯ , {\displaystyle \pi _{0}(x)=\sum _{n}{\frac {1}{n}}\,\mu (n)\,f(x^{1/n})=f(x)-{\frac {1}{2}}\,f(x^{1/2})-{\frac {1}{3}}\,f(x^{1/3})-{\frac {1}{5}}\,f(x^{1/5})+{\frac {1}{6}}\,f(x^{1/6})-\cdots ,} where μ(n) is the Möbius function. 
Riemann's formula is then f ( x ) = li ⁡ ( x ) − ∑ ρ li ⁡ ( x ρ ) − log ⁡ ( 2 ) + ∫ x ∞ d t t ( t 2 − 1 ) log ⁡ ( t ) {\displaystyle f(x)=\operatorname {li} (x)-\sum _{\rho }\operatorname {li} (x^{\rho })-\log(2)+\int _{x}^{\infty }{\frac {dt}{~t\,(t^{2}-1)~\log(t)~}}} involving a sum over the non-trivial zeros ρ of the Riemann zeta function. The sum is not absolutely convergent, but may be evaluated by taking the zeros in order of the absolute value of their imaginary part. The function li occurring in the first term is the (unoffset) logarithmic integral function given by the Cauchy principal value of the divergent integral li ⁡ ( x ) = ∫ 0 x d t log ⁡ ( t ) . {\displaystyle \operatorname {li} (x)=\int _{0}^{x}{\frac {dt}{\,\log(t)\,}}\,.} The terms li(xρ) involving the zeros of the zeta function need some care in their definition as li has branch points at 0 and 1, and are defined by analytic continuation in the complex variable ρ in the region x > 1 and Re(ρ) > 0. The other terms also correspond to zeros: The dominant term li(x) comes from the pole at s = 1, considered as a zero of multiplicity −1, and the remaining small terms come from the trivial zeros. This formula says that the zeros of the Riemann zeta function control the oscillations of primes around their "expected" positions. (For graphs of the sums of the first few terms of this series see Zagier 1977.) 
The first rigorous proof of the aforementioned formula was given by von Mangoldt in 1895: it started with a proof of the following formula for the Chebyshev function ψ ψ 0 ( x ) = 1 2 π i ∫ σ − i ∞ σ + i ∞ ( − ζ ′ ( s ) ζ ( s ) ) x s s d s = x − ∑ ρ x ρ ρ − log ⁡ ( 2 π ) − 1 2 log ⁡ ( 1 − x − 2 ) {\displaystyle \psi _{0}(x)={\dfrac {1}{2\pi i}}\int _{\sigma -i\infty }^{\sigma +i\infty }\left(-{\dfrac {\zeta '(s)}{\zeta (s)}}\right){\dfrac {x^{s}}{s}}\,ds=x-\sum _{\rho }{\frac {~x^{\rho }\,}{\rho }}-\log(2\pi )-{\dfrac {1}{2}}\log(1-x^{-2})} where the LHS is an inverse Mellin transform with σ > 1 , ψ ( x ) = ∑ p k ≤ x log ⁡ p , and ψ 0 ( x ) = 1 2 lim h → 0 ( ψ ( x + h ) + ψ ( x − h ) ) {\displaystyle \sigma >1\,,\quad \psi (x)=\sum _{p^{k}\leq x}\log p\,,\quad {\text{and}}\quad \psi _{0}(x)={\frac {1}{2}}\lim _{h\to 0}(\psi (x+h)+\psi (x-h))} and the RHS is obtained from the residue theorem; the proof then converts this into the formula that Riemann himself actually sketched. This series is also conditionally convergent and the sum over zeroes should again be taken in increasing order of the absolute value of the imaginary part: ∑ ρ x ρ ρ = lim T → ∞ S ( x , T ) {\displaystyle \sum _{\rho }{\frac {x^{\rho }}{\rho }}=\lim _{T\to \infty }S(x,T)} where S ( x , T ) = ∑ ρ : | ℑ ρ | ≤ T x ρ ρ . {\displaystyle S(x,T)=\sum _{\rho :\left|\Im \rho \right|\leq T}{\frac {x^{\rho }}{\rho }}\,.} The error involved in truncating the sum to S(x,T) is always smaller than ln(x) in absolute value, and when divided by the natural logarithm of x, has absolute value smaller than x⁄T divided by the distance from x to the nearest prime power. == Weil's explicit formula == There are several slightly different ways to state the explicit formula.
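Von Mangoldt's formula for ψ0 can be tested numerically. In the sketch below (our own; the imaginary parts of the first five non-trivial zeros are well-known published values, truncated to six decimals), each zero ρ = 1/2 + iγ is paired with its conjugate, and the discrepancy at x = 10 stays within the truncation bound ln x stated for the sum over zeros:

```python
import math

# Imaginary parts of the first five non-trivial zeta zeros (published values,
# truncated; assumed accurate enough for this illustration).
ZERO_IMS = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062]

def psi(x):
    """Chebyshev psi(x) = sum of log p over prime powers p^k <= x."""
    total = 0.0
    for n in range(2, int(x) + 1):
        m = n
        for q in range(2, n + 1):       # smallest prime factor of n
            if m % q == 0:
                while m % q == 0:
                    m //= q
                if m == 1:              # n is a power of the prime q
                    total += math.log(q)
                break
    return total

def psi_explicit(x, zero_ims=ZERO_IMS):
    """Truncated explicit formula: x - sum over zeros - log(2 pi) - (1/2) log(1 - x^-2)."""
    s = x - math.log(2 * math.pi) - 0.5 * math.log(1 - x ** -2)
    for g in zero_ims:
        rho = complex(0.5, g)
        s -= 2 * (complex(x) ** rho / rho).real   # zero and its conjugate together
    return s
```

Adding more zeros tightens the oscillating correction around the main term x.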
André Weil's form of the explicit formula states Φ ( 1 ) + Φ ( 0 ) − ∑ ρ Φ ( ρ ) = ∑ p , m log ⁡ ( p ) p m / 2 ( F ( log ⁡ ( p m ) ) + F ( − log ⁡ ( p m ) ) ) − 1 2 π ∫ − ∞ ∞ φ ( t ) Ψ ( t ) d t {\displaystyle {\begin{aligned}&\Phi (1)+\Phi (0)-\sum _{\rho }\Phi (\rho )\\&=\sum _{p,m}{\frac {\log(p)}{p^{m/2}}}{\Big (}F(\log(p^{m}))+F(-\log(p^{m})){\Big )}-{\frac {1}{2\pi }}\int _{-\infty }^{\infty }\varphi (t)\Psi (t)\,dt\end{aligned}}} where ρ runs over the non-trivial zeros of the zeta function p runs over positive primes m runs over positive integers F is a smooth function all of whose derivatives are rapidly decreasing φ {\displaystyle \varphi } is a Fourier transform of F: φ ( t ) = ∫ − ∞ ∞ F ( x ) e i t x d x {\displaystyle \varphi (t)=\int _{-\infty }^{\infty }F(x)e^{itx}\,dx} Φ ( 1 / 2 + i t ) = φ ( t ) {\displaystyle \Phi (1/2+it)=\varphi (t)} Ψ ( t ) = − log ⁡ ( π ) + Re ⁡ ( ψ ( 1 / 4 + i t / 2 ) ) {\displaystyle \Psi (t)=-\log(\pi )+\operatorname {Re} (\psi (1/4+it/2))} , where ψ {\displaystyle \psi } is the digamma function Γ′/Γ. Roughly speaking, the explicit formula says the Fourier transform of the zeros of the zeta function is the set of prime powers plus some elementary factors. Once this is said, the formula comes from the fact that the Fourier transform is a unitary operator, so that a scalar product in time domain is equal to the scalar product of the Fourier transforms in the frequency domain. The terms in the formula arise in the following way. The terms on the right hand side come from the logarithmic derivative of ζ ∗ ( s ) = Γ ( s / 2 ) π − s / 2 ∏ p 1 1 − p − s {\displaystyle \zeta ^{*}(s)=\Gamma (s/2)\pi ^{-s/2}\prod _{p}{\frac {1}{1-p^{-s}}}} with the terms corresponding to the prime p coming from the Euler factor of p, and the term at the end involving Ψ coming from the gamma factor (the Euler factor at infinity). 
The left-hand side is a sum over all zeros of ζ * counted with multiplicities, so the poles at 0 and 1 are counted as zeros of order −1. Weil's explicit formula can be understood as follows. The goal is to write: d d u [ ∑ n ≤ e | u | Λ ( n ) + 1 2 ln ⁡ ( 1 − e − 2 | u | ) ] = ∑ n = 1 ∞ Λ ( n ) [ δ ( u + ln ⁡ n ) + δ ( u − ln ⁡ n ) ] + 1 2 d ln ⁡ ( 1 − e − 2 | u | ) d u = e u − ∑ ρ e ρ u , {\displaystyle {\frac {d}{du}}\left[\sum _{n\leq e^{|u|}}\Lambda (n)+{\frac {1}{2}}\ln(1-e^{-2|u|})\right]=\sum _{n=1}^{\infty }\Lambda (n)\left[\delta (u+\ln n)+\delta (u-\ln n)\right]+{\frac {1}{2}}{\frac {d\ln(1-e^{-2|u|})}{du}}=e^{u}-\sum _{\rho }e^{\rho u},} where Λ is the von Mangoldt function. In this way, the Fourier transform of the non-trivial zeros equals the symmetrized prime powers plus a minor term. Of course, the sums involved are not convergent, but the trick is to use the unitary property of the Fourier transform, namely that it preserves the scalar product: ∫ − ∞ ∞ f ( u ) g ∗ ( u ) d u = ∫ − ∞ ∞ F ( t ) G ∗ ( t ) d t {\displaystyle \int _{-\infty }^{\infty }f(u)g^{*}(u)\,du=\int _{-\infty }^{\infty }F(t)G^{*}(t)\,dt} where F , G {\displaystyle F,G} are the Fourier transforms of f , g {\displaystyle f,g} . At first glance this seems to be a formula for functions only, but in many cases it also works when g {\displaystyle g} is a distribution. Hence, by setting g ( u ) = ∑ n = 1 ∞ Λ ( n ) [ δ ( u + ln ⁡ n ) + δ ( u − ln ⁡ n ) ] , {\displaystyle g(u)=\sum _{n=1}^{\infty }\Lambda (n)\left[\delta (u+\ln n)+\delta (u-\ln n)\right],} where δ ( u ) {\displaystyle \delta (u)} is the Dirac delta, and carefully choosing a function f {\displaystyle f} and its Fourier transform, we get the formula above. == Generalizations == The Riemann zeta function can be replaced by a Dirichlet L-function of a Dirichlet character χ. The sum over prime powers then gets extra factors of χ(p m), and the terms Φ(1) and Φ(0) disappear because the L-series has no poles.
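The scalar-product identity invoked in the derivation above (Parseval's theorem) is easy to confirm numerically for honest functions, before any distributions enter. The sketch below (our own quadrature helper, with two Gaussians as test functions) checks that the time-domain and frequency-domain scalar products agree:

```python
import math, cmath

def quad(fn, L=6.0, steps=400):
    # midpoint-rule integral of fn over [-L, L]
    h = 2 * L / steps
    return sum(fn(-L + (k + 0.5) * h) for k in range(steps)) * h

f = lambda x: math.exp(-math.pi * x * x)
g = lambda x: math.exp(-math.pi * (x - 0.3) ** 2)   # f shifted by 0.3

def hat(func, t):
    # Fourier transform at frequency t (e^{-i 2 pi t x} convention)
    return quad(lambda x: func(x) * cmath.exp(-2j * math.pi * t * x))

lhs = quad(lambda x: f(x) * g(x))                         # time domain
rhs = quad(lambda t: hat(f, t) * hat(g, t).conjugate())   # frequency domain
```

Both sides equal e^(-0.045 pi)/sqrt(2); the O(steps squared) cost of the right-hand side is the price of keeping the sketch dependency-free.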
More generally, the Riemann zeta function and the L-series can be replaced by the Dedekind zeta function of an algebraic number field or a Hecke L-series. The sum over primes then gets replaced by a sum over prime ideals. == Applications == Riemann's original use of the explicit formula was to give an exact formula for the number of primes less than a given number. To do this, take F(log(y)) to be y1/2/log(y) for 0 ≤ y ≤ x and 0 elsewhere. Then the main term of the sum on the right is the number of primes less than x. The main term on the left is Φ(1), which turns out to be the dominant term of the prime number theorem, and the main correction is the sum over non-trivial zeros of the zeta function. (There is a minor technical problem in using this case, in that the function F does not satisfy the smoothness condition.) == Hilbert–Pólya conjecture == According to the Hilbert–Pólya conjecture, the complex zeroes ρ should be the eigenvalues of some linear operator T. The sum over the zeros of the explicit formula is then (at least formally) given by a trace: ∑ ρ F ( ρ ) = Tr ⁡ ( F ( T ^ ) ) . {\displaystyle \sum _{\rho }F(\rho )=\operatorname {Tr} (F({\widehat {T}})).\!} Development of the explicit formulae for a wide class of L-functions was given by Weil (1952), who first extended the idea to local zeta-functions, and formulated a version of a generalized Riemann hypothesis in this setting, as a positivity statement for a generalized function on a topological group. More recent work by Alain Connes has gone much further into the functional-analytic background, providing a trace formula the validity of which is equivalent to such a generalized Riemann hypothesis. A slightly different point of view was given by Meyer (2005), who derived the explicit formula of Weil via harmonic analysis on adelic spaces. == See also == Selberg trace formula Selberg zeta function == Footnotes == == References == Ingham, A.E.
(1990) [1932], The Distribution of Prime Numbers, Cambridge Tracts in Mathematics and Mathematical Physics, vol. 30, reissued with a foreword by R. C. Vaughan (2nd ed.), Cambridge University Press, ISBN 978-0-521-39789-6, MR 1074573, Zbl 0715.11045 Lang, Serge (1994), Algebraic number theory, Graduate Texts in Mathematics, vol. 110 (2nd ed.), New York, NY: Springer-Verlag, ISBN 0-387-94225-4, Zbl 0811.11001 Riemann, Bernhard (1859), "Ueber die Anzahl der Primzahlen unter einer gegebenen Grösse", Monatsberichte der Berliner Akademie Weil, André (1952), "Sur les "formules explicites" de la théorie des nombres premiers" [On "explicit formulas" in the theory of prime numbers], Comm. Sém. Math. Univ. Lund [Medd. Lunds Univ. Mat. Sem.] (in French), Tome Supplémentaire: 252–265, MR 0053152, Zbl 0049.03205 von Mangoldt, Hans (1895), "Zu Riemanns Abhandlung "Über die Anzahl der Primzahlen unter einer gegebenen Grösse"" [On Riemann's paper "The number of prime numbers less than a given magnitude"], Journal für die reine und angewandte Mathematik (in German), 114: 255–305, ISSN 0075-4102, JFM 26.0215.03, MR 1580379 Meyer, Ralf (2005), "On a representation of the idele class group related to primes and zeros of L-functions", Duke Math. J., 127 (3): 519–595, arXiv:math/0311468, doi:10.1215/s0012-7094-04-12734-4, ISSN 0012-7094, MR 2132868, S2CID 119176169, Zbl 1079.11044 Zagier, Don (1977), "The first 50 million prime numbers", The Mathematical Intelligencer, 1 (S2): 7–19, doi:10.1007/bf03351556, S2CID 37866599 == Further reading == Edwards, H.M. (1974), Riemann's zeta function, Pure and Applied Mathematics, vol. 58, New York-London: Academic Press, ISBN 0-12-232750-0, Zbl 0315.10035 Riesel, Hans (1994), Prime numbers and computer methods for factorization, Progress in Mathematics, vol. 126 (2nd ed.), Boston, MA: Birkhäuser, ISBN 0-8176-3743-5, Zbl 0821.11001
Wikipedia/Explicit_formula_of_an_L-function
In mathematics, the Fourier transform (FT) is an integral transform that takes a function as input and outputs another function that describes the extent to which various frequencies are present in the original function. The output of the transform is a complex-valued function of frequency. The term Fourier transform refers to both this complex-valued function and the mathematical operation. When a distinction needs to be made, the output of the operation is sometimes called the frequency domain representation of the original function. The Fourier transform is analogous to decomposing the sound of a musical chord into the intensities of its constituent pitches. Functions that are localized in the time domain have Fourier transforms that are spread out across the frequency domain and vice versa, a phenomenon known as the uncertainty principle. The critical case for this principle is the Gaussian function, of substantial importance in probability theory and statistics as well as in the study of physical phenomena exhibiting normal distribution (e.g., diffusion). The Fourier transform of a Gaussian function is another Gaussian function. Joseph Fourier introduced sine and cosine transforms (which correspond to the imaginary and real components of the modern Fourier transform) in his study of heat transfer, where Gaussian functions appear as solutions of the heat equation. The Fourier transform can be formally defined as an improper Riemann integral, making it an integral transform, although this definition is not suitable for many applications requiring a more sophisticated integration theory. For example, many relatively simple applications use the Dirac delta function, which can be treated formally as if it were a function, but the justification requires a mathematically more sophisticated viewpoint.
The Fourier transform can also be generalized to functions of several variables on Euclidean space, sending a function of 3-dimensional 'position space' to a function of 3-dimensional momentum (or a function of space and time to a function of 4-momentum). This idea makes the spatial Fourier transform very natural in the study of waves, as well as in quantum mechanics, where it is important to be able to represent wave solutions as functions of either position or momentum and sometimes both. In general, functions to which Fourier methods are applicable are complex-valued, and possibly vector-valued. Still further generalization is possible to functions on groups, which, besides the original Fourier transform on R or Rn, notably includes the discrete-time Fourier transform (DTFT, group = Z), the discrete Fourier transform (DFT, group = Z mod N) and the Fourier series or circular Fourier transform (group = S1, the unit circle ≈ closed finite interval with endpoints identified). The latter is routinely employed to handle periodic functions. The fast Fourier transform (FFT) is an algorithm for computing the DFT. == Definition == The Fourier transform of a complex-valued (Lebesgue) integrable function f ( x ) {\displaystyle f(x)} on the real line, is the complex valued function f ^ ( ξ ) {\displaystyle {\hat {f}}(\xi )} , defined by the integral Evaluating the Fourier transform for all values of ξ {\displaystyle \xi } produces the frequency-domain function, and it converges at all frequencies to a continuous function tending to zero at infinity. If f ( x ) {\displaystyle f(x)} decays with all derivatives, i.e., lim | x | → ∞ f ( n ) ( x ) = 0 , ∀ n ∈ N , {\displaystyle \lim _{|x|\to \infty }f^{(n)}(x)=0,\quad \forall n\in \mathbb {N} ,} then f ^ {\displaystyle {\widehat {f}}} converges for all frequencies and, by the Riemann–Lebesgue lemma, f ^ {\displaystyle {\widehat {f}}} also decays with all derivatives. 
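As a concrete instance of Eq.1, the next sketch (names are ours) approximates the transform of the unit box function rect, which is 1 on [-1/2, 1/2] and 0 elsewhere, and compares it with the normalized sinc, its known transform:

```python
import math, cmath

def ft_rect(xi, steps=4000):
    # midpoint-rule approximation of Eq.1 for f = rect: integrate over [-1/2, 1/2]
    h = 1.0 / steps
    return sum(cmath.exp(-2j * math.pi * xi * (-0.5 + (k + 0.5) * h))
               for k in range(steps)) * h

def sinc(xi):
    # normalized sinc: sin(pi xi) / (pi xi), with sinc(0) = 1
    return 1.0 if xi == 0 else math.sin(math.pi * xi) / (math.pi * xi)
```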
First introduced in Fourier's Analytical Theory of Heat, the corresponding inversion formula for "sufficiently nice" functions is given by the Fourier inversion theorem, i.e., The functions f {\displaystyle f} and f ^ {\displaystyle {\widehat {f}}} are referred to as a Fourier transform pair. A common notation for designating transform pairs is: f ( x ) ⟷ F f ^ ( ξ ) , {\displaystyle f(x)\ {\stackrel {\mathcal {F}}{\longleftrightarrow }}\ {\widehat {f}}(\xi ),} for example rect ⁡ ( x ) ⟷ F sinc ⁡ ( ξ ) . {\displaystyle \operatorname {rect} (x)\ {\stackrel {\mathcal {F}}{\longleftrightarrow }}\ \operatorname {sinc} (\xi ).} By analogy, the Fourier series can be regarded as an abstract Fourier transform on the group Z {\displaystyle \mathbb {Z} } of integers. That is, the synthesis of a sequence of complex numbers c n {\displaystyle c_{n}} is defined by the Fourier transform f ( x ) = ∑ n = − ∞ ∞ c n e i 2 π n P x , {\displaystyle f(x)=\sum _{n=-\infty }^{\infty }c_{n}\,e^{i2\pi {\tfrac {n}{P}}x},} such that c n {\displaystyle c_{n}} are given by the inversion formula, i.e., the analysis c n = 1 P ∫ − P / 2 P / 2 f ( x ) e − i 2 π n P x d x , {\displaystyle c_{n}={\frac {1}{P}}\int _{-P/2}^{P/2}f(x)\,e^{-i2\pi {\frac {n}{P}}x}\,dx,} for some complex-valued, P {\displaystyle P} -periodic function f ( x ) {\displaystyle f(x)} defined on a bounded interval [ − P / 2 , P / 2 ] ⊂ R {\displaystyle [-P/2,P/2]\subset \mathbb {R} } . When P → ∞ , {\displaystyle P\to \infty ,} the constituent frequencies are a continuum: n P → ξ ∈ R , {\displaystyle {\tfrac {n}{P}}\to \xi \in \mathbb {R} ,} and c n → f ^ ( ξ ) ∈ C {\displaystyle c_{n}\to {\hat {f}}(\xi )\in \mathbb {C} } . In other words, on the finite interval [ − P / 2 , P / 2 ] {\displaystyle [-P/2,P/2]} the function f ( x ) {\displaystyle f(x)} has a discrete decomposition in the periodic functions e i 2 π x n / P {\displaystyle e^{i2\pi xn/P}} .
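The analysis/synthesis pair just described can be exercised on a discretized example. In this sketch (our own; P = 1, and the test signal cos(2π·3x) has c3 = c-3 = 1/2 with all other coefficients zero), the integrals are replaced by Riemann sums over N equispaced samples:

```python
import math, cmath

P = 1.0
N = 1024
xs = [-P / 2 + (k + 0.5) * P / N for k in range(N)]       # one period, sampled
f_vals = [math.cos(2 * math.pi * 3 * x / P) for x in xs]

def coeff(n):
    # analysis: c_n = (1/P) * integral over one period of f(x) e^{-i 2 pi n x / P}
    return sum(fx * cmath.exp(-2j * math.pi * n * x / P)
               for fx, x in zip(f_vals, xs)) / N

def synth(x, nmax=5):
    # synthesis: partial sum of c_n e^{i 2 pi n x / P}
    return sum(coeff(n) * cmath.exp(2j * math.pi * n * x / P)
               for n in range(-nmax, nmax + 1)).real
```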
On the infinite interval ( − ∞ , ∞ ) {\displaystyle (-\infty ,\infty )} the function f ( x ) {\displaystyle f(x)} has a continuous decomposition in periodic functions e i 2 π x ξ {\displaystyle e^{i2\pi x\xi }} . === Lebesgue integrable functions === A measurable function f : R → C {\displaystyle f:\mathbb {R} \to \mathbb {C} } is called (Lebesgue) integrable if the Lebesgue integral of its absolute value is finite: ‖ f ‖ 1 = ∫ R | f ( x ) | d x < ∞ . {\displaystyle \|f\|_{1}=\int _{\mathbb {R} }|f(x)|\,dx<\infty .} If f {\displaystyle f} is Lebesgue integrable then the Fourier transform, given by Eq.1, is well-defined for all ξ ∈ R {\displaystyle \xi \in \mathbb {R} } . Furthermore, f ^ ∈ L ∞ ∩ C ( R ) {\displaystyle {\widehat {f}}\in L^{\infty }\cap C(\mathbb {R} )} is bounded, uniformly continuous and (by the Riemann–Lebesgue lemma) zero at infinity. The space L 1 ( R ) {\displaystyle L^{1}(\mathbb {R} )} is the space of measurable functions for which the norm ‖ f ‖ 1 {\displaystyle \|f\|_{1}} is finite, modulo the equivalence relation of equality almost everywhere. The Fourier transform on L 1 ( R ) {\displaystyle L^{1}(\mathbb {R} )} is one-to-one. However, there is no easy characterization of the image, and thus no easy characterization of the inverse transform. In particular, Eq.2 is no longer valid, as it was stated only under the hypothesis that f ( x ) {\displaystyle f(x)} decayed with all derivatives. While Eq.1 defines the Fourier transform for (complex-valued) functions in L 1 ( R ) {\displaystyle L^{1}(\mathbb {R} )} , it is not well-defined for other integrability classes, most importantly the space of square-integrable functions L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} . For example, the function f ( x ) = ( 1 + x 2 ) − 1 / 2 {\displaystyle f(x)=(1+x^{2})^{-1/2}} is in L 2 {\displaystyle L^{2}} but not L 1 {\displaystyle L^{1}} and therefore the Lebesgue integral Eq.1 does not exist. 
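The last point is easy to see numerically: truncated integrals of |f| for f(x) = (1 + x²)^(-1/2) keep growing (like 2 log(2T), so f is not in L¹), while the integrals of |f|² = 1/(1 + x²) settle at π (so f is in L²). A sketch with our own midpoint-rule helper:

```python
import math

def f(x):
    return (1 + x * x) ** -0.5

def integral(fn, T, steps=200000):
    # midpoint rule on [-T, T]
    h = 2 * T / steps
    return sum(fn(-T + (k + 0.5) * h) for k in range(steps)) * h

I1_a = integral(lambda x: abs(f(x)), 100)      # ~ 2 log(200): about 10.6
I1_b = integral(lambda x: abs(f(x)), 10000)    # ~ 2 log(20000): still growing
I2 = integral(lambda x: f(x) ** 2, 10000)      # converges to pi
```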
However, the Fourier transform on the dense subspace L 1 ∩ L 2 ( R ) ⊂ L 2 ( R ) {\displaystyle L^{1}\cap L^{2}(\mathbb {R} )\subset L^{2}(\mathbb {R} )} admits a unique continuous extension to a unitary operator on L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} . This extension is important in part because, unlike the case of L 1 {\displaystyle L^{1}} , the Fourier transform is an automorphism of the space L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} . In such cases, the Fourier transform can be obtained explicitly by regularizing the integral, and then passing to a limit. In practice, the integral is often regarded as an improper integral instead of a proper Lebesgue integral, but sometimes for convergence one needs to use weak limit or principal value instead of the (pointwise) limits implicit in an improper integral. Titchmarsh (1986) and Dym & McKean (1985) each gives three rigorous ways of extending the Fourier transform to square integrable functions using this procedure. A general principle in working with the L 2 {\displaystyle L^{2}} Fourier transform is that Gaussians are dense in L 1 ∩ L 2 {\displaystyle L^{1}\cap L^{2}} , and the various features of the Fourier transform, such as its unitarity, are easily inferred for Gaussians. Many of the properties of the Fourier transform, can then be proven from two facts about Gaussians: that e − π x 2 {\displaystyle e^{-\pi x^{2}}} is its own Fourier transform; and that the Gaussian integral ∫ − ∞ ∞ e − π x 2 d x = 1. {\displaystyle \int _{-\infty }^{\infty }e^{-\pi x^{2}}\,dx=1.} A feature of the L 1 {\displaystyle L^{1}} Fourier transform is that it is a homomorphism of Banach algebras from L 1 {\displaystyle L^{1}} equipped with the convolution operation to the Banach algebra of continuous functions under the L ∞ {\displaystyle L^{\infty }} (supremum) norm. 
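Both Gaussian facts can be confirmed by direct quadrature. The sketch below (the name ft is ours) approximates the Fourier integral of f(x) = e^(-pi x^2); the self-transform property gives ft(xi) close to e^(-pi xi^2), and xi = 0 reproduces the unit Gaussian integral:

```python
import math, cmath

def ft(xi, L=8.0, steps=4000):
    # midpoint-rule approximation of the Fourier integral of exp(-pi x^2)
    h = 2 * L / steps
    return sum(math.exp(-math.pi * x * x) * cmath.exp(-2j * math.pi * xi * x)
               for x in (-L + (k + 0.5) * h for k in range(steps))) * h
```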
The conventions chosen in this article are those of harmonic analysis, and are characterized as the unique conventions such that the Fourier transform is both unitary on L2 and an algebra homomorphism from L1 to L∞, without renormalizing the Lebesgue measure. === Angular frequency (ω) === When the independent variable ( x {\displaystyle x} ) represents time (often denoted by t {\displaystyle t} ), the transform variable ( ξ {\displaystyle \xi } ) represents frequency (often denoted by f {\displaystyle f} ). For example, if time is measured in seconds, then frequency is in hertz. The Fourier transform can also be written in terms of angular frequency, ω = 2 π ξ , {\displaystyle \omega =2\pi \xi ,} whose units are radians per second. The substitution ξ = ω 2 π {\displaystyle \xi ={\tfrac {\omega }{2\pi }}} into Eq.1 produces this convention, where function f ^ {\displaystyle {\widehat {f}}} is relabeled f 1 ^ : {\displaystyle {\widehat {f_{1}}}:} f 3 ^ ( ω ) ≜ ∫ − ∞ ∞ f ( x ) ⋅ e − i ω x d x = f 1 ^ ( ω 2 π ) , f ( x ) = 1 2 π ∫ − ∞ ∞ f 3 ^ ( ω ) ⋅ e i ω x d ω . {\displaystyle {\begin{aligned}{\widehat {f_{3}}}(\omega )&\triangleq \int _{-\infty }^{\infty }f(x)\cdot e^{-i\omega x}\,dx={\widehat {f_{1}}}\left({\tfrac {\omega }{2\pi }}\right),\\f(x)&={\frac {1}{2\pi }}\int _{-\infty }^{\infty }{\widehat {f_{3}}}(\omega )\cdot e^{i\omega x}\,d\omega .\end{aligned}}} Unlike the Eq.1 definition, the Fourier transform is no longer a unitary transformation, and there is less symmetry between the formulas for the transform and its inverse. Those properties are restored by splitting the 2 π {\displaystyle 2\pi } factor evenly between the transform and its inverse, which leads to another convention: f 2 ^ ( ω ) ≜ 1 2 π ∫ − ∞ ∞ f ( x ) ⋅ e − i ω x d x = 1 2 π f 1 ^ ( ω 2 π ) , f ( x ) = 1 2 π ∫ − ∞ ∞ f 2 ^ ( ω ) ⋅ e i ω x d ω . 
{\displaystyle {\begin{aligned}{\widehat {f_{2}}}(\omega )&\triangleq {\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }f(x)\cdot e^{-i\omega x}\,dx={\frac {1}{\sqrt {2\pi }}}\ \ {\widehat {f_{1}}}\left({\tfrac {\omega }{2\pi }}\right),\\f(x)&={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }{\widehat {f_{2}}}(\omega )\cdot e^{i\omega x}\,d\omega .\end{aligned}}} Variations of all three conventions can be created by conjugating the complex-exponential kernel of both the forward and the reverse transform. The signs must be opposites. == Background == === History === In 1822, Fourier claimed (see Joseph Fourier § The Analytic Theory of Heat) that any function, whether continuous or discontinuous, can be expanded into a series of sines. That important work was corrected and expanded upon by others to provide the foundation for the various forms of the Fourier transform used since. === Complex sinusoids === In general, the coefficients f ^ ( ξ ) {\displaystyle {\widehat {f}}(\xi )} are complex numbers, which have two equivalent forms (see Euler's formula): f ^ ( ξ ) = A e i θ ⏟ polar coordinate form = A cos ⁡ ( θ ) + i A sin ⁡ ( θ ) ⏟ rectangular coordinate form . {\displaystyle {\widehat {f}}(\xi )=\underbrace {Ae^{i\theta }} _{\text{polar coordinate form}}=\underbrace {A\cos(\theta )+iA\sin(\theta )} _{\text{rectangular coordinate form}}.} The product with e i 2 π ξ x {\displaystyle e^{i2\pi \xi x}} (Eq.2) has these forms: f ^ ( ξ ) ⋅ e i 2 π ξ x = A e i θ ⋅ e i 2 π ξ x = A e i ( 2 π ξ x + θ ) ⏟ polar coordinate form = A cos ⁡ ( 2 π ξ x + θ ) + i A sin ⁡ ( 2 π ξ x + θ ) ⏟ rectangular coordinate form . {\displaystyle {\begin{aligned}{\widehat {f}}(\xi )\cdot e^{i2\pi \xi x}&=Ae^{i\theta }\cdot e^{i2\pi \xi x}\\&=\underbrace {Ae^{i(2\pi \xi x+\theta )}} _{\text{polar coordinate form}}\\&=\underbrace {A\cos(2\pi \xi x+\theta )+iA\sin(2\pi \xi x+\theta )} _{\text{rectangular coordinate form}}.\end{aligned}}} which conveys both amplitude and phase of frequency ξ . 
{\displaystyle \xi .} Likewise, the intuitive interpretation of Eq.1 is that multiplying f ( x ) {\displaystyle f(x)} by e − i 2 π ξ x {\displaystyle e^{-i2\pi \xi x}} has the effect of subtracting ξ {\displaystyle \xi } from every frequency component of function f ( x ) . {\displaystyle f(x).} Only the component that was at frequency ξ {\displaystyle \xi } can produce a non-zero value of the infinite integral, because (at least formally) all the other shifted components are oscillatory and integrate to zero. (see § Example) It is noteworthy how easily the product was simplified using the polar form, and how easily the rectangular form was deduced by an application of Euler's formula. === Negative frequency === Euler's formula introduces the possibility of negative ξ . {\displaystyle \xi .} And Eq.1 is defined ∀ ξ ∈ R . {\displaystyle \forall \xi \in \mathbb {R} .} Only certain complex-valued f ( x ) {\displaystyle f(x)} have transforms f ^ = 0 , ∀ ξ < 0 {\displaystyle {\widehat {f}}=0,\ \forall \ \xi <0} (See Analytic signal. A simple example is e i 2 π ξ 0 x ( ξ 0 > 0 ) . {\displaystyle e^{i2\pi \xi _{0}x}\ (\xi _{0}>0).} ) But negative frequency is necessary to characterize all other complex-valued f ( x ) , {\displaystyle f(x),} found in signal processing, partial differential equations, radar, nonlinear optics, quantum mechanics, and others. For a real-valued f ( x ) , {\displaystyle f(x),} Eq.1 has the symmetry property f ^ ( − ξ ) = f ^ ∗ ( ξ ) {\displaystyle {\widehat {f}}(-\xi )={\widehat {f}}^{*}(\xi )} (see § Conjugation below). This redundancy enables Eq.2 to distinguish f ( x ) = cos ⁡ ( 2 π ξ 0 x ) {\displaystyle f(x)=\cos(2\pi \xi _{0}x)} from e i 2 π ξ 0 x . 
{\displaystyle e^{i2\pi \xi _{0}x}.} But of course it cannot tell us the actual sign of ξ 0 , {\displaystyle \xi _{0},} because cos ⁡ ( 2 π ξ 0 x ) {\displaystyle \cos(2\pi \xi _{0}x)} and cos ⁡ ( 2 π ( − ξ 0 ) x ) {\displaystyle \cos(2\pi (-\xi _{0})x)} are indistinguishable on the real number line. === Fourier transform for periodic functions === The Fourier transform of a periodic function cannot be defined using the integral formula directly. In order for the integral in Eq.1 to be defined, the function must be absolutely integrable. Instead it is common to use Fourier series. It is possible to extend the definition to include periodic functions by viewing them as tempered distributions. This makes it possible to see a connection between the Fourier series and the Fourier transform for periodic functions that have a convergent Fourier series. If f ( x ) {\displaystyle f(x)} is a periodic function, with period P {\displaystyle P} , that has a convergent Fourier series, then: f ^ ( ξ ) = ∑ n = − ∞ ∞ c n ⋅ δ ( ξ − n P ) , {\displaystyle {\widehat {f}}(\xi )=\sum _{n=-\infty }^{\infty }c_{n}\cdot \delta \left(\xi -{\tfrac {n}{P}}\right),} where c n {\displaystyle c_{n}} are the Fourier series coefficients of f {\displaystyle f} , and δ {\displaystyle \delta } is the Dirac delta function. In other words, the Fourier transform is a Dirac comb function whose teeth are multiplied by the Fourier series coefficients. === Sampling the Fourier transform === The Fourier transform of an integrable function f {\displaystyle f} can be sampled at regular intervals of arbitrary length 1 P .
{\displaystyle {\tfrac {1}{P}}.} These samples can be deduced from one cycle of a periodic function f P {\displaystyle f_{P}} which has Fourier series coefficients proportional to those samples by the Poisson summation formula: f P ( x ) ≜ ∑ n = − ∞ ∞ f ( x + n P ) = 1 P ∑ k = − ∞ ∞ f ^ ( k P ) e i 2 π k P x , ∀ k ∈ Z {\displaystyle f_{P}(x)\triangleq \sum _{n=-\infty }^{\infty }f(x+nP)={\frac {1}{P}}\sum _{k=-\infty }^{\infty }{\widehat {f}}\left({\tfrac {k}{P}}\right)e^{i2\pi {\frac {k}{P}}x},\quad \forall k\in \mathbb {Z} } The integrability of f {\displaystyle f} ensures the periodic summation converges. Therefore, the samples f ^ ( k P ) {\displaystyle {\widehat {f}}\left({\tfrac {k}{P}}\right)} can be determined by Fourier series analysis: f ^ ( k P ) = ∫ P f P ( x ) ⋅ e − i 2 π k P x d x . {\displaystyle {\widehat {f}}\left({\tfrac {k}{P}}\right)=\int _{P}f_{P}(x)\cdot e^{-i2\pi {\frac {k}{P}}x}\,dx.} When f ( x ) {\displaystyle f(x)} has compact support, f P ( x ) {\displaystyle f_{P}(x)} has a finite number of terms within the interval of integration. When f ( x ) {\displaystyle f(x)} does not have compact support, numerical evaluation of f P ( x ) {\displaystyle f_{P}(x)} requires an approximation, such as tapering f ( x ) {\displaystyle f(x)} or truncating the number of terms. == Units == The frequency variable must have inverse units to the units of the original function's domain (typically named t {\displaystyle t} or x {\displaystyle x} ). For example, if t {\displaystyle t} is measured in seconds, ξ {\displaystyle \xi } should be in cycles per second or hertz. If the scale of time is in units of 2 π {\displaystyle 2\pi } seconds, then another Greek letter ω {\displaystyle \omega } is typically used instead to represent angular frequency (where ω = 2 π ξ {\displaystyle \omega =2\pi \xi } ) in units of radians per second. If using x {\displaystyle x} for units of length, then ξ {\displaystyle \xi } must be in inverse length, e.g., wavenumbers. 
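The sampling relation of the preceding section can be illustrated with the Gaussian f(x) = e^(-pi x^2), whose transform e^(-pi xi^2) is known in closed form. In this sketch (ours; P = 1, and the periodic summation is truncated at |n| <= 20, ample for a Gaussian), Fourier-series analysis of one period of f_P recovers the samples of the transform at k/P:

```python
import math, cmath

P = 1.0

def f(x):
    return math.exp(-math.pi * x * x)          # transform: exp(-pi xi^2)

def f_P(x, terms=20):
    # periodic summation of f with period P
    return sum(f(x + n * P) for n in range(-terms, terms + 1))

def hat_sample(k, steps=2000):
    # Fourier-series analysis over one period: should recover the transform at k / P
    h = P / steps
    return sum(f_P((j + 0.5) * h) * cmath.exp(-2j * math.pi * k * (j + 0.5) * h / P)
               for j in range(steps)) * h
```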
That is to say, there are two versions of the real line: one which is the range of t {\displaystyle t} and measured in units of t , {\displaystyle t,} and the other which is the range of ξ {\displaystyle \xi } and measured in inverse units to the units of t . {\displaystyle t.} These two distinct versions of the real line cannot be equated with each other. Therefore, the Fourier transform goes from one space of functions to a different space of functions: functions which have a different domain of definition. In general, ξ {\displaystyle \xi } must always be taken to be a linear form on the space of its domain, which is to say that the second real line is the dual space of the first real line. See the article on linear algebra for a more formal explanation and for more details. This point of view becomes essential in generalizations of the Fourier transform to general symmetry groups, including the case of Fourier series. That there is no one preferred way (often, one says "no canonical way") to compare the two versions of the real line which are involved in the Fourier transform—fixing the units on one line does not force the scale of the units on the other line—is the reason for the plethora of rival conventions on the definition of the Fourier transform. The various definitions resulting from different choices of units differ by various constants. In other conventions, the Fourier transform has i in the exponent instead of −i, and vice versa for the inversion formula. This convention is common in modern physics and is the default for Wolfram Alpha, and does not mean that the frequency has become negative, since there is no canonical definition of positivity for frequency of a complex wave. 
It simply means that f ^ ( ξ ) {\displaystyle {\hat {f}}(\xi )} is the amplitude of the wave e − i 2 π ξ x {\displaystyle e^{-i2\pi \xi x}} instead of the wave e i 2 π ξ x {\displaystyle e^{i2\pi \xi x}} (the former, with its minus sign, is often seen in the time dependence for sinusoidal plane-wave solutions of the electromagnetic wave equation, or in the time dependence for quantum wave functions). Many of the identities involving the Fourier transform remain valid in those conventions, provided all terms that explicitly involve i have it replaced by −i. In electrical engineering, the letter j is typically used for the imaginary unit instead of i because i is used for current. When using dimensionless units, the constant factors might not be written in the transform definition. For instance, in probability theory, the characteristic function ϕ of the probability density function f of a random variable X of continuous type is defined without a negative sign in the exponential, and since the units of x are ignored, there is no 2π either: ϕ ( λ ) = ∫ − ∞ ∞ f ( x ) e i λ x d x . {\displaystyle \phi (\lambda )=\int _{-\infty }^{\infty }f(x)e^{i\lambda x}\,dx.} In probability theory and mathematical statistics, the use of the Fourier–Stieltjes transform is preferred, because many random variables are not of continuous type, and do not possess a density function, and one must treat not functions but distributions, i.e., measures which possess "atoms". From the higher point of view of group characters, which is much more abstract, all these arbitrary choices disappear, as will be explained in the later section of this article, which treats the notion of the Fourier transform of a function on a locally compact Abelian group. == Properties == Let f ( x ) {\displaystyle f(x)} and h ( x ) {\displaystyle h(x)} represent integrable functions Lebesgue-measurable on the real line satisfying: ∫ − ∞ ∞ | f ( x ) | d x < ∞ .
{\displaystyle \int _{-\infty }^{\infty }|f(x)|\,dx<\infty .} We denote the Fourier transforms of these functions as f ^ ( ξ ) {\displaystyle {\hat {f}}(\xi )} and h ^ ( ξ ) {\displaystyle {\hat {h}}(\xi )} respectively. === Basic properties === The Fourier transform has the following basic properties: ==== Linearity ==== a f ( x ) + b h ( x ) ⟺ F a f ^ ( ξ ) + b h ^ ( ξ ) ; a , b ∈ C {\displaystyle a\ f(x)+b\ h(x)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ a\ {\widehat {f}}(\xi )+b\ {\widehat {h}}(\xi );\quad \ a,b\in \mathbb {C} } ==== Time shifting ==== f ( x − x 0 ) ⟺ F e − i 2 π x 0 ξ f ^ ( ξ ) ; x 0 ∈ R {\displaystyle f(x-x_{0})\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ e^{-i2\pi x_{0}\xi }\ {\widehat {f}}(\xi );\quad \ x_{0}\in \mathbb {R} } ==== Frequency shifting ==== e i 2 π ξ 0 x f ( x ) ⟺ F f ^ ( ξ − ξ 0 ) ; ξ 0 ∈ R {\displaystyle e^{i2\pi \xi _{0}x}f(x)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\widehat {f}}(\xi -\xi _{0});\quad \ \xi _{0}\in \mathbb {R} } ==== Time scaling ==== f ( a x ) ⟺ F 1 | a | f ^ ( ξ a ) ; a ≠ 0 {\displaystyle f(ax)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\frac {1}{|a|}}{\widehat {f}}\left({\frac {\xi }{a}}\right);\quad \ a\neq 0} The case a = − 1 {\displaystyle a=-1} leads to the time-reversal property: f ( − x ) ⟺ F f ^ ( − ξ ) {\displaystyle f(-x)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\widehat {f}}(-\xi )} ==== Symmetry ==== When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. 
And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform: T i m e d o m a i n f = f RE + f RO + i f IE + i f IO ⏟ ⇕ F ⇕ F ⇕ F ⇕ F ⇕ F F r e q u e n c y d o m a i n f ^ = f ^ RE + i f ^ IO ⏞ + i f ^ IE + f ^ RO {\displaystyle {\begin{array}{rlcccccccc}{\mathsf {Time\ domain}}&f&=&f_{_{\text{RE}}}&+&f_{_{\text{RO}}}&+&i\ f_{_{\text{IE}}}&+&\underbrace {i\ f_{_{\text{IO}}}} \\&{\Bigg \Updownarrow }{\mathcal {F}}&&{\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}\\{\mathsf {Frequency\ domain}}&{\widehat {f}}&=&{\widehat {f}}_{_{\text{RE}}}&+&\overbrace {i\ {\widehat {f}}_{_{\text{IO}}}\,} &+&i\ {\widehat {f}}_{_{\text{IE}}}&+&{\widehat {f}}_{_{\text{RO}}}\end{array}}} From this, various relationships are apparent, for example: The transform of a real-valued function ( f R E + f R O ) {\displaystyle (f_{_{RE}}+f_{_{RO}})} is the conjugate symmetric function f ^ R E + i f ^ I O . {\displaystyle {\hat {f}}_{RE}+i\ {\hat {f}}_{IO}.} Conversely, a conjugate symmetric transform implies a real-valued time-domain. The transform of an imaginary-valued function ( i f I E + i f I O ) {\displaystyle (i\ f_{_{IE}}+i\ f_{_{IO}})} is the conjugate antisymmetric function f ^ R O + i f ^ I E , {\displaystyle {\hat {f}}_{RO}+i\ {\hat {f}}_{IE},} and the converse is true. The transform of a conjugate symmetric function ( f R E + i f I O ) {\displaystyle (f_{_{RE}}+i\ f_{_{IO}})} is the real-valued function f ^ R E + f ^ R O , {\displaystyle {\hat {f}}_{RE}+{\hat {f}}_{RO},} and the converse is true. The transform of a conjugate antisymmetric function ( f R O + i f I E ) {\displaystyle (f_{_{RO}}+i\ f_{_{IE}})} is the imaginary-valued function i f ^ I E + i f ^ I O , {\displaystyle i\ {\hat {f}}_{IE}+i{\hat {f}}_{IO},} and the converse is true. 
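These symmetry relations carry over exactly to the discrete Fourier transform, so they can be checked numerically. The sketch below (using NumPy's FFT as a stand-in for the continuous transform; the array size and random inputs are illustrative assumptions, not part of the text) verifies that a real sequence has a conjugate-symmetric spectrum and a purely imaginary sequence has a conjugate-antisymmetric one.

```python
import numpy as np

# Discrete analogue of the symmetry relations, with the DFT standing in
# for the continuous Fourier transform.
rng = np.random.default_rng(0)
N = 64
x_real = rng.standard_normal(N)          # a real-valued "time" sequence
x_imag = 1j * rng.standard_normal(N)     # a purely imaginary sequence

X_real = np.fft.fft(x_real)
X_imag = np.fft.fft(x_imag)

# Index mapping for xi -> -xi on the DFT grid: k -> (-k) mod N.
neg = (-np.arange(N)) % N

# Real input      => conjugate-symmetric spectrum:      X(-k) =  conj(X(k))
print(np.allclose(X_real[neg], np.conj(X_real)))   # True
# Imaginary input => conjugate-antisymmetric spectrum:  X(-k) = -conj(X(k))
print(np.allclose(X_imag[neg], -np.conj(X_imag)))  # True
```

The converse directions of the table (conjugate-symmetric spectrum implies real signal, and so on) can be checked the same way by transposing the roles of the two domains.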
==== Conjugation ==== ( f ( x ) ) ∗ ⟺ F ( f ^ ( − ξ ) ) ∗ {\displaystyle {\bigl (}f(x){\bigr )}^{*}\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ \left({\widehat {f}}(-\xi )\right)^{*}} (Note: the ∗ denotes complex conjugation.) In particular, if f {\displaystyle f} is real, then f ^ {\displaystyle {\widehat {f}}} is conjugate symmetric (also known as a Hermitian function): f ^ ( − ξ ) = ( f ^ ( ξ ) ) ∗ . {\displaystyle {\widehat {f}}(-\xi )={\bigl (}{\widehat {f}}(\xi ){\bigr )}^{*}.} And if f {\displaystyle f} is purely imaginary, then f ^ {\displaystyle {\widehat {f}}} is conjugate antisymmetric (an anti-Hermitian function): f ^ ( − ξ ) = − ( f ^ ( ξ ) ) ∗ . {\displaystyle {\widehat {f}}(-\xi )=-({\widehat {f}}(\xi ))^{*}.} ==== Real and imaginary parts ==== Re ⁡ { f ( x ) } ⟺ F 1 2 ( f ^ ( ξ ) + ( f ^ ( − ξ ) ) ∗ ) {\displaystyle \operatorname {Re} \{f(x)\}\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\tfrac {1}{2}}\left({\widehat {f}}(\xi )+{\bigl (}{\widehat {f}}(-\xi ){\bigr )}^{*}\right)} Im ⁡ { f ( x ) } ⟺ F 1 2 i ( f ^ ( ξ ) − ( f ^ ( − ξ ) ) ∗ ) {\displaystyle \operatorname {Im} \{f(x)\}\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\tfrac {1}{2i}}\left({\widehat {f}}(\xi )-{\bigl (}{\widehat {f}}(-\xi ){\bigr )}^{*}\right)} ==== Zero frequency component ==== Substituting ξ = 0 {\displaystyle \xi =0} in the definition, we obtain: f ^ ( 0 ) = ∫ − ∞ ∞ f ( x ) d x . {\displaystyle {\widehat {f}}(0)=\int _{-\infty }^{\infty }f(x)\,dx.} The integral of f {\displaystyle f} over its domain is known as the average value or DC bias of the function. === Uniform continuity and the Riemann–Lebesgue lemma === The Fourier transform may be defined in some cases for non-integrable functions, but the Fourier transforms of integrable functions have several strong properties. 
The Fourier transform f ^ {\displaystyle {\hat {f}}} of any integrable function f {\displaystyle f} is uniformly continuous and ‖ f ^ ‖ ∞ ≤ ‖ f ‖ 1 {\displaystyle \left\|{\hat {f}}\right\|_{\infty }\leq \left\|f\right\|_{1}} By the Riemann–Lebesgue lemma, f ^ ( ξ ) → 0 as | ξ | → ∞ . {\displaystyle {\hat {f}}(\xi )\to 0{\text{ as }}|\xi |\to \infty .} However, f ^ {\displaystyle {\hat {f}}} need not be integrable. For example, the Fourier transform of the rectangular function, which is integrable, is the sinc function, which is not Lebesgue integrable, because its improper integrals behave analogously to the alternating harmonic series, in converging to a sum without being absolutely convergent. It is not generally possible to write the inverse transform as a Lebesgue integral. However, when both f {\displaystyle f} and f ^ {\displaystyle {\hat {f}}} are integrable, the inverse equality f ( x ) = ∫ − ∞ ∞ f ^ ( ξ ) e i 2 π x ξ d ξ {\displaystyle f(x)=\int _{-\infty }^{\infty }{\hat {f}}(\xi )e^{i2\pi x\xi }\,d\xi } holds for almost every x. As a result, the Fourier transform is injective on L1(R). === Plancherel theorem and Parseval's theorem === Let f(x) and g(x) be integrable, and let f̂(ξ) and ĝ(ξ) be their Fourier transforms. If f(x) and g(x) are also square-integrable, then the Parseval formula follows: ⟨ f , g ⟩ L 2 = ∫ − ∞ ∞ f ( x ) g ( x ) ¯ d x = ∫ − ∞ ∞ f ^ ( ξ ) g ^ ( ξ ) ¯ d ξ , {\displaystyle \langle f,g\rangle _{L^{2}}=\int _{-\infty }^{\infty }f(x){\overline {g(x)}}\,dx=\int _{-\infty }^{\infty }{\hat {f}}(\xi ){\overline {{\hat {g}}(\xi )}}\,d\xi ,} where the bar denotes complex conjugation. The Plancherel theorem, which follows from the above, states that ‖ f ‖ L 2 2 = ∫ − ∞ ∞ | f ( x ) | 2 d x = ∫ − ∞ ∞ | f ^ ( ξ ) | 2 d ξ . 
{\displaystyle \|f\|_{L^{2}}^{2}=\int _{-\infty }^{\infty }\left|f(x)\right|^{2}\,dx=\int _{-\infty }^{\infty }\left|{\hat {f}}(\xi )\right|^{2}\,d\xi .} Plancherel's theorem makes it possible to extend the Fourier transform, by a continuity argument, to a unitary operator on L2(R). On L1(R) ∩ L2(R), this extension agrees with the original Fourier transform defined on L1(R), thus enlarging the domain of the Fourier transform to L1(R) + L2(R) (and consequently to Lp(R) for 1 ≤ p ≤ 2). Plancherel's theorem has the interpretation in the sciences that the Fourier transform preserves the energy of the original quantity. The terminology of these formulas is not quite standardised. Parseval's theorem was originally stated only for Fourier series, and was first proved by Lyapunov. But Parseval's formula makes sense for the Fourier transform as well, and so even though in the context of the Fourier transform it was proved by Plancherel, it is still often referred to as Parseval's formula, or Parseval's relation, or even Parseval's theorem. See Pontryagin duality for a general formulation of this concept in the context of locally compact abelian groups. === Convolution theorem === The Fourier transform translates between convolution and multiplication of functions. If f(x) and g(x) are integrable functions with Fourier transforms f̂(ξ) and ĝ(ξ) respectively, then the Fourier transform of the convolution is given by the product of the Fourier transforms f̂(ξ) and ĝ(ξ) (under other conventions for the definition of the Fourier transform a constant factor may appear). This means that if: h ( x ) = ( f ∗ g ) ( x ) = ∫ − ∞ ∞ f ( y ) g ( x − y ) d y , {\displaystyle h(x)=(f*g)(x)=\int _{-\infty }^{\infty }f(y)g(x-y)\,dy,} where ∗ denotes the convolution operation, then: h ^ ( ξ ) = f ^ ( ξ ) g ^ ( ξ ) . 
{\displaystyle {\hat {h}}(\xi )={\hat {f}}(\xi )\,{\hat {g}}(\xi ).} In linear time invariant (LTI) system theory, it is common to interpret g(x) as the impulse response of an LTI system with input f(x) and output h(x), since substituting the unit impulse for f(x) yields h(x) = g(x). In this case, ĝ(ξ) represents the frequency response of the system. Conversely, if f(x) can be decomposed as the product of two square integrable functions p(x) and q(x), then the Fourier transform of f(x) is given by the convolution of the respective Fourier transforms p̂(ξ) and q̂(ξ). === Cross-correlation theorem === In an analogous manner, it can be shown that if h(x) is the cross-correlation of f(x) and g(x): h ( x ) = ( f ⋆ g ) ( x ) = ∫ − ∞ ∞ f ( y ) ¯ g ( x + y ) d y {\displaystyle h(x)=(f\star g)(x)=\int _{-\infty }^{\infty }{\overline {f(y)}}g(x+y)\,dy} then the Fourier transform of h(x) is: h ^ ( ξ ) = f ^ ( ξ ) ¯ g ^ ( ξ ) . {\displaystyle {\hat {h}}(\xi )={\overline {{\hat {f}}(\xi )}}\,{\hat {g}}(\xi ).} As a special case, the autocorrelation of function f(x) is: h ( x ) = ( f ⋆ f ) ( x ) = ∫ − ∞ ∞ f ( y ) ¯ f ( x + y ) d y {\displaystyle h(x)=(f\star f)(x)=\int _{-\infty }^{\infty }{\overline {f(y)}}f(x+y)\,dy} for which h ^ ( ξ ) = f ^ ( ξ ) ¯ f ^ ( ξ ) = | f ^ ( ξ ) | 2 . {\displaystyle {\hat {h}}(\xi )={\overline {{\hat {f}}(\xi )}}{\hat {f}}(\xi )=\left|{\hat {f}}(\xi )\right|^{2}.} === Differentiation === Suppose f(x) is an absolutely continuous differentiable function, and both f and its derivative f′ are integrable. Then the Fourier transform of the derivative is given by f ′ ^ ( ξ ) = F { d d x f ( x ) } = i 2 π ξ f ^ ( ξ ) . {\displaystyle {\widehat {f'\,}}(\xi )={\mathcal {F}}\left\{{\frac {d}{dx}}f(x)\right\}=i2\pi \xi {\hat {f}}(\xi ).} More generally, the Fourier transformation of the nth derivative f(n) is given by f ( n ) ^ ( ξ ) = F { d n d x n f ( x ) } = ( i 2 π ξ ) n f ^ ( ξ ) . 
{\displaystyle {\widehat {f^{(n)}}}(\xi )={\mathcal {F}}\left\{{\frac {d^{n}}{dx^{n}}}f(x)\right\}=(i2\pi \xi )^{n}{\hat {f}}(\xi ).} Analogously, d n d ξ n f ^ ( ξ ) = F { ( − i 2 π x ) n f ( x ) } ( ξ ) {\displaystyle {\frac {d^{n}}{d\xi ^{n}}}{\hat {f}}(\xi )={\mathcal {F}}\left\{(-i2\pi x)^{n}f(x)\right\}(\xi )} , so F { x n f ( x ) } = ( i 2 π ) n d n d ξ n f ^ ( ξ ) . {\displaystyle {\mathcal {F}}\left\{x^{n}f(x)\right\}=\left({\frac {i}{2\pi }}\right)^{n}{\frac {d^{n}}{d\xi ^{n}}}{\hat {f}}(\xi ).} By applying the Fourier transform and using these formulas, some ordinary differential equations can be transformed into algebraic equations, which are much easier to solve. These formulas also give rise to the rule of thumb "f(x) is smooth if and only if f̂(ξ) quickly falls to 0 for |ξ| → ∞." By using the analogous rules for the inverse Fourier transform, one can also say "f(x) quickly falls to 0 for |x| → ∞ if and only if f̂(ξ) is smooth." === Eigenfunctions === The Fourier transform is a linear transform which has eigenfunctions obeying F [ ψ ] = λ ψ , {\displaystyle {\mathcal {F}}[\psi ]=\lambda \psi ,} with λ ∈ C . {\displaystyle \lambda \in \mathbb {C} .} A set of eigenfunctions is found by noting that the homogeneous differential equation [ U ( 1 2 π d d x ) + U ( x ) ] ψ ( x ) = 0 {\displaystyle \left[U\left({\frac {1}{2\pi }}{\frac {d}{dx}}\right)+U(x)\right]\psi (x)=0} leads to eigenfunctions ψ ( x ) {\displaystyle \psi (x)} of the Fourier transform F {\displaystyle {\mathcal {F}}} as long as the form of the equation remains invariant under Fourier transform. In other words, every solution ψ ( x ) {\displaystyle \psi (x)} and its Fourier transform ψ ^ ( ξ ) {\displaystyle {\hat {\psi }}(\xi )} obey the same equation. Assuming uniqueness of the solutions, every solution ψ ( x ) {\displaystyle \psi (x)} must therefore be an eigenfunction of the Fourier transform. 
The form of the equation remains unchanged under Fourier transform if U ( x ) {\displaystyle U(x)} can be expanded in a power series in which for all terms the same factor of either one of ± 1 , ± i {\displaystyle \pm 1,\pm i} arises from the factors i n {\displaystyle i^{n}} introduced by the differentiation rules upon Fourier transforming the homogeneous differential equation because this factor may then be cancelled. The simplest allowable U ( x ) = x {\displaystyle U(x)=x} leads to the standard normal distribution. More generally, a set of eigenfunctions is also found by noting that the differentiation rules imply that the ordinary differential equation [ W ( i 2 π d d x ) + W ( x ) ] ψ ( x ) = C ψ ( x ) {\displaystyle \left[W\left({\frac {i}{2\pi }}{\frac {d}{dx}}\right)+W(x)\right]\psi (x)=C\psi (x)} with C {\displaystyle C} constant and W ( x ) {\displaystyle W(x)} being a non-constant even function remains invariant in form when applying the Fourier transform F {\displaystyle {\mathcal {F}}} to both sides of the equation. The simplest example is provided by W ( x ) = x 2 {\displaystyle W(x)=x^{2}} which is equivalent to considering the Schrödinger equation for the quantum harmonic oscillator. The corresponding solutions provide an important choice of an orthonormal basis for L2(R) and are given by the "physicist's" Hermite functions. Equivalently one may use ψ n ( x ) = 2 4 n ! e − π x 2 H e n ( 2 x π ) , {\displaystyle \psi _{n}(x)={\frac {\sqrt[{4}]{2}}{\sqrt {n!}}}e^{-\pi x^{2}}\mathrm {He} _{n}\left(2x{\sqrt {\pi }}\right),} where Hen(x) are the "probabilist's" Hermite polynomials, defined as H e n ( x ) = ( − 1 ) n e 1 2 x 2 ( d d x ) n e − 1 2 x 2 . {\displaystyle \mathrm {He} _{n}(x)=(-1)^{n}e^{{\frac {1}{2}}x^{2}}\left({\frac {d}{dx}}\right)^{n}e^{-{\frac {1}{2}}x^{2}}.} Under this convention for the Fourier transform, we have that ψ ^ n ( ξ ) = ( − i ) n ψ n ( ξ ) . 
{\displaystyle {\hat {\psi }}_{n}(\xi )=(-i)^{n}\psi _{n}(\xi ).} In other words, the Hermite functions form a complete orthonormal system of eigenfunctions for the Fourier transform on L2(R). However, this choice of eigenfunctions is not unique. Because of F 4 = i d {\displaystyle {\mathcal {F}}^{4}=\mathrm {id} } there are only four different eigenvalues of the Fourier transform (the fourth roots of unity ±1 and ±i) and any linear combination of eigenfunctions with the same eigenvalue gives another eigenfunction. As a consequence of this, it is possible to decompose L2(R) as a direct sum of four spaces H0, H1, H2, and H3 where the Fourier transform acts on Hk simply by multiplication by ik. Since the complete set of Hermite functions ψn provides a resolution of the identity, they diagonalize the Fourier operator, i.e. the Fourier transform can be represented by such a sum of terms weighted by the above eigenvalues, and these sums can be explicitly summed: F [ f ] ( ξ ) = ∫ d x f ( x ) ∑ n ≥ 0 ( − i ) n ψ n ( x ) ψ n ( ξ ) . {\displaystyle {\mathcal {F}}[f](\xi )=\int dxf(x)\sum _{n\geq 0}(-i)^{n}\psi _{n}(x)\psi _{n}(\xi )~.} This approach to defining the Fourier transform was first proposed by Norbert Wiener. Among other properties, Hermite functions decrease exponentially fast in both frequency and time domains, and they are thus used to define a generalization of the Fourier transform, namely the fractional Fourier transform used in time–frequency analysis. In physics, this transform was introduced by Edward Condon. This change of basis functions becomes possible because the Fourier transform is a unitary transform when using the right conventions. Consequently, under the proper conditions it may be expected to result from a self-adjoint generator N {\displaystyle N} via F [ ψ ] = e − i t N ψ . 
{\displaystyle {\mathcal {F}}[\psi ]=e^{-itN}\psi .} The operator N {\displaystyle N} is the number operator of the quantum harmonic oscillator written as N ≡ 1 2 ( x − ∂ ∂ x ) ( x + ∂ ∂ x ) = 1 2 ( − ∂ 2 ∂ x 2 + x 2 − 1 ) . {\displaystyle N\equiv {\frac {1}{2}}\left(x-{\frac {\partial }{\partial x}}\right)\left(x+{\frac {\partial }{\partial x}}\right)={\frac {1}{2}}\left(-{\frac {\partial ^{2}}{\partial x^{2}}}+x^{2}-1\right).} It can be interpreted as the generator of fractional Fourier transforms for arbitrary values of t, and of the conventional continuous Fourier transform F {\displaystyle {\mathcal {F}}} for the particular value t = π / 2 , {\displaystyle t=\pi /2,} with the Mehler kernel implementing the corresponding active transform. The eigenfunctions of N {\displaystyle N} are the Hermite functions ψ n ( x ) {\displaystyle \psi _{n}(x)} which are therefore also eigenfunctions of F . {\displaystyle {\mathcal {F}}.} Upon extending the Fourier transform to distributions the Dirac comb is also an eigenfunction of the Fourier transform. === Inversion and periodicity === Under suitable conditions on the function f {\displaystyle f} , it can be recovered from its Fourier transform f ^ {\displaystyle {\hat {f}}} . Indeed, denoting the Fourier transform operator by F {\displaystyle {\mathcal {F}}} , so F f := f ^ {\displaystyle {\mathcal {F}}f:={\hat {f}}} , then for suitable functions, applying the Fourier transform twice simply flips the function: ( F 2 f ) ( x ) = f ( − x ) {\displaystyle \left({\mathcal {F}}^{2}f\right)(x)=f(-x)} , which can be interpreted as "reversing time". Since reversing time is two-periodic, applying this twice yields F 4 ( f ) = f {\displaystyle {\mathcal {F}}^{4}(f)=f} , so the Fourier transform operator is four-periodic, and similarly the inverse Fourier transform can be obtained by applying the Fourier transform three times: F 3 ( f ^ ) = f {\displaystyle {\mathcal {F}}^{3}\left({\hat {f}}\right)=f} . 
In particular the Fourier transform is invertible (under suitable conditions). More precisely, defining the parity operator P {\displaystyle {\mathcal {P}}} such that ( P f ) ( x ) = f ( − x ) {\displaystyle ({\mathcal {P}}f)(x)=f(-x)} , we have: F 0 = i d , F 1 = F , F 2 = P , F 3 = F − 1 = P ∘ F = F ∘ P , F 4 = i d {\displaystyle {\begin{aligned}{\mathcal {F}}^{0}&=\mathrm {id} ,\\{\mathcal {F}}^{1}&={\mathcal {F}},\\{\mathcal {F}}^{2}&={\mathcal {P}},\\{\mathcal {F}}^{3}&={\mathcal {F}}^{-1}={\mathcal {P}}\circ {\mathcal {F}}={\mathcal {F}}\circ {\mathcal {P}},\\{\mathcal {F}}^{4}&=\mathrm {id} \end{aligned}}} These equalities of operators require careful definition of the space of functions in question, defining equality of functions (equality at every point? equality almost everywhere?) and defining equality of operators – that is, defining the topology on the function space and operator space in question. These are not true for all functions, but are true under various conditions, which are the content of the various forms of the Fourier inversion theorem. This fourfold periodicity of the Fourier transform is similar to a rotation of the plane by 90°, particularly as the two-fold iteration yields a reversal, and in fact this analogy can be made precise. While the Fourier transform can simply be interpreted as switching the time domain and the frequency domain, with the inverse Fourier transform switching them back, more geometrically it can be interpreted as a rotation by 90° in the time–frequency domain (considering time as the x-axis and frequency as the y-axis), and the Fourier transform can be generalized to the fractional Fourier transform, which involves rotations by other angles. This can be further generalized to linear canonical transformations, which can be visualized as the action of the special linear group SL2(R) on the time–frequency plane, with the preserved symplectic form corresponding to the uncertainty principle, below. 
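The fourfold periodicity has an exact discrete counterpart: the unitary DFT (the FFT with orthonormal scaling) also satisfies F² = P and F⁴ = id, with parity realized on the index grid as k ↦ (−k) mod N. A minimal numerical sketch, assuming NumPy (an illustrative choice, not anything in the text):

```python
import numpy as np

# F^2 = parity and F^4 = identity for the unitary DFT.
# norm="ortho" makes the DFT unitary, mirroring the continuous transform
# under this article's conventions.
rng = np.random.default_rng(1)
f = rng.standard_normal(32) + 1j * rng.standard_normal(32)

F = lambda x: np.fft.fft(x, norm="ortho")

F2 = F(F(f))      # the transform applied twice
F4 = F(F(F2))     # the transform applied four times

# Discrete parity operator: (Pf)[n] = f[(-n) mod N].
parity = f[(-np.arange(f.size)) % f.size]

print(np.allclose(F2, parity))  # True: F^2 reverses "time"
print(np.allclose(F4, f))       # True: F^4 is the identity
```

In particular, three applications of `F` invert one, which is the discrete analogue of F³ = F⁻¹ above.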
This approach is particularly studied in signal processing, under time–frequency analysis. === Connection with the Heisenberg group === The Heisenberg group is a certain group of unitary operators on the Hilbert space L2(R) of square integrable complex valued functions f on the real line, generated by the translations (Ty f)(x) = f (x + y) and multiplication by ei2πξx, (Mξ f)(x) = ei2πξx f (x). These operators do not commute, as their (group) commutator is ( M ξ − 1 T y − 1 M ξ T y f ) ( x ) = e i 2 π ξ y f ( x ) {\displaystyle \left(M_{\xi }^{-1}T_{y}^{-1}M_{\xi }T_{y}f\right)(x)=e^{i2\pi \xi y}f(x)} which is multiplication by the constant (independent of x) ei2πξy ∈ U(1) (the circle group of unit modulus complex numbers). As an abstract group, the Heisenberg group is the three-dimensional Lie group of triples (x, ξ, z) ∈ R2 × U(1), with the group law ( x 1 , ξ 1 , t 1 ) ⋅ ( x 2 , ξ 2 , t 2 ) = ( x 1 + x 2 , ξ 1 + ξ 2 , t 1 t 2 e i 2 π ( x 1 ξ 1 + x 2 ξ 2 + x 1 ξ 2 ) ) . {\displaystyle \left(x_{1},\xi _{1},t_{1}\right)\cdot \left(x_{2},\xi _{2},t_{2}\right)=\left(x_{1}+x_{2},\xi _{1}+\xi _{2},t_{1}t_{2}e^{i2\pi \left(x_{1}\xi _{1}+x_{2}\xi _{2}+x_{1}\xi _{2}\right)}\right).} Denote the Heisenberg group by H1. The above procedure describes not only the group structure, but also a standard unitary representation of H1 on a Hilbert space, which we denote by ρ : H1 → B(L2(R)). Define the linear automorphism of R2 by J ( x ξ ) = ( − ξ x ) {\displaystyle J{\begin{pmatrix}x\\\xi \end{pmatrix}}={\begin{pmatrix}-\xi \\x\end{pmatrix}}} so that J2 = −I. This J can be extended to a unique automorphism of H1: j ( x , ξ , t ) = ( − ξ , x , t e − i 2 π ξ x ) . {\displaystyle j\left(x,\xi ,t\right)=\left(-\xi ,x,te^{-i2\pi \xi x}\right).} According to the Stone–von Neumann theorem, the unitary representations ρ and ρ ∘ j are unitarily equivalent, so there is a unique intertwiner W ∈ U(L2(R)) such that ρ ∘ j = W ρ W ∗ . 
{\displaystyle \rho \circ j=W\rho W^{*}.} This operator W is the Fourier transform. Many of the standard properties of the Fourier transform are immediate consequences of this more general framework. For example, the square of the Fourier transform, W2, is an intertwiner associated with J2 = −I, and so (W2f)(x) = f (−x), which is the reflection of the original function f. == Complex domain == The integral for the Fourier transform f ^ ( ξ ) = ∫ − ∞ ∞ e − i 2 π ξ t f ( t ) d t {\displaystyle {\hat {f}}(\xi )=\int _{-\infty }^{\infty }e^{-i2\pi \xi t}f(t)\,dt} can be studied for complex values of its argument ξ. Depending on the properties of f, this might not converge off the real axis at all, or it might converge to a complex analytic function for all values of ξ = σ + iτ, or something in between. The Paley–Wiener theorem says that f is smooth (i.e., n-times differentiable for all positive integers n) and compactly supported if and only if f̂ (σ + iτ) is a holomorphic function for which there exists a constant a > 0 such that for any integer n ≥ 0, | ξ n f ^ ( ξ ) | ≤ C e a | τ | {\displaystyle \left\vert \xi ^{n}{\hat {f}}(\xi )\right\vert \leq Ce^{a\vert \tau \vert }} for some constant C. (In this case, f is supported on [−a, a].) This can be expressed by saying that f̂ is an entire function which is rapidly decreasing in σ (for fixed τ) and of exponential growth in τ (uniformly in σ). (If f is not smooth, but only L2, the statement still holds provided n = 0.) The space of such functions of a complex variable is called the Paley–Wiener space. This theorem has been generalised to semisimple Lie groups. If f is supported on the half-line t ≥ 0, then f is said to be "causal" because the impulse response function of a physically realisable filter must have this property, as no effect can precede its cause. Paley and Wiener showed that then f̂ extends to a holomorphic function on the complex lower half-plane τ < 0 which tends to zero as τ goes to infinity. 
The converse is false and it is not known how to characterise the Fourier transform of a causal function. === Laplace transform === The Fourier transform f̂(ξ) is related to the Laplace transform F(s), which is also used for the solution of differential equations and the analysis of filters. It may happen that a function f, for which the Fourier integral does not converge on the real axis at all, nevertheless has a complex Fourier transform defined in some region of the complex plane. For example, if f(t) is of exponential growth, i.e., | f ( t ) | < C e a | t | {\displaystyle \vert f(t)\vert <Ce^{a\vert t\vert }} for some constants C, a ≥ 0, then f ^ ( i τ ) = ∫ − ∞ ∞ e 2 π τ t f ( t ) d t , {\displaystyle {\hat {f}}(i\tau )=\int _{-\infty }^{\infty }e^{2\pi \tau t}f(t)\,dt,} convergent for all 2πτ < −a, is the two-sided Laplace transform of f. The more usual version ("one-sided") of the Laplace transform is F ( s ) = ∫ 0 ∞ f ( t ) e − s t d t . {\displaystyle F(s)=\int _{0}^{\infty }f(t)e^{-st}\,dt.} If f is also causal and analytic, then: f ^ ( i τ ) = F ( − 2 π τ ) . {\displaystyle {\hat {f}}(i\tau )=F(-2\pi \tau ).} Thus, extending the Fourier transform to the complex domain means that it includes the Laplace transform as a special case for causal functions, with the change of variable s = i2πξ. From another, perhaps more classical viewpoint, the Laplace transform by its form involves an additional exponential regulating term which lets it converge outside of the imaginary line where the Fourier transform is defined. As such it can converge for at most exponentially divergent series and integrals, whereas the original Fourier decomposition cannot, enabling analysis of systems with divergent or critical elements. Two particular examples from linear signal processing are the construction of allpass filter networks from critical comb and mitigating filters via exact pole-zero cancellation on the unit circle. 
Such designs are common in audio processing, where a highly nonlinear phase response is sought, as in reverb. Furthermore, when extended pulselike impulse responses are sought for signal processing work, the easiest way to produce them is to have one circuit which produces a divergent time response, and then to cancel its divergence through a delayed opposite and compensatory response. There, only the delay circuit in-between admits a classical Fourier description, which is critical. Both the circuits to the side are unstable, and do not admit a convergent Fourier decomposition. However, they do admit a Laplace domain description, with identical half-planes of convergence in the complex plane (or in the discrete case, the Z-plane), wherein their effects cancel. In modern mathematics the Laplace transform is conventionally subsumed under the aegis of Fourier methods. Both of them are subsumed by the far more general, and more abstract, idea of harmonic analysis. === Inversion === Still with ξ = σ + i τ {\displaystyle \xi =\sigma +i\tau } , if f ^ {\displaystyle {\widehat {f}}} is complex analytic for a ≤ τ ≤ b, then ∫ − ∞ ∞ f ^ ( σ + i a ) e i 2 π ξ t d σ = ∫ − ∞ ∞ f ^ ( σ + i b ) e i 2 π ξ t d σ {\displaystyle \int _{-\infty }^{\infty }{\hat {f}}(\sigma +ia)e^{i2\pi \xi t}\,d\sigma =\int _{-\infty }^{\infty }{\hat {f}}(\sigma +ib)e^{i2\pi \xi t}\,d\sigma } by Cauchy's integral theorem. Therefore, the Fourier inversion formula can use integration along different lines, parallel to the real axis. Theorem: If f(t) = 0 for t < 0, and |f(t)| < Cea|t| for some constants C, a > 0, then f ( t ) = ∫ − ∞ ∞ f ^ ( σ + i τ ) e i 2 π ξ t d σ , {\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}(\sigma +i\tau )e^{i2\pi \xi t}\,d\sigma ,} for any τ < −⁠a/2π⁠. 
This theorem implies the Mellin inversion formula for the Laplace transformation, f ( t ) = 1 i 2 π ∫ b − i ∞ b + i ∞ F ( s ) e s t d s {\displaystyle f(t)={\frac {1}{i2\pi }}\int _{b-i\infty }^{b+i\infty }F(s)e^{st}\,ds} for any b > a, where F(s) is the Laplace transform of f(t). The hypotheses can be weakened, as in the results of Carleson and Hunt, to f(t) e−at being L1, provided that f be of bounded variation in a closed neighborhood of t (cf. Dini test), the value of f at t be taken to be the arithmetic mean of the left and right limits, and that the integrals be taken in the sense of Cauchy principal values. L2 versions of these inversion formulas are also available. == Fourier transform on Euclidean space == The Fourier transform can be defined in any number of dimensions n. As with the one-dimensional case, there are many conventions. For an integrable function f(x), this article takes the definition: f ^ ( ξ ) = F ( f ) ( ξ ) = ∫ R n f ( x ) e − i 2 π ξ ⋅ x d x {\displaystyle {\hat {f}}({\boldsymbol {\xi }})={\mathcal {F}}(f)({\boldsymbol {\xi }})=\int _{\mathbb {R} ^{n}}f(\mathbf {x} )e^{-i2\pi {\boldsymbol {\xi }}\cdot \mathbf {x} }\,d\mathbf {x} } where x and ξ are n-dimensional vectors, and x · ξ is the dot product of the vectors. Alternatively, ξ can be viewed as belonging to the dual vector space R n ⋆ {\displaystyle \mathbb {R} ^{n\star }} , in which case the dot product becomes the contraction of x and ξ, usually written as ⟨x, ξ⟩. All of the basic properties listed above hold for the n-dimensional Fourier transform, as do Plancherel's and Parseval's theorems. When the function is integrable, the Fourier transform is still uniformly continuous and the Riemann–Lebesgue lemma holds. === Uncertainty principle === Generally speaking, the more concentrated f(x) is, the more spread out its Fourier transform f̂(ξ) must be. 
In particular, the scaling property of the Fourier transform may be seen as saying: if we squeeze a function in x, its Fourier transform stretches out in ξ. It is not possible to arbitrarily concentrate both a function and its Fourier transform. The trade-off between the compaction of a function and its Fourier transform can be formalized in the form of an uncertainty principle by viewing a function and its Fourier transform as conjugate variables with respect to the symplectic form on the time–frequency domain: from the point of view of the linear canonical transformation, the Fourier transform is rotation by 90° in the time–frequency domain, and preserves the symplectic form. Suppose f(x) is an integrable and square-integrable function. Without loss of generality, assume that f(x) is normalized: ∫ − ∞ ∞ | f ( x ) | 2 d x = 1. {\displaystyle \int _{-\infty }^{\infty }|f(x)|^{2}\,dx=1.} It follows from the Plancherel theorem that f̂(ξ) is also normalized. The spread around x = 0 may be measured by the dispersion about zero defined by D 0 ( f ) = ∫ − ∞ ∞ x 2 | f ( x ) | 2 d x . {\displaystyle D_{0}(f)=\int _{-\infty }^{\infty }x^{2}|f(x)|^{2}\,dx.} In probability terms, this is the second moment of |f(x)|2 about zero. The uncertainty principle states that, if f(x) is absolutely continuous and the functions x·f(x) and f′(x) are square integrable, then D 0 ( f ) D 0 ( f ^ ) ≥ 1 16 π 2 . {\displaystyle D_{0}(f)D_{0}({\hat {f}})\geq {\frac {1}{16\pi ^{2}}}.} The equality is attained only in the case f ( x ) = C 1 e − π x 2 σ 2 ∴ f ^ ( ξ ) = σ C 1 e − π σ 2 ξ 2 {\displaystyle {\begin{aligned}f(x)&=C_{1}\,e^{-\pi {\frac {x^{2}}{\sigma ^{2}}}}\\\therefore {\hat {f}}(\xi )&=\sigma C_{1}\,e^{-\pi \sigma ^{2}\xi ^{2}}\end{aligned}}} where σ > 0 is arbitrary and C1 = ⁠4√2/√σ⁠ so that f is L2-normalized. In other words, f is a (normalized) Gaussian function with variance σ2/2π, centered at zero, whose Fourier transform is a Gaussian function with variance σ−2/2π. 
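The equality case can be confirmed numerically. The sketch below (assuming SciPy for quadrature, an illustrative choice) takes σ = 1, for which C1 = 2^(1/4) and f equals its own Fourier transform, and checks that D0(f)·D0(f̂) = 1/(16π²):

```python
import numpy as np
from scipy.integrate import quad

# Equality case of the uncertainty principle, sigma = 1:
# f(x) = 2**0.25 * exp(-pi x^2) is L2-normalized and is its own Fourier
# transform under this article's convention, so D0(f) = D0(f_hat).
f = lambda x: 2**0.25 * np.exp(-np.pi * x**2)

norm, _ = quad(lambda x: f(x)**2, -np.inf, np.inf)       # should be 1
D0, _ = quad(lambda x: x**2 * f(x)**2, -np.inf, np.inf)  # dispersion about 0

print(np.isclose(norm, 1.0))                     # True: f is normalized
print(np.isclose(D0 * D0, 1 / (16 * np.pi**2)))  # True: equality attained
```

Here D0 evaluates to 1/(4π) in each domain, so the product sits exactly on the bound.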
Gaussian functions are examples of Schwartz functions (see the discussion on tempered distributions below). In fact, this inequality implies that: ( ∫ − ∞ ∞ ( x − x 0 ) 2 | f ( x ) | 2 d x ) ( ∫ − ∞ ∞ ( ξ − ξ 0 ) 2 | f ^ ( ξ ) | 2 d ξ ) ≥ 1 16 π 2 , ∀ x 0 , ξ 0 ∈ R . {\displaystyle \left(\int _{-\infty }^{\infty }(x-x_{0})^{2}|f(x)|^{2}\,dx\right)\left(\int _{-\infty }^{\infty }(\xi -\xi _{0})^{2}\left|{\hat {f}}(\xi )\right|^{2}\,d\xi \right)\geq {\frac {1}{16\pi ^{2}}},\quad \forall x_{0},\xi _{0}\in \mathbb {R} .} In quantum mechanics, the momentum and position wave functions are Fourier transform pairs, up to a factor of the Planck constant. With this constant properly taken into account, the inequality above becomes the statement of the Heisenberg uncertainty principle. A stronger uncertainty principle is the Hirschman uncertainty principle, which is expressed as: H ( | f | 2 ) + H ( | f ^ | 2 ) ≥ log ⁡ ( e 2 ) {\displaystyle H\left(\left|f\right|^{2}\right)+H\left(\left|{\hat {f}}\right|^{2}\right)\geq \log \left({\frac {e}{2}}\right)} where H(p) is the differential entropy of the probability density function p(x): H ( p ) = − ∫ − ∞ ∞ p ( x ) log ⁡ ( p ( x ) ) d x {\displaystyle H(p)=-\int _{-\infty }^{\infty }p(x)\log {\bigl (}p(x){\bigr )}\,dx} where the logarithms may be in any base that is consistent. The equality is attained for a Gaussian, as in the previous case. === Sine and cosine transforms === Fourier's original formulation of the transform did not use complex numbers, but rather sines and cosines. Statisticians and others still use this form. An absolutely integrable function f for which Fourier inversion holds can be expanded in terms of genuine frequencies (avoiding negative frequencies, which are sometimes considered hard to interpret physically) λ by f ( t ) = ∫ 0 ∞ ( a ( λ ) cos ⁡ ( 2 π λ t ) + b ( λ ) sin ⁡ ( 2 π λ t ) ) d λ . 
{\displaystyle f(t)=\int _{0}^{\infty }{\bigl (}a(\lambda )\cos(2\pi \lambda t)+b(\lambda )\sin(2\pi \lambda t){\bigr )}\,d\lambda .} This is called an expansion as a trigonometric integral, or a Fourier integral expansion. The coefficient functions a and b can be found by using variants of the Fourier cosine transform and the Fourier sine transform (the normalisations are, again, not standardised): a ( λ ) = 2 ∫ − ∞ ∞ f ( t ) cos ⁡ ( 2 π λ t ) d t {\displaystyle a(\lambda )=2\int _{-\infty }^{\infty }f(t)\cos(2\pi \lambda t)\,dt} and b ( λ ) = 2 ∫ − ∞ ∞ f ( t ) sin ⁡ ( 2 π λ t ) d t . {\displaystyle b(\lambda )=2\int _{-\infty }^{\infty }f(t)\sin(2\pi \lambda t)\,dt.} Older literature refers to the two transform functions, the Fourier cosine transform, a, and the Fourier sine transform, b. The function f can be recovered from the sine and cosine transform using f ( t ) = 2 ∫ 0 ∞ ∫ − ∞ ∞ f ( τ ) cos ⁡ ( 2 π λ ( τ − t ) ) d τ d λ . {\displaystyle f(t)=2\int _{0}^{\infty }\int _{-\infty }^{\infty }f(\tau )\cos {\bigl (}2\pi \lambda (\tau -t){\bigr )}\,d\tau \,d\lambda .} together with trigonometric identities. This is referred to as Fourier's integral formula. === Spherical harmonics === Let the set of homogeneous harmonic polynomials of degree k on Rn be denoted by Ak. The set Ak consists of the solid spherical harmonics of degree k. The solid spherical harmonics play a similar role in higher dimensions to the Hermite polynomials in dimension one. Specifically, if f(x) = e−π|x|2P(x) for some P(x) in Ak, then f̂(ξ) = i−k f(ξ). Let the set Hk be the closure in L2(Rn) of linear combinations of functions of the form f(|x|)P(x) where P(x) is in Ak. The space L2(Rn) is then a direct sum of the spaces Hk, the Fourier transform maps each space Hk to itself, and it is possible to characterize the action of the Fourier transform on each space Hk.
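The eigenfunction relation f̂(ξ) = i^(−k) f(ξ) can be illustrated in the simplest nontrivial case n = 1, k = 1, where P(x) = x. The sketch below checks by direct numerical quadrature (the grid and the sample frequency ξ = 0.7 are arbitrary choices) that the transform of x e^(−πx²) is −iξ e^(−πξ²):

```python
import numpy as np

# f(x) = x e^{-pi x^2} = e^{-pi |x|^2} P(x) with P(x) = x in A_1 (n = 1, k = 1).
# The claim is f_hat(xi) = i^{-1} f(xi) = -i xi e^{-pi xi^2}.
x, dx = np.linspace(-12.0, 12.0, 240_001, retstep=True)
f = x * np.exp(-np.pi * x**2)

def fourier(xi):
    # f_hat(xi) = integral of f(x) e^{-i 2 pi xi x} dx, by Riemann sum
    return np.sum(f * np.exp(-2j * np.pi * xi * x)) * dx

xi = 0.7
lhs = fourier(xi)
rhs = (1 / 1j) * xi * np.exp(-np.pi * xi**2)   # i^{-k} f(xi) with k = 1
```

The agreement is to roundoff, since the integrand decays faster than any power and the grid resolves it easily.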
Let f(x) = f0(|x|)P(x) (with P(x) in Ak), then f ^ ( ξ ) = F 0 ( | ξ | ) P ( ξ ) {\displaystyle {\hat {f}}(\xi )=F_{0}(|\xi |)P(\xi )} where F 0 ( r ) = 2 π i − k r − n + 2 k − 2 2 ∫ 0 ∞ f 0 ( s ) J n + 2 k − 2 2 ( 2 π r s ) s n + 2 k 2 d s . {\displaystyle F_{0}(r)=2\pi i^{-k}r^{-{\frac {n+2k-2}{2}}}\int _{0}^{\infty }f_{0}(s)J_{\frac {n+2k-2}{2}}(2\pi rs)s^{\frac {n+2k}{2}}\,ds.} Here J(n + 2k − 2)/2 denotes the Bessel function of the first kind with order ⁠n + 2k − 2/2⁠. When k = 0 this gives a useful formula for the Fourier transform of a radial function. This is essentially the Hankel transform. Moreover, there is a simple recursion relating the cases n + 2 and n allowing one to compute, e.g., the three-dimensional Fourier transform of a radial function from the one-dimensional one. === Restriction problems === In higher dimensions it becomes interesting to study restriction problems for the Fourier transform. The Fourier transform of an integrable function is continuous and the restriction of this function to any set is defined. But for a square-integrable function the Fourier transform can be an arbitrary square-integrable function, which need not be continuous. As such, the restriction of the Fourier transform of an L2(Rn) function cannot be defined on sets of measure 0. It is still an active area of study to understand restriction problems in Lp for 1 < p < 2. It is possible in some cases to define the restriction of a Fourier transform to a set S, provided S has non-zero curvature. The case when S is the unit sphere in Rn is of particular interest. In this case the Tomas–Stein restriction theorem states that the restriction of the Fourier transform to the unit sphere in Rn is a bounded operator on Lp provided 1 ≤ p ≤ ⁠2n + 2/n + 3⁠. One notable difference between the Fourier transform in 1 dimension versus higher dimensions concerns the partial sum operator.
Consider an increasing collection of measurable sets ER indexed by R ∈ (0,∞): such as balls of radius R centered at the origin, or cubes of side 2R. For a given integrable function f, consider the function fR defined by: f R ( x ) = ∫ E R f ^ ( ξ ) e i 2 π x ⋅ ξ d ξ , x ∈ R n . {\displaystyle f_{R}(x)=\int _{E_{R}}{\hat {f}}(\xi )e^{i2\pi x\cdot \xi }\,d\xi ,\quad x\in \mathbb {R} ^{n}.} Suppose in addition that f ∈ Lp(Rn). For n = 1 and 1 < p < ∞, if one takes ER = (−R, R), then fR converges to f in Lp as R tends to infinity, by the boundedness of the Hilbert transform. Naively one may hope the same holds true for n > 1. In the case that ER is taken to be a cube with side length R, then convergence still holds. Another natural candidate is the Euclidean ball ER = {ξ : |ξ| < R}. In order for this partial sum operator to converge, it is necessary that the multiplier for the unit ball be bounded in Lp(Rn). For n ≥ 2 it is a celebrated theorem of Charles Fefferman that the multiplier for the unit ball is never bounded unless p = 2. In fact, when p ≠ 2, this shows that not only may fR fail to converge to f in Lp, but for some functions f ∈ Lp(Rn), fR is not even an element of Lp. == Fourier transform on function spaces == The definition of the Fourier transform naturally extends from L 1 ( R ) {\displaystyle L^{1}(\mathbb {R} )} to L 1 ( R n ) {\displaystyle L^{1}(\mathbb {R} ^{n})} . That is, if f ∈ L 1 ( R n ) {\displaystyle f\in L^{1}(\mathbb {R} ^{n})} then the Fourier transform F : L 1 ( R n ) → L ∞ ( R n ) {\displaystyle {\mathcal {F}}:L^{1}(\mathbb {R} ^{n})\to L^{\infty }(\mathbb {R} ^{n})} is given by f ( x ) ↦ f ^ ( ξ ) = ∫ R n f ( x ) e − i 2 π ξ ⋅ x d x , ∀ ξ ∈ R n . 
{\displaystyle f(x)\mapsto {\hat {f}}(\xi )=\int _{\mathbb {R} ^{n}}f(x)e^{-i2\pi \xi \cdot x}\,dx,\quad \forall \xi \in \mathbb {R} ^{n}.} This operator is bounded as sup ξ ∈ R n | f ^ ( ξ ) | ≤ ∫ R n | f ( x ) | d x , {\displaystyle \sup _{\xi \in \mathbb {R} ^{n}}\left\vert {\hat {f}}(\xi )\right\vert \leq \int _{\mathbb {R} ^{n}}\vert f(x)\vert \,dx,} which shows that its operator norm is bounded by 1. The Riemann–Lebesgue lemma shows that if f ∈ L 1 ( R n ) {\displaystyle f\in L^{1}(\mathbb {R} ^{n})} then its Fourier transform actually belongs to the space of continuous functions which vanish at infinity, i.e., f ^ ∈ C 0 ( R n ) ⊂ L ∞ ( R n ) {\displaystyle {\hat {f}}\in C_{0}(\mathbb {R} ^{n})\subset L^{\infty }(\mathbb {R} ^{n})} . Furthermore, the image of L 1 {\displaystyle L^{1}} under F {\displaystyle {\mathcal {F}}} is a strict subset of C 0 ( R n ) {\displaystyle C_{0}(\mathbb {R} ^{n})} . Similarly to the case of one variable, the Fourier transform can be defined on L 2 ( R n ) {\displaystyle L^{2}(\mathbb {R} ^{n})} . The Fourier transform in L 2 ( R n ) {\displaystyle L^{2}(\mathbb {R} ^{n})} is no longer given by an ordinary Lebesgue integral, although it can be computed by an improper integral, i.e., f ^ ( ξ ) = lim R → ∞ ∫ | x | ≤ R f ( x ) e − i 2 π ξ ⋅ x d x {\displaystyle {\hat {f}}(\xi )=\lim _{R\to \infty }\int _{|x|\leq R}f(x)e^{-i2\pi \xi \cdot x}\,dx} where the limit is taken in the L2 sense. Furthermore, F : L 2 ( R n ) → L 2 ( R n ) {\displaystyle {\mathcal {F}}:L^{2}(\mathbb {R} ^{n})\to L^{2}(\mathbb {R} ^{n})} is a unitary operator. For an operator to be unitary it is sufficient to show that it is bijective and preserves the inner product, so in this case these follow from the Fourier inversion theorem combined with the fact that for any f, g ∈ L2(Rn) we have ∫ R n f ( x ) F g ( x ) d x = ∫ R n F f ( x ) g ( x ) d x . 
{\displaystyle \int _{\mathbb {R} ^{n}}f(x){\mathcal {F}}g(x)\,dx=\int _{\mathbb {R} ^{n}}{\mathcal {F}}f(x)g(x)\,dx.} In particular, the image of L2(Rn) under the Fourier transform is L2(Rn) itself. === On other Lp === For 1 < p < 2 {\displaystyle 1<p<2} , the Fourier transform can be defined on L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} by Marcinkiewicz interpolation, which amounts to decomposing such functions into a fat tail part in L2 plus a fat body part in L1. In each of these spaces, the Fourier transform of a function in Lp(Rn) is in Lq(Rn), where q = ⁠p/p − 1⁠ is the Hölder conjugate of p (by the Hausdorff–Young inequality). However, except for p = 2, the image is not easily characterized. Further extensions become more technical. The Fourier transform of functions in Lp for the range 2 < p < ∞ requires the study of distributions. In fact, it can be shown that there are functions in Lp with p > 2 so that the Fourier transform is not defined as a function. === Tempered distributions === One might consider enlarging the domain of the Fourier transform from L 1 + L 2 {\displaystyle L^{1}+L^{2}} by considering generalized functions, or distributions. A distribution on R n {\displaystyle \mathbb {R} ^{n}} is a continuous linear functional on the space C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} of compactly supported smooth functions (i.e. bump functions), equipped with a suitable topology. Since C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} is dense in L 2 ( R n ) {\displaystyle L^{2}(\mathbb {R} ^{n})} , the Plancherel theorem allows one to extend the definition of the Fourier transform to general functions in L 2 ( R n ) {\displaystyle L^{2}(\mathbb {R} ^{n})} by continuity arguments. The strategy is then to consider the action of the Fourier transform on C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} and pass to distributions by duality.
The obstruction to doing this is that the Fourier transform does not map C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} to C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} . In fact the Fourier transform of an element in C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} can not vanish on an open set; see the above discussion on the uncertainty principle. The Fourier transform can also be defined for tempered distributions S ′ ( R n ) {\displaystyle {\mathcal {S}}'(\mathbb {R} ^{n})} , dual to the space of Schwartz functions S ( R n ) {\displaystyle {\mathcal {S}}(\mathbb {R} ^{n})} . A Schwartz function is a smooth function that decays rapidly at infinity (faster than any inverse power), along with all of its derivatives, hence C c ∞ ( R n ) ⊂ S ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})\subset {\mathcal {S}}(\mathbb {R} ^{n})} and: F : C c ∞ ( R n ) → S ( R n ) ∖ C c ∞ ( R n ) . {\displaystyle {\mathcal {F}}:C_{c}^{\infty }(\mathbb {R} ^{n})\rightarrow S(\mathbb {R} ^{n})\setminus C_{c}^{\infty }(\mathbb {R} ^{n}).} The Fourier transform is an automorphism of the Schwartz space and, by duality, also an automorphism of the space of tempered distributions. The tempered distributions include well-behaved functions of polynomial growth, distributions of compact support as well as all the integrable functions mentioned above. For the definition of the Fourier transform of a tempered distribution, let f {\displaystyle f} and g {\displaystyle g} be integrable functions, and let f ^ {\displaystyle {\hat {f}}} and g ^ {\displaystyle {\hat {g}}} be their Fourier transforms respectively. Then the Fourier transform obeys the following multiplication formula, ∫ R n f ^ ( x ) g ( x ) d x = ∫ R n f ( x ) g ^ ( x ) d x .
{\displaystyle \int _{\mathbb {R} ^{n}}{\hat {f}}(x)g(x)\,dx=\int _{\mathbb {R} ^{n}}f(x){\hat {g}}(x)\,dx.} Every integrable function f {\displaystyle f} defines (induces) a distribution T f {\displaystyle T_{f}} by the relation T f ( ϕ ) = ∫ R n f ( x ) ϕ ( x ) d x , ∀ ϕ ∈ S ( R n ) . {\displaystyle T_{f}(\phi )=\int _{\mathbb {R} ^{n}}f(x)\phi (x)\,dx,\quad \forall \phi \in {\mathcal {S}}(\mathbb {R} ^{n}).} So it makes sense to define the Fourier transform of a tempered distribution T f ∈ S ′ ( R ) {\displaystyle T_{f}\in {\mathcal {S}}'(\mathbb {R} )} by the duality: ⟨ T ^ f , ϕ ⟩ = ⟨ T f , ϕ ^ ⟩ , ∀ ϕ ∈ S ( R n ) . {\displaystyle \langle {\widehat {T}}_{f},\phi \rangle =\langle T_{f},{\widehat {\phi }}\rangle ,\quad \forall \phi \in {\mathcal {S}}(\mathbb {R} ^{n}).} Extending this to all tempered distributions T {\displaystyle T} gives the general definition of the Fourier transform. Distributions can be differentiated and the above-mentioned compatibility of the Fourier transform with differentiation and convolution remains true for tempered distributions. == Generalizations == === Fourier–Stieltjes transform on measurable spaces === The Fourier transform of a finite Borel measure μ on Rn is given by the continuous function: μ ^ ( ξ ) = ∫ R n e − i 2 π x ⋅ ξ d μ , {\displaystyle {\hat {\mu }}(\xi )=\int _{\mathbb {R} ^{n}}e^{-i2\pi x\cdot \xi }\,d\mu ,} and called the Fourier-Stieltjes transform due to its connection with the Riemann-Stieltjes integral representation of (Radon) measures. If μ {\displaystyle \mu } is the probability distribution of a random variable X {\displaystyle X} then its Fourier–Stieltjes transform is, by definition, a characteristic function. If, in addition, the probability distribution has a probability density function, this definition coincides with the usual Fourier transform of the density.
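As a concrete check of that connection (a numerical sketch; the quadrature grid and sample frequency are arbitrary choices): for a standard normal variable the density is f(x) = (2π)^(−1/2) e^(−x²/2) and the characteristic function is φ(t) = E[e^(itX)] = e^(−t²/2); under the e^(−i2πξx) convention used in this article, f̂(ξ) = φ(−2πξ):

```python
import numpy as np

# Density of a standard normal random variable.
x, dx = np.linspace(-12.0, 12.0, 240_001, retstep=True)
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def f_hat(xi):
    # Fourier transform under the e^{-i 2 pi xi x} convention, by Riemann sum
    return np.sum(f * np.exp(-2j * np.pi * xi * x)) * dx

def phi(t):
    # characteristic function of the standard normal: e^{-t^2/2}
    return np.exp(-t**2 / 2)

xi = 0.3
lhs = f_hat(xi)
rhs = phi(-2 * np.pi * xi)   # the two transforms agree up to this rescaling
```

The factor −2π is exactly the "factor of the Planck constant"-style rescaling between the two conventions.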
Stated more generally, when μ {\displaystyle \mu } is absolutely continuous with respect to the Lebesgue measure, i.e., d μ = f ( x ) d x , {\displaystyle d\mu =f(x)dx,} then μ ^ ( ξ ) = f ^ ( ξ ) , {\displaystyle {\hat {\mu }}(\xi )={\hat {f}}(\xi ),} and the Fourier-Stieltjes transform reduces to the usual definition of the Fourier transform. That is, the notable difference with the Fourier transform of integrable functions is that the Fourier-Stieltjes transform need not vanish at infinity, i.e., the Riemann–Lebesgue lemma fails for measures. Bochner's theorem characterizes which functions may arise as the Fourier–Stieltjes transform of a positive measure on the circle. One example of a finite Borel measure that is not a function is the Dirac measure. Its Fourier transform is a constant function (whose value depends on the form of the Fourier transform used). === Locally compact abelian groups === The Fourier transform may be generalized to any locally compact abelian group, i.e., an abelian group that is also a locally compact Hausdorff space such that the group operation is continuous. If G is a locally compact abelian group, it has a translation invariant measure μ, called Haar measure. For a locally compact abelian group G, the set of irreducible, i.e. one-dimensional, unitary representations are called its characters. With its natural group structure and the topology of uniform convergence on compact sets (that is, the topology induced by the compact-open topology on the space of all continuous functions from G {\displaystyle G} to the circle group), the set of characters Ĝ is itself a locally compact abelian group, called the Pontryagin dual of G. For a function f in L1(G), its Fourier transform is defined by f ^ ( ξ ) = ∫ G ξ ( x ) f ( x ) d μ for any ξ ∈ G ^ . {\displaystyle {\hat {f}}(\xi )=\int _{G}\xi (x)f(x)\,d\mu \quad {\text{for any }}\xi \in {\hat {G}}.} The Riemann–Lebesgue lemma holds in this case; f̂(ξ) is a function vanishing at infinity on Ĝ. 
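A minimal finite instance of this definition is G = Z/NZ with counting measure as Haar measure. Labeling the characters by k as ξ_k(x) = e^(−i2πkx/N) (labeling by −k instead is an equally valid convention), the group Fourier transform f̂(ξ_k) = Σ_x ξ_k(x) f(x) is exactly the discrete Fourier transform as computed by numpy.fft.fft:

```python
import numpy as np

# Fourier transform on the finite abelian group G = Z/NZ with counting (Haar)
# measure. Character labeled by k: xi_k(x) = e^{-i 2 pi k x / N}.
rng = np.random.default_rng(0)
N = 16
f = rng.standard_normal(N)

x = np.arange(N)
f_hat = np.array([np.sum(np.exp(-2j * np.pi * k * x / N) * f) for k in range(N)])

# numpy.fft.fft uses the same sign convention, so the two should match.
ref = np.fft.fft(f)
```

The Pontryagin dual of Z/NZ is again Z/NZ, which is why the transform is indexed by the same set of N labels.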
The Fourier transform on T = R/Z is an example; here T is a locally compact abelian group, and the Haar measure μ on T can be thought of as the Lebesgue measure on [0,1). Consider the representations of T on the complex plane C, viewed as a 1-dimensional complex vector space. There is a family of representations (which are irreducible since C is 1-dimensional) { e k : T → G L 1 ( C ) = C ∗ ∣ k ∈ Z } {\displaystyle \{e_{k}:T\rightarrow GL_{1}(C)=C^{*}\mid k\in Z\}} where e k ( x ) = e i 2 π k x {\displaystyle e_{k}(x)=e^{i2\pi kx}} for x ∈ T {\displaystyle x\in T} . The character of such a representation, that is, the trace of e k ( x ) {\displaystyle e_{k}(x)} for each x ∈ T {\displaystyle x\in T} and k ∈ Z {\displaystyle k\in Z} , is e i 2 π k x {\displaystyle e^{i2\pi kx}} itself. In the case of a finite group, the character table of the group G consists of rows of vectors, where each row is the character of one irreducible representation of G, and these vectors form an orthonormal basis of the space of class functions mapping from G to C, by Schur's lemma. Now the group T is no longer finite but still compact, and the orthonormality of the character table is preserved. Each row of the table is the function e k ( x ) {\displaystyle e_{k}(x)} of x ∈ T , {\displaystyle x\in T,} and the inner product between two class functions (all functions being class functions since T is abelian) f , g ∈ L 2 ( T , d μ ) {\displaystyle f,g\in L^{2}(T,d\mu )} is defined as ⟨ f , g ⟩ = 1 | T | ∫ [ 0 , 1 ) f ( y ) g ¯ ( y ) d μ ( y ) {\textstyle \langle f,g\rangle ={\frac {1}{|T|}}\int _{[0,1)}f(y){\overline {g}}(y)d\mu (y)} with the normalizing factor | T | = 1 {\displaystyle |T|=1} . The sequence { e k ∣ k ∈ Z } {\displaystyle \{e_{k}\mid k\in Z\}} is an orthonormal basis of the space of class functions L 2 ( T , d μ ) {\displaystyle L^{2}(T,d\mu )} .
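The orthonormality just described, and the recovery of coefficients by inner products, can be checked numerically. In the sketch below the uniform grid on [0,1) is an arbitrary discretization; for these particular characters the Riemann sums happen to be exact up to rounding:

```python
import numpy as np

# Characters e_k(x) = e^{i 2 pi k x} on T = R/Z, with Haar (Lebesgue) measure
# on [0,1). Inner products <e_j, e_k> should be 1 when j = k and 0 otherwise.
M = 4096
xs = np.arange(M) / M              # uniform grid on [0, 1)

def e(k):
    return np.exp(2j * np.pi * k * xs)

def inner(u, v):
    return np.mean(u * np.conj(v))   # Riemann sum for the integral over [0,1)

g11 = inner(e(3), e(3))            # expect 1
g12 = inner(e(3), e(-5))           # expect 0

# Recovering a Fourier coefficient of a finite combination of characters:
f = 2 * e(1) + 3 * e(-2)
c1 = inner(f, e(1))                # expect 2
```

This is the compact-group picture of the classical Fourier series of a periodic function.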
For any representation V of a finite group G, χ v {\displaystyle \chi _{v}} can be expanded as the sum ∑ i ⟨ χ v , χ v i ⟩ χ v i {\textstyle \sum _{i}\left\langle \chi _{v},\chi _{v_{i}}\right\rangle \chi _{v_{i}}} ( V i {\displaystyle V_{i}} are the irreps of G), such that ⟨ χ v , χ v i ⟩ = 1 | G | ∑ g ∈ G χ v ( g ) χ ¯ v i ( g ) {\textstyle \left\langle \chi _{v},\chi _{v_{i}}\right\rangle ={\frac {1}{|G|}}\sum _{g\in G}\chi _{v}(g){\overline {\chi }}_{v_{i}}(g)} . Similarly for G = T {\displaystyle G=T} and f ∈ L 2 ( T , d μ ) {\displaystyle f\in L^{2}(T,d\mu )} , f ( x ) = ∑ k ∈ Z f ^ ( k ) e k {\textstyle f(x)=\sum _{k\in Z}{\hat {f}}(k)e_{k}} . The Pontryagin dual T ^ {\displaystyle {\hat {T}}} is { e k } ( k ∈ Z ) {\displaystyle \{e_{k}\}(k\in Z)} and for f ∈ L 2 ( T , d μ ) {\displaystyle f\in L^{2}(T,d\mu )} , f ^ ( k ) = 1 | T | ∫ [ 0 , 1 ) f ( y ) e − i 2 π k y d y {\textstyle {\hat {f}}(k)={\frac {1}{|T|}}\int _{[0,1)}f(y)e^{-i2\pi ky}dy} is its Fourier transform for e k ∈ T ^ {\displaystyle e_{k}\in {\hat {T}}} . === Gelfand transform === The Fourier transform is also a special case of Gelfand transform. In this particular context, it is closely related to the Pontryagin duality map defined above. Given an abelian locally compact Hausdorff topological group G, as before we consider space L1(G), defined using a Haar measure. With convolution as multiplication, L1(G) is an abelian Banach algebra. It also has an involution * given by f ∗ ( g ) = f ( g − 1 ) ¯ . {\displaystyle f^{*}(g)={\overline {f\left(g^{-1}\right)}}.} Taking the completion with respect to the largest possible C*-norm gives its enveloping C*-algebra, called the group C*-algebra C*(G) of G. (Any C*-norm on L1(G) is bounded by the L1 norm, therefore their supremum exists.) Given any abelian C*-algebra A, the Gelfand transform gives an isomorphism between A and C0(A^), where A^ is the space of multiplicative linear functionals, i.e. one-dimensional representations, on A with the weak-* topology.
The map is simply given by a ↦ ( φ ↦ φ ( a ) ) {\displaystyle a\mapsto {\bigl (}\varphi \mapsto \varphi (a){\bigr )}} It turns out that the multiplicative linear functionals of C*(G), after suitable identification, are exactly the characters of G, and the Gelfand transform, when restricted to the dense subset L1(G), is the Fourier–Pontryagin transform. === Compact non-abelian groups === The Fourier transform can also be defined for functions on a non-abelian group, provided that the group is compact. Once the assumption that the underlying group is abelian is removed, irreducible unitary representations need not be one-dimensional. This means the Fourier transform on a non-abelian group takes values as Hilbert space operators. The Fourier transform on compact groups is a major tool in representation theory and non-commutative harmonic analysis. Let G be a compact Hausdorff topological group. Let Σ denote the collection of all isomorphism classes of finite-dimensional irreducible unitary representations, along with a definite choice of representation U(σ) on the Hilbert space Hσ of finite dimension dσ for each σ ∈ Σ. If μ is a finite Borel measure on G, then the Fourier–Stieltjes transform of μ is the operator on Hσ defined by ⟨ μ ^ ξ , η ⟩ H σ = ∫ G ⟨ U ¯ g ( σ ) ξ , η ⟩ d μ ( g ) {\displaystyle \left\langle {\hat {\mu }}\xi ,\eta \right\rangle _{H_{\sigma }}=\int _{G}\left\langle {\overline {U}}_{g}^{(\sigma )}\xi ,\eta \right\rangle \,d\mu (g)} where U̅(σ) is the complex-conjugate representation of U(σ) acting on Hσ. If μ is absolutely continuous with respect to the left-invariant probability measure λ on G, represented as d μ = f d λ {\displaystyle d\mu =f\,d\lambda } for some f ∈ L1(λ), one identifies the Fourier transform of f with the Fourier–Stieltjes transform of μ.
The mapping μ ↦ μ ^ {\displaystyle \mu \mapsto {\hat {\mu }}} defines an isomorphism between the Banach space M(G) of finite Borel measures (see rca space) and a closed subspace of the Banach space C∞(Σ) consisting of all sequences E = (Eσ) indexed by Σ of (bounded) linear operators Eσ : Hσ → Hσ for which the norm ‖ E ‖ = sup σ ∈ Σ ‖ E σ ‖ {\displaystyle \|E\|=\sup _{\sigma \in \Sigma }\left\|E_{\sigma }\right\|} is finite. The "convolution theorem" asserts that, furthermore, this isomorphism of Banach spaces is in fact an isometric isomorphism of C*-algebras into a subspace of C∞(Σ). Multiplication on M(G) is given by convolution of measures and the involution * defined by f ∗ ( g ) = f ( g − 1 ) ¯ , {\displaystyle f^{*}(g)={\overline {f\left(g^{-1}\right)}},} and C∞(Σ) has a natural C*-algebra structure as Hilbert space operators. The Peter–Weyl theorem holds, and a version of the Fourier inversion formula (Plancherel's theorem) follows: if f ∈ L2(G), then f ( g ) = ∑ σ ∈ Σ d σ tr ⁡ ( f ^ ( σ ) U g ( σ ) ) {\displaystyle f(g)=\sum _{\sigma \in \Sigma }d_{\sigma }\operatorname {tr} \left({\hat {f}}(\sigma )U_{g}^{(\sigma )}\right)} where the summation is understood as convergent in the L2 sense. The generalization of the Fourier transform to the noncommutative situation has also in part contributed to the development of noncommutative geometry. In this context, a categorical generalization of the Fourier transform to noncommutative groups is Tannaka–Krein duality, which replaces the group of characters with the category of representations. However, this loses the connection with harmonic functions. 
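The inversion formula above can be illustrated on a finite group, the smallest non-abelian case S3. The conventions in this sketch are choices made for the illustration (normalized counting measure as Haar measure, and f̂(σ) = (1/|G|) Σ_g f(g) (U_g^(σ))†, so that f(g) = Σ_σ d_σ tr(f̂(σ) U_g^(σ)) by Schur orthogonality); the concrete matrices realize S3 as the six orthogonal symmetries of an equilateral triangle:

```python
import numpy as np

# S3 realized as the six 2x2 orthogonal symmetries of an equilateral triangle:
# rotations by 0, 120, 240 degrees and three reflections.
def rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

def refl(t):
    c, s = np.cos(2 * t), np.sin(2 * t)
    return np.array([[c, s], [s, -c]])

elements = [rot(2 * np.pi * j / 3) for j in range(3)] + \
           [refl(np.pi * j / 3) for j in range(3)]

# The three irreducible unitary representations of S3: trivial (d = 1),
# sign = determinant (d = 1), and the standard representation (d = 2).
irreps = [
    (1, lambda U: np.array([[1.0]])),
    (1, lambda U: np.array([[np.linalg.det(U)]])),
    (2, lambda U: U),
]

rng = np.random.default_rng(1)
f = rng.standard_normal(6)                  # arbitrary function on the group

# Fourier transform: f_hat(sigma) = (1/|G|) sum_g f(g) U_g^dagger
f_hat = [sum(f[i] * rep(U).conj().T for i, U in enumerate(elements)) / 6
         for _, rep in irreps]

# Inversion: f(g) = sum_sigma d_sigma tr(f_hat(sigma) U_g)
recovered = np.array([
    sum(d * np.trace(fh @ rep(U)) for (d, rep), fh in zip(irreps, f_hat))
    for U in elements
])
```

Note that the transform at the two-dimensional representation is a 2×2 matrix, not a number, which is exactly the operator-valued behaviour described above.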
== Alternatives == In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution, but no frequency information, while the Fourier transform has perfect frequency resolution, but no time information: the magnitude of the Fourier transform at a point is how much frequency content there is, but location is only given by phase (argument of the Fourier transform at a point), and standing waves are not localized in time – a sine wave continues out to infinity, without decaying. This limits the usefulness of the Fourier transform for analyzing signals that are localized in time, notably transients, or any signal of finite extent. As alternatives to the Fourier transform, in time–frequency analysis, one uses time–frequency transforms or time–frequency distributions to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform, fractional Fourier transform, Synchrosqueezing Fourier transform, or other functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform. == Example == The following figures provide a visual illustration of how the Fourier transform's integral measures whether a frequency is present in a particular function. The first image depicts the function f ( t ) = cos ⁡ ( 2 π 3 t ) e − π t 2 , {\displaystyle f(t)=\cos(2\pi \ 3t)\ e^{-\pi t^{2}},} which is a 3 Hz cosine wave (the first term) shaped by a Gaussian envelope function (the second term) that smoothly turns the wave on and off. The next 2 images show the product f ( t ) e − i 2 π 3 t , {\displaystyle f(t)e^{-i2\pi 3t},} which must be integrated to calculate the Fourier transform at +3 Hz. 
The real part of the integrand has a non-negative average value, because the alternating signs of f ( t ) {\displaystyle f(t)} and Re ⁡ ( e − i 2 π 3 t ) {\displaystyle \operatorname {Re} (e^{-i2\pi 3t})} oscillate at the same rate and in phase, whereas f ( t ) {\displaystyle f(t)} and Im ⁡ ( e − i 2 π 3 t ) {\displaystyle \operatorname {Im} (e^{-i2\pi 3t})} oscillate at the same rate but with orthogonal phase. The absolute value of the Fourier transform at +3 Hz is 0.5, which is relatively large. When added to the Fourier transform at -3 Hz (which is identical because we started with a real signal), we find that the amplitude of the 3 Hz frequency component is 1. However, when you try to measure a frequency that is not present, both the real and imaginary component of the integral vary rapidly between positive and negative values. For instance, the red curve is looking for 5 Hz. The absolute value of its integral is nearly zero, indicating that almost no 5 Hz component was in the signal. The general situation is usually more complicated than this, but heuristically this is how the Fourier transform measures how much of an individual frequency is present in a function f ( t ) . {\displaystyle f(t).} To reinforce an earlier point, the reason for the response at ξ = − 3 {\displaystyle \xi =-3} Hz is that cos ⁡ ( 2 π 3 t ) {\displaystyle \cos(2\pi 3t)} and cos ⁡ ( 2 π ( − 3 ) t ) {\displaystyle \cos(2\pi (-3)t)} are indistinguishable. The transform of e i 2 π 3 t ⋅ e − π t 2 {\displaystyle e^{i2\pi 3t}\cdot e^{-\pi t^{2}}} would have just one response, whose amplitude is the integral of the smooth envelope: e − π t 2 , {\displaystyle e^{-\pi t^{2}},} whereas Re ⁡ ( f ( t ) ⋅ e − i 2 π 3 t ) {\displaystyle \operatorname {Re} (f(t)\cdot e^{-i2\pi 3t})} is e − π t 2 ( 1 + cos ⁡ ( 2 π 6 t ) ) / 2.
{\displaystyle e^{-\pi t^{2}}(1+\cos(2\pi 6t))/2.} == Applications == Linear operations performed in one domain (time or frequency) have corresponding operations in the other domain, which are sometimes easier to perform. The operation of differentiation in the time domain corresponds to multiplication by the frequency, so some differential equations are easier to analyze in the frequency domain. Also, convolution in the time domain corresponds to ordinary multiplication in the frequency domain (see Convolution theorem). After performing the desired operations, transformation of the result can be made back to the time domain. Harmonic analysis is the systematic study of the relationship between the frequency and time domains, including the kinds of functions or operations that are "simpler" in one or the other, and has deep connections to many areas of modern mathematics. === Analysis of differential equations === Perhaps the most important use of the Fourier transformation is to solve partial differential equations. Many of the equations of the mathematical physics of the nineteenth century can be treated this way. Fourier studied the heat equation, which in one dimension and in dimensionless units is ∂ 2 y ( x , t ) ∂ x 2 = ∂ y ( x , t ) ∂ t . {\displaystyle {\frac {\partial ^{2}y(x,t)}{\partial x^{2}}}={\frac {\partial y(x,t)}{\partial t}}.} The example we will give, a slightly more difficult one, is the wave equation in one dimension, ∂ 2 y ( x , t ) ∂ x 2 = ∂ 2 y ( x , t ) ∂ t 2 . {\displaystyle {\frac {\partial ^{2}y(x,t)}{\partial x^{2}}}={\frac {\partial ^{2}y(x,t)}{\partial t^{2}}}.} As usual, the problem is not to find a solution: there are infinitely many. The problem is that of the so-called "boundary problem": find a solution which satisfies the "boundary conditions" y ( x , 0 ) = f ( x ) , ∂ y ( x , 0 ) ∂ t = g ( x ) . {\displaystyle y(x,0)=f(x),\qquad {\frac {\partial y(x,0)}{\partial t}}=g(x).} Here, f and g are given functions.
For the heat equation, only one boundary condition can be required (usually the first one). But for the wave equation, there are still infinitely many solutions y which satisfy the first boundary condition. But when one imposes both conditions, there is only one possible solution. It is easier to find the Fourier transform ŷ of the solution than to find the solution directly. This is because the Fourier transformation takes differentiation into multiplication by the Fourier-dual variable, and so a partial differential equation applied to the original function is transformed into multiplication by polynomial functions of the dual variables applied to the transformed function. After ŷ is determined, we can apply the inverse Fourier transformation to find y. Fourier's method is as follows. First, note that any function of the forms cos ⁡ ( 2 π ξ ( x ± t ) ) or sin ⁡ ( 2 π ξ ( x ± t ) ) {\displaystyle \cos {\bigl (}2\pi \xi (x\pm t){\bigr )}{\text{ or }}\sin {\bigl (}2\pi \xi (x\pm t){\bigr )}} satisfies the wave equation. These are called the elementary solutions. Second, note that therefore any integral y ( x , t ) = ∫ 0 ∞ d ξ [ a + ( ξ ) cos ⁡ ( 2 π ξ ( x + t ) ) + a − ( ξ ) cos ⁡ ( 2 π ξ ( x − t ) ) + b + ( ξ ) sin ⁡ ( 2 π ξ ( x + t ) ) + b − ( ξ ) sin ⁡ ( 2 π ξ ( x − t ) ) ] {\displaystyle {\begin{aligned}y(x,t)=\int _{0}^{\infty }d\xi {\Bigl [}&a_{+}(\xi )\cos {\bigl (}2\pi \xi (x+t){\bigr )}+a_{-}(\xi )\cos {\bigl (}2\pi \xi (x-t){\bigr )}+{}\\&b_{+}(\xi )\sin {\bigl (}2\pi \xi (x+t){\bigr )}+b_{-}(\xi )\sin \left(2\pi \xi (x-t)\right){\Bigr ]}\end{aligned}}} satisfies the wave equation for arbitrary a+, a−, b+, b−. This integral may be interpreted as a continuous linear combination of solutions for the linear equation. Now this resembles the formula for the Fourier synthesis of a function. In fact, this is the real inverse Fourier transform of a± and b± in the variable x. 
The third step is to examine how to find the specific unknown coefficient functions a± and b± that will lead to y satisfying the boundary conditions. We are interested in the values of these solutions at t = 0. So we will set t = 0. Assuming that the conditions needed for Fourier inversion are satisfied, we can then find the Fourier sine and cosine transforms (in the variable x) of both sides and obtain 2 ∫ − ∞ ∞ y ( x , 0 ) cos ⁡ ( 2 π ξ x ) d x = a + + a − {\displaystyle 2\int _{-\infty }^{\infty }y(x,0)\cos(2\pi \xi x)\,dx=a_{+}+a_{-}} and 2 ∫ − ∞ ∞ y ( x , 0 ) sin ⁡ ( 2 π ξ x ) d x = b + + b − . {\displaystyle 2\int _{-\infty }^{\infty }y(x,0)\sin(2\pi \xi x)\,dx=b_{+}+b_{-}.} Similarly, taking the derivative of y with respect to t and then applying the Fourier sine and cosine transformations yields 2 ∫ − ∞ ∞ ∂ y ( x , 0 ) ∂ t sin ⁡ ( 2 π ξ x ) d x = ( 2 π ξ ) ( − a + + a − ) {\displaystyle 2\int _{-\infty }^{\infty }{\frac {\partial y(x,0)}{\partial t}}\sin(2\pi \xi x)\,dx=(2\pi \xi )\left(-a_{+}+a_{-}\right)} and 2 ∫ − ∞ ∞ ∂ y ( x , 0 ) ∂ t cos ⁡ ( 2 π ξ x ) d x = ( 2 π ξ ) ( b + − b − ) . {\displaystyle 2\int _{-\infty }^{\infty }{\frac {\partial y(x,0)}{\partial t}}\cos(2\pi \xi x)\,dx=(2\pi \xi )\left(b_{+}-b_{-}\right).} These are four linear equations for the four unknowns a± and b±, in terms of the Fourier sine and cosine transforms of the boundary conditions, which are easily solved by elementary algebra, provided that these transforms can be found. In summary, we chose a set of elementary solutions, parametrized by ξ, of which the general solution would be a (continuous) linear combination in the form of an integral over the parameter ξ. But this integral was in the form of a Fourier integral. The next step was to express the boundary conditions in terms of these integrals, and set them equal to the given functions f and g.
But these expressions also took the form of a Fourier integral because of the properties of the Fourier transform of a derivative. The last step was to exploit Fourier inversion by applying the Fourier transformation to both sides, thus obtaining expressions for the coefficient functions a± and b± in terms of the given boundary conditions f and g. From a higher point of view, Fourier's procedure can be reformulated more conceptually. Since there are two variables, we will use the Fourier transformation in both x and t rather than operate as Fourier did, who only transformed in the spatial variables. Note that ŷ must be considered in the sense of a distribution since y(x, t) is not going to be L1: as a wave, it will persist through time and thus is not a transient phenomenon. But it will be bounded and so its Fourier transform can be defined as a distribution. The operational properties of the Fourier transformation that are relevant to this equation are that it takes differentiation in x to multiplication by i2πξ and differentiation with respect to t to multiplication by i2πf where f is the frequency. Then the wave equation becomes an algebraic equation in ŷ: ξ 2 y ^ ( ξ , f ) = f 2 y ^ ( ξ , f ) . {\displaystyle \xi ^{2}{\hat {y}}(\xi ,f)=f^{2}{\hat {y}}(\xi ,f).} This is equivalent to requiring ŷ(ξ, f) = 0 unless ξ = ±f. Right away, this explains why the choice of elementary solutions we made earlier worked so well: obviously ŷ = δ(ξ ± f) will be solutions. Applying Fourier inversion to these delta functions, we obtain the elementary solutions we picked earlier. But from the higher point of view, one does not pick elementary solutions, but rather considers the space of all distributions which are supported on the (degenerate) conic ξ2 − f2 = 0.
We may as well consider the distributions supported on the conic that are given by distributions of one variable on the line ξ = f plus distributions on the line ξ = −f as follows: if ϕ is any test function, ∬ y ^ ϕ ( ξ , f ) d ξ d f = ∫ s + ϕ ( ξ , ξ ) d ξ + ∫ s − ϕ ( ξ , − ξ ) d ξ , {\displaystyle \iint {\hat {y}}\phi (\xi ,f)\,d\xi \,df=\int s_{+}\phi (\xi ,\xi )\,d\xi +\int s_{-}\phi (\xi ,-\xi )\,d\xi ,} where s+ and s− are distributions of one variable. Then Fourier inversion gives, for the boundary conditions, something very similar to what we had more concretely above (put ϕ(ξ, f) = ei2π(xξ+tf), which is clearly of polynomial growth): y ( x , 0 ) = ∫ { s + ( ξ ) + s − ( ξ ) } e i 2 π ξ x + 0 d ξ {\displaystyle y(x,0)=\int {\bigl \{}s_{+}(\xi )+s_{-}(\xi ){\bigr \}}e^{i2\pi \xi x+0}\,d\xi } and ∂ y ( x , 0 ) ∂ t = ∫ { s + ( ξ ) − s − ( ξ ) } i 2 π ξ e i 2 π ξ x + 0 d ξ . {\displaystyle {\frac {\partial y(x,0)}{\partial t}}=\int {\bigl \{}s_{+}(\xi )-s_{-}(\xi ){\bigr \}}i2\pi \xi e^{i2\pi \xi x+0}\,d\xi .} Now, as before, applying the one-variable Fourier transformation in the variable x to these functions of x yields two equations in the two unknown distributions s± (which can be taken to be ordinary functions if the boundary conditions are L1 or L2). From a calculational point of view, the drawback of course is that one must first calculate the Fourier transforms of the boundary conditions, then assemble the solution from these, and then calculate an inverse Fourier transform. Closed form formulas are rare, except when there is some geometric symmetry that can be exploited, and the numerical calculations are difficult because of the oscillatory nature of the integrals, which makes convergence slow and hard to estimate. For practical calculations, other methods are often used. 
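Setting those practical caveats aside, the frequency-domain recipe itself is easy to exercise numerically. The sketch below (assuming NumPy, a periodic grid, unit wave speed, and zero initial velocity; the grid size, domain, and Gaussian pulse are all illustrative choices) evolves each Fourier mode of an initial displacement by cos(2πξt) and compares the result with d'Alembert's travelling-wave formula ½(f(x − t) + f(x + t)):

```python
import numpy as np

def evolve_wave(f0, dx, t):
    """Evolve y_tt = y_xx (unit speed) with initial displacement f0 and zero
    initial velocity: each Fourier mode is multiplied by cos(2*pi*xi*t)."""
    xi = np.fft.fftfreq(len(f0), d=dx)     # frequencies in cycles per unit length
    return np.fft.ifft(np.fft.fft(f0) * np.cos(2 * np.pi * xi * t)).real

n, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
f = np.exp(-x**2)                          # Gaussian initial pulse
t = 3.0
y = evolve_wave(f, L / n, t)

# d'Alembert: the pulse splits into two half-height travelling copies
y_exact = 0.5 * (np.exp(-(x - t)**2) + np.exp(-(x + t)**2))
assert np.allclose(y, y_exact, atol=1e-9)
```

With zero initial velocity only the cosine terms of the general solution survive, which is why a single multiplication by cos(2πξt) suffices here; a nonzero initial velocity would add sine terms exactly as in the four-coefficient derivation above.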
The twentieth century has seen the extension of these methods to all linear partial differential equations with polynomial coefficients, and by extending the notion of Fourier transformation to include Fourier integral operators, some non-linear equations as well. === Fourier-transform spectroscopy === The Fourier transform is also used in nuclear magnetic resonance (NMR) and in other kinds of spectroscopy, e.g. infrared (FTIR). In NMR an exponentially shaped free induction decay (FID) signal is acquired in the time domain and Fourier-transformed to a Lorentzian line-shape in the frequency domain. The Fourier transform is also used in magnetic resonance imaging (MRI) and mass spectrometry. === Quantum mechanics === The Fourier transform is useful in quantum mechanics in at least two different ways. To begin with, the basic conceptual structure of quantum mechanics postulates the existence of pairs of complementary variables, connected by the Heisenberg uncertainty principle. For example, in one dimension, the spatial variable q of, say, a particle, can only be measured by the quantum mechanical "position operator" at the cost of losing information about the momentum p of the particle. Therefore, the physical state of the particle can either be described by a function, called "the wave function", of q or by a function of p but not by a function of both variables. The variable p is called the conjugate variable to q. In classical mechanics, the physical state of a particle (existing in one dimension, for simplicity of exposition) would be given by assigning definite values to both p and q simultaneously. Thus, the set of all possible physical states is the two-dimensional real vector space with a p-axis and a q-axis called the phase space. 
In contrast, quantum mechanics chooses a polarisation of this space in the sense that it picks a subspace of one-half the dimension, for example, the q-axis alone, but instead of considering only points, takes the set of all complex-valued "wave functions" on this axis. Nevertheless, choosing the p-axis is an equally valid polarisation, yielding a different representation of the set of possible physical states of the particle. Both representations of the wavefunction are related by a Fourier transform, such that ϕ ( p ) = ∫ d q ψ ( q ) e − i p q / h , {\displaystyle \phi (p)=\int dq\,\psi (q)e^{-ipq/h},} or, equivalently, ψ ( q ) = ∫ d p ϕ ( p ) e i p q / h . {\displaystyle \psi (q)=\int dp\,\phi (p)e^{ipq/h}.} Physically realisable states are L2, and so by the Plancherel theorem, their Fourier transforms are also L2. (Note that since q is in units of distance and p is in units of momentum, the presence of the Planck constant in the exponent makes the exponent dimensionless, as it should be.) Therefore, the Fourier transform can be used to pass from one way of representing the state of the particle, by a wave function of position, to another way of representing the state of the particle: by a wave function of momentum. Infinitely many different polarisations are possible, and all are equally valid. Being able to transform states from one representation to another by the Fourier transform is not only convenient but also the underlying reason for the Heisenberg uncertainty principle. The other use of the Fourier transform in both quantum mechanics and quantum field theory is to solve the applicable wave equation. In non-relativistic quantum mechanics, the Schrödinger equation for a time-varying wave function in one dimension, not subject to external forces, is − ∂ 2 ∂ x 2 ψ ( x , t ) = i h 2 π ∂ ∂ t ψ ( x , t ) . 
{\displaystyle -{\frac {\partial ^{2}}{\partial x^{2}}}\psi (x,t)=i{\frac {h}{2\pi }}{\frac {\partial }{\partial t}}\psi (x,t).} This is the same as the heat equation except for the presence of the imaginary unit i. Fourier methods can be used to solve this equation. In the presence of a potential, given by the potential energy function V(x), the equation becomes − ∂ 2 ∂ x 2 ψ ( x , t ) + V ( x ) ψ ( x , t ) = i h 2 π ∂ ∂ t ψ ( x , t ) . {\displaystyle -{\frac {\partial ^{2}}{\partial x^{2}}}\psi (x,t)+V(x)\psi (x,t)=i{\frac {h}{2\pi }}{\frac {\partial }{\partial t}}\psi (x,t).} The "elementary solutions", as we referred to them above, are the so-called "stationary states" of the particle, and Fourier's algorithm, as described above, can still be used to solve the boundary value problem of the future evolution of ψ given its values for t = 0. Neither of these approaches is of much practical use in quantum mechanics. Boundary value problems and the time-evolution of the wave function are not of much practical interest: it is the stationary states that are most important. In relativistic quantum mechanics, the Schrödinger equation becomes a wave equation as was usual in classical physics, except that complex-valued waves are considered. A simple example, in the absence of interactions with other particles or fields, is the free one-dimensional Klein–Gordon–Schrödinger–Fock equation, this time in dimensionless units, ( ∂ 2 ∂ x 2 + 1 ) ψ ( x , t ) = ∂ 2 ∂ t 2 ψ ( x , t ) . {\displaystyle \left({\frac {\partial ^{2}}{\partial x^{2}}}+1\right)\psi (x,t)={\frac {\partial ^{2}}{\partial t^{2}}}\psi (x,t).} This is, from the mathematical point of view, the same as the wave equation of classical physics solved above (but with a complex-valued wave, which makes no difference in the methods). 
This is of great use in quantum field theory: each separate Fourier component of a wave can be treated as a separate harmonic oscillator and then quantized, a procedure known as "second quantization". Fourier methods have been adapted to also deal with non-trivial interactions. Finally, the number operator of the quantum harmonic oscillator can be interpreted, for example via the Mehler kernel, as the generator of the Fourier transform F {\displaystyle {\mathcal {F}}} . === Signal processing === The Fourier transform is used for the spectral analysis of time-series. The subject of statistical signal processing does not, however, usually apply the Fourier transformation to the signal itself. Even if a real signal is indeed transient, it has been found in practice advisable to model a signal by a function (or, alternatively, a stochastic process) which is stationary in the sense that its characteristic properties are constant over all time. The Fourier transform of such a function does not exist in the usual sense, and it has been found more useful for the analysis of signals to instead take the Fourier transform of its autocorrelation function. The autocorrelation function R of a function f is defined by R f ( τ ) = lim T → ∞ 1 2 T ∫ − T T f ( t ) f ( t + τ ) d t . {\displaystyle R_{f}(\tau )=\lim _{T\rightarrow \infty }{\frac {1}{2T}}\int _{-T}^{T}f(t)f(t+\tau )\,dt.} This function is a function of the time-lag τ elapsing between the values of f to be correlated. For most functions f that occur in practice, R is a bounded even function of the time-lag τ and for typical noisy signals it turns out to be uniformly continuous with a maximum at τ = 0. The autocorrelation function, more properly called the autocovariance function unless it is normalized in some appropriate fashion, measures the strength of the correlation between the values of f separated by a time lag. This is a way of searching for the correlation of f with its own past. 
It is useful even for other statistical tasks besides the analysis of signals. For example, if f(t) represents the temperature at time t, one expects a strong correlation with the temperature at a time lag of 24 hours. It possesses a Fourier transform, P f ( ξ ) = ∫ − ∞ ∞ R f ( τ ) e − i 2 π ξ τ d τ . {\displaystyle P_{f}(\xi )=\int _{-\infty }^{\infty }R_{f}(\tau )e^{-i2\pi \xi \tau }\,d\tau .} This Fourier transform is called the power spectral density function of f. (Unless all periodic components are first filtered out from f, this integral will diverge, but it is easy to filter out such periodicities.) The power spectrum, as indicated by this density function P, measures the amount of variance contributed to the data by the frequency ξ. In electrical signals, the variance is proportional to the average power (energy per unit time), and so the power spectrum describes how much the different frequencies contribute to the average power of the signal. This process is called the spectral analysis of time-series and is analogous to the usual analysis of variance of data that is not a time-series (ANOVA). Knowledge of which frequencies are "important" in this sense is crucial for the proper design of filters and for the proper evaluation of measuring apparatuses. It can also be useful for the scientific analysis of the phenomena responsible for producing the data. The power spectrum of a signal can also be approximately measured directly by measuring the average power that remains in a signal after all the frequencies outside a narrow band have been filtered out. Spectral analysis is carried out for visual signals as well. The power spectrum ignores all phase relations, which is good enough for many purposes, but for video signals other types of spectral analysis must also be employed, still using the Fourier transform as a tool. 
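The route just described, estimating the autocorrelation over a finite record and then Fourier-transforming it, can be sketched for a discretely sampled signal. In the sketch below the 5 Hz sinusoid, the sample rate, the record length, and the biased estimator of R are all illustrative choices:

```python
import numpy as np

# Noisy 5 Hz sinusoid, 20 s at 200 samples/s.
rng = np.random.default_rng(0)
fs, T = 200.0, 20.0
t = np.arange(0, T, 1 / fs)
sig = np.sin(2 * np.pi * 5.0 * t) + rng.standard_normal(t.size)

# Biased finite-record estimate of R(tau) for tau >= 0, then its transform.
n = t.size
acf = np.correlate(sig, sig, mode="full")[n - 1:] / n
psd = np.abs(np.fft.rfft(acf))
freqs = np.fft.rfftfreq(n, d=1 / fs)

peak = freqs[np.argmax(psd[1:]) + 1]      # skip the DC bin
assert abs(peak - 5.0) < 0.2              # the power spectrum peaks at 5 Hz
```

The white noise spreads its variance evenly across all frequency bins, while the sinusoid's contribution concentrates at 5 Hz, which is exactly the sense in which the power spectrum measures "the amount of variance contributed by the frequency ξ".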
== Other notations == Other common notations for f ^ ( ξ ) {\displaystyle {\hat {f}}(\xi )} include: f ~ ( ξ ) , F ( ξ ) , F ( f ) ( ξ ) , ( F f ) ( ξ ) , F ( f ) , F { f } , F ( f ( t ) ) , F { f ( t ) } . {\displaystyle {\tilde {f}}(\xi ),\ F(\xi ),\ {\mathcal {F}}\left(f\right)(\xi ),\ \left({\mathcal {F}}f\right)(\xi ),\ {\mathcal {F}}(f),\ {\mathcal {F}}\{f\},\ {\mathcal {F}}{\bigl (}f(t){\bigr )},\ {\mathcal {F}}{\bigl \{}f(t){\bigr \}}.} In the sciences and engineering it is also common to make substitutions like these: ξ → f , x → t , f → x , f ^ → X . {\displaystyle \xi \rightarrow f,\quad x\rightarrow t,\quad f\rightarrow x,\quad {\hat {f}}\rightarrow X.} So the transform pair f ( x ) ⟺ F f ^ ( ξ ) {\displaystyle f(x)\ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ {\hat {f}}(\xi )} can become x ( t ) ⟺ F X ( f ) {\displaystyle x(t)\ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ X(f)} . A disadvantage of the capital letter notation appears when expressing a transform such as f ⋅ g ^ {\displaystyle {\widehat {f\cdot g}}} or f ′ ^ , {\displaystyle {\widehat {f'}},} which become the more awkward F { f ⋅ g } {\displaystyle {\mathcal {F}}\{f\cdot g\}} and F { f ′ } . {\displaystyle {\mathcal {F}}\{f'\}.} In some contexts, such as particle physics, the same symbol f {\displaystyle f} may be used for both a function and its Fourier transform, with the two distinguished only by their argument: f ( k 1 + k 2 ) {\displaystyle f(k_{1}+k_{2})} would refer to the Fourier transform because of the momentum argument, while f ( x 0 + π r → ) {\displaystyle f(x_{0}+\pi {\vec {r}})} would refer to the original function because of the positional argument. 
Although tildes may be used as in f ~ {\displaystyle {\tilde {f}}} to indicate Fourier transforms, tildes may also be used to indicate a modification of a quantity with a more Lorentz invariant form, such as d k ~ = d k ( 2 π ) 3 2 ω {\displaystyle {\tilde {dk}}={\frac {dk}{(2\pi )^{3}2\omega }}} , so care must be taken. Similarly, f ^ {\displaystyle {\hat {f}}} often denotes the Hilbert transform of f {\displaystyle f} . The interpretation of the complex function f̂(ξ) may be aided by expressing it in polar coordinate form f ^ ( ξ ) = A ( ξ ) e i φ ( ξ ) {\displaystyle {\hat {f}}(\xi )=A(\xi )e^{i\varphi (\xi )}} in terms of the two real functions A(ξ) and φ(ξ) where: A ( ξ ) = | f ^ ( ξ ) | , {\displaystyle A(\xi )=\left|{\hat {f}}(\xi )\right|,} is the amplitude and φ ( ξ ) = arg ⁡ ( f ^ ( ξ ) ) , {\displaystyle \varphi (\xi )=\arg \left({\hat {f}}(\xi )\right),} is the phase (see arg function). Then the inverse transform can be written: f ( x ) = ∫ − ∞ ∞ A ( ξ ) e i ( 2 π ξ x + φ ( ξ ) ) d ξ , {\displaystyle f(x)=\int _{-\infty }^{\infty }A(\xi )\ e^{i{\bigl (}2\pi \xi x+\varphi (\xi ){\bigr )}}\,d\xi ,} which is a recombination of all the frequency components of f(x). Each component is a complex sinusoid of the form e2πixξ whose amplitude is A(ξ) and whose initial phase angle (at x = 0) is φ(ξ). The Fourier transform may be thought of as a mapping on function spaces. This mapping is here denoted F and F(f) is used to denote the Fourier transform of the function f. This mapping is linear, which means that F can also be seen as a linear transformation on the function space and implies that the standard notation in linear algebra of applying a linear transformation to a vector (here the function f) can be used to write F f instead of F(f). Since the result of applying the Fourier transform is again a function, we can be interested in the value of this function evaluated at the value ξ for its variable, and this is denoted either as F f(ξ) or as (F f)(ξ). 
Notice that in the former case, it is implicitly understood that F is applied first to f and then the resulting function is evaluated at ξ, not the other way around. In mathematics and various applied sciences, it is often necessary to distinguish between a function f and the value of f when its variable equals x, denoted f(x). This means that a notation like F(f(x)) formally can be interpreted as the Fourier transform of the values of f at x. Despite this flaw, the previous notation appears frequently, often when a particular function or a function of a particular variable is to be transformed. For example, F ( rect ⁡ ( x ) ) = sinc ⁡ ( ξ ) {\displaystyle {\mathcal {F}}{\bigl (}\operatorname {rect} (x){\bigr )}=\operatorname {sinc} (\xi )} is sometimes used to express that the Fourier transform of a rectangular function is a sinc function, or F ( f ( x + x 0 ) ) = F ( f ( x ) ) e i 2 π x 0 ξ {\displaystyle {\mathcal {F}}{\bigl (}f(x+x_{0}){\bigr )}={\mathcal {F}}{\bigl (}f(x){\bigr )}\,e^{i2\pi x_{0}\xi }} is used to express the shift property of the Fourier transform. Notice that the last example is only correct under the assumption that the transformed function is a function of x, not of x0. As discussed above, the characteristic function of a random variable is the same as the Fourier–Stieltjes transform of its distribution measure, but in this context it is typical to take a different convention for the constants. Typically, the characteristic function is defined E ( e i t ⋅ X ) = ∫ e i t ⋅ x d μ X ( x ) . {\displaystyle E\left(e^{it\cdot X}\right)=\int e^{it\cdot x}\,d\mu _{X}(x).} As in the case of the "non-unitary angular frequency" convention above, the factor of 2π appears in neither the normalizing constant nor the exponent. Unlike any of the conventions appearing above, this convention takes the opposite sign in the exponent. 
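This convention is easy to check by simulation: for a standard normal X the characteristic function is E(e^{itX}) = e^{−t²/2}. A Monte Carlo sketch (the sample size and seed are arbitrary choices):

```python
import numpy as np

# Monte Carlo check of E(e^{itX}) = e^{-t^2/2} for X ~ N(0, 1),
# using the sign and normalization of the convention just given.
rng = np.random.default_rng(1)
X = rng.standard_normal(1_000_000)

for t in (0.5, 1.0, 2.0):
    empirical = np.mean(np.exp(1j * t * X))
    exact = np.exp(-t**2 / 2)
    assert abs(empirical - exact) < 5e-3   # sampling error ~ 1/sqrt(n)
```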
== Computation methods == The appropriate computation method largely depends on how the original mathematical function is represented and the desired form of the output function. In this section we consider both functions of a continuous variable, f ( x ) , {\displaystyle f(x),} and functions of a discrete variable (i.e. ordered pairs of x {\displaystyle x} and f {\displaystyle f} values). For discrete-valued x , {\displaystyle x,} the transform integral becomes a summation of sinusoids, which is still a continuous function of frequency ( ξ {\displaystyle \xi } or ω {\displaystyle \omega } ). When the sinusoids are harmonically related (i.e. when the x {\displaystyle x} -values are spaced at integer multiples of an interval), the transform is called the discrete-time Fourier transform (DTFT). === Discrete Fourier transforms and fast Fourier transforms === Sampling the DTFT at equally-spaced values of frequency is the most common modern method of computation. Efficient procedures, depending on the frequency resolution needed, are described at Discrete-time Fourier transform § Sampling the DTFT. The discrete Fourier transform (DFT), used there, is usually computed by a fast Fourier transform (FFT) algorithm. === Analytic integration of closed-form functions === Tables of closed-form Fourier transforms, such as § Square-integrable functions, one-dimensional and § Table of discrete-time Fourier transforms, are created by mathematically evaluating the Fourier analysis integral (or summation) into another closed-form function of frequency ( ξ {\displaystyle \xi } or ω {\displaystyle \omega } ). When mathematically possible, this provides a transform for a continuum of frequency values. Many computer algebra systems, such as Matlab and Mathematica, that are capable of symbolic integration can compute Fourier transforms analytically. 
For example, to compute the Fourier transform of cos(6πt) e−πt2 one might enter the command integrate cos(6*pi*t) exp(-pi*t^2) exp(-i*2*pi*f*t) from -inf to inf into Wolfram Alpha. === Numerical integration of closed-form continuous functions === Discrete sampling of the Fourier transform can also be done by numerical integration of the definition at each value of frequency for which the transform is desired. The numerical integration approach works on a much broader class of functions than the analytic approach. === Numerical integration of a series of ordered pairs === If the input function is a series of ordered pairs, numerical integration reduces to just a summation over the set of data pairs. The DTFT is a common subcase of this more general situation. == Tables of important Fourier transforms == The following tables record some closed-form Fourier transforms. For functions f(x) and g(x), denote their Fourier transforms by f̂ and ĝ. Only the three most common conventions are included. It may be useful to notice that entry 105 gives a relationship between the Fourier transform of a function and the original function, which can be seen as relating the Fourier transform and its inverse. === Functional relationships, one-dimensional === The Fourier transforms in this table may be found in Erdélyi (1954) or Kammler (2000, appendix). === Square-integrable functions, one-dimensional === The Fourier transforms in this table may be found in Campbell & Foster (1948), Erdélyi (1954), or Kammler (2000, appendix). === Distributions, one-dimensional === The Fourier transforms in this table may be found in Erdélyi (1954) or Kammler (2000, appendix). === Two-dimensional functions === === Formulas for general n-dimensional functions === == See also == == Notes == == Citations == == References == == External links == Media related to Fourier transformation at Wikimedia Commons Encyclopedia of Mathematics Weisstein, Eric W. "Fourier Transform". MathWorld. 
Fourier Transform in Crystallography
Wikipedia/Fourier_transformation
In mathematics, a rigged Hilbert space (Gelfand triple, nested Hilbert space, equipped Hilbert space) is a construction designed to link the distribution and square-integrable aspects of functional analysis. Such spaces were introduced to study spectral theory. They bring together the 'bound state' (eigenvector) and 'continuous spectrum', in one place. Using this notion, a version of the spectral theorem for unbounded operators on Hilbert space can be formulated. "Rigged Hilbert spaces are well known as the structure which provides a proper mathematical meaning to the Dirac formulation of quantum mechanics." == Motivation == A function such as x ↦ e i x , {\displaystyle x\mapsto e^{ix},} is an eigenfunction of the differential operator − i d d x {\displaystyle -i{\frac {d}{dx}}} on the real line R, but isn't square-integrable for the usual (Lebesgue) measure on R. To properly consider this function as an eigenfunction requires some way of stepping outside the strict confines of the Hilbert space theory. This was supplied by the apparatus of distributions, and a generalized eigenfunction theory was developed in the years after 1950. == Definition == A rigged Hilbert space is a pair (H, Φ) with H a Hilbert space, Φ a dense subspace, such that Φ is given a topological vector space structure for which the inclusion map i : Φ → H , {\displaystyle i:\Phi \to H,} is continuous. Identifying H with its dual space H*, the adjoint to i is the map i ∗ : H = H ∗ → Φ ∗ . {\displaystyle i^{*}:H=H^{*}\to \Phi ^{*}.} The duality pairing between Φ and Φ* is then compatible with the inner product on H, in the sense that: ⟨ u , v ⟩ Φ × Φ ∗ = ( u , v ) H {\displaystyle \langle u,v\rangle _{\Phi \times \Phi ^{*}}=(u,v)_{H}} whenever u ∈ Φ ⊂ H {\displaystyle u\in \Phi \subset H} and v ∈ H = H ∗ ⊂ Φ ∗ {\displaystyle v\in H=H^{*}\subset \Phi ^{*}} . 
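Returning to the motivating example, the eigenfunction claim and the failure of square-integrability can both be verified symbolically, e.g. with SymPy (assuming it is available):

```python
import sympy as sp

# e^{ix} is an eigenfunction of -i d/dx with eigenvalue 1 ...
x = sp.symbols('x', real=True)
f = sp.exp(sp.I * x)

eigenvalue = sp.simplify(-sp.I * sp.diff(f, x) / f)
assert eigenvalue == 1

# ... yet |e^{ix}| = 1 for real x, so its L2 norm over R diverges:
assert sp.Abs(f) == 1
assert sp.integrate(sp.Abs(f)**2, (x, -sp.oo, sp.oo)) == sp.oo
```

This is exactly the situation the rigging addresses: e^{ix} lives not in H = L2(R) but in the larger space Φ* of the triple, where it acts as a generalized eigenfunction.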
In the case of complex Hilbert spaces, we use a Hermitian inner product; it will be complex linear in u (math convention) or v (physics convention), and conjugate-linear (complex anti-linear) in the other variable. The triple ( Φ , H , Φ ∗ ) {\displaystyle (\Phi ,\,\,H,\,\,\Phi ^{*})} is often named the Gelfand triple (after Israel Gelfand). H {\displaystyle H} is referred to as a pivot space. Note that even though Φ is isomorphic to Φ* (via Riesz representation) if it happens that Φ is a Hilbert space in its own right, this isomorphism is not the same as the composition of the inclusion i with its adjoint i* i ∗ i : Φ ⊂ H = H ∗ → Φ ∗ . {\displaystyle i^{*}i:\Phi \subset H=H^{*}\to \Phi ^{*}.} === Functional analysis approach === The concept of rigged Hilbert space places this idea in an abstract functional-analytic framework. Formally, a rigged Hilbert space consists of a Hilbert space H, together with a subspace Φ which carries a finer topology, that is one for which the natural inclusion Φ ⊆ H {\displaystyle \Phi \subseteq H} is continuous. It is no loss to assume that Φ is dense in H for the Hilbert norm. We consider the inclusion of dual spaces H* in Φ*. The latter, dual to Φ in its 'test function' topology, is realised as a space of distributions or generalised functions of some sort, and the linear functionals on the subspace Φ of type ϕ ↦ ⟨ v , ϕ ⟩ {\displaystyle \phi \mapsto \langle v,\phi \rangle } for v in H are faithfully represented as distributions (because we assume Φ dense). Now by applying the Riesz representation theorem we can identify H* with H. Therefore, the definition of rigged Hilbert space is in terms of a sandwich: Φ ⊆ H ⊆ Φ ∗ . {\displaystyle \Phi \subseteq H\subseteq \Phi ^{*}.} The most significant examples are those for which Φ is a nuclear space; this comment is an abstract expression of the idea that Φ consists of test functions and Φ* of the corresponding distributions. 
An example of a nuclear countably Hilbert space Φ {\displaystyle \Phi } and its dual Φ ∗ {\displaystyle \Phi ^{*}} is the Schwartz space S ( R ) {\displaystyle {\mathcal {S}}(\mathbb {R} )} and the space of tempered distributions S ′ ( R ) {\displaystyle {\mathcal {S}}'(\mathbb {R} )} , respectively, rigging the Hilbert space of square-integrable functions. As such, the rigged Hilbert space is given by S ( R ) ⊂ L 2 ( R ) ⊂ S ′ ( R ) . {\displaystyle {\mathcal {S}}(\mathbb {R} )\subset L^{2}(\mathbb {R} )\subset {\mathcal {S}}'(\mathbb {R} ).} Another example is given by Sobolev spaces: Here (in the simplest case of Sobolev spaces on R n {\displaystyle \mathbb {R} ^{n}} ) H = L 2 ( R n ) , Φ = H s ( R n ) , Φ ∗ = H − s ( R n ) , {\displaystyle H=L^{2}(\mathbb {R} ^{n}),\ \Phi =H^{s}(\mathbb {R} ^{n}),\ \Phi ^{*}=H^{-s}(\mathbb {R} ^{n}),} where s > 0 {\displaystyle s>0} . == See also == Fourier inversion theorem Fourier transform § Tempered distributions Self-adjoint operator § Spectral theorem == Notes == == References ==
Wikipedia/Generalized_eigenfunction
In mathematics, a Colombeau algebra is an algebra of a certain kind containing the space of Schwartz distributions. While in classical distribution theory a general multiplication of distributions is not possible, Colombeau algebras provide a rigorous framework for this. Such a multiplication of distributions has long been believed to be impossible because of L. Schwartz' impossibility result, which basically states that there cannot be a differential algebra containing the space of distributions and preserving the product of continuous functions. However, if one only wants to preserve the product of smooth functions instead, such a construction becomes possible, as demonstrated first by Colombeau. As a mathematical tool, Colombeau algebras can be said to combine a treatment of singularities, differentiation and nonlinear operations in one framework, lifting the limitations of distribution theory. To date, these algebras have found numerous applications in the fields of partial differential equations, geophysics, microlocal analysis and general relativity. Colombeau algebras are named after French mathematician Jean François Colombeau. 
== Schwartz' impossibility result == Attempting to embed the space D ′ ( R ) {\displaystyle {\mathcal {D}}'(\mathbb {R} )} of distributions on R {\displaystyle \mathbb {R} } into an associative algebra ( A ( R ) , ∘ , + ) {\displaystyle (A(\mathbb {R} ),\circ ,+)} , the following requirements seem to be natural: D ′ ( R ) {\displaystyle {\mathcal {D}}'(\mathbb {R} )} is linearly embedded into A ( R ) {\displaystyle A(\mathbb {R} )} such that the constant function 1 {\displaystyle 1} becomes the unity in A ( R ) {\displaystyle A(\mathbb {R} )} , There is a partial derivative operator ∂ {\displaystyle \partial } on A ( R ) {\displaystyle A(\mathbb {R} )} which is linear and satisfies the Leibniz rule, the restriction of ∂ {\displaystyle \partial } to D ′ ( R ) {\displaystyle {\mathcal {D}}'(\mathbb {R} )} coincides with the usual partial derivative, the restriction of ∘ {\displaystyle \circ } to C ( R ) × C ( R ) {\displaystyle C(\mathbb {R} )\times C(\mathbb {R} )} coincides with the pointwise product. However, L. Schwartz' result implies that these requirements cannot hold simultaneously. The same is true even if, in 4., one replaces C ( R ) {\displaystyle C(\mathbb {R} )} by C k ( R ) {\displaystyle C^{k}(\mathbb {R} )} , the space of k {\displaystyle k} times continuously differentiable functions. While this result has often been interpreted as saying that a general multiplication of distributions is not possible, in fact it only states that one cannot unrestrictedly combine differentiation, multiplication of continuous functions and the presence of singular objects like the Dirac delta. Colombeau algebras are constructed to satisfy conditions 1.–3. and a condition like 4., but with C ( R ) × C ( R ) {\displaystyle C(\mathbb {R} )\times C(\mathbb {R} )} replaced by C ∞ ( R ) × C ∞ ( R ) {\displaystyle C^{\infty }(\mathbb {R} )\times C^{\infty }(\mathbb {R} )} , i.e., they preserve the product of smooth (infinitely differentiable) functions only. 
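The obstruction is easy to see numerically for the Dirac delta itself: if φε is a family of mollifiers converging to δ, the integrals ∫ φε(x)² dx blow up as ε → 0, so no distribution can serve as "δ²". A sketch (the Gaussian mollifier is an arbitrary choice):

```python
import numpy as np

# phi_eps -> delta, but the pairing of phi_eps**2 with the test function 1
# grows like 1/(eps*sqrt(2*pi)): "delta squared" has no distributional limit.
def pairing_of_square(eps):
    x = np.linspace(-1, 1, 200001)
    dx = x[1] - x[0]
    phi = np.exp(-(x / eps) ** 2) / (eps * np.sqrt(np.pi))  # Gaussian delta-net
    return np.sum(phi**2) * dx

v1, v2, v3 = (pairing_of_square(e) for e in (0.1, 0.05, 0.025))
assert v2 / v1 > 1.9 and v3 / v2 > 1.9     # halving eps doubles the integral
```

In the Colombeau framework this divergent family is not discarded: it is kept as a representative, and the ε-asymptotics themselves become part of the bookkeeping, as the next section makes precise.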
== Basic idea == The Colombeau Algebra is defined as the quotient algebra C M ∞ ( R n ) / C N ∞ ( R n ) . {\displaystyle C_{M}^{\infty }(\mathbb {R} ^{n})/C_{N}^{\infty }(\mathbb {R} ^{n}).} Here the algebra of moderate functions C M ∞ ( R n ) {\displaystyle C_{M}^{\infty }(\mathbb {R} ^{n})} on R n {\displaystyle \mathbb {R} ^{n}} is the algebra of families of smooth regularisations (fε) f : R + → C ∞ ( R n ) {\displaystyle {f:}\mathbb {R} _{+}\to C^{\infty }(\mathbb {R} ^{n})} of smooth functions on R n {\displaystyle \mathbb {R} ^{n}} (where R+ = (0,∞) is the range of the regularization parameter ε), such that for all compact subsets K of R n {\displaystyle \mathbb {R} ^{n}} and all multiindices α, there is an N > 0 such that sup x ∈ K | ∂ | α | ( ∂ x 1 ) α 1 ⋯ ( ∂ x n ) α n f ε ( x ) | = O ( ε − N ) ( ε → 0 ) . {\displaystyle \sup _{x\in K}\left|{\frac {\partial ^{|\alpha |}}{(\partial x_{1})^{\alpha _{1}}\cdots (\partial x_{n})^{\alpha _{n}}}}f_{\varepsilon }(x)\right|=O(\varepsilon ^{-N})\qquad (\varepsilon \to 0).} The ideal C N ∞ ( R n ) {\displaystyle C_{N}^{\infty }(\mathbb {R} ^{n})} of negligible functions is defined in the same way but with the partial derivatives instead bounded by O(εN) for all N > 0. == Embedding of distributions == The space(s) of Schwartz distributions can be embedded into the simplified algebra by (component-wise) convolution with any element of the algebra having as representative a δ-net, i.e. a family of smooth functions φ ε {\displaystyle \varphi _{\varepsilon }} such that φ ε → δ {\displaystyle \varphi _{\varepsilon }\to \delta } in D' as ε → 0. This embedding is non-canonical, because it depends on the choice of the δ-net. However, there are versions of Colombeau algebras (so called full algebras) which allow for canonical embeddings of distributions. A well known full version is obtained by adding the mollifiers as second indexing set. == See also == Generalized function == Notes == == References == Colombeau, J. 
F., New Generalized Functions and Multiplication of the Distributions. North Holland, Amsterdam, 1984. Colombeau, J. F., Elementary introduction to new generalized functions. North-Holland, Amsterdam, 1985. Nedeljkov, M., Pilipović, S., Scarpalezos, D., Linear Theory of Colombeau's Generalized Functions, Addison Wesley, Longman, 1998. Grosser, M., Kunzinger, M., Oberguggenberger, M., Steinbauer, R.; Geometric Theory of Generalized Functions with Applications to General Relativity, Springer Series Mathematics and Its Applications, Vol. 537, 2002; ISBN 978-1-4020-0145-1.
Wikipedia/Colombeau_algebra
In mathematics, a partial differential equation (PDE) is an equation which involves a multivariable function and one or more of its partial derivatives. The function is often thought of as an "unknown" that solves the equation, similar to how x is thought of as an unknown number solving, e.g., an algebraic equation like x2 − 3x + 2 = 0. However, it is usually impossible to write down explicit formulae for solutions of partial differential equations. There is correspondingly a vast amount of modern mathematical and scientific research on methods to numerically approximate solutions of certain partial differential equations using computers. Partial differential equations also occupy a large sector of pure mathematical research, in which the usual questions are, broadly speaking, on the identification of general qualitative features of solutions of various partial differential equations, such as existence, uniqueness, regularity and stability. Among the many open questions are the existence and smoothness of solutions to the Navier–Stokes equations, named as one of the Millennium Prize Problems in 2000. Partial differential equations are ubiquitous in mathematically oriented scientific fields, such as physics and engineering. For instance, they are foundational in the modern scientific understanding of sound, heat, diffusion, electrostatics, electrodynamics, thermodynamics, fluid dynamics, elasticity, general relativity, and quantum mechanics (Schrödinger equation, Pauli equation etc.). They also arise from many purely mathematical considerations, such as differential geometry and the calculus of variations; among other notable applications, they are the fundamental tool in the proof of the Poincaré conjecture from geometric topology. 
Partly due to this variety of sources, there is a wide spectrum of different types of partial differential equations, where the meaning of a solution depends on the context of the problem, and methods have been developed for dealing with many of the individual equations which arise. As such, it is usually acknowledged that there is no "universal theory" of partial differential equations, with specialist knowledge being somewhat divided between several essentially distinct subfields. Ordinary differential equations can be viewed as a subclass of partial differential equations, corresponding to functions of a single variable. Stochastic partial differential equations and nonlocal equations are, as of 2020, particularly widely studied extensions of the "PDE" notion. More classical topics, on which there is still much active research, include elliptic and parabolic partial differential equations, fluid mechanics, Boltzmann equations, and dispersive partial differential equations. == Introduction == A function u(x, y, z) of three variables is "harmonic" or "a solution of the Laplace equation" if it satisfies the condition ∂ 2 u ∂ x 2 + ∂ 2 u ∂ y 2 + ∂ 2 u ∂ z 2 = 0. {\displaystyle {\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}=0.} Such functions were widely studied in the 19th century due to their relevance for classical mechanics; for example, the equilibrium temperature distribution of a homogeneous solid is a harmonic function. If explicitly given a function, it is usually a matter of straightforward computation to check whether or not it is harmonic. For instance, u ( x , y , z ) = 1 x 2 − 2 x + y 2 + z 2 + 1 {\displaystyle u(x,y,z)={\frac {1}{\sqrt {x^{2}-2x+y^{2}+z^{2}+1}}}} and u ( x , y , z ) = 2 x 2 − y 2 − z 2 {\displaystyle u(x,y,z)=2x^{2}-y^{2}-z^{2}} are both harmonic while u ( x , y , z ) = sin ⁡ ( x y ) + z {\displaystyle u(x,y,z)=\sin(xy)+z} is not. 
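The harmonicity check described above is mechanical enough to automate. The following sketch verifies the three example functions symbolically with the sympy library; the helper name `is_harmonic` is our own choice, not standard terminology:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def is_harmonic(u):
    """Return True if u(x, y, z) satisfies the Laplace equation."""
    laplacian = sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2)
    # .equals(0) combines simplification with numerical sampling
    return laplacian.equals(0)

u1 = 1 / sp.sqrt(x**2 - 2*x + y**2 + z**2 + 1)  # first example above
u2 = 2*x**2 - y**2 - z**2                       # second example
u3 = sp.sin(x*y) + z                            # the non-harmonic example

print(is_harmonic(u1), is_harmonic(u2), is_harmonic(u3))  # True True False
```

The first example works because x² − 2x + y² + z² + 1 = (x − 1)² + y² + z², so u1 is just 1/r centered at (1, 0, 0), which explains why the derivatives cancel.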
It may be surprising that the two examples of harmonic functions are of such strikingly different form. This is a reflection of the fact that they are not, in any immediate way, special cases of a "general solution formula" of the Laplace equation. This is in striking contrast to the case of ordinary differential equations (ODEs) roughly similar to the Laplace equation, with the aim of many introductory textbooks being to find algorithms leading to general solution formulas. For the Laplace equation, as for a large number of partial differential equations, such solution formulas fail to exist. The nature of this failure can be seen more concretely in the case of the following PDE: for a function v(x, y) of two variables, consider the equation ∂ 2 v ∂ x ∂ y = 0. {\displaystyle {\frac {\partial ^{2}v}{\partial x\partial y}}=0.} It can be directly checked that any function v of the form v(x, y) = f(x) + g(y), for any single-variable functions f and g whatsoever, will satisfy this condition. This is far beyond the choices available in ODE solution formulas, which typically allow the free choice of some numbers. In the study of PDEs, one generally has the free choice of functions. The nature of this choice varies from PDE to PDE. To understand it for any given equation, existence and uniqueness theorems are usually important organizational principles. In many introductory textbooks, the role of existence and uniqueness theorems for ODE can be somewhat opaque; the existence half is usually unnecessary, since one can directly check any proposed solution formula, while the uniqueness half is often only present in the background in order to ensure that a proposed solution formula is as general as possible. By contrast, for PDE, existence and uniqueness theorems are often the only means by which one can navigate through the plethora of different solutions at hand. 
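The claim above, that v(x, y) = f(x) + g(y) solves the mixed-derivative equation for any single-variable f and g whatsoever, can be confirmed symbolically without choosing f and g at all, using sympy's undefined functions (a small sketch; the names f and g mirror the text):

```python
import sympy as sp

x, y = sp.symbols('x y')
f, g = sp.Function('f'), sp.Function('g')  # arbitrary, unspecified functions

v = f(x) + g(y)
mixed = sp.diff(v, x, y)   # the mixed partial d^2 v / dx dy
print(mixed)  # 0 for every choice of f and g
```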
For this reason, they are also fundamental when carrying out a purely numerical simulation, as one must have an understanding of what data is to be prescribed by the user and what is to be left to the computer to calculate. To discuss such existence and uniqueness theorems, it is necessary to be precise about the domain of the "unknown function". Otherwise, speaking only in terms such as "a function of two variables", it is impossible to meaningfully formulate the results. That is, the domain of the unknown function must be regarded as part of the structure of the PDE itself. The following provides two classic examples of such existence and uniqueness theorems. Even though the two PDE in question are so similar, there is a striking difference in behavior: for the first PDE, one has the free prescription of a single function, while for the second PDE, one has the free prescription of two functions. Let B denote the unit-radius disk around the origin in the plane. For any continuous function U on the unit circle, there is exactly one function u on B such that ∂ 2 u ∂ x 2 + ∂ 2 u ∂ y 2 = 0 {\displaystyle {\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}=0} and whose restriction to the unit circle is given by U. For any functions f and g on the real line R, there is exactly one function u on R × (−1, 1) such that ∂ 2 u ∂ x 2 − ∂ 2 u ∂ y 2 = 0 {\displaystyle {\frac {\partial ^{2}u}{\partial x^{2}}}-{\frac {\partial ^{2}u}{\partial y^{2}}}=0} and with u(x, 0) = f(x) and ⁠∂u/∂y⁠(x, 0) = g(x) for all values of x. Even more phenomena are possible. For instance, the following PDE, arising naturally in the field of differential geometry, illustrates an example where there is a simple and completely explicit solution formula, but with the free choice of only three numbers and not even one function. 
If u is a function on R2 with ∂ ∂ x ∂ u ∂ x 1 + ( ∂ u ∂ x ) 2 + ( ∂ u ∂ y ) 2 + ∂ ∂ y ∂ u ∂ y 1 + ( ∂ u ∂ x ) 2 + ( ∂ u ∂ y ) 2 = 0 , {\displaystyle {\frac {\partial }{\partial x}}{\frac {\frac {\partial u}{\partial x}}{\sqrt {1+\left({\frac {\partial u}{\partial x}}\right)^{2}+\left({\frac {\partial u}{\partial y}}\right)^{2}}}}+{\frac {\partial }{\partial y}}{\frac {\frac {\partial u}{\partial y}}{\sqrt {1+\left({\frac {\partial u}{\partial x}}\right)^{2}+\left({\frac {\partial u}{\partial y}}\right)^{2}}}}=0,} then there are numbers a, b, and c with u(x, y) = ax + by + c. In contrast to the earlier examples, this PDE is nonlinear, owing to the square roots and the squares. A linear PDE is one such that, if it is homogeneous, the sum of any two solutions is also a solution, and any constant multiple of any solution is also a solution. == Definition == A partial differential equation is an equation that involves an unknown function of n ≥ 2 {\displaystyle n\geq 2} variables and (some of) its partial derivatives. That is, for the unknown function u : U → R , {\displaystyle u:U\rightarrow \mathbb {R} ,} of variables x = ( x 1 , … , x n ) {\displaystyle x=(x_{1},\dots ,x_{n})} belonging to the open subset U {\displaystyle U} of R n {\displaystyle \mathbb {R} ^{n}} , the k t h {\displaystyle k^{th}} -order partial differential equation is defined as F [ D k u , D k − 1 u , … , D u , u , x ] = 0 , {\displaystyle F[D^{k}u,D^{k-1}u,\dots ,Du,u,x]=0,} where F : R n k × R n k − 1 ⋯ × R n × R × U → R , {\displaystyle F:\mathbb {R} ^{n^{k}}\times \mathbb {R} ^{n^{k-1}}\dots \times \mathbb {R} ^{n}\times \mathbb {R} \times U\rightarrow \mathbb {R} ,} and D {\displaystyle D} is the partial derivative operator. === Notation === When writing PDEs, it is common to denote partial derivatives using subscripts. For example: u x = ∂ u ∂ x , u x x = ∂ 2 u ∂ x 2 , u x y = ∂ 2 u ∂ y ∂ x = ∂ ∂ y ( ∂ u ∂ x ) . 
{\displaystyle u_{x}={\frac {\partial u}{\partial x}},\quad u_{xx}={\frac {\partial ^{2}u}{\partial x^{2}}},\quad u_{xy}={\frac {\partial ^{2}u}{\partial y\,\partial x}}={\frac {\partial }{\partial y}}\left({\frac {\partial u}{\partial x}}\right).} In the general situation that u is a function of n variables, then ui denotes the first partial derivative relative to the i-th input, uij denotes the second partial derivative relative to the i-th and j-th inputs, and so on. The Greek letter Δ denotes the Laplace operator; if u is a function of n variables, then Δ u = u 11 + u 22 + ⋯ + u n n . {\displaystyle \Delta u=u_{11}+u_{22}+\cdots +u_{nn}.} In the physics literature, the Laplace operator is often denoted by ∇2; in the mathematics literature, ∇2u may also denote the Hessian matrix of u. == Classification == === Linear and nonlinear equations === A PDE is called linear if it is linear in the unknown and its derivatives. For example, for a function u of x and y, a second order linear PDE is of the form a 1 ( x , y ) u x x + a 2 ( x , y ) u x y + a 3 ( x , y ) u y x + a 4 ( x , y ) u y y + a 5 ( x , y ) u x + a 6 ( x , y ) u y + a 7 ( x , y ) u = f ( x , y ) {\displaystyle a_{1}(x,y)u_{xx}+a_{2}(x,y)u_{xy}+a_{3}(x,y)u_{yx}+a_{4}(x,y)u_{yy}+a_{5}(x,y)u_{x}+a_{6}(x,y)u_{y}+a_{7}(x,y)u=f(x,y)} where ai and f are functions of the independent variables x and y only. (Often the mixed-partial derivatives uxy and uyx will be equated, but this is not required for the discussion of linearity.) If the ai are constants (independent of x and y) then the PDE is called linear with constant coefficients. If f is zero everywhere then the linear PDE is homogeneous, otherwise it is inhomogeneous. (This is separate from asymptotic homogenization, which studies the effects of high-frequency oscillations in the coefficients upon solutions to PDEs.) 
Nearest to linear PDEs are semi-linear PDEs, where only the highest order derivatives appear as linear terms, with coefficients that are functions of the independent variables. The lower order derivatives and the unknown function may appear arbitrarily. For example, a general second order semi-linear PDE in two variables is a 1 ( x , y ) u x x + a 2 ( x , y ) u x y + a 3 ( x , y ) u y x + a 4 ( x , y ) u y y + f ( u x , u y , u , x , y ) = 0 {\displaystyle a_{1}(x,y)u_{xx}+a_{2}(x,y)u_{xy}+a_{3}(x,y)u_{yx}+a_{4}(x,y)u_{yy}+f(u_{x},u_{y},u,x,y)=0} In a quasilinear PDE the highest order derivatives likewise appear only as linear terms, but with coefficients possibly functions of the unknown and lower-order derivatives: a 1 ( u x , u y , u , x , y ) u x x + a 2 ( u x , u y , u , x , y ) u x y + a 3 ( u x , u y , u , x , y ) u y x + a 4 ( u x , u y , u , x , y ) u y y + f ( u x , u y , u , x , y ) = 0 {\displaystyle a_{1}(u_{x},u_{y},u,x,y)u_{xx}+a_{2}(u_{x},u_{y},u,x,y)u_{xy}+a_{3}(u_{x},u_{y},u,x,y)u_{yx}+a_{4}(u_{x},u_{y},u,x,y)u_{yy}+f(u_{x},u_{y},u,x,y)=0} Many of the fundamental PDEs in physics are quasilinear, such as the Einstein equations of general relativity and the Navier–Stokes equations describing fluid motion. A PDE without any linearity properties is called fully nonlinear, and possesses nonlinearities on one or more of the highest-order derivatives. An example is the Monge–Ampère equation, which arises in differential geometry. === Second order equations === The elliptic/parabolic/hyperbolic classification provides a guide to appropriate initial- and boundary conditions and to the smoothness of the solutions. Assuming uxy = uyx, the general linear second-order PDE in two independent variables has the form A u x x + 2 B u x y + C u y y + ⋯ (lower order terms) = 0 , {\displaystyle Au_{xx}+2Bu_{xy}+Cu_{yy}+\cdots {\mbox{(lower order terms)}}=0,} where the coefficients A, B, C... may depend upon x and y. 
If A2 + B2 + C2 > 0 over a region of the xy-plane, the PDE is second-order in that region. This form is analogous to the equation for a conic section: A x 2 + 2 B x y + C y 2 + ⋯ = 0. {\displaystyle Ax^{2}+2Bxy+Cy^{2}+\cdots =0.} More precisely, replacing ∂x by X, and likewise for other variables (formally this is done by a Fourier transform), converts a constant-coefficient PDE into a polynomial of the same degree, with the terms of the highest degree (a homogeneous polynomial, here a quadratic form) being most significant for the classification. Just as one classifies conic sections and quadratic forms into parabolic, hyperbolic, and elliptic based on the discriminant B2 − 4AC, the same can be done for a second-order PDE at a given point. However, the discriminant in a PDE is given by B2 − AC due to the convention of the xy term being 2B rather than B; formally, the discriminant (of the associated quadratic form) is (2B)2 − 4AC = 4(B2 − AC), with the factor of 4 dropped for simplicity. B2 − AC < 0 (elliptic partial differential equation): Solutions of elliptic PDEs are as smooth as the coefficients allow, within the interior of the region where the equation and solutions are defined. For example, solutions of Laplace's equation are analytic within the domain where they are defined, but solutions may assume boundary values that are not smooth. The motion of a fluid at subsonic speeds can be approximated with elliptic PDEs, and the Euler–Tricomi equation is elliptic where x < 0. By change of variables, the equation can always be expressed in the form: u x x + u y y + ⋯ = 0 , {\displaystyle u_{xx}+u_{yy}+\cdots =0,} where x and y correspond to changed variables. This justifies the Laplace equation as an example of this type. B2 − AC = 0 (parabolic partial differential equation): Equations that are parabolic at every point can be transformed into a form analogous to the heat equation by a change of independent variables. 
Solutions smooth out as the transformed time variable increases. The Euler–Tricomi equation has parabolic type on the line where x = 0. By change of variables, the equation can always be expressed in the form: u x x + ⋯ = 0 , {\displaystyle u_{xx}+\cdots =0,} where x corresponds to the changed variable. This justifies the heat equation, which is of the form u t − u x x + ⋯ = 0 {\textstyle u_{t}-u_{xx}+\cdots =0} , as an example of this type. B2 − AC > 0 (hyperbolic partial differential equation): hyperbolic equations retain any discontinuities of functions or derivatives in the initial data. An example is the wave equation. The motion of a fluid at supersonic speeds can be approximated with hyperbolic PDEs, and the Euler–Tricomi equation is hyperbolic where x > 0. By change of variables, the equation can always be expressed in the form: u x x − u y y + ⋯ = 0 , {\displaystyle u_{xx}-u_{yy}+\cdots =0,} where x and y correspond to changed variables. This justifies the wave equation as an example of this type. If there are n independent variables x1, x2, …, xn, a general linear partial differential equation of second order has the form L u = ∑ i = 1 n ∑ j = 1 n a i , j ∂ 2 u ∂ x i ∂ x j + lower-order terms = 0. {\displaystyle Lu=\sum _{i=1}^{n}\sum _{j=1}^{n}a_{i,j}{\frac {\partial ^{2}u}{\partial x_{i}\partial x_{j}}}\quad +{\text{lower-order terms}}=0.} The classification depends upon the signature of the eigenvalues of the coefficient matrix ai,j. Elliptic: the eigenvalues are all positive or all negative. Parabolic: the eigenvalues are all positive or all negative, except one that is zero. Hyperbolic: there is only one negative eigenvalue and all the rest are positive, or there is only one positive eigenvalue and all the rest are negative. Ultrahyperbolic: there is more than one positive eigenvalue and more than one negative eigenvalue, and there are no zero eigenvalues. 
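The eigenvalue criteria above translate directly into a small routine. The sketch below is a hypothetical helper (the name `classify` and the tolerance are our own choices), assuming the coefficient matrix ai,j has been symmetrized so that its eigenvalues are real:

```python
import numpy as np

def classify(A, tol=1e-12):
    """Classify a second-order linear PDE by the eigenvalue signature
    of its symmetric coefficient matrix a_ij."""
    eig = np.linalg.eigvalsh(np.asarray(A, dtype=float))
    n = len(eig)
    pos = int(np.sum(eig > tol))
    neg = int(np.sum(eig < -tol))
    zero = n - pos - neg
    if zero == 0 and (pos == n or neg == n):
        return "elliptic"
    if zero == 1 and (pos == n - 1 or neg == n - 1):
        return "parabolic"
    if zero == 0 and (pos == 1 or neg == 1):
        return "hyperbolic"
    if zero == 0 and pos > 1 and neg > 1:
        return "ultrahyperbolic"
    return "degenerate/other"

print(classify(np.eye(2)))                # Laplace operator: elliptic
print(classify([[1, 0], [0, -1]]))        # wave operator: hyperbolic
print(classify([[1, 0], [0, 0]]))         # heat operator: parabolic
print(classify(np.diag([1, 1, -1, -1])))  # ultrahyperbolic
```

Note that the elliptic test is checked before the hyperbolic one, since in two variables "all eigenvalues positive" and "exactly one positive eigenvalue" would otherwise overlap.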
The theory of elliptic, parabolic, and hyperbolic equations has been studied for centuries, largely centered around or based upon the standard examples of the Laplace equation, the heat equation, and the wave equation. However, the classification only depends on linearity of the second-order terms and is therefore applicable to semi- and quasilinear PDEs as well. The basic types also extend to hybrids such as the Euler–Tricomi equation, varying from elliptic to hyperbolic in different regions of the domain, as well as to higher-order PDEs, but such knowledge is more specialized. === Systems of first-order equations and characteristic surfaces === The classification of partial differential equations can be extended to systems of first-order equations, where the unknown u is now a vector with m components, and the coefficient matrices Aν are m by m matrices for ν = 1, 2, …, n. The partial differential equation takes the form L u = ∑ ν = 1 n A ν ∂ u ∂ x ν + B = 0 , {\displaystyle Lu=\sum _{\nu =1}^{n}A_{\nu }{\frac {\partial u}{\partial x_{\nu }}}+B=0,} where the coefficient matrices Aν and the vector B may depend upon x and u. If a hypersurface S is given in the implicit form φ ( x 1 , x 2 , … , x n ) = 0 , {\displaystyle \varphi (x_{1},x_{2},\ldots ,x_{n})=0,} where φ has a non-zero gradient, then S is a characteristic surface for the operator L at a given point if the characteristic form vanishes: Q ( ∂ φ ∂ x 1 , … , ∂ φ ∂ x n ) = det [ ∑ ν = 1 n A ν ∂ φ ∂ x ν ] = 0. {\displaystyle Q\left({\frac {\partial \varphi }{\partial x_{1}}},\ldots ,{\frac {\partial \varphi }{\partial x_{n}}}\right)=\det \left[\sum _{\nu =1}^{n}A_{\nu }{\frac {\partial \varphi }{\partial x_{\nu }}}\right]=0.} The geometric interpretation of this condition is as follows: if data for u are prescribed on the surface S, then it may be possible to determine the normal derivative of u on S from the differential equation. 
If the data on S and the differential equation determine the normal derivative of u on S, then S is non-characteristic. If the data on S and the differential equation do not determine the normal derivative of u on S, then the surface is characteristic, and the differential equation restricts the data on S: the differential equation is internal to S. A first-order system Lu = 0 is elliptic if no surface is characteristic for L: the values of u on S and the differential equation always determine the normal derivative of u on S. A first-order system is hyperbolic at a point if there is a spacelike surface S with normal ξ at that point. This means that, given any non-trivial vector η orthogonal to ξ, and a scalar multiplier λ, the equation Q(λξ + η) = 0 has m real roots λ1, λ2, …, λm. The system is strictly hyperbolic if these roots are always distinct. The geometrical interpretation of this condition is as follows: the characteristic form Q(ζ) = 0 defines a cone (the normal cone) with homogeneous coordinates ζ. In the hyperbolic case, this cone has nm sheets, and the axis ζ = λξ runs inside these sheets: it does not intersect any of them. But when displaced from the origin by η, this axis intersects every sheet. In the elliptic case, the normal cone has no real sheets. == Analytical solutions == === Separation of variables === Linear PDEs can be reduced to systems of ordinary differential equations by the important technique of separation of variables. This technique rests on a feature of solutions to differential equations: if one can find any solution that solves the equation and satisfies the boundary conditions, then it is the solution (this also applies to ODEs). We assume as an ansatz that the dependence of a solution on the parameters space and time can be written as a product of terms that each depend on a single parameter, and then see if this can be made to solve the problem. 
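For the heat equation u_t = u_xx, the product ansatz u(x, t) = X(x)T(t) just described forces T′/T = X″/X = −k² for a separation constant k, giving separated solutions such as e^(−k²t) sin(kx). A quick symbolic check that such a product really solves the PDE (a sketch using sympy; the symbol names are ours):

```python
import sympy as sp

x, t, k = sp.symbols('x t k', positive=True)

# Separated solution of the heat equation u_t = u_xx obtained from the
# product ansatz u(x, t) = X(x) * T(t); each factor solves a simple ODE.
u = sp.exp(-k**2 * t) * sp.sin(k * x)

residual = sp.diff(u, t) - sp.diff(u, x, 2)
print(sp.simplify(residual))  # 0
```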
In the method of separation of variables, one reduces a PDE to a PDE in fewer variables, which is an ordinary differential equation if in one variable – these are in turn easier to solve. This is possible for simple PDEs, which are called separable partial differential equations, and the domain is generally a rectangle (a product of intervals). Separable PDEs correspond to diagonal matrices – thinking of "the value for fixed x" as a coordinate, each coordinate can be understood separately. This generalizes to the method of characteristics, and is also used in integral transforms. === Method of characteristics === The characteristic surface in n = 2-dimensional space is called a characteristic curve. In special cases, one can find characteristic curves on which the first-order PDE reduces to an ODE – changing coordinates in the domain to straighten these curves allows separation of variables, and is called the method of characteristics. More generally, applying the method to first-order PDEs in higher dimensions, one may find characteristic surfaces. === Integral transform === An integral transform may transform the PDE to a simpler one, in particular, a separable PDE. This corresponds to diagonalizing an operator. An important example of this is Fourier analysis, which diagonalizes the heat equation using the eigenbasis of sinusoidal waves. If the domain is finite or periodic, an infinite sum of solutions such as a Fourier series is appropriate, but an integral of solutions such as a Fourier integral is generally required for infinite domains. The solution for a point source for the heat equation given above is an example of the use of a Fourier integral. === Change of variables === Often a PDE can be reduced to a simpler form with a known solution by a suitable change of variables. 
For example, the Black–Scholes equation ∂ V ∂ t + 1 2 σ 2 S 2 ∂ 2 V ∂ S 2 + r S ∂ V ∂ S − r V = 0 {\displaystyle {\frac {\partial V}{\partial t}}+{\tfrac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}+rS{\frac {\partial V}{\partial S}}-rV=0} is reducible to the heat equation ∂ u ∂ τ = ∂ 2 u ∂ x 2 {\displaystyle {\frac {\partial u}{\partial \tau }}={\frac {\partial ^{2}u}{\partial x^{2}}}} by the change of variables V ( S , t ) = v ( x , τ ) , x = ln ⁡ ( S ) , τ = 1 2 σ 2 ( T − t ) , v ( x , τ ) = e − α x − β τ u ( x , τ ) . {\displaystyle {\begin{aligned}V(S,t)&=v(x,\tau ),\\[5px]x&=\ln \left(S\right),\\[5px]\tau &={\tfrac {1}{2}}\sigma ^{2}(T-t),\\[5px]v(x,\tau )&=e^{-\alpha x-\beta \tau }u(x,\tau ).\end{aligned}}} === Fundamental solution === Inhomogeneous equations can often be solved (for constant-coefficient PDEs, always be solved) by finding the fundamental solution (the solution for a point source P ( D ) u = δ {\displaystyle P(D)u=\delta } ), then taking the convolution with the boundary conditions to get the solution. This is analogous in signal processing to understanding a filter by its impulse response. === Superposition principle === The superposition principle applies to any linear system, including linear systems of PDEs. A common visualization of this concept is the interaction of two waves in phase being combined to result in a greater amplitude, for example sin x + sin x = 2 sin x. The same principle can be observed in PDEs where the solutions may be real or complex and additive. If u1 and u2 are solutions of a linear PDE in some function space R, then u = c1u1 + c2u2 with any constants c1 and c2 is also a solution of that PDE in the same function space. === Methods for non-linear equations === There are no generally applicable analytical methods to solve nonlinear PDEs. 
Still, existence and uniqueness results (such as the Cauchy–Kowalevski theorem) are often possible, as are proofs of important qualitative and quantitative properties of solutions (getting these results is a major part of analysis). Nevertheless, some techniques can be used for several types of equations. The h-principle is the most powerful method to solve underdetermined equations. The Riquier–Janet theory is an effective method for obtaining information about many analytic overdetermined systems. The method of characteristics can be used in some very special cases to solve nonlinear partial differential equations. In some cases, a PDE can be solved via perturbation analysis in which the solution is considered to be a correction to an equation with a known solution. Alternatives are numerical analysis techniques from simple finite difference schemes to the more mature multigrid and finite element methods. Many interesting problems in science and engineering are solved in this way using computers, sometimes high-performance supercomputers. === Lie group method === From 1870, Sophus Lie's work put the theory of differential equations on a more satisfactory foundation. He showed that the integration theories of the older mathematicians can, by the introduction of what are now called Lie groups, be referred to a common source; and that ordinary differential equations which admit the same infinitesimal transformations present comparable difficulties of integration. He also emphasized the subject of transformations of contact. A general approach to solving PDEs uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions (Lie theory). 
Continuous group theory, Lie algebras and differential geometry are used to understand the structure of linear and nonlinear partial differential equations for generating integrable equations, to find their Lax pairs, recursion operators, and Bäcklund transforms, and finally to find exact analytic solutions to the PDE. Symmetry methods have been used to study differential equations arising in mathematics, physics, engineering, and many other disciplines. === Semi-analytical methods === The Adomian decomposition method, the Lyapunov artificial small parameter method, and the homotopy perturbation method are all special cases of the more general homotopy analysis method. These are series expansion methods, and except for the Lyapunov method, are independent of small physical parameters as compared to the well-known perturbation theory, thus giving these methods greater flexibility and solution generality. == Numerical solutions == The three most widely used numerical methods to solve PDEs are the finite element method (FEM), finite volume methods (FVM) and finite difference methods (FDM), as well as other kinds of methods, such as meshfree methods, which were developed to solve problems where the aforementioned methods are limited. The FEM has a prominent position among these methods and especially its exceptionally efficient higher-order version hp-FEM. Other hybrid versions of FEM and meshfree methods include the generalized finite element method (GFEM), extended finite element method (XFEM), spectral finite element method (SFEM), meshfree finite element method, discontinuous Galerkin finite element method (DGFEM), element-free Galerkin method (EFGM), interpolating element-free Galerkin method (IEFGM), etc. === Finite element method === The finite element method (FEM) (its practical application often known as finite element analysis (FEA)) is a numerical technique for finding approximate solutions of partial differential equations (PDE) as well as of integral equations. 
The solution approach is based either on eliminating the differential equation completely (steady state problems), or rendering the PDE into an approximating system of ordinary differential equations, which are then numerically integrated using standard techniques such as Euler's method, Runge–Kutta, etc. === Finite difference method === Finite-difference methods are numerical methods for approximating the solutions to differential equations using finite difference equations to approximate derivatives. === Finite volume method === Similar to the finite difference method or finite element method, values are calculated at discrete places on a meshed geometry. "Finite volume" refers to the small volume surrounding each node point on a mesh. In the finite volume method, surface integrals in a partial differential equation that contain a divergence term are converted to volume integrals, using the divergence theorem. These terms are then evaluated as fluxes at the surfaces of each finite volume. Because the flux entering a given volume is identical to that leaving the adjacent volume, these methods conserve mass by design. === Neural networks === == Weak solutions == Weak solutions are functions that satisfy the PDE, though in a sense other than the classical one. The meaning of this term may differ with context, and one of the most commonly used definitions is based on the notion of distributions. An example of the definition of a weak solution is as follows: Consider the boundary-value problem given by: L u = f in U , u = 0 on ∂ U , {\displaystyle {\begin{aligned}Lu&=f\quad {\text{in }}U,\\u&=0\quad {\text{on }}\partial U,\end{aligned}}} where L u = − ∑ i , j ∂ j ( a i j ∂ i u ) + ∑ i b i ∂ i u + c u {\displaystyle Lu=-\sum _{i,j}\partial _{j}(a^{ij}\partial _{i}u)+\sum _{i}b^{i}\partial _{i}u+cu} denotes a second-order partial differential operator in divergence form. 
We say a u ∈ H 0 1 ( U ) {\displaystyle u\in H_{0}^{1}(U)} is a weak solution if ∫ U [ ∑ i , j a i j ( ∂ i u ) ( ∂ j v ) + ∑ i b i ( ∂ i u ) v + c u v ] d x = ∫ U f v d x {\displaystyle \int _{U}[\sum _{i,j}a^{ij}(\partial _{i}u)(\partial _{j}v)+\sum _{i}b^{i}(\partial _{i}u)v+cuv]dx=\int _{U}fvdx} for every v ∈ H 0 1 ( U ) {\displaystyle v\in H_{0}^{1}(U)} , which can be derived by formal integration by parts. An example of a weak solution is as follows: ϕ ( x ) = − 1 4 π 1 | x | {\displaystyle \phi (x)=-{\frac {1}{4\pi }}{\frac {1}{|x|}}} is a weak solution satisfying ∇ 2 ϕ = δ in R 3 {\displaystyle \nabla ^{2}\phi =\delta {\text{ in }}R^{3}} in the distributional sense, as formally, ∫ R 3 ∇ 2 ϕ ( x ) ψ ( x ) d x = ∫ R 3 ϕ ( x ) ∇ 2 ψ ( x ) d x = ψ ( 0 ) for ψ ∈ C c ∞ ( R 3 ) . {\displaystyle \int _{R^{3}}\nabla ^{2}\phi (x)\psi (x)dx=\int _{R^{3}}\phi (x)\nabla ^{2}\psi (x)dx=\psi (0){\text{ for }}\psi \in C_{c}^{\infty }(R^{3}).} == Theoretical Studies == As a branch of pure mathematics, the theoretical study of PDEs focuses on the criteria for a solution to exist and on the properties of solutions; finding explicit formulas is often secondary. === Well-posedness === Well-posedness refers to a common schematic package of information about a PDE. To say that a PDE is well-posed, one must have: an existence and uniqueness theorem, asserting that by the prescription of some freely chosen functions, one can single out one specific solution of the PDE; and the requirement that, by continuously changing the free choices, one continuously changes the corresponding solution. This is, by the necessity of being applicable to several different PDE, somewhat vague. The requirement of "continuity", in particular, is ambiguous, since there are usually many inequivalent means by which it can be rigorously defined. It is, however, somewhat unusual to study a PDE without specifying a way in which it is well-posed. 
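The weak formulation given earlier is also the starting point for Galerkin-type numerical methods such as the finite element method described above. As an illustration, here is a toy sketch for −u″ = f on (0, 1) with zero boundary values, using piecewise-linear hat functions on a uniform mesh (an illustrative assembly only, not a production FEM code; the mesh size and test problem are our own choices):

```python
import numpy as np

# Toy Galerkin finite-element solve of the weak form of -u'' = f on (0, 1),
# with u(0) = u(1) = 0, using piecewise-linear hat basis functions.
n = 99                       # number of interior nodes
h = 1.0 / (n + 1)
xs = np.linspace(h, 1.0 - h, n)

# Stiffness matrix K[i, j] = integral of phi_i' * phi_j' = tridiag(-1, 2, -1)/h
K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

f = np.pi**2 * np.sin(np.pi * xs)   # chosen so the exact solution is sin(pi x)
b = f * h                           # lumped load: integral of f*phi_i ~ f(x_i)*h

u = np.linalg.solve(K, b)
err = np.max(np.abs(u - np.sin(np.pi * xs)))
print(err < 1e-2)  # True: nodal error is second order in the mesh size h
```

Each row of the linear system K u = b is exactly the weak-form identity tested against one hat function, which is why no derivative of the approximate solution beyond the first ever appears.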
=== Regularity === Regularity refers to the integrability and differentiability of weak solutions, which can often be represented by Sobolev spaces. This problem arises due to the difficulty in searching for classical solutions. Researchers often tend to find weak solutions at first and then determine whether they are smooth enough to qualify as classical solutions. Results from functional analysis are often used in this field of study. == See also == Some common PDEs Acoustic wave equation Burgers' equation Continuity equation Heat equation Helmholtz equation Klein–Gordon equation Jacobi equation Lagrange equation Lorenz equation Laplace's equation Maxwell's equations Navier–Stokes equation Poisson's equation Reaction–diffusion system Schrödinger equation Wave equation Types of boundary conditions Dirichlet boundary condition Neumann boundary condition Robin boundary condition Cauchy problem Various topics Jet bundle Laplace transform applied to differential equations List of dynamical systems and differential equations topics Matrix differential equation Numerical partial differential equations Partial differential algebraic equation Recurrence relation Stochastic processes and boundary value problems == Notes == == References == == Further reading == Cajori, Florian (1928). "The Early History of Partial Differential Equations and of Partial Differentiation and Integration" (PDF). The American Mathematical Monthly. 35 (9): 459–467. doi:10.2307/2298771. JSTOR 2298771. Archived from the original (PDF) on 2018-11-23. Retrieved 2016-05-15. Nirenberg, Louis (1994). "Partial differential equations in the first half of the century." Development of mathematics 1900–1950 (Luxembourg, 1992), 479–515, Birkhäuser, Basel. Brezis, Haïm; Browder, Felix (1998). "Partial Differential Equations in the 20th Century". Advances in Mathematics. 135 (1): 76–144. doi:10.1006/aima.1997.1713. 
== External links == "Differential equation, partial", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Partial Differential Equations: Exact Solutions at EqWorld: The World of Mathematical Equations. Partial Differential Equations: Index at EqWorld: The World of Mathematical Equations. Partial Differential Equations: Methods at EqWorld: The World of Mathematical Equations. Example problems with solutions at exampleproblems.com Partial Differential Equations at mathworld.wolfram.com Partial Differential Equations with Mathematica Partial Differential Equations in Cleve Moler: Numerical Computing with MATLAB Partial Differential Equations at nag.com Sanderson, Grant (April 21, 2019). "But what is a partial differential equation?". 3Blue1Brown. Archived from the original on 2021-11-02 – via YouTube.
Wikipedia/Partial_differential_equation_theory
In mathematics, hyperfunctions are generalizations of functions, as a 'jump' from one holomorphic function to another at a boundary, and can be thought of informally as distributions of infinite order. Hyperfunctions were introduced by Mikio Sato in 1958 in Japanese, (1959, 1960 in English), building upon earlier work by Laurent Schwartz, Grothendieck and others. == Formulation == A hyperfunction on the real line can be conceived of as the 'difference' between one holomorphic function defined on the upper half-plane and another on the lower half-plane. That is, a hyperfunction is specified by a pair (f, g), where f is a holomorphic function on the upper half-plane and g is a holomorphic function on the lower half-plane. Informally, the hyperfunction is what the difference f − g {\displaystyle f-g} would be at the real line itself. This difference is not affected by adding the same holomorphic function to both f and g, so if h is a holomorphic function on the whole complex plane, the hyperfunctions (f, g) and (f + h, g + h) are defined to be equivalent. === Definition in one dimension === The motivation can be concretely implemented using ideas from sheaf cohomology. Let O {\displaystyle {\mathcal {O}}} be the sheaf of holomorphic functions on C . {\displaystyle \mathbb {C} .} Define the hyperfunctions on the real line as the first local cohomology group: B ( R ) = H R 1 ( C , O ) . {\displaystyle {\mathcal {B}}(\mathbb {R} )=H_{\mathbb {R} }^{1}(\mathbb {C} ,{\mathcal {O}}).} Concretely, let C + {\displaystyle \mathbb {C} ^{+}} and C − {\displaystyle \mathbb {C} ^{-}} be the upper half-plane and lower half-plane respectively. Then C + ∪ C − = C ∖ R {\displaystyle \mathbb {C} ^{+}\cup \mathbb {C} ^{-}=\mathbb {C} \setminus \mathbb {R} } so H R 1 ( C , O ) = [ H 0 ( C + , O ) ⊕ H 0 ( C − , O ) ] / H 0 ( C , O ) . 
{\displaystyle H_{\mathbb {R} }^{1}(\mathbb {C} ,{\mathcal {O}})=\left[H^{0}(\mathbb {C} ^{+},{\mathcal {O}})\oplus H^{0}(\mathbb {C} ^{-},{\mathcal {O}})\right]/H^{0}(\mathbb {C} ,{\mathcal {O}}).} Since the zeroth cohomology group of any sheaf is simply the global sections of that sheaf, we see that a hyperfunction is a pair of holomorphic functions, one each on the upper and lower complex half-plane, modulo entire holomorphic functions. More generally, one can define B ( U ) {\displaystyle {\mathcal {B}}(U)} for any open set U ⊆ R {\displaystyle U\subseteq \mathbb {R} } as the quotient H 0 ( U ~ ∖ U , O ) / H 0 ( U ~ , O ) {\displaystyle H^{0}({\tilde {U}}\setminus U,{\mathcal {O}})/H^{0}({\tilde {U}},{\mathcal {O}})} where U ~ ⊆ C {\displaystyle {\tilde {U}}\subseteq \mathbb {C} } is any open set with U ~ ∩ R = U {\displaystyle {\tilde {U}}\cap \mathbb {R} =U} . One can show that this definition does not depend on the choice of U ~ {\displaystyle {\tilde {U}}} , giving another reason to think of hyperfunctions as "boundary values" of holomorphic functions. == Examples == If f is any holomorphic function on the whole complex plane, then the restriction of f to the real axis is a hyperfunction, represented by either (f, 0) or (0, −f). The Heaviside step function can be represented as H ( x ) = ( 1 − 1 2 π i log ⁡ ( z ) , − 1 2 π i log ⁡ ( z ) ) . {\displaystyle H(x)=\left(1-{\frac {1}{2\pi i}}\log(z),-{\frac {1}{2\pi i}}\log(z)\right).} where log ⁡ ( z ) {\displaystyle \log(z)} is the principal value of the complex logarithm of z. The Dirac delta "function" is represented by ( 1 2 π i z , 1 2 π i z ) . {\displaystyle \left({\dfrac {1}{2\pi iz}},{\dfrac {1}{2\pi iz}}\right).} This is really a restatement of Cauchy's integral formula. To verify it, one can calculate the integral of f just below the real line and subtract the integral of g just above the real line, both taken from left to right.
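The verification sketched above can also be run numerically. Writing F(z) = 1/(2πiz) for the common defining function, the difference F(x − iε) − F(x + iε) equals ε/(π(x² + ε²)), a Poisson kernel that concentrates at 0 as ε → 0, so pairing it with a test function recovers the value at 0. A sketch (the Gaussian test function, ε, and the quadrature parameters are illustrative choices):

```python
import math

def F(z):
    # Common defining function of the delta hyperfunction, holomorphic off 0.
    return 1.0 / (2.0 * math.pi * 1j * z)

def delta_pairing(psi, eps=1e-3, a=-1.0, b=1.0, n=200_000):
    # Integrate F just below the real line and subtract the integral just
    # above it, both from left to right, against a test function psi.
    h = (b - a) / n
    total = 0.0 + 0.0j
    for k in range(n):
        x = a + (k + 0.5) * h  # midpoint rule
        total += (F(x - 1j * eps) - F(x + 1j * eps)) * psi(x) * h
    return total

val = delta_pairing(lambda x: math.exp(-x * x))
print(val.real)  # close to psi(0) = 1
```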
Note that the hyperfunction can be non-trivial, even if the components are analytic continuations of the same function. This can easily be checked, for example, by differentiating the Heaviside function. If g is a continuous function (or more generally a distribution) on the real line with support contained in a bounded interval I, then g corresponds to the hyperfunction (f, −f), where f is a holomorphic function on the complement of I defined by f ( z ) = 1 2 π i ∫ x ∈ I g ( x ) 1 z − x d x . {\displaystyle f(z)={\frac {1}{2\pi i}}\int _{x\in I}g(x){\frac {1}{z-x}}\,dx.} This function f jumps in value by g(x) when crossing the real axis at the point x. The formula for f follows from the previous example by writing g as the convolution of itself with the Dirac delta function. Using a partition of unity, one can write any continuous function (distribution) as a locally finite sum of functions (distributions) with compact support. This can be exploited to extend the above embedding to an embedding D ′ ( R ) → B ( R ) . {\displaystyle \textstyle {\mathcal {D}}'(\mathbb {R} )\to {\mathcal {B}}(\mathbb {R} ).} If f is any function that is holomorphic everywhere except for an essential singularity at 0 (for example, e^{1/z}), then ( f , − f ) {\displaystyle (f,-f)} is a hyperfunction with support 0 that is not a distribution. If f has a pole of finite order at 0 then ( f , − f ) {\displaystyle (f,-f)} is a distribution, so when f has an essential singularity then ( f , − f ) {\displaystyle (f,-f)} looks like a "distribution of infinite order" at 0. (Note that distributions always have finite order at any point.) == Operations on hyperfunctions == Let U ⊆ R {\displaystyle U\subseteq \mathbb {R} } be any open subset. By definition B ( U ) {\displaystyle {\mathcal {B}}(U)} is a vector space such that addition and multiplication with complex numbers are well-defined.
Explicitly: a ( f + , f − ) + b ( g + , g − ) := ( a f + + b g + , a f − + b g − ) {\displaystyle a(f_{+},f_{-})+b(g_{+},g_{-}):=(af_{+}+bg_{+},af_{-}+bg_{-})} The obvious restriction maps turn B {\displaystyle {\mathcal {B}}} into a sheaf (which is in fact flabby). Multiplication with real analytic functions h ∈ O ( U ) {\displaystyle h\in {\mathcal {O}}(U)} and differentiation are well-defined: h ( f + , f − ) := ( h f + , h f − ) d d z ( f + , f − ) := ( d f + d z , d f − d z ) {\displaystyle {\begin{aligned}h(f_{+},f_{-})&:=(hf_{+},hf_{-})\\[6pt]{\frac {d}{dz}}(f_{+},f_{-})&:=\left({\frac {df_{+}}{dz}},{\frac {df_{-}}{dz}}\right)\end{aligned}}} With these definitions B ( U ) {\displaystyle {\mathcal {B}}(U)} becomes a D-module and the embedding D ′ ↪ B {\displaystyle {\mathcal {D}}'\hookrightarrow {\mathcal {B}}} is a morphism of D-modules. A point a ∈ U {\displaystyle a\in U} is called a holomorphic point of f ∈ B ( U ) {\displaystyle f\in {\mathcal {B}}(U)} if f {\displaystyle f} restricts to a real analytic function in some small neighbourhood of a . {\displaystyle a.} If a ⩽ b {\displaystyle a\leqslant b} are two holomorphic points, then integration is well-defined: ∫ a b f := − ∫ γ + f + ( z ) d z + ∫ γ − f − ( z ) d z {\displaystyle \int _{a}^{b}f:=-\int _{\gamma _{+}}f_{+}(z)\,dz+\int _{\gamma _{-}}f_{-}(z)\,dz} where γ ± : [ 0 , 1 ] → C ± {\displaystyle \gamma _{\pm }:[0,1]\to \mathbb {C} ^{\pm }} are arbitrary curves with γ ± ( 0 ) = a , γ ± ( 1 ) = b . {\displaystyle \gamma _{\pm }(0)=a,\gamma _{\pm }(1)=b.} The integrals are independent of the choice of these curves because the upper and lower half plane are simply connected. Let B c ( U ) {\displaystyle {\mathcal {B}}_{c}(U)} be the space of hyperfunctions with compact support. 
Via the bilinear form { B c ( U ) × O ( U ) → C ( f , φ ) ↦ ∫ f ⋅ φ {\displaystyle {\begin{cases}{\mathcal {B}}_{c}(U)\times {\mathcal {O}}(U)\to \mathbb {C} \\(f,\varphi )\mapsto \int f\cdot \varphi \end{cases}}} one associates to each hyperfunction with compact support a continuous linear functional on O ( U ) . {\displaystyle {\mathcal {O}}(U).} This induces an identification of the dual space, O ′ ( U ) , {\displaystyle {\mathcal {O}}'(U),} with B c ( U ) . {\displaystyle {\mathcal {B}}_{c}(U).} A special case worth considering is the case of continuous functions or distributions with compact support: If one considers C c 0 ( U ) {\displaystyle C_{c}^{0}(U)} (or E ′ ( U ) {\displaystyle {\mathcal {E}}'(U)} ) as a subset of B ( U ) {\displaystyle {\mathcal {B}}(U)} via the above embedding, then this computes exactly the traditional Lebesgue integral. Furthermore: If u ∈ E ′ ( U ) {\displaystyle u\in {\mathcal {E}}'(U)} is a distribution with compact support, φ ∈ O ( U ) {\displaystyle \varphi \in {\mathcal {O}}(U)} is a real analytic function, and supp ⁡ ( u ) ⊂ ( a , b ) {\displaystyle \operatorname {supp} (u)\subset (a,b)} then ∫ a b u ⋅ φ = ⟨ u , φ ⟩ . {\displaystyle \int _{a}^{b}u\cdot \varphi =\langle u,\varphi \rangle .} Thus this notion of integration gives a precise meaning to formal expressions like ∫ a b δ ( x ) d x {\displaystyle \int _{a}^{b}\delta (x)\,dx} which are undefined in the usual sense. Moreover: Because the real analytic functions are dense in E ( U ) , E ′ ( U ) {\displaystyle {\mathcal {E}}(U),{\mathcal {E}}'(U)} is a subspace of O ′ ( U ) {\displaystyle {\mathcal {O}}'(U)} . This is an alternative description of the same embedding E ′ ↪ B {\displaystyle {\mathcal {E}}'\hookrightarrow {\mathcal {B}}} .
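The contour formula for integration gives the expected value when applied to the delta hyperfunction from the Examples section: with both components equal to F(z) = 1/(2πiz) and holomorphic points a = −1, b = 1, the path through the upper half-plane (traversed with a minus sign) and the path through the lower half-plane combine into a counterclockwise loop around 0, so the integral of δ from −1 to 1 is 1 by Cauchy's integral formula. A numerical sketch (the semicircular arcs and discretization are illustrative choices):

```python
import cmath
import math

def F(z):
    # Both defining components of the delta hyperfunction.
    return 1.0 / (2.0 * math.pi * 1j * z)

def contour_integral(f, path, n=10_000):
    # Integral of f along path: [0, 1] -> C, midpoint rule on each chord.
    total = 0.0 + 0.0j
    h = 1.0 / n
    for k in range(n):
        t0, t1 = k * h, (k + 1) * h
        total += f(path((t0 + t1) / 2.0)) * (path(t1) - path(t0))
    return total

# Semicircular arcs from a = -1 to b = 1 through each half-plane.
upper = lambda t: cmath.exp(1j * math.pi * (1.0 - t))  # -1 -> i -> 1
lower = lambda t: cmath.exp(1j * math.pi * (1.0 + t))  # -1 -> -i -> 1

integral = -contour_integral(F, upper) + contour_integral(F, lower)
print(abs(integral))  # close to 1: the delta hyperfunction integrates to 1
```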
If Φ : U → V {\displaystyle \Phi :U\to V} is a real analytic map between open sets of R {\displaystyle \mathbb {R} } , then composition with Φ {\displaystyle \Phi } is a well-defined operator from B ( V ) {\displaystyle {\mathcal {B}}(V)} to B ( U ) {\displaystyle {\mathcal {B}}(U)} : f ∘ Φ := ( f + ∘ Φ , f − ∘ Φ ) {\displaystyle f\circ \Phi :=(f_{+}\circ \Phi ,f_{-}\circ \Phi )} == See also == == References == Imai, Isao (2012) [1992], Applied Hyperfunction Theory, Mathematics and its Applications (Book 8), Springer, ISBN 978-94-010-5125-5. Kaneko, Akira (1988), Introduction to the Theory of Hyperfunctions, Mathematics and its Applications (Japanese Series, Vol. 3), Springer, ISBN 978-90-277-2837-1 Kashiwara, Masaki; Kawai, Takahiro; Kimura, Tatsuo (2017) [1986], Foundations of Algebraic Analysis, Princeton Legacy Library (Book 5158), vol. PMS-37, translated by Kato, Goro (Reprint ed.), Princeton University Press, ISBN 978-0-691-62832-5 Komatsu, Hikosaburo, ed. (1973), Hyperfunctions and Pseudo-Differential Equations, Proceedings of a Conference at Katata, 1971, Lecture Notes in Mathematics 287, Springer, ISBN 978-3-540-06218-9. Komatsu, Hikosaburo, Relative cohomology of sheaves of solutions of differential equations, pp. 192–261. Sato, Mikio; Kawai, Takahiro; Kashiwara, Masaki, Microfunctions and pseudo-differential equations, pp. 265–529. - It is called SKK. Martineau, André (1960–1961), Les hyperfonctions de M. Sato, Séminaire Bourbaki, Tome 6 (1960-1961), Exposé no. 214, MR 1611794, Zbl 0122.34902. Morimoto, Mitsuo (1993), An Introduction to Sato's Hyperfunctions, Translations of Mathematical Monographs (Book 129), American Mathematical Society, ISBN 978-0-82184571-4. Pham, F. L., ed. (1975), Hyperfunctions and Theoretical Physics, Rencontre de Nice, 21-30 Mai 1973, Lecture Notes in Mathematics 449, Springer, ISBN 978-3-540-37454-1. Cerezo, A.; Piriou, A.; Chazarain, J., Introduction aux hyperfonctions, pp. 1–53. 
Sato, Mikio (1958), "Cyōkansū no riron (Theory of Hyperfunctions)", Sūgaku (in Japanese), 10 (1), Mathematical Society of Japan: 1–27, doi:10.11429/sugaku1947.10.1, ISSN 0039-470X Sato, Mikio (1959), "Theory of Hyperfunctions, I", Journal of the Faculty of Science, University of Tokyo. Sect. 1, Mathematics, Astronomy, Physics, Chemistry, 8 (1): 139–193, hdl:2261/6027, MR 0114124. Sato, Mikio (1960), "Theory of Hyperfunctions, II", Journal of the Faculty of Science, University of Tokyo. Sect. 1, Mathematics, Astronomy, Physics, Chemistry, 8 (2): 387–437, hdl:2261/6031, MR 0132392. Schapira, Pierre (1970), Theories des Hyperfonctions, Lecture Notes in Mathematics 126, Springer, ISBN 978-3-540-04915-9. Schlichtkrull, Henrik (2013) [1984], Hyperfunctions and Harmonic Analysis on Symmetric Spaces, Progress in Mathematics (Softcover reprint of the original 1st ed.), Springer, ISBN 978-1-4612-9775-8 == External links == Jacobs, Bryan. "Hyperfunction". MathWorld. Kaneko, A. (2001) [1994], "Hyperfunction", Encyclopedia of Mathematics, EMS Press
Wikipedia/Hyperfunction
Algebraic analysis is an area of mathematics that deals with systems of linear partial differential equations by using sheaf theory and complex analysis to study properties and generalizations of functions such as hyperfunctions and microfunctions. Semantically, it is the application of algebraic operations to analytic quantities. As a research programme, it was started by the Japanese mathematician Mikio Sato in 1959. This can be seen as an algebraic geometrization of analysis. According to Schapira, parts of Sato's work can be regarded as a manifestation of Grothendieck's style of mathematics within the realm of classical analysis. The subject derives its meaning from the fact that the differential operator is right-invertible in several function spaces. It simplifies proofs by giving an algebraic description of the problem considered. == Microfunction == Let M be a real-analytic manifold of dimension n, and let X be its complexification. The sheaf of microfunctions on M is given as H n ( μ M ( O X ) ⊗ o r M / X ) {\displaystyle {\mathcal {H}}^{n}(\mu _{M}({\mathcal {O}}_{X})\otimes {\mathcal {or}}_{M/X})} where μ M {\displaystyle \mu _{M}} denotes the microlocalization functor and o r M / X {\displaystyle {\mathcal {or}}_{M/X}} is the relative orientation sheaf. Microfunctions can in turn be used to define Sato's hyperfunctions. By definition, the sheaf of Sato's hyperfunctions on M is the restriction of the sheaf of microfunctions to M, in parallel to the fact that the sheaf of real-analytic functions on M is the restriction of the sheaf of holomorphic functions on X to M.
== See also == Hyperfunction D-module Microlocal analysis Generalized function Edge-of-the-wedge theorem FBI transform Localization of a ring Vanishing cycle Gauss–Manin connection Differential algebra Perverse sheaf Mikio Sato Masaki Kashiwara Lars Hörmander == Citations == == Sources == == Further reading == Masaki Kashiwara and Algebraic Analysis Archived 25 February 2012 at the Wayback Machine Foundations of algebraic analysis book review
Wikipedia/Algebraic_analysis
In mathematics, a Schwartz–Bruhat function, named after Laurent Schwartz and François Bruhat, is a complex valued function on a locally compact abelian group, such as the adeles, that generalizes a Schwartz function on a real vector space. A tempered distribution is defined as a continuous linear functional on the space of Schwartz–Bruhat functions. == Definitions == On a real vector space R n {\displaystyle \mathbb {R} ^{n}} , the Schwartz–Bruhat functions are just the usual Schwartz functions (all derivatives rapidly decreasing) and form the space S ( R n ) {\displaystyle {\mathcal {S}}(\mathbb {R} ^{n})} . On a torus, the Schwartz–Bruhat functions are the smooth functions. On a sum of copies of the integers, the Schwartz–Bruhat functions are the rapidly decreasing functions. On an elementary group (i.e., an abelian locally compact group that is a product of copies of the reals, the integers, the circle group, and finite groups), the Schwartz–Bruhat functions are the smooth functions all of whose derivatives are rapidly decreasing. On a general locally compact abelian group G {\displaystyle G} , let A {\displaystyle A} be a compactly generated subgroup, and B {\displaystyle B} a compact subgroup of A {\displaystyle A} such that A / B {\displaystyle A/B} is elementary. Then the pullback of a Schwartz–Bruhat function on A / B {\displaystyle A/B} is a Schwartz–Bruhat function on G {\displaystyle G} , and all Schwartz–Bruhat functions on G {\displaystyle G} are obtained like this for suitable A {\displaystyle A} and B {\displaystyle B} . (The space of Schwartz–Bruhat functions on G {\displaystyle G} is endowed with the inductive limit topology.) On a non-archimedean local field K {\displaystyle K} , a Schwartz–Bruhat function is a locally constant function of compact support. 
In particular, on the ring of adeles A K {\displaystyle \mathbb {A} _{K}} over a global field K {\displaystyle K} , the Schwartz–Bruhat functions f {\displaystyle f} are finite linear combinations of the products ∏ v f v {\displaystyle \prod _{v}f_{v}} over each place v {\displaystyle v} of K {\displaystyle K} , where each f v {\displaystyle f_{v}} is a Schwartz–Bruhat function on a local field K v {\displaystyle K_{v}} and f v = 1 O v {\displaystyle f_{v}=\mathbf {1} _{{\mathcal {O}}_{v}}} is the characteristic function on the ring of integers O v {\displaystyle {\mathcal {O}}_{v}} for all but finitely many v {\displaystyle v} . (For the archimedean places of K {\displaystyle K} , the f v {\displaystyle f_{v}} are just the usual Schwartz functions on R n {\displaystyle \mathbb {R} ^{n}} , while for the non-archimedean places the f v {\displaystyle f_{v}} are the Schwartz–Bruhat functions of non-archimedean local fields.) The space of Schwartz–Bruhat functions on the adeles A K {\displaystyle \mathbb {A} _{K}} is defined to be the restricted tensor product ⨂ v ′ S ( K v ) := lim → E ⁡ ( ⨂ v ∈ E S ( K v ) ) {\displaystyle \bigotimes _{v}'{\mathcal {S}}(K_{v}):=\varinjlim _{E}\left(\bigotimes _{v\in E}{\mathcal {S}}(K_{v})\right)} of Schwartz–Bruhat spaces S ( K v ) {\displaystyle {\mathcal {S}}(K_{v})} of local fields, where E {\displaystyle E} is a finite set of places of K {\displaystyle K} . The elements of this space are of the form f = ⊗ v f v {\displaystyle f=\otimes _{v}f_{v}} , where f v ∈ S ( K v ) {\displaystyle f_{v}\in {\mathcal {S}}(K_{v})} for all v {\displaystyle v} and f v | O v = 1 {\displaystyle f_{v}|_{{\mathcal {O}}_{v}}=1} for all but finitely many v {\displaystyle v} . For each x = ( x v ) v ∈ A K {\displaystyle x=(x_{v})_{v}\in \mathbb {A} _{K}} we can write f ( x ) = ∏ v f v ( x v ) {\displaystyle f(x)=\prod _{v}f_{v}(x_{v})} , which is finite and thus is well defined. 
== Examples == Every Schwartz–Bruhat function f ∈ S ( Q p ) {\displaystyle f\in {\mathcal {S}}(\mathbb {Q} _{p})} can be written as f = ∑ i = 1 n c i 1 a i + p k i Z p {\displaystyle f=\sum _{i=1}^{n}c_{i}\mathbf {1} _{a_{i}+p^{k_{i}}\mathbb {Z} _{p}}} , where each a i ∈ Q p {\displaystyle a_{i}\in \mathbb {Q} _{p}} , k i ∈ Z {\displaystyle k_{i}\in \mathbb {Z} } , and c i ∈ C {\displaystyle c_{i}\in \mathbb {C} } . This can be seen by observing that Q p {\displaystyle \mathbb {Q} _{p}} being a local field implies that f {\displaystyle f} by definition has compact support, i.e., supp ⁡ ( f ) {\displaystyle \operatorname {supp} (f)} is compact, so every open cover of it admits a finite subcover. Since every open set in Q p {\displaystyle \mathbb {Q} _{p}} can be expressed as a disjoint union of open balls of the form a + p k Z p {\displaystyle a+p^{k}\mathbb {Z} _{p}} (for some a ∈ Q p {\displaystyle a\in \mathbb {Q} _{p}} and k ∈ Z {\displaystyle k\in \mathbb {Z} } ) we have supp ⁡ ( f ) = ∐ i = 1 n ( a i + p k i Z p ) {\displaystyle \operatorname {supp} (f)=\coprod _{i=1}^{n}(a_{i}+p^{k_{i}}\mathbb {Z} _{p})} . The function f {\displaystyle f} must also be locally constant, so f | a i + p k i Z p = c i 1 a i + p k i Z p {\displaystyle f|_{a_{i}+p^{k_{i}}\mathbb {Z} _{p}}=c_{i}\mathbf {1} _{a_{i}+p^{k_{i}}\mathbb {Z} _{p}}} for some c i ∈ C {\displaystyle c_{i}\in \mathbb {C} } . (As for f {\displaystyle f} evaluated at zero, f ( 0 ) 1 Z p {\displaystyle f(0)\mathbf {1} _{\mathbb {Z} _{p}}} is always included as a term.)
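The ball decomposition above translates directly into a finite data structure: a Schwartz–Bruhat function on Q_p can be stored as a finite list of triples (a_i, k_i, c_i), and membership of a point in a ball a + p^k Z_p is decided by the p-adic valuation. A sketch restricted to rational sample points (the sample function and evaluation points are illustrative choices):

```python
from fractions import Fraction

def vp(x, p):
    # p-adic valuation of a rational number; infinity for 0 by convention.
    x = Fraction(x)
    if x == 0:
        return float("inf")
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def in_ball(x, a, k, p):
    # x lies in the ball a + p^k Z_p  iff  v_p(x - a) >= k.
    return vp(Fraction(x) - Fraction(a), p) >= k

def evaluate(balls, x, p):
    # A Schwartz-Bruhat function on Q_p as a finite sum of c_i * 1_{a_i + p^{k_i} Z_p},
    # stored as triples (a_i, k_i, c_i).
    return sum(c for (a, k, c) in balls if in_ball(x, a, k, p))

p = 5
f = [(0, 0, 1.0), (Fraction(1, 5), -1, 2.0)]  # 1_{Z_5} + 2 * 1_{1/5 + 5^{-1} Z_5}
print(evaluate(f, 3, p))                 # 3 lies in both balls: 1 + 2 = 3.0
print(evaluate(f, Fraction(1, 25), p))   # v_5(1/25) = -2: in neither ball, 0
```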
On the rational adeles A Q {\displaystyle \mathbb {A} _{\mathbb {Q} }} all functions in the Schwartz–Bruhat space S ( A Q ) {\displaystyle {\mathcal {S}}(\mathbb {A} _{\mathbb {Q} })} are finite linear combinations of ∏ p ≤ ∞ f p = f ∞ × ∏ p < ∞ f p {\displaystyle \prod _{p\leq \infty }f_{p}=f_{\infty }\times \prod _{p<\infty }f_{p}} over all rational primes p {\displaystyle p} , where f ∞ ∈ S ( R ) {\displaystyle f_{\infty }\in {\mathcal {S}}(\mathbb {R} )} , f p ∈ S ( Q p ) {\displaystyle f_{p}\in {\mathcal {S}}(\mathbb {Q} _{p})} , and f p = 1 Z p {\displaystyle f_{p}=\mathbf {1} _{\mathbb {Z} _{p}}} for all but finitely many p {\displaystyle p} . The sets Q p {\displaystyle \mathbb {Q} _{p}} and Z p {\displaystyle \mathbb {Z} _{p}} are the field of p-adic numbers and ring of p-adic integers respectively. == Properties == The Fourier transform of a Schwartz–Bruhat function on a locally compact abelian group is a Schwartz–Bruhat function on the Pontryagin dual group. Consequently, the Fourier transform takes tempered distributions on such a group to tempered distributions on the dual group. Given the (additive) Haar measure on A K {\displaystyle \mathbb {A} _{K}} the Schwartz–Bruhat space S ( A K ) {\displaystyle {\mathcal {S}}(\mathbb {A} _{K})} is dense in the space L 2 ( A K , d x ) . {\displaystyle L^{2}(\mathbb {A} _{K},dx).} == Applications == In algebraic number theory, the Schwartz–Bruhat functions on the adeles can be used to give an adelic version of the Poisson summation formula from analysis, i.e., for every f ∈ S ( A K ) {\displaystyle f\in {\mathcal {S}}(\mathbb {A} _{K})} one has ∑ x ∈ K f ( a x ) = 1 | a | ∑ x ∈ K f ^ ( a − 1 x ) {\displaystyle \sum _{x\in K}f(ax)={\frac {1}{|a|}}\sum _{x\in K}{\hat {f}}(a^{-1}x)} , where a ∈ A K × {\displaystyle a\in \mathbb {A} _{K}^{\times }} . John Tate developed this formula in his doctoral thesis to prove a more general version of the functional equation for the Riemann zeta function. 
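For K = Q, a = 1, and a pure tensor f built from the self-dual real Gaussian together with f_p = 1_{Z_p} at every finite place, the finite places cut the sum over Q down to a sum over Z, and the adelic formula reduces to the classical Poisson summation formula. The sketch below checks the classical identity Σ f(an) = (1/|a|) Σ f̂(n/a) for the Gaussian (the values of a and the truncation N are illustrative choices):

```python
import math

def gaussian(x):
    # Self-dual: with the kernel e^{-2 pi i x y}, its Fourier transform is itself.
    return math.exp(-math.pi * x * x)

def lhs(a, N=50):
    # Sum over n in Z of f(a n), truncated at |n| <= N.
    return sum(gaussian(a * n) for n in range(-N, N + 1))

def rhs(a, N=50):
    # (1/|a|) * sum over n in Z of fhat(n / a), with fhat = f for the Gaussian.
    return sum(gaussian(n / a) for n in range(-N, N + 1)) / abs(a)

for a in (1.0, 0.5, 2.0):
    print(a, lhs(a), rhs(a))  # the two sides agree
```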
This involves giving the zeta function of a number field an integral representation in which the integral of a Schwartz–Bruhat function, chosen as a test function, is twisted by a certain character and is integrated over A K × {\displaystyle \mathbb {A} _{K}^{\times }} with respect to the multiplicative Haar measure of this group. This allows one to apply analytic methods to study zeta functions through these zeta integrals. == References == Osborne, M. Scott (1975). "On the Schwartz–Bruhat space and the Paley-Wiener theorem for locally compact abelian groups". Journal of Functional Analysis. 19: 40–49. doi:10.1016/0022-1236(75)90005-1. Gelfand, I. M.; et al. (1990). Representation Theory and Automorphic Functions. Boston: Academic Press. ISBN 0-12-279506-7. Bump, Daniel (1998). Automorphic Forms and Representations. Cambridge: Cambridge University Press. ISBN 978-0521658188. Deitmar, Anton (2012). Automorphic Forms. Berlin: Springer-Verlag London. ISBN 978-1-4471-4434-2. ISSN 0172-5939. Ramakrishnan, V.; Valenza, R. J. (1999). Fourier Analysis on Number Fields. New York: Springer-Verlag. ISBN 978-0387984360. Tate, John T. (1950), "Fourier analysis in number fields, and Hecke's zeta-functions", Algebraic Number Theory (Proc. Instructional Conf., Brighton, 1965), Thompson, Washington, D.C., pp. 305–347, ISBN 978-0-9502734-2-6, MR 0217026
Wikipedia/Schwartz–Bruhat_function
Theoretical and Mathematical Physics (Russian: Теоретическая и Математическая Физика) is a Russian scientific journal. It was founded in 1969 by Nikolai Bogolubov. Currently overseen by the Russian Academy of Sciences, it appears in 12 issues per year. The journal publishes papers on mathematical aspects of quantum mechanics, quantum field theory, statistical physics, supersymmetry, and integrable models (in any area of physics). The editor-in-chief is Dmitri I. Kazakov (Institute for Nuclear Research). According to the Journal Citation Reports, the journal has a 2023 impact factor of 1.0. == References == == External links == Official website Access to the publications Russian version of the Journal
Wikipedia/Theoretical_and_Mathematical_Physics
In geometry, the lune of Hippocrates, named after Hippocrates of Chios, is a lune bounded by arcs of two circles, the smaller of which has as its diameter a chord spanning a right angle on the larger circle. Equivalently, it is a non-convex plane region bounded by one 180-degree circular arc and one 90-degree circular arc. It was the first curved figure to have its exact area calculated mathematically. == History == Hippocrates wanted to solve the classic problem of squaring the circle, i.e. constructing, by means of straightedge and compass, a square having the same area as a given circle. He proved that the lune bounded by the arcs labeled E and F in the figure has the same area as triangle ABO. This afforded some hope of solving the circle-squaring problem, since the lune is bounded only by arcs of circles. Heath concludes that, in proving his result, Hippocrates was also the first to prove that the area of a circle is proportional to the square of its diameter. Hippocrates' book on geometry in which this result appears, Elements, has been lost, but may have formed the model for Euclid's Elements. Hippocrates' proof was preserved through the History of Geometry compiled by Eudemus of Rhodes, which has also not survived, but which was excerpted by Simplicius of Cilicia in his commentary on Aristotle's Physics. Not until 1882, with Ferdinand von Lindemann's proof of the transcendence of π, was squaring the circle proved to be impossible. == Proof == Hippocrates' result can be proved as follows: The center of the circle on which the arc AEB lies is the point D, which is the midpoint of the hypotenuse of the isosceles right triangle ABO. Therefore, the diameter AC of the larger circle ABC is ⁠ 2 {\displaystyle {\sqrt {2}}} ⁠ times the diameter of the smaller circle on which the arc AEB lies. Consequently, the smaller circle has half the area of the larger circle, and therefore the quarter circle AFBOA is equal in area to the semicircle AEBDA.
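The area comparisons so far can be checked with a short computation, normalizing the large circle to radius 1 (the variable names follow the labels in the proof):

```python
import math

r = 1.0                                # radius of the large circle ABC
diam_large = 2.0 * r
chord_ab = math.sqrt(2.0) * r          # AB subtends a right angle at the center O
diam_small = chord_ab                  # AB is the diameter of the smaller circle

area_large = math.pi * (diam_large / 2.0) ** 2
area_small = math.pi * (diam_small / 2.0) ** 2

quarter_afboa = area_large / 4.0       # quarter circle AFBOA
semicircle_aebda = area_small / 2.0    # semicircle AEBDA on diameter AB

print(diam_large / diam_small)         # sqrt(2), as in the proof
print(area_large / area_small)         # 2: the smaller circle has half the area
print(quarter_afboa - semicircle_aebda)  # 0: the two regions have equal area
```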
Subtracting the crescent-shaped area AFBDA from the quarter circle gives triangle ABO, and subtracting the same crescent from the semicircle gives the lune. Since the triangle and the lune are each obtained by subtracting the same crescent from regions of equal area, they are themselves equal in area. == Generalizations == Using a similar proof to the one above, the Arab mathematician Hasan Ibn al-Haytham (Latinized name Alhazen, c. 965 – c. 1040) showed that where two lunes are formed, on the two sides of a right triangle, whose outer boundaries are semicircles and whose inner boundaries are formed by the circumcircle of the triangle, then the areas of these two lunes added together are equal to the area of the triangle. The lunes formed in this way from a right triangle are known as the lunes of Alhazen. The quadrature of the lune of Hippocrates is the special case of this result for an isosceles right triangle. All lunes constructible by compass and straightedge can be specified by the two angles formed by the inner and outer arcs on their respective circles; in this notation, for instance, the lune of Hippocrates would have the inner and outer angles (90°, 180°) with ratio 1:2. Hippocrates found two other squarable concave lunes, with angles approximately (107.2°, 160.9°) with ratio 2:3 and (68.5°, 205.6°) with ratio 1:3. Two more squarable concave lunes, with angles approximately (46.9°, 234.4°) with ratio 1:5 and (100.8°, 168.0°) with ratio 3:5, were found in 1766 by Martin Johan Wallenius and again in 1840 by Thomas Clausen. In the mid-20th century, two Russian mathematicians, Nikolai Chebotaryov and his student Anatoly Dorodnov, completely classified the lunes that are constructible by compass and straightedge and that have equal area to a given square. As Chebotaryov and Dorodnov showed, these five pairs of angles give the only constructible squarable lunes. == References ==
Wikipedia/Lune_of_Hippocrates
Xenocrates (; Greek: Ξενοκράτης; c. 396/5 – 314/3 BC) of Chalcedon was a Greek philosopher, mathematician, and leader (scholarch) of the Platonic Academy from 339/8 to 314/3 BC. His teachings followed those of Plato, which he attempted to define more closely, often with mathematical elements. He distinguished three forms of being: the sensible, the intelligible, and a third compounded of the two, to which correspond respectively, sense, intellect and opinion. He considered unity and duality to be gods which rule the universe, and the soul a self-moving number. God pervades all things, and there are daemonical powers, intermediate between the divine and the mortal, which consist in conditions of the soul. He held that mathematical objects and the Platonic Ideas are identical, unlike Plato who distinguished them. In ethics, he taught that virtue produces happiness, but external goods can minister to it and enable it to effect its purpose. == Life == Xenocrates was a native of Chalcedon. By the most probable calculation he was born 396/5 BC, and died 314/3 BC at the age of 82. His father was named Agathon (Ancient Greek: Ἀγάθων) or Agathanor (Ancient Greek: Ἀγαθάνωρ). Moving to Athens in early youth, he became the pupil of Aeschines Socraticus, but subsequently joined himself to Plato, whom he accompanied to Sicily in 361. Upon his master's death, he paid a visit with Aristotle to Hermias of Atarneus. In 339/8 BC, Xenocrates succeeded Speusippus in the presidency of the school, defeating his competitors Menedemus of Pyrrha and Heraclides Ponticus by a few votes. On three occasions he was member of an Athenian legation, once to Philip, twice to Antipater. Xenocrates resented the Macedonian influence then dominant at Athens. Soon after the death of Demosthenes (c. 322 BC), he declined the citizenship offered to him at the insistence of Phocion as a reward for his services in negotiating peace with Antipater after Athens' unsuccessful rebellion. 
The settlement was reached "at the price of a constitutional change: thousands of poor Athenians were disenfranchised," and Xenocrates said "that he did not want to become a citizen within a constitution he had struggled to prevent". Being unable to pay the tax levied upon resident aliens, he is said to have been saved only by the courage of the orator Lycurgus, or even to have been bought by Demetrius Phalereus, and then emancipated. In 314/3, he died from hitting his head after tripping over a bronze pot in his house. Xenocrates was succeeded as scholarch by Polemon, whom he had reclaimed from a life of profligacy. Besides Polemon, the statesman Phocion, Chaeron (tyrant of Pellene), the academic Crantor, the Stoic Zeno and Epicurus are said to have frequented his lectures. Wanting in quickness of apprehension and natural grace, he compensated by persevering and thorough-going industry, pure benevolence, purity of morals, unselfishness, and a moral earnestness, which compelled esteem and trust even from the Athenians of his own age. Xenocrates adhered closely to the Platonist doctrine, and he is accounted the typical representative of the Old Academy. In his writings, which were numerous, he seems to have covered nearly the whole of the Academic program; but metaphysics and ethics were the subjects which principally engaged his thoughts. He is said to have made more explicit the division of philosophy into the three parts of Physics, Dialectic and Ethics. == Writings == With a comprehensive work on Dialectic (τῆς περὶ τὸ διαλέγεσθαι πραγματείας βιβλία ιδ΄) there were also separate treatises On Knowledge, On Knowledgibility (περὶ ἐπιστήμης α΄, περὶ ἐπιστημοσύνης α΄), On Divisions (διαιρέσεις η΄), On Genera and Species (περὶ γενῶν καὶ εἰδῶν α΄), On Ideas (περὶ ἰδεῶν), On the Opposite (περὶ τοῦ ἐναντίου), and others, to which probably the work On Mediate Thought (τῶν περὶ τὴν διάνοιαν η΄) also belonged.
Two works by Xenocrates on Physics are mentioned (περὶ φύσεως ϛ΄ - φυσικῆς ἀκροάσεως ϛ΄), as are also books On the Gods (περὶ Θεῶν β΄), On the Existent (περὶ τοῦ ὄντος), On the One (περὶ τοῦ ἑνός), On the Indefinite (περὶ τοῦ ἀορίστου), On the Soul (περὶ ψυχῆς), On the Emotions (περὶ τῶν παθῶν α΄), On Memory (περὶ μνήμης), etc. In like manner, with the more general Ethical treatises On Happiness (περὶ εὐδαιμονίας β΄), and On Virtue (περὶ ἀρετῆς) there were connected separate books on individual Virtues, on the Voluntary, etc. His four books on Royalty he had addressed to Alexander (στοιχεῖα πρὸς Ἀλέξανδρον περὶ βασιλείας δ΄). Besides these he had written treatises On the State (περὶ πολιτείας α΄; πολιτικός α΄), On the Power of Law (περὶ δυνάμεως νόμου α΄), etc., as well as upon Geometry, Arithmetic, and Astrology. Besides philosophical treatises, he wrote poetry (epē) and paraenesis. 
All three modes of apprehension partake of truth; but in what manner scientific perception (epistemonike aisthesis) did so, we unfortunately do not learn. Even here Xenocrates's preference for symbolic modes of sensualising or denoting appears: he connected the above three stages of knowledge with the three Fates: Atropos, Clotho, and Lachesis. We know nothing further about the mode in which Xenocrates carried out his dialectic, as it is probable that what was peculiar to Aristotelian logic did not remain unnoticed in it, for it can hardly be doubted that the division of the existent into the absolutely existent, and the relatively existent, attributed to Xenocrates, was opposed to the Aristotelian table of categories. === Metaphysics === We know from Plutarch that Xenocrates, if he did not explain the Platonic construction of the world-soul as Crantor after him did, nevertheless drew heavily on the Timaeus; and further that he was at the head of those who, regarding the universe as unoriginated and imperishable, looked upon the chronological succession in the Platonic theory as a form in which to denote the relations of conceptual succession. Plutarch unfortunately, does not give us any further details, and contented himself with describing the well-known assumption of Xenocrates, that the soul is a self-moving number. Probably we should connect with this the statement that Xenocrates called unity and duality (monas and duas) deities, and characterised the former as the first male existence, ruling in heaven, as father and Zeus, as uneven number and spirit; the latter as female, as the mother of the gods, and as the soul of the universe which reigns over the mutable world under heaven, or, as others have it, that he named the Zeus who ever remains like himself, governing in the sphere of the immutable, the highest; the one who rules over the mutable, sublunary world, the last, or outermost. 
If, like other Platonists, he designated the material principle as undefined duality, the world-soul was probably described by him as the first defined duality, the conditioning or defining principle of every separate definitude in the sphere of the material and changeable, but not extending beyond it. He appears to have called it in the highest sense the individual soul, in a derivative sense a self-moving number, that is, the first number endowed with motion. To this world-soul Zeus, or the world-spirit, has entrusted - in what degree and in what extent, we do not learn - dominion over that which is liable to motion and change. The divine power of the world-soul is then again represented, in the different spheres of the universe, as infusing soul into the planets, Sun and Moon, - in a purer form, in the shape of Olympic gods. As a sublunary daemonical power (as Hera, Poseidon, Demeter), it dwells in the elements, and these daemonical natures, midway between gods and men, are related to them as the isosceles triangle is to the equilateral and the scalene. The divine world-soul which reigns over the whole domain of sublunary changes he appears to have designated as the last Zeus, the last divine activity. It is not until we get to the sphere of the separate daemonical powers of nature that the opposition between good and evil begins, and the daemonical power is appeased by means of a stubbornness which it finds there congenial to it; the good daemonical power makes happy those in whom it takes up its abode, the bad ruins them; for eudaimonia is the indwelling of a good daemon, the opposite the indwelling of a bad one. 
How Xenocrates tried to establish and connect scientifically these assumptions, which appear to be taken chiefly from his books on the nature of the gods, we do not learn, and can only discover the one fundamental idea at the basis of them, that all grades of existence are penetrated by divine power, and that this grows less and less energetic in proportion as it descends to the perishable and individual. Hence he also appears to have maintained that as far as consciousness extends, so far also extends an intuition of that all-ruling divine power, of which he represented even irrational animals as partaking. But neither the thick nor the thin, to the different combinations of which he appears to have tried to refer the various grades of material existence, were regarded by him as in themselves partaking of soul; doubtless because he referred them immediately to the divine activity, and was far from attempting to reconcile the duality of the principia, or to resolve them into an original unity. Hence too he was for proving the incorporeality of the soul by the fact that it is not nourished as the body is. It is probable, that, after the example of Plato, he designated the divine principium as alone indivisible, and remaining like itself; the material, as the divisible, partaking of multiformity, and different, and that from the union of the two, or from the limitation of the unlimited by the absolute unity, he deduced number, and for that reason called the soul of the universe, like that of individual beings, a self-moving number, which, by virtue of its twofold root in the same and the different, shares equally in permanence and motion, and attains to consciousness by means of the reconciliation of this opposition. 
Aristotle, in his Metaphysics, recognized amongst contemporary Platonists three principal views concerning the ideal numbers, and their relation to the ideas and to mathematical numbers: those who, like Plato, distinguished ideal and mathematical numbers; those who, like Xenocrates, identified ideal numbers with mathematical numbers; and those who, like Speusippus, postulated mathematical numbers only. Aristotle has much to say against the Xenocratean interpretation of the theory, and in particular points out that, if the ideal numbers are made up of arithmetical units, they not only cease to be principles, but also become subject to arithmetical operations. In the derivation of things according to the series of the numbers he seems to have gone further than any of his predecessors. He approximated to the Pythagoreans in this, that (as is clear from his explanation of the soul) he regarded number as the conditioning principle of consciousness, and consequently of knowledge also; he thought it necessary, however, to supply what was wanting in the Pythagorean assumption by the more accurate definition, borrowed from Plato, that it is only insofar as number reconciles the opposition between the same and the different, and has raised itself to self-motion, that it is soul. We find a similar attempt at the supplementation of the Platonic doctrine in Xenocrates's assumption of indivisible lines. In them he thought he had discovered what, according to Plato, God alone knows, and he among men who is loved by him, namely, the elements or principia of the Platonic triangles. He seems to have described them as first, original lines, and in a similar sense to have spoken of original plane figures and bodies, convinced that the principia of the existent should be sought not in the material, not in the divisible which attains to the condition of a phenomenon, but merely in the ideal definitude of form. 
He may very well, in accordance with this, have regarded the point as a merely subjectively admissible presupposition, and a passage of Aristotle respecting this assumption should perhaps be referred to him. === Ethics === The information on his Ethics is scanty. He tried to supplement the Platonic doctrine at various points, and at the same time to give it a more direct applicability to life. He distinguished from the good and the bad something which is neither good nor bad. Following the ideas of his Academic predecessors, he viewed the good as that which should be striven after for itself, that is, which has value in itself, while the bad is the opposite of this. Consequently, that which is neither good nor bad is what in itself is neither to be striven after nor to be avoided, but derives value or the opposite according as it serves as means for what is good or bad, or rather, is used by us for that purpose. While, however, Xenocrates (and with him Speusippus and the other philosophers of the older Academy) would not accept that these intermediate things, such as health, beauty, fame, good fortune, etc. were valuable in themselves, he did not accept that they were absolutely worthless or indifferent. According, therefore, as what belongs to the intermediate region is adapted to bring about or to hinder the good, Xenocrates appears to have designated it as good or evil, probably with the proviso, that by misuse what is good might become evil, and vice versa, that by virtue, what is evil might become good. Still he maintained that virtue alone is valuable in itself, and that the value of every thing else is conditional. 
According to this, happiness should coincide with the consciousness of virtue, though its reference to the relations of human life requires the additional condition, that it is only in the enjoyment of the good things and circumstances originally designed for it by nature that it attains to completion; to these good things, however, sensuous gratification does not belong. In this sense he on the one hand denoted (perfect) happiness as the possession of personal virtue, and the capabilities adapted to it, and therefore reckoned among its constituent elements, besides moral actions, conditions and facilities, those movements and relations also without which external good things cannot be attained, and on the other hand did not allow that wisdom, understood as the science of first causes or intelligible essence, or as theoretical understanding, is by itself the true wisdom which should be striven after by people, and therefore seems to have regarded this human wisdom as at the same time exerted in investigating, defining, and applying. How decidedly he insisted not only on the recognition of the unconditional nature of moral excellence, but on morality of thought, is shown by his declaration, that it comes to the same thing whether one casts longing eyes, or sets one's feet upon the property of others. His moral earnestness is also expressed in the warning that the ears of children should be guarded against the poison of immoral speeches. === Mathematics === Xenocrates is known to have written a book On Numbers, and a Theory of Numbers, besides books on geometry. Plutarch writes that Xenocrates once attempted to find the total number of syllables that could be made from the letters of the alphabet. According to Plutarch, Xenocrates' result was 1,002,000,000,000 (a "myriad-and-twenty times a myriad-myriad"). This may be the first recorded attempt at solving a combinatorial problem involving permutations. 
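Plutarch does not record how Xenocrates arrived at his figure, so the assumptions behind it are unknown. A minimal sketch of the kind of count involved, assuming (purely for illustration) that a "syllable" is any non-empty string of up to a given length over the 24-letter Greek alphabet with repetition allowed:

```python
# Hypothetical reconstruction: Plutarch does not describe Xenocrates'
# method, so this simply counts strings of length 1..max_length over
# an alphabet of the given size, with repetition allowed.
def string_count(alphabet_size: int, max_length: int) -> int:
    """Number of non-empty strings of length <= max_length."""
    return sum(alphabet_size ** k for k in range(1, max_length + 1))

# Whatever Xenocrates' actual assumptions were, the growth is
# combinatorially explosive: strings of length up to 9 already
# exceed Plutarch's figure of 1,002,000,000,000.
count_9 = string_count(24, 9)
```

The point of the sketch is only that counts of this kind reach the trillion scale very quickly, which makes Plutarch's enormous figure plausible as the outcome of a genuine combinatorial calculation.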
Xenocrates also supported the idea of "indivisible lines" (and magnitudes) in order to counter Zeno's paradoxes. == See also == On Indivisible Lines == References == === Bibliography === Attribution This article incorporates text from a publication now in the public domain: Smith, William, ed. (1870). Dictionary of Greek and Roman Biography and Mythology. == External links == Dancy, Russell. "Xenocrates". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. University of St Andrews, Scotland, Biography of Xenocrates
Quantum mechanics is the fundamental physical theory that describes the behavior of matter and of light; its unusual characteristics typically occur at and below the scale of atoms. It is the foundation of all quantum physics, which includes quantum chemistry, quantum field theory, quantum technology, and quantum information science. Quantum mechanics can describe many systems that classical physics cannot. Classical physics can describe many aspects of nature at an ordinary (macroscopic and (optical) microscopic) scale, but is not sufficient for describing them at very small submicroscopic (atomic and subatomic) scales. Classical mechanics can be derived from quantum mechanics as an approximation that is valid at ordinary scales. Quantum systems have bound states that are quantized to discrete values of energy, momentum, angular momentum, and other quantities, in contrast to classical systems where these quantities can be measured continuously. Measurements of quantum systems show characteristics of both particles and waves (wave–particle duality), and there are limits to how accurately the value of a physical quantity can be predicted prior to its measurement, given a complete set of initial conditions (the uncertainty principle). Quantum mechanics arose gradually from theories to explain observations that could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem, and the correspondence between energy and frequency in Albert Einstein's 1905 paper, which explained the photoelectric effect. These early attempts to understand microscopic phenomena, now known as the "old quantum theory", led to the full development of quantum mechanics in the mid-1920s by Niels Bohr, Erwin Schrödinger, Werner Heisenberg, Max Born, Paul Dirac and others. The modern theory is formulated in various specially developed mathematical formalisms. 
In one of them, a mathematical entity called the wave function provides information, in the form of probability amplitudes, about what measurements of a particle's energy, momentum, and other physical properties may yield. == Overview and fundamental concepts == Quantum mechanics allows the calculation of properties and behaviour of physical systems. It is typically applied to microscopic systems: molecules, atoms and subatomic particles. It has been demonstrated to hold for complex molecules with thousands of atoms, but its application to human beings raises philosophical problems, such as Wigner's friend, and its application to the universe as a whole remains speculative. Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy. For example, the refinement of quantum mechanics for the interaction of light and matter, known as quantum electrodynamics (QED), has been shown to agree with experiment to within 1 part in 10¹² when predicting the magnetic properties of an electron. A fundamental feature of the theory is that it usually cannot predict with certainty what will happen, but only give probabilities. Mathematically, a probability is found by taking the square of the absolute value of a complex number, known as a probability amplitude. This is known as the Born rule, named after physicist Max Born. For example, a quantum particle like an electron can be described by a wave function, which associates to each point in space a probability amplitude. Applying the Born rule to these amplitudes gives a probability density function for the position that the electron will be found to have when an experiment is performed to measure it. This is the best the theory can do; it cannot say for certain where the electron will be found. 
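The Born rule just described can be sketched for a finite-dimensional state; the particular amplitudes below are illustrative, not taken from the text:

```python
import numpy as np

# Born rule: probabilities are the squared magnitudes of complex
# probability amplitudes. The amplitude vector here is an arbitrary
# normalized example state in C^3.
amplitudes = np.array([1 + 1j, 1, 1], dtype=complex) / 2.0
probabilities = np.abs(amplitudes) ** 2   # |amplitude|^2 for each outcome
```

For a normalized state, the probabilities of all outcomes sum to 1, which is exactly the normalization condition the theory imposes on amplitudes.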
The Schrödinger equation relates the collection of probability amplitudes that pertain to one moment of time to the collection of probability amplitudes that pertain to another. One consequence of the mathematical rules of quantum mechanics is a tradeoff in predictability between measurable quantities. The most famous form of this uncertainty principle says that no matter how a quantum particle is prepared or how carefully experiments upon it are arranged, it is impossible to have a precise prediction for a measurement of its position and also at the same time for a measurement of its momentum. Another consequence of the mathematical rules of quantum mechanics is the phenomenon of quantum interference, which is often illustrated with the double-slit experiment. In the basic version of this experiment, a coherent light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate. The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles. However, the light is always found to be absorbed at the screen at discrete points, as individual particles rather than waves; the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave). However, such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through. This behavior is known as wave–particle duality. 
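The two-slit fringe pattern can be sketched by adding the complex amplitudes of the two paths and squaring the magnitude; the wavelength, slit separation, and angles below are illustrative units, not from any particular experiment:

```python
import numpy as np

# Toy far-field two-slit interference. The path difference
# d*sin(theta) gives the two unit amplitudes a relative phase;
# the intensity is the squared magnitude of their sum (Born rule).
wavelength = 1.0
slit_separation = 10.0                      # in units of the wavelength
angles = np.linspace(-0.3, 0.3, 2001)       # observation angles (rad)

phase = 2 * np.pi * slit_separation * np.sin(angles) / wavelength
intensity = np.abs(1 + np.exp(1j * phase)) ** 2

# Bright fringes reach 4x a single slit's intensity (constructive),
# dark fringes approach 0 (destructive).
```

The factor of 4 at the bright fringes, rather than 2, is the signature of adding amplitudes before squaring instead of adding intensities.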
In addition to light, electrons, atoms, and molecules are all found to exhibit the same dual behavior when fired towards a double slit. Another non-classical phenomenon predicted by quantum mechanics is quantum tunnelling: a particle that goes up against a potential barrier can cross it, even if its kinetic energy is smaller than the maximum of the potential. In classical mechanics this particle would be trapped. Quantum tunnelling has several important consequences, enabling radioactive decay, nuclear fusion in stars, and applications such as scanning tunnelling microscopy, tunnel diode and tunnel field-effect transistor. When quantum systems interact, the result can be the creation of quantum entanglement: their properties become so intertwined that a description of the whole solely in terms of the individual parts is no longer possible. Erwin Schrödinger called entanglement "...the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought". Quantum entanglement enables quantum computing and is part of quantum communication protocols, such as quantum key distribution and superdense coding. Contrary to popular misconception, entanglement does not allow sending signals faster than light, as demonstrated by the no-communication theorem. Another possibility opened by entanglement is testing for "hidden variables", hypothetical properties more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory provides. A collection of results, most significantly Bell's theorem, have demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics. According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. 
Many Bell tests have been performed and they have shown results incompatible with the constraints imposed by local hidden variables. It is not possible to present these concepts in more than a superficial way without introducing the mathematics involved; understanding quantum mechanics requires not only manipulating complex numbers, but also linear algebra, differential equations, group theory, and other more advanced subjects. Accordingly, this article will present a mathematical formulation of quantum mechanics and survey its application to some useful and oft-studied examples. == Mathematical formulation == In the mathematically rigorous formulation of quantum mechanics, the state of a quantum mechanical system is a vector ψ belonging to a (separable) complex Hilbert space ℋ. This vector is postulated to be normalized under the Hilbert space inner product, that is, it obeys ⟨ψ, ψ⟩ = 1, and it is well-defined up to a complex number of modulus 1 (the global phase), that is, ψ and e^{iα}ψ represent the same physical system. In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complex square-integrable functions L²(ℝ), while the Hilbert space for the spin of a single proton is simply the space of two-dimensional complex vectors ℂ² with the usual inner product. Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which are Hermitian (more precisely, self-adjoint) linear operators acting on the Hilbert space. 
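The normalization and global-phase conventions above can be checked numerically; the state below is a random example in ℂ⁴:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random normalized state in C^4, and the same state multiplied by
# a global phase e^{i*alpha} (alpha = 0.7 is an arbitrary choice).
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
psi_phase = np.exp(1j * 0.7) * psi

# Both vectors are unit vectors, and every Born-rule probability
# |<e_k, psi>|^2 in the standard basis is unchanged by the phase.
probs = np.abs(psi) ** 2
probs_phase = np.abs(psi_phase) ** 2
```

This is the concrete content of "well-defined up to a global phase": no measurement statistics distinguish ψ from e^{iα}ψ.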
A quantum state can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue λ is non-degenerate and the probability is given by |⟨λ⃗, ψ⟩|², where λ⃗ is its associated unit-length eigenvector. More generally, the eigenvalue is degenerate and the probability is given by ⟨ψ, P_λ ψ⟩, where P_λ is the projector onto its associated eigenspace. In the continuous case, these formulas give instead the probability density. After the measurement, if result λ was obtained, the quantum state is postulated to collapse to λ⃗, in the non-degenerate case, or to P_λ ψ / √⟨ψ, P_λ ψ⟩, in the general case. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr–Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wave function collapse" (see, for example, the many-worlds interpretation). 
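The eigenvalue, probability, and collapse rules above can be sketched for a two-level system; the observable and state chosen are standard textbook examples, not specific to the text:

```python
import numpy as np

# Measuring the Pauli-X observable on the state |0>.
A = np.array([[0, 1], [1, 0]], dtype=complex)
eigvals, eigvecs = np.linalg.eigh(A)            # eigvals ascending: -1, +1

psi = np.array([1.0, 0.0], dtype=complex)       # state |0>

# Born rule: probability of outcome lambda_k is |<v_k, psi>|^2.
probs = np.abs(eigvecs.conj().T @ psi) ** 2

# Collapse after observing outcome k: project onto the eigenvector
# and renormalize.
k = 0
post = eigvecs[:, k] * (eigvecs[:, k].conj() @ psi)
post /= np.linalg.norm(post)
```

For |0⟩ and Pauli X the two outcomes ±1 are equally likely, and the post-measurement state is the corresponding eigenvector up to phase.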
The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions become entangled so that the original quantum system ceases to exist as an independent entity (see Measurement in quantum mechanics). === Time evolution of a quantum state === The time evolution of a quantum state is described by the Schrödinger equation: iħ (∂/∂t) ψ(t) = H ψ(t). Here H denotes the Hamiltonian, the observable corresponding to the total energy of the system, and ħ is the reduced Planck constant. The constant iħ is introduced so that the Hamiltonian is reduced to the classical Hamiltonian in cases where the quantum system can be approximated by a classical system; the ability to make such an approximation in certain limits is called the correspondence principle. The solution of this differential equation is given by ψ(t) = e^{−iHt/ħ} ψ(0). The operator U(t) = e^{−iHt/ħ} is known as the time-evolution operator, and has the crucial property that it is unitary. This time evolution is deterministic in the sense that – given an initial quantum state ψ(0) – it makes a definite prediction of what the quantum state ψ(t) will be at any later time. Some wave functions produce probability distributions that are independent of time, such as eigenstates of the Hamiltonian. Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics, it is described by a static wave function surrounding the nucleus. 
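The time-evolution operator U(t) = e^{−iHt/ħ} can be sketched numerically via the spectral decomposition of a Hermitian H; the two-level Hamiltonian below is a toy example with ħ = 1:

```python
import numpy as np

# Build U(t) = exp(-i H t) from the eigendecomposition of a
# Hermitian toy Hamiltonian (hbar = 1).
H = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)
evals, V = np.linalg.eigh(H)

def U(t):
    # exp of a Hermitian matrix: exponentiate the eigenvalues.
    return V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T

psi0 = np.array([1.0, 0.0], dtype=complex)
psi_t = U(2.0) @ psi0   # deterministic evolution of the state
```

Unitarity (U†U = I) guarantees that the norm of the state, and hence the total probability, is preserved at all times, and U(s)U(t) = U(s + t) reflects the determinism of the evolution.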
For example, the electron wave function for an unexcited hydrogen atom is a spherically symmetric function known as an s orbital (Fig. 1). Analytic solutions of the Schrödinger equation are known for very few relatively simple model Hamiltonians including the quantum harmonic oscillator, the particle in a box, the dihydrogen cation, and the hydrogen atom. Even the helium atom – which contains just two electrons – has defied all attempts at a fully analytic treatment, admitting no solution in closed form. However, there are techniques for finding approximate solutions. One method, called perturbation theory, uses the analytic result for a simple quantum mechanical model to create a result for a related but more complicated model by (for example) the addition of a weak potential energy. Another approximation method applies to systems for which quantum mechanics produces only small deviations from classical behavior. These deviations can then be computed based on the classical motion. === Uncertainty principle === One consequence of the basic quantum formalism is the uncertainty principle. In its most familiar form, this states that no preparation of a quantum particle can imply simultaneously precise predictions both for a measurement of its position and for a measurement of its momentum. Both position and momentum are observables, meaning that they are represented by Hermitian operators. The position operator X̂ and momentum operator P̂ do not commute, but rather satisfy the canonical commutation relation: [X̂, P̂] = iħ. Given a quantum state, the Born rule lets us compute expectation values for both X and P, and moreover for powers of them. 
Defining the uncertainty for an observable by a standard deviation, we have σ_X = √(⟨X²⟩ − ⟨X⟩²), and likewise for the momentum: σ_P = √(⟨P²⟩ − ⟨P⟩²). The uncertainty principle states that σ_X σ_P ≥ ħ/2. Either standard deviation can in principle be made arbitrarily small, but not both simultaneously. This inequality generalizes to arbitrary pairs of self-adjoint operators A and B. The commutator of these two operators is [A, B] = AB − BA, and this provides the lower bound on the product of standard deviations: σ_A σ_B ≥ ½ |⟨[A, B]⟩|. Another consequence of the canonical commutation relation is that the position and momentum operators are Fourier transforms of each other, so that a description of an object according to its momentum is the Fourier transform of its description according to its position. The fact that dependence in momentum is the Fourier transform of the dependence in position means that the momentum operator is equivalent (up to an i/ħ factor) to taking the derivative according to the position, since in Fourier analysis differentiation corresponds to multiplication in the dual space. This is why in quantum equations in position space, the momentum p_i is replaced by −iħ ∂/∂x, and in particular in the non-relativistic Schrödinger equation in position space the momentum-squared term is replaced with a Laplacian times −ħ². 
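The general inequality σ_A σ_B ≥ ½|⟨[A, B]⟩| can be checked numerically; the spin observables and random state below are illustrative choices, not taken from the text:

```python
import numpy as np

# Numerical check of the Robertson uncertainty relation
#   sigma_A * sigma_B >= (1/2) |<[A, B]>|
# for two non-commuting spin observables and a random qubit state.
sx = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli X
sy = np.array([[0, -1j], [1j, 0]])                # Pauli Y

rng = np.random.default_rng(1)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

def expval(op):
    return (psi.conj() @ op @ psi).real

def sigma(op):
    # variance clipped at 0 to guard against tiny negative round-off
    return np.sqrt(max(expval(op @ op) - expval(op) ** 2, 0.0))

lhs = sigma(sx) * sigma(sy)
rhs = 0.5 * abs(psi.conj() @ (sx @ sy - sy @ sx) @ psi)
```

Any Hermitian pair and any normalized state would do; the inequality is a theorem of the formalism, so the check holds regardless of the random seed.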
=== Composite systems and entanglement === When two different quantum systems are considered together, the Hilbert space of the combined system is the tensor product of the Hilbert spaces of the two components. For example, let A and B be two quantum systems, with Hilbert spaces ℋ_A and ℋ_B, respectively. The Hilbert space of the composite system is then ℋ_AB = ℋ_A ⊗ ℋ_B. If the state for the first system is the vector ψ_A and the state for the second system is ψ_B, then the state of the composite system is ψ_A ⊗ ψ_B. Not all states in the joint Hilbert space ℋ_AB can be written in this form, however, because the superposition principle implies that linear combinations of these "separable" or "product states" are also valid. For example, if ψ_A and ϕ_A are both possible states for system A, and likewise ψ_B and ϕ_B are both possible states for system B, then (1/√2)(ψ_A ⊗ ψ_B + ϕ_A ⊗ ϕ_B) is a valid joint state that is not separable. States that are not separable are called entangled. If the state for a composite system is entangled, it is impossible to describe either component system A or system B by a state vector. One can instead define reduced density matrices that describe the statistics that can be obtained by making measurements on either component system alone. 
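The tensor-product construction and reduced density matrices can be sketched concretely; the Bell state used below is a standard example of an entangled state, not one named in the text:

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2) built from Kronecker (tensor)
# products of single-qubit basis vectors.
zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

# Schmidt-rank test: reshaped to a 2x2 matrix of amplitudes, a
# separable (product) state has rank 1; the Bell state has rank 2.
rank = np.linalg.matrix_rank(bell.reshape(2, 2))

# Reduced density matrix of qubit A: partial trace over qubit B.
rho = np.outer(bell, bell.conj()).reshape(2, 2, 2, 2)  # (a, b, a', b')
rho_A = np.einsum('abcb->ac', rho)                     # sums over b
```

The reduced state comes out maximally mixed (I/2): each qubit alone looks completely random, even though the joint state is pure, which is exactly the "loss of information" the partial trace exhibits.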
This necessarily causes a loss of information, though: knowing the reduced density matrices of the individual systems is not enough to reconstruct the state of the composite system. Just as density matrices specify the state of a subsystem of a larger system, analogously, positive operator-valued measures (POVMs) describe the effect on a subsystem of a measurement performed on a larger system. POVMs are extensively used in quantum information theory. As described above, entanglement is a key feature of models of measurement processes in which an apparatus becomes entangled with the system being measured. Systems interacting with the environment in which they reside generally become entangled with that environment, a phenomenon known as quantum decoherence. This can explain why, in practice, quantum effects are difficult to observe in systems larger than microscopic. === Equivalence between formulations === There are many mathematically equivalent formulations of quantum mechanics. One of the oldest and most common is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics – matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger). An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics. === Symmetries and conservation laws === The Hamiltonian H {\displaystyle H} is known as the generator of time evolution, since it defines a unitary time-evolution operator U ( t ) = e − i H t / ℏ {\displaystyle U(t)=e^{-iHt/\hbar }} for each value of t {\displaystyle t} . 
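The exponential relation U(t) = e^(−iHt/ħ) can be illustrated in finite dimensions (a sketch assuming Python with NumPy, ħ = 1; the 4×4 Hamiltonian is random and purely illustrative): building U(t) from the eigendecomposition of a Hermitian H yields a unitary operator, and an observable that commutes with H keeps a constant expectation value.

```python
import numpy as np

# Sketch (hbar = 1): U(t) = exp(-iHt) via the eigendecomposition of a
# Hermitian H; an observable commuting with H is conserved.
rng = np.random.default_rng(42)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2                     # Hermitian Hamiltonian

def evolution_operator(H, t):
    w, V = np.linalg.eigh(H)                 # H = V diag(w) V^dagger
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

U = evolution_operator(H, 1.7)
unitary = np.allclose(U.conj().T @ U, np.eye(4))

A = H @ H + 3 * H                            # commutes with H by construction
psi0 = np.zeros(4, dtype=complex); psi0[0] = 1.0
psi_t = U @ psi0
exp0 = (psi0.conj() @ A @ psi0).real
exp_t = (psi_t.conj() @ A @ psi_t).real

print(unitary, np.isclose(exp0, exp_t))      # True True
```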
From this relation between U ( t ) {\displaystyle U(t)} and H {\displaystyle H} , it follows that any observable A {\displaystyle A} that commutes with H {\displaystyle H} will be conserved: its expectation value will not change over time.: 471  This statement generalizes, as mathematically, any Hermitian operator A {\displaystyle A} can generate a family of unitary operators parameterized by a variable t {\displaystyle t} . Under the evolution generated by A {\displaystyle A} , any observable B {\displaystyle B} that commutes with A {\displaystyle A} will be conserved. Moreover, if B {\displaystyle B} is conserved by evolution under A {\displaystyle A} , then A {\displaystyle A} is conserved under the evolution generated by B {\displaystyle B} . This implies a quantum version of the result proven by Emmy Noether in classical (Lagrangian) mechanics: for every differentiable symmetry of a Hamiltonian, there exists a corresponding conservation law. == Examples == === Free particle === The simplest example of a quantum system with a position degree of freedom is a free particle in a single spatial dimension. A free particle is one which is not subject to external influences, so that its Hamiltonian consists only of its kinetic energy: H = 1 2 m P 2 = − ℏ 2 2 m d 2 d x 2 . {\displaystyle H={\frac {1}{2m}}P^{2}=-{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}.} The general solution of the Schrödinger equation is given by ψ ( x , t ) = 1 2 π ∫ − ∞ ∞ ψ ^ ( k , 0 ) e i ( k x − ℏ k 2 2 m t ) d k , {\displaystyle \psi (x,t)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }{\hat {\psi }}(k,0)e^{i(kx-{\frac {\hbar k^{2}}{2m}}t)}\mathrm {d} k,} which is a superposition of all possible plane waves e i ( k x − ℏ k 2 2 m t ) {\displaystyle e^{i(kx-{\frac {\hbar k^{2}}{2m}}t)}} , which are eigenstates of the momentum operator with momentum p = ℏ k {\displaystyle p=\hbar k} . 
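This plane-wave superposition gives a direct numerical method: multiply the Fourier coefficients by the phase e^(−iħk²t/2m). The sketch below (assuming Python with NumPy, units ħ = m = 1; the moving Gaussian initial state and its mean momentum k0 are illustrative choices) evolves a wave packet this way:

```python
import numpy as np

# Free-particle evolution via the plane-wave superposition: each
# Fourier coefficient picks up the phase exp(-i*hbar*k^2*t/(2m)).
hbar = m = 1.0
x = np.linspace(-40.0, 40.0, 4096)
dx = x[1] - x[0]
k0 = 2.0                                     # mean momentum (illustrative)
psi0 = np.pi ** -0.25 * np.exp(-x**2 / 2) * np.exp(1j * k0 * x)

k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
def evolve(psi, t):
    phase = np.exp(-1j * hbar * k**2 * t / (2 * m))
    return np.fft.ifft(np.fft.fft(psi) * phase)

psi_t = evolve(psi0, t=3.0)
norm = np.sum(np.abs(psi_t) ** 2) * dx       # stays 1: evolution is unitary
mean_x = np.sum(x * np.abs(psi_t) ** 2) * dx # center moves at speed k0/m
print(round(norm, 6), round(mean_x, 3))
```

The norm is preserved and the packet's center drifts at the group velocity ħk0/m, matching the classical free particle, while the packet itself spreads.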
The coefficients of the superposition are ψ ^ ( k , 0 ) {\displaystyle {\hat {\psi }}(k,0)} , which is the Fourier transform of the initial quantum state ψ ( x , 0 ) {\displaystyle \psi (x,0)} . It is not possible for the solution to be a single momentum eigenstate, or a single position eigenstate, as these are not normalizable quantum states. Instead, we can consider a Gaussian wave packet: ψ ( x , 0 ) = 1 π a 4 e − x 2 2 a {\displaystyle \psi (x,0)={\frac {1}{\sqrt[{4}]{\pi a}}}e^{-{\frac {x^{2}}{2a}}}} which has Fourier transform, and therefore momentum distribution ψ ^ ( k , 0 ) = a π 4 e − a k 2 2 . {\displaystyle {\hat {\psi }}(k,0)={\sqrt[{4}]{\frac {a}{\pi }}}e^{-{\frac {ak^{2}}{2}}}.} We see that as we make a {\displaystyle a} smaller the spread in position gets smaller, but the spread in momentum gets larger. Conversely, by making a {\displaystyle a} larger we make the spread in momentum smaller, but the spread in position gets larger. This illustrates the uncertainty principle. As we let the Gaussian wave packet evolve in time, we see that its center moves through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more and more uncertain. The uncertainty in momentum, however, stays constant. === Particle in a box === The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy everywhere inside a certain region, and therefore infinite potential energy everywhere outside that region.: 77–78  For the one-dimensional case in the x {\displaystyle x} direction, the time-independent Schrödinger equation may be written − ℏ 2 2 m d 2 ψ d x 2 = E ψ . 
{\displaystyle -{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}\psi }{dx^{2}}}=E\psi .} With the differential operator defined by p ^ x = − i ℏ d d x {\displaystyle {\hat {p}}_{x}=-i\hbar {\frac {d}{dx}}} the previous equation is evocative of the classic kinetic energy analogue, 1 2 m p ^ x 2 = E , {\displaystyle {\frac {1}{2m}}{\hat {p}}_{x}^{2}=E,} with state ψ {\displaystyle \psi } in this case having energy E {\displaystyle E} coincident with the kinetic energy of the particle. The general solutions of the Schrödinger equation for the particle in a box are ψ ( x ) = A e i k x + B e − i k x E = ℏ 2 k 2 2 m {\displaystyle \psi (x)=Ae^{ikx}+Be^{-ikx}\qquad \qquad E={\frac {\hbar ^{2}k^{2}}{2m}}} or, from Euler's formula, ψ ( x ) = C sin ⁡ ( k x ) + D cos ⁡ ( k x ) . {\displaystyle \psi (x)=C\sin(kx)+D\cos(kx).\!} The infinite potential walls of the box determine the values of C , D , {\displaystyle C,D,} and k {\displaystyle k} at x = 0 {\displaystyle x=0} and x = L {\displaystyle x=L} where ψ {\displaystyle \psi } must be zero. Thus, at x = 0 {\displaystyle x=0} , ψ ( 0 ) = 0 = C sin ⁡ ( 0 ) + D cos ⁡ ( 0 ) = D {\displaystyle \psi (0)=0=C\sin(0)+D\cos(0)=D} and D = 0 {\displaystyle D=0} . At x = L {\displaystyle x=L} , ψ ( L ) = 0 = C sin ⁡ ( k L ) , {\displaystyle \psi (L)=0=C\sin(kL),} in which C {\displaystyle C} cannot be zero as this would conflict with the postulate that ψ {\displaystyle \psi } has norm 1. Therefore, since sin ⁡ ( k L ) = 0 {\displaystyle \sin(kL)=0} , k L {\displaystyle kL} must be an integer multiple of π {\displaystyle \pi } , k = n π L n = 1 , 2 , 3 , … . {\displaystyle k={\frac {n\pi }{L}}\qquad \qquad n=1,2,3,\ldots .} This constraint on k {\displaystyle k} implies a constraint on the energy levels, yielding E n = ℏ 2 π 2 n 2 2 m L 2 = n 2 h 2 8 m L 2 . 
{\displaystyle E_{n}={\frac {\hbar ^{2}\pi ^{2}n^{2}}{2mL^{2}}}={\frac {n^{2}h^{2}}{8mL^{2}}}.} A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy. === Harmonic oscillator === As in the classical case, the potential for the quantum harmonic oscillator is given by: 234  V ( x ) = 1 2 m ω 2 x 2 . {\displaystyle V(x)={\frac {1}{2}}m\omega ^{2}x^{2}.} This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by ψ n ( x ) = 1 2 n n ! ⋅ ( m ω π ℏ ) 1 / 4 ⋅ e − m ω x 2 2 ℏ ⋅ H n ( m ω ℏ x ) , {\displaystyle \psi _{n}(x)={\sqrt {\frac {1}{2^{n}\,n!}}}\cdot \left({\frac {m\omega }{\pi \hbar }}\right)^{1/4}\cdot e^{-{\frac {m\omega x^{2}}{2\hbar }}}\cdot H_{n}\left({\sqrt {\frac {m\omega }{\hbar }}}x\right),\qquad } n = 0 , 1 , 2 , … . {\displaystyle n=0,1,2,\ldots .} where Hn are the Hermite polynomials H n ( x ) = ( − 1 ) n e x 2 d n d x n ( e − x 2 ) , {\displaystyle H_{n}(x)=(-1)^{n}e^{x^{2}}{\frac {d^{n}}{dx^{n}}}\left(e^{-x^{2}}\right),} and the corresponding energy levels are E n = ℏ ω ( n + 1 2 ) . {\displaystyle E_{n}=\hbar \omega \left(n+{1 \over 2}\right).} This is another example illustrating the discretization of energy for bound states. 
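The discretization of bound-state energies can be checked numerically. This sketch (assuming Python with NumPy, in units ħ = m = ω = 1; grid size and extent are arbitrary choices) diagonalizes a finite-difference harmonic-oscillator Hamiltonian and recovers E_n ≈ n + 1/2:

```python
import numpy as np

# Finite-difference sketch of the quantum harmonic oscillator
# (hbar = m = omega = 1): H = -1/2 d^2/dx^2 + 1/2 x^2 on a grid.
N = 1000
x = np.linspace(-8.0, 8.0, N)
dx = x[1] - x[0]

diag = 1.0 / dx**2 + 0.5 * x**2              # kinetic + potential, main diagonal
off = -0.5 / dx**2 * np.ones(N - 1)          # three-point Laplacian, off-diagonals
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)[:4]
print(energies)   # close to [0.5, 1.5, 2.5, 3.5]
```

The evenly spaced ladder ħω(n + 1/2) emerges directly from the spectrum, in contrast to the n² scaling of the particle in a box.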
=== Mach–Zehnder interferometer === The Mach–Zehnder interferometer (MZI) illustrates the concepts of superposition and interference with linear algebra in dimension 2, rather than differential equations. It can be seen as a simplified version of the double-slit experiment, but it is of interest in its own right, for example in the delayed choice quantum eraser, the Elitzur–Vaidman bomb tester, and in studies of quantum entanglement. We can model a photon going through the interferometer by considering that at each point it can be in a superposition of only two paths: the "lower" path which starts from the left, goes straight through both beam splitters, and ends at the top, and the "upper" path which starts from the bottom, goes straight through both beam splitters, and ends at the right. The quantum state of the photon is therefore a vector ψ ∈ C 2 {\displaystyle \psi \in \mathbb {C} ^{2}} that is a superposition of the "lower" path ψ l = ( 1 0 ) {\displaystyle \psi _{l}={\begin{pmatrix}1\\0\end{pmatrix}}} and the "upper" path ψ u = ( 0 1 ) {\displaystyle \psi _{u}={\begin{pmatrix}0\\1\end{pmatrix}}} , that is, ψ = α ψ l + β ψ u {\displaystyle \psi =\alpha \psi _{l}+\beta \psi _{u}} for complex α , β {\displaystyle \alpha ,\beta } . In order to respect the postulate that ⟨ ψ , ψ ⟩ = 1 {\displaystyle \langle \psi ,\psi \rangle =1} we require that | α | 2 + | β | 2 = 1 {\displaystyle |\alpha |^{2}+|\beta |^{2}=1} . Both beam splitters are modelled as the unitary matrix B = 1 2 ( 1 i i 1 ) {\displaystyle B={\frac {1}{\sqrt {2}}}{\begin{pmatrix}1&i\\i&1\end{pmatrix}}} , which means that when a photon meets the beam splitter it will either stay on the same path with a probability amplitude of 1 / 2 {\displaystyle 1/{\sqrt {2}}} , or be reflected to the other path with a probability amplitude of i / 2 {\displaystyle i/{\sqrt {2}}} . 
The phase shifter on the upper arm is modelled as the unitary matrix P = ( 1 0 0 e i Δ Φ ) {\displaystyle P={\begin{pmatrix}1&0\\0&e^{i\Delta \Phi }\end{pmatrix}}} , which means that if the photon is on the "upper" path it will gain a relative phase of Δ Φ {\displaystyle \Delta \Phi } , and it will stay unchanged if it is in the lower path. A photon that enters the interferometer from the left will then be acted upon with a beam splitter B {\displaystyle B} , a phase shifter P {\displaystyle P} , and another beam splitter B {\displaystyle B} , and so end up in the state B P B ψ l = i e i Δ Φ / 2 ( − sin ⁡ ( Δ Φ / 2 ) cos ⁡ ( Δ Φ / 2 ) ) , {\displaystyle BPB\psi _{l}=ie^{i\Delta \Phi /2}{\begin{pmatrix}-\sin(\Delta \Phi /2)\\\cos(\Delta \Phi /2)\end{pmatrix}},} and the probabilities that it will be detected at the right or at the top are given respectively by p ( u ) = | ⟨ ψ u , B P B ψ l ⟩ | 2 = cos 2 ⁡ Δ Φ 2 , {\displaystyle p(u)=|\langle \psi _{u},BPB\psi _{l}\rangle |^{2}=\cos ^{2}{\frac {\Delta \Phi }{2}},} p ( l ) = | ⟨ ψ l , B P B ψ l ⟩ | 2 = sin 2 ⁡ Δ Φ 2 . {\displaystyle p(l)=|\langle \psi _{l},BPB\psi _{l}\rangle |^{2}=\sin ^{2}{\frac {\Delta \Phi }{2}}.} One can therefore use the Mach–Zehnder interferometer to estimate the phase shift by estimating these probabilities. It is interesting to consider what would happen if the photon were definitely in either the "lower" or "upper" paths between the beam splitters. This can be accomplished by blocking one of the paths, or equivalently by removing the first beam splitter (and feeding the photon from the left or the bottom, as desired). In both cases, there will be no interference between the paths anymore, and the probabilities are given by p ( u ) = p ( l ) = 1 / 2 {\displaystyle p(u)=p(l)=1/2} , independently of the phase Δ Φ {\displaystyle \Delta \Phi } . 
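Both situations reduce to 2×2 matrix products, so they are easy to verify directly (a sketch assuming Python with NumPy; the helper `mzi_probs` and its `first_splitter` flag are names introduced here for illustration):

```python
import numpy as np

# Mach-Zehnder interferometer as 2x2 linear algebra: detection
# probabilities with and without the first beam splitter.
def mzi_probs(dphi, first_splitter=True):
    B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # beam splitter
    P = np.array([[1, 0], [0, np.exp(1j * dphi)]])  # phase shifter on upper arm
    psi_l = np.array([1.0, 0.0])                    # photon enters on lower path
    psi = B @ psi_l if first_splitter else psi_l
    out = B @ P @ psi
    return np.abs(out) ** 2                         # [p(l), p(u)]

p_l, p_u = mzi_probs(np.pi / 3)
print(np.isclose(p_u, np.cos(np.pi / 6) ** 2))      # p(u) = cos^2(dphi/2): True
print(np.isclose(p_l, np.sin(np.pi / 6) ** 2))      # p(l) = sin^2(dphi/2): True
print(mzi_probs(np.pi / 3, first_splitter=False))   # [0.5, 0.5]: no interference
```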
From this we can conclude that the photon does not take one path or another after the first beam splitter, but rather that it is in a genuine quantum superposition of the two paths. == Applications == Quantum mechanics has had enormous success in explaining many of the features of our universe, with regard to small-scale and discrete quantities and interactions which cannot be explained by classical methods. Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Solid-state physics and materials science are dependent upon quantum mechanics. In many aspects, modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the optical amplifier and the laser, the transistor and semiconductors such as the microprocessor, medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA. == Relation to other scientific theories == === Classical mechanics === The rules of quantum mechanics assert that the state space of a system is a Hilbert space and that observables of the system are Hermitian operators acting on vectors in that space – although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system, a necessary step in making physical predictions. An important guide for making these choices is the correspondence principle, a heuristic which states that the predictions of quantum mechanics reduce to those of classical mechanics in the regime of large quantum numbers. 
One can also start from an established classical model of a particular system, and then try to guess the underlying quantum model that would give rise to the classical model in the correspondence limit. This approach is known as quantization.: 299  When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator.: 234  Complications arise with chaotic systems, which do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.: 353  Quantum decoherence is a mechanism through which quantum systems lose coherence, and thus become incapable of displaying many typically quantum effects: quantum superpositions become simply probabilistic mixtures, and quantum entanglement becomes simply classical correlations.: 687–730  Quantum coherence is not typically evident at macroscopic scales, though at temperatures approaching absolute zero quantum behavior may manifest macroscopically. Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics. === Special relativity and electrodynamics === Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein–Gordon equation or the Dirac equation. 
While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. Quantum electrodynamics is, along with general relativity, one of the most accurate physical theories ever devised. The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been used since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical − e 2 / ( 4 π ϵ 0 r ) {\displaystyle \textstyle -e^{2}/(4\pi \epsilon _{_{0}}r)} Coulomb potential.: 285  Likewise, in a Stern–Gerlach experiment, a charged particle is modeled as a quantum system, while the background magnetic field is described classically.: 26  This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles. Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory), by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg. 
=== Relation to general relativity === Even though the predictions of both quantum theory and general relativity have been supported by rigorous and repeated empirical evidence, their abstract formalisms contradict each other and they have proven extremely difficult to incorporate into one consistent, cohesive model. Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in physical cosmology and the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th- and 21st-century physics. This TOE would combine not only the models of subatomic physics but also derive the four fundamental forces of nature from a single force or phenomenon. One proposal for doing so is string theory, which posits that the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries gravitational force. Another popular theory is loop quantum gravity (LQG), which describes quantum properties of gravity and is thus a theory of quantum spacetime. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. This theory describes space as an extremely fine fabric "woven" of finite loops called spin networks. The evolution of a spin network over time is called a spin foam. 
The characteristic length scale of a spin foam is the Planck length, approximately 1.616×10−35 m, and so lengths shorter than the Planck length are not physically meaningful in LQG. == Philosophical implications == Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. The arguments centre on the probabilistic nature of quantum mechanics, the difficulties with wavefunction collapse and the related measurement problem, and quantum nonlocality. Perhaps the only consensus that exists about these issues is that there is no consensus. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics." According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics." The views of Niels Bohr, Werner Heisenberg and other physicists are often grouped together as the "Copenhagen interpretation". According to these views, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but is instead a final renunciation of the classical idea of "causality". Bohr in particular emphasized that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations. Copenhagen-type interpretations were adopted by Nobel laureates in quantum physics, including Bohr, Heisenberg, Schrödinger, Feynman, and Zeilinger as well as 21st-century researchers in quantum foundations. Albert Einstein, himself one of the founders of quantum theory, was troubled by its apparent failure to respect some cherished metaphysical principles, such as determinism and locality. 
Einstein's long-running exchanges with Bohr about the meaning and status of quantum mechanics are now known as the Bohr–Einstein debates. Einstein believed that underlying quantum mechanics must be a theory that explicitly forbids action at a distance. He argued that quantum mechanics was incomplete, a theory that was valid but not fundamental, analogous to how thermodynamics is valid, but the fundamental theory behind it is statistical mechanics. In 1935, Einstein and his collaborators Boris Podolsky and Nathan Rosen published an argument that the principle of locality implies the incompleteness of quantum mechanics, a thought experiment later termed the Einstein–Podolsky–Rosen paradox. In 1964, John Bell showed that EPR's principle of locality, together with determinism, was actually incompatible with quantum mechanics: they implied constraints on the correlations produced by distant systems, now known as Bell inequalities, that can be violated by entangled particles. Since then several experiments have been performed to obtain these correlations, with the result that they do in fact violate Bell inequalities, and thus falsify the conjunction of locality with determinism. Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. This solves the measurement problem. Everett's many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes. This is a consequence of removing the axiom of the collapse of the wave packet. 
All possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Several attempts have been made to make sense of this and derive the Born rule, with no consensus on whether they have been successful. Relational quantum mechanics appeared in the late 1990s as a modern derivative of Copenhagen-type ideas, and QBism was developed some years later. == History == Quantum mechanics was developed in the early decades of the 20th century, driven by the need to explain phenomena that, in some cases, had been observed in earlier times. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations. In 1803 English polymath Thomas Young described the famous double-slit experiment. This experiment played a major role in the general acceptance of the wave theory of light. During the early 19th century, chemical research by John Dalton and Amedeo Avogadro lent weight to the atomic theory of matter, an idea that James Clerk Maxwell, Ludwig Boltzmann and others built upon to establish the kinetic theory of gases. The successes of kinetic theory gave further credence to the idea that matter is composed of atoms, yet the theory also had shortcomings that would only be resolved by the development of quantum mechanics. While the early conception of atoms from Greek philosophy had been that they were indivisible units – the word "atom" deriving from the Greek for 'uncuttable' – the 19th century saw the formulation of hypotheses about subatomic structure. 
One important discovery in that regard was Michael Faraday's 1838 observation of a glow caused by an electrical discharge inside a glass tube containing gas at low pressure. Julius Plücker, Johann Wilhelm Hittorf and Eugen Goldstein carried on and improved upon Faraday's work, leading to the identification of cathode rays, which J. J. Thomson found to consist of subatomic particles that would be called electrons. The black-body radiation problem was discovered by Gustav Kirchhoff in 1859. In 1900, Max Planck proposed the hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy packets), yielding a calculation that precisely matched the observed patterns of black-body radiation. The word quantum derives from the Latin, meaning "how great" or "how much". According to Planck, quantities of energy could be thought of as divided into "elements" whose size (E) would be proportional to their frequency (ν): E = h ν {\displaystyle E=h\nu \ } , where h is the Planck constant. Planck cautiously insisted that this was only an aspect of the processes of absorption and emission of radiation and was not the physical reality of the radiation. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. Niels Bohr then developed Planck's ideas about radiation into a model of the hydrogen atom that successfully predicted the spectral lines of hydrogen. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon), with a discrete amount of energy that depends on its frequency. 
In his paper "On the Quantum Theory of Radiation", Einstein expanded on the interaction between energy and matter to explain the absorption and emission of energy by atoms. Although overshadowed at the time by his general theory of relativity, this paper articulated the mechanism underlying the stimulated emission of radiation, which became the basis of the laser. This phase is known as the old quantum theory. Never complete or self-consistent, the old quantum theory was rather a set of heuristic corrections to classical mechanics. The theory is now understood as a semi-classical approximation to modern quantum mechanics. Notable results from this period include, in addition to the work of Planck, Einstein and Bohr mentioned above, Einstein and Peter Debye's work on the specific heat of solids, Bohr and Hendrika Johanna van Leeuwen's proof that classical physics cannot account for diamagnetism, and Arnold Sommerfeld's extension of the Bohr model to include special-relativistic effects. In the mid-1920s quantum mechanics was developed to become the standard formulation for atomic physics. In 1923, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. Building on de Broglie's approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics. Born introduced the probabilistic interpretation of Schrödinger's wave function in July 1926. Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927. By 1930, quantum mechanics had been further unified and formalized by David Hilbert, Paul Dirac and John von Neumann with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the 'observer'. 
It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies. While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors and superfluids. == See also == == Explanatory notes == == References == == Further reading == == External links == Introduction to Quantum Theory at Quantiki. Quantum Physics Made Relatively Simple: three video lectures by Hans Bethe. Course material Quantum Cook Book and PHYS 201: Fundamentals of Physics II by Ramamurti Shankar, Yale OpenCourseware. Modern Physics: With waves, thermodynamics, and optics – an online textbook. MIT OpenCourseWare: Chemistry and Physics. See 8.04, 8.05 and 8.06. 5½ Examples in Quantum Mechanics. Philosophy Ismael, Jenann. "Quantum Mechanics". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Zalta, Edward N. (ed.). "Philosophical Issues in Quantum Theory". Stanford Encyclopedia of Philosophy.
Wikipedia/Quantum_Mechanics
In the history of calculus, the calculus controversy (German: Prioritätsstreit, lit. 'priority dispute') was an argument between mathematicians Isaac Newton and Gottfried Wilhelm Leibniz over who had first discovered calculus. The question was a major intellectual controversy, beginning in 1699 and reaching its peak in 1712. Leibniz had published his work on calculus first, but Newton's supporters accused Leibniz of plagiarizing Newton's unpublished ideas. The modern consensus is that the two men independently developed their ideas. Their creation of calculus has been called "the greatest advance in mathematics that had taken place since the time of Archimedes." Newton stated he had begun working on a form of calculus (which he called "The Method of Fluxions and Infinite Series") in 1666, at the age of 23, but did not publish it until decades later (1737), and then only as a minor annotation in the back of one of his works (a relevant Newton manuscript of October 1666 is now published among his mathematical papers). Gottfried Leibniz began working on his variant of calculus in 1674, and in 1684 published his first paper employing it, "Nova Methodus pro Maximis et Minimis". L'Hôpital published a text on Leibniz's calculus in 1696 (in which he recognized that Newton's Principia of 1687 was "nearly all about this calculus"). Meanwhile, Newton, though he explained his (geometrical) form of calculus in Section I of Book I of the Principia of 1687, did not explain his eventual fluxional notation for the calculus in print until 1693 (in part) and 1704 (in full). The prevailing opinion in the 18th century was against Leibniz (in Britain, though not in the German-speaking world). Today, the consensus is that Leibniz and Newton independently invented and described calculus in Europe in the 17th century; their work has been described as more than a mere "synthesis of previously distinct pieces of mathematical technique", though "it was certainly this in part". 
It was certainly Isaac Newton who first devised a new infinitesimal calculus and elaborated it into a widely extensible algorithm, whose potentialities he fully understood; of equal certainty, differential and integral calculus, the fount of great developments flowing continuously from 1684 to the present day, was created independently by Gottfried Leibniz. One author has identified the dispute as being about "profoundly different" methods: Despite ... points of resemblance, the methods [of Newton and Leibniz] are profoundly different, so making the priority row a nonsense. On the other hand, other authors have emphasized the equivalences and mutual translatability of the methods: here N Guicciardini (2003) appears to confirm L'Hôpital (1696) (already cited): the Newtonian and Leibnizian schools shared a common mathematical method. They adopted two algorithms, the analytical method of fluxions and the differential and integral calculus, which were translatable one into the other. == Scientific priority in the 17th century == In the 17th century the question of scientific priority was of great importance to scientists; however, during this period, scientific journals had just begun to appear, and a generally accepted mechanism for fixing priority when publishing discoveries had not yet formed. Among the methods used by scientists were anagrams, sealed envelopes placed in a safe place, correspondence with other scientists, and private messages. A letter to Marin Mersenne for a French scientist, or to Henry Oldenburg, the secretary of the Royal Society of London, for an English one, had essentially the status of a published article. 
The discoverer could "time-stamp" the moment of his discovery, and prove that he knew of it at the point the letter was sealed, and had not copied it from anything subsequently published; nevertheless, where an idea was subsequently published in conjunction with its use in a particularly valuable context, this might take priority over an earlier discoverer's work, which had no obvious application. Further, a mathematician's claim could be undermined by counter-claims that he had not truly invented an idea, but merely improved on someone else's idea, an improvement that required little skill, and was based on facts that were already known. A series of high-profile disputes about scientific priority in the 17th century—the era that the American science historian D. Meli called "the golden age of the mud-slinging priority disputes"—is associated with Leibniz. The first of them occurred at the beginning of 1673, during his first visit to London, when in the presence of the famous mathematician John Pell he presented his method of approximating series by differences. To Pell's remark that this discovery had already been made by François Regnaud and published in 1670 in Lyon by Gabriel Mouton, Leibniz answered the next day. In a letter to Oldenburg, he wrote that, having looked at Mouton's book, he conceded Pell was correct, but offered his draft notes, which contained nuances not found by Regnaud and Mouton. Leibniz's integrity was thus vindicated, but the episode would be recalled later. On the same visit to London, Leibniz found himself in the opposite position. On February 1, 1673, at a meeting of the Royal Society of London, he demonstrated his mechanical calculator. The curator of the experiments of the Society, Robert Hooke, carefully examined the device and even removed the back cover. A few days later, in the absence of Leibniz, Hooke criticized the German scientist's machine, saying that he could make a simpler model. 
Leibniz, who learned about this, returned to Paris and categorically rejected Hooke's claim in a letter to Oldenburg, formulating principles of correct scientific behaviour: "We know that respectable and modest people, when they have thought of something consistent with another's discoveries, prefer to ascribe their own improvements and additions to the discoverer, so as not to arouse suspicions of intellectual dishonesty; the desire for true generosity should guide them, instead of the lying thirst for dishonest profit." To illustrate the proper behaviour, Leibniz gives the example of Nicolas-Claude Fabri de Peiresc and Pierre Gassendi, who performed astronomical observations similar to those made earlier by Galileo Galilei and Johannes Hevelius, respectively. On learning that they had not made their discoveries first, the French scientists passed their data on to the discoverers. Newton's approach to the priority problem can be illustrated by the example of the discovery of the inverse-square law as applied to the dynamics of bodies moving under the influence of gravity. Based on an analysis of Kepler's laws and his own calculations, Robert Hooke made the assumption that motion under such conditions should occur along orbits similar to ellipses. Unable to rigorously prove this claim, he reported it to Newton. Without entering into further correspondence with Hooke, Newton solved this problem, as well as its inverse, proving that the law of inverse squares follows from the ellipticity of the orbits. This discovery was set forth in his famous work Philosophiæ Naturalis Principia Mathematica without mention of Hooke. At the insistence of the astronomer Edmond Halley, to whom the manuscript was handed over for editing and publication, a phrase was included in the text stating that the agreement of Kepler's first law with the inverse-square law had been "independently approved by Wren, Hooke and Halley." 
According to a remark of Vladimir Arnold, Newton, faced with the choice between refusing to publish his discoveries and a constant struggle for priority, chose both. == Background == === Invention of Differential and Integral Calculus === By the time of Newton and Leibniz, European mathematicians had already made a significant contribution to the formation of the ideas of mathematical analysis. The Dutchman Simon Stevin (1548–1620), the Italian Luca Valerio (1553–1618), and the German Johannes Kepler (1571–1630) were engaged in the development of the ancient "method of exhaustion" for calculating areas and volumes. The latter's ideas apparently influenced – directly or through Galileo Galilei – the "method of indivisibles" developed by Bonaventura Cavalieri (1598–1647). The last years of Leibniz's life, 1710–1716, were embittered by a long controversy with John Keill, Newton, and others, over whether Leibniz had discovered calculus independently of Newton, or whether he had merely invented another notation for ideas that were fundamentally Newton's. No participant doubted that Newton had already developed his method of fluxions when Leibniz began working on the differential calculus, yet there was seemingly no proof beyond Newton's word. He had published a calculation of a tangent with the note: "This is only a special case of a general method whereby I can calculate curves and determine maxima, minima, and centers of gravity." How this was done he explained to a pupil 20 years later, when Leibniz's articles were already widely read. Newton's manuscripts came to light only after his death. The infinitesimal calculus can be expressed either in the notation of fluxions or in that of differentials, or, as noted above, it was also expressed by Newton in geometrical form, as in the Principia of 1687. Newton employed fluxions as early as 1666, but did not publish an account of his notation until 1693. 
The earliest use of differentials in Leibniz's notebooks may be traced to 1675. He employed this notation in a 1677 letter to Newton. The differential notation also appeared in Leibniz's memoir of 1684. The claim that Leibniz invented the calculus independently of Newton rests on the basis that Leibniz:
1. Published a description of his method some years before Newton printed anything on fluxions;
2. Always alluded to the discovery as being his own invention (this statement went unchallenged for some years);
3. Enjoyed the strong presumption that he acted in good faith;
4. Demonstrated in his private papers his development of the ideas of calculus in a manner independent of the path taken by Newton.
According to Leibniz's detractors, the fact that Leibniz's claim went unchallenged for some years is immaterial. To rebut this case it is sufficient to show that he:
1. Saw some of Newton's papers on the subject in or before 1675, or at least 1677, and
2. Obtained the fundamental ideas of the calculus from those papers.
No attempt was made to rebut #4, which was not known at the time, but which provides the strongest evidence that Leibniz came to the calculus independently of Newton. This evidence, however, is still questionable based on the discovery, during the inquest and after, that Leibniz both back-dated and changed fundamentals of his "original" notes, not only in this intellectual conflict, but in several others. He also published "anonymous" slanders of Newton regarding their controversy, of which he initially tried to deny authorship. If good faith is nevertheless assumed, however, Leibniz's notes as presented to the inquest came first to integration, which he saw as a generalization of the summation of infinite series, whereas Newton began from derivatives. 
However, to view the development of calculus as entirely independent between the work of Newton and Leibniz misses that both had some knowledge of the methods of the other (though Newton did develop most fundamentals before Leibniz began) and worked together on a few aspects, in particular power series, as is shown in a letter to Henry Oldenburg dated 24 October 1676, where Newton remarks that Leibniz had developed a number of methods, one of which was new to him. Both Leibniz and Newton could see the other was far along towards inventing calculus (Leibniz in particular mentions it) but only Leibniz was prodded thereby into publication. That Leibniz saw some of Newton's manuscripts had always been likely. In 1849, C. I. Gerhardt, while going through Leibniz's manuscripts, found extracts from Newton's De Analysi per Equationes Numero Terminorum Infinitas (published in 1704 as part of the De Quadratura Curvarum but also previously circulated among mathematicians starting with Newton giving a copy to Isaac Barrow in 1669 and Barrow sending it to John Collins) in Leibniz's handwriting, the existence of which had been previously unsuspected, along with notes re-expressing the content of these extracts in Leibniz's differential notation. Hence when these extracts were made becomes all-important. It is known that a copy of Newton's manuscript had been sent to Ehrenfried Walther von Tschirnhaus in May 1675, a time when he and Leibniz were collaborating; it is not impossible that these extracts were made then. It is also possible that they may have been made in 1676, when Leibniz discussed analysis by infinite series with Collins and Oldenburg. It is probable that they would have then shown him Newton's manuscript on the subject, a copy of which one or both of them surely possessed. On the other hand, it may be supposed that Leibniz made the extracts from the printed copy in or after 1704. 
Shortly before his death, Leibniz admitted in a letter to Abbé Antonio Schinella Conti, that in 1676 Collins had shown him some of Newton's papers, but Leibniz also implied that they were of little or no value. Presumably he was referring to Newton's letters of 13 June and 24 October 1676, and to the letter of 10 December 1672, on the method of tangents, extracts from which accompanied the letter of 13 June. Whether Leibniz made use of the manuscript from which he had copied extracts, or whether he had previously invented the calculus, are questions on which no direct evidence is available at present. It is, however, worth noting that the unpublished Portsmouth Papers show that when Newton entered into the dispute in 1711, he picked this manuscript as the one which had likely fallen into Leibniz's hands. At that time there was no direct evidence that Leibniz had seen Newton's manuscript before it was printed in 1704; hence Newton's conjecture was not published. But Gerhardt's discovery of a copy made by Leibniz appears to confirm its accuracy. Those who question Leibniz's good faith allege that to a man of his ability, the manuscript, especially if supplemented by the letter of 10 December 1672, sufficed to give him a clue as to the methods of the calculus. Since Newton's work at issue did employ the fluxional notation, anyone building on that work would have to invent a notation, but some deny this. == Development == The quarrel was a retrospective affair. In 1696, already some years later than the events that became the subject of the quarrel, the position still looked potentially peaceful: Newton and Leibniz had each made limited acknowledgements of the other's work, and L'Hôpital's 1696 book about the calculus from a Leibnizian point of view had also acknowledged Newton's published work of the 1680s as "nearly all about this calculus" ("presque tout de ce calcul"), while expressing preference for the convenience of Leibniz's notation. 
At first, there was no reason to suspect Leibniz's good faith. In 1699, Nicolas Fatio de Duillier, a Swiss mathematician known for his work on the zodiacal light problem, publicly accused Leibniz of plagiarizing Newton, although he had privately accused Leibniz of plagiarism twice in letters to Christiaan Huygens in 1692. It was not until the 1704 publication of an anonymous review of Newton's tract on quadrature, which implied Newton had borrowed the idea of the fluxional calculus from Leibniz, that any responsible mathematician doubted that Leibniz had invented the calculus independently of Newton. With respect to the review of Newton's quadrature work, all admit that there was no justification or authority for the statements made therein, which were rightly attributed to Leibniz. But the subsequent discussion led to a critical examination of the whole question, and doubts emerged: "Had Leibniz derived the fundamental idea of the calculus from Newton?" The case against Leibniz, as it appeared to Newton's friends, was summed up in the Commercium Epistolicum of 1712, which referenced all allegations. This document was carefully crafted by Newton. No such summary (with facts, dates, and references) of the case for Leibniz was issued by his friends; but Johann Bernoulli attempted to indirectly weaken the evidence by attacking the personal character of Newton in a letter dated 7 June 1713. When pressed for an explanation, Bernoulli most solemnly denied having written the letter. In accepting the denial, Newton added in a private letter to Bernoulli the following remarks, giving his claimed reasons for taking part in the controversy. He said, "I have never grasped at fame among foreign nations, but I am very desirous to preserve my character for honesty, which the author of that epistle, as if by the authority of a great judge, had endeavoured to wrest from me. 
Now that I am old, I have little pleasure in mathematical studies, and I have never tried to propagate my opinions over the world, but I have rather taken care not to involve myself in disputes on account of them." Leibniz explained his silence as follows, in a letter to Conti dated 9 April 1716: In order to respond point by point to all the work published against me, I would have to go into much minutiae that occurred thirty, forty years ago, of which I remember little: I would have to search my old letters, of which many are lost. Moreover, in most cases, I did not keep a copy, and when I did, the copy is buried in a great heap of papers, which I could sort through only with time and patience. I have enjoyed little leisure, being so weighted down of late with occupations of a totally different nature. In any event, a bias favouring Newton tainted the whole affair from the outset. The Royal Society, of which Isaac Newton was president at the time, set up a committee to pronounce on the priority dispute, in response to a letter it had received from Leibniz. That committee never asked Leibniz to give his version of the events. The report of the committee, finding in favour of Newton, was written and published as "Commercium Epistolicum" (mentioned above) by Newton early in 1713. But Leibniz did not see it until the autumn of 1714. === Leibniz's death and end of dispute === Leibniz never agreed to acknowledge Newton's priority in inventing calculus. He attempted to write his own version of the history of differential calculus, but, as in the case of the history of the rulers of Braunschweig, he did not complete the matter. At the end of 1715, Leibniz accepted Johann Bernoulli's offer to organize another mathematical competition, in which different approaches had to prove their worth. This time the problem was taken from the area later called the calculus of variations – it was required to construct a tangent line to a family of curves. 
A letter was written on 25 November and transmitted to Newton in London through the Abbé Conti. The problem was formulated in unclear terms, and only later did it become evident that a general solution was required, not a particular one as Newton understood it. After the British side published their decision, Leibniz published his own, more general solution, and thus formally won the competition. For his part, Newton stubbornly sought to destroy his opponent. Not having achieved this with the "Report", he continued his research, spending hundreds of hours on it. His next study, entitled "Observations upon the preceding Epistle", was inspired by a letter from Leibniz to Conti in March 1716, which criticized Newton's philosophical views; no new facts were given in this document. == See also == Possibility of transmission of Kerala School results to Europe List of scientific priority disputes == References == This article incorporates text from this source, which is in the public domain: Ball, W. W. Rouse (1908). A Short Account of the History of Mathematics. New York: MacMillan. == Sources == Арнольд, В. И. (1989). Гюйгенс и Барроу, Ньютон и Гук - Первые шаги математического анализа и теории катастроф. М.: Наука. p. 98. ISBN 5-02-013935-1. Arnold, Vladimir (1990). Huygens and Barrow, Newton and Hooke: Pioneers in mathematical analysis and catastrophe theory from evolvents to quasicrystals. Translated by Primrose, Eric J.F. Birkhäuser Verlag. ISBN 3-7643-2383-3. W. W. Rouse Ball (1908). A Short Account of the History of Mathematics, 4th ed. Bardi, Jason Socrates (2006). The Calculus Wars: Newton, Leibniz, and the Greatest Mathematical Clash of All Time. New York: Thunder's Mouth Press. ISBN 978-1-56025-992-3. Boyer, C. B. (1949). The History of the Calculus and its conceptual development. Dover Publications, inc. Richard C. 
Brown (2012) Tangled origins of the Leibnitzian Calculus: A case study of mathematical revolution, World Scientific ISBN 9789814390804 Ivor Grattan-Guinness (1997) The Norton History of the Mathematical Sciences. W W Norton. Hall, A. R. (1980). Philosophers at War: The Quarrel between Newton and Leibniz. Cambridge University Press. p. 356. ISBN 0-521-22732-1. Stephen Hawking (1988) A Brief History of Time From the Big Bang to Black Holes. Bantam Books. Kandaswamy, Anand. The Newton/Leibniz Conflict in Context. Meli, D. B. (1993). Equivalence and Priority: Newton versus Leibniz: Including Leibniz's Unpublished Manuscripts on the Principia. Clarendon Press. p. 318. ISBN 0-19-850143-9. == External links == Gottfried Wilhelm Leibniz, Sämtliche Schriften und Briefe, Reihe VII: Mathematische Schriften, vol. 5: Infinitesimalmathematik 1674-1676, Berlin: Akademie Verlag, 2008, pp. 288–295 ("Analyseos tetragonisticae pars secunda", 29 October 1675) and 321–331 ("Methodi tangentium inversae exempla", 11 November 1675). Gottfried Wilhelm Leibniz, "Nova Methodus pro Maximis et Minimis...", 1684 (Latin original) (English translation) Isaac Newton, "Newton's Waste Book (Part 3) (Normalized Version)": 16 May 1666 entry (The Newton Project) Isaac Newton, "De Analysi per Equationes Numero Terminorum Infinitas (Of the Quadrature of Curves and Analysis by Equations of an Infinite Number of Terms)", in: Sir Isaac Newton's Two Treatises, James Bettenham, 1745.
Wikipedia/Newton-Leibniz_calculus_controversy
In mathematics, the inverse function of a function f (also called the inverse of f) is a function that undoes the operation of f. The inverse of f exists if and only if f is bijective, and if it exists, is denoted by f − 1 . {\displaystyle f^{-1}.} For a function f : X → Y {\displaystyle f\colon X\to Y} , its inverse f − 1 : Y → X {\displaystyle f^{-1}\colon Y\to X} admits an explicit description: it sends each element y ∈ Y {\displaystyle y\in Y} to the unique element x ∈ X {\displaystyle x\in X} such that f(x) = y. As an example, consider the real-valued function of a real variable given by f(x) = 5x − 7. One can think of f as the function which multiplies its input by 5 then subtracts 7 from the result. To undo this, one adds 7 to the input, then divides the result by 5. Therefore, the inverse of f is the function f − 1 : R → R {\displaystyle f^{-1}\colon \mathbb {R} \to \mathbb {R} } defined by f − 1 ( y ) = y + 7 5 . {\displaystyle f^{-1}(y)={\frac {y+7}{5}}.} == Definitions == Let f be a function whose domain is the set X, and whose codomain is the set Y. Then f is invertible if there exists a function g from Y to X such that g ( f ( x ) ) = x {\displaystyle g(f(x))=x} for all x ∈ X {\displaystyle x\in X} and f ( g ( y ) ) = y {\displaystyle f(g(y))=y} for all y ∈ Y {\displaystyle y\in Y} . If f is invertible, then there is exactly one function g satisfying this property. The function g is called the inverse of f, and is usually denoted as f −1, a notation introduced by John Frederick William Herschel in 1813. The function f is invertible if and only if it is bijective. This is because the condition g ( f ( x ) ) = x {\displaystyle g(f(x))=x} for all x ∈ X {\displaystyle x\in X} implies that f is injective, and the condition f ( g ( y ) ) = y {\displaystyle f(g(y))=y} for all y ∈ Y {\displaystyle y\in Y} implies that f is surjective. 
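The two defining identities of invertibility can be checked directly for the worked example f(x) = 5x − 7. A minimal Python sketch (the function names are illustrative, not from the text):

```python
# f multiplies its input by 5, then subtracts 7.
def f(x):
    return 5 * x - 7

# The inverse undoes this: add 7 to the input, then divide by 5.
def f_inv(y):
    return (y + 7) / 5

# The two conditions for invertibility: g(f(x)) = x and f(g(y)) = y.
assert all(f_inv(f(x)) == x for x in [-3.0, 0.0, 2.5])
assert all(f(f_inv(y)) == y for y in [-12.0, 3.0, 8.0])
```

Because f is bijective on the reals, both round trips recover their input exactly for these values.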
The inverse function f −1 to f can be explicitly described as the function f − 1 ( y ) = ( the unique element x ∈ X such that f ( x ) = y ) {\displaystyle f^{-1}(y)=({\text{the unique element }}x\in X{\text{ such that }}f(x)=y)} . === Inverses and composition === Recall that if f is an invertible function with domain X and codomain Y, then f − 1 ( f ( x ) ) = x {\displaystyle f^{-1}\left(f(x)\right)=x} , for every x ∈ X {\displaystyle x\in X} and f ( f − 1 ( y ) ) = y {\displaystyle f\left(f^{-1}(y)\right)=y} for every y ∈ Y {\displaystyle y\in Y} . Using the composition of functions, this statement can be rewritten to the following equations between functions: f − 1 ∘ f = id X {\displaystyle f^{-1}\circ f=\operatorname {id} _{X}} and f ∘ f − 1 = id Y , {\displaystyle f\circ f^{-1}=\operatorname {id} _{Y},} where idX is the identity function on the set X; that is, the function that leaves its argument unchanged. In category theory, this statement is used as the definition of an inverse morphism. Considering function composition helps to understand the notation f −1. Repeatedly composing a function f: X→X with itself is called iteration. If f is applied n times, starting with the value x, then this is written as f n(x); so f 2(x) = f (f (x)), etc. Since f −1(f (x)) = x, composing f −1 and f n yields f n−1, "undoing" the effect of one application of f. === Notation === While the notation f −1(x) might be misunderstood, (f(x))−1 certainly denotes the multiplicative inverse of f(x) and has nothing to do with the inverse function of f. The notation f ⟨ − 1 ⟩ {\displaystyle f^{\langle -1\rangle }} might be used for the inverse function to avoid ambiguity with the multiplicative inverse. In keeping with the general notation, some English authors use expressions like sin−1(x) to denote the inverse of the sine function applied to x (actually a partial inverse; see below). 
Other authors feel that this may be confused with the notation for the multiplicative inverse of sin (x), which can be denoted as (sin (x))−1. To avoid any confusion, an inverse trigonometric function is often indicated by the prefix "arc" (for Latin arcus). For instance, the inverse of the sine function is typically called the arcsine function, written as arcsin(x). Similarly, the inverse of a hyperbolic function is indicated by the prefix "ar" (for Latin ārea). For instance, the inverse of the hyperbolic sine function is typically written as arsinh(x). The expressions like sin−1(x) can still be useful to distinguish the multivalued inverse from the partial inverse: sin − 1 ⁡ ( x ) = { ( − 1 ) n arcsin ⁡ ( x ) + π n : n ∈ Z } {\displaystyle \sin ^{-1}(x)=\{(-1)^{n}\arcsin(x)+\pi n:n\in \mathbb {Z} \}} . Other inverse special functions are sometimes prefixed with the prefix "inv", if the ambiguity of the f −1 notation should be avoided. == Examples == === Squaring and square root functions === The function f: R → [0,∞) given by f(x) = x2 is not injective because ( − x ) 2 = x 2 {\displaystyle (-x)^{2}=x^{2}} for all x ∈ R {\displaystyle x\in \mathbb {R} } . Therefore, f is not invertible. If the domain of the function is restricted to the nonnegative reals, that is, we take the function f : [ 0 , ∞ ) → [ 0 , ∞ ) ; x ↦ x 2 {\displaystyle f\colon [0,\infty )\to [0,\infty );\ x\mapsto x^{2}} with the same rule as before, then the function is bijective and so, invertible. The inverse function here is called the (positive) square root function and is denoted by x ↦ x {\displaystyle x\mapsto {\sqrt {x}}} . === Standard inverse functions === The following table shows several standard functions and their inverses: === Formula for the inverse === Many functions given by algebraic formulas possess a formula for their inverse. 
This is because the inverse f − 1 {\displaystyle f^{-1}} of an invertible function f : R → R {\displaystyle f\colon \mathbb {R} \to \mathbb {R} } has an explicit description as f − 1 ( y ) = ( the unique element x ∈ R such that f ( x ) = y ) {\displaystyle f^{-1}(y)=({\text{the unique element }}x\in \mathbb {R} {\text{ such that }}f(x)=y)} . This allows one to easily determine inverses of many functions that are given by algebraic formulas. For example, if f is the function f ( x ) = ( 2 x + 8 ) 3 {\displaystyle f(x)=(2x+8)^{3}} then to determine f − 1 ( y ) {\displaystyle f^{-1}(y)} for a real number y, one must find the unique real number x such that (2x + 8)3 = y. This equation can be solved: y = ( 2 x + 8 ) 3 y 3 = 2 x + 8 y 3 − 8 = 2 x y 3 − 8 2 = x . {\displaystyle {\begin{aligned}y&=(2x+8)^{3}\\{\sqrt[{3}]{y}}&=2x+8\\{\sqrt[{3}]{y}}-8&=2x\\{\dfrac {{\sqrt[{3}]{y}}-8}{2}}&=x.\end{aligned}}} Thus the inverse function f −1 is given by the formula f − 1 ( y ) = y 3 − 8 2 . {\displaystyle f^{-1}(y)={\frac {{\sqrt[{3}]{y}}-8}{2}}.} Sometimes, the inverse of a function cannot be expressed by a closed-form formula. For example, if f is the function f ( x ) = x − sin ⁡ x , {\displaystyle f(x)=x-\sin x,} then f is a bijection, and therefore possesses an inverse function f −1. The formula for this inverse has an expression as an infinite sum: f − 1 ( y ) = ∑ n = 1 ∞ y n / 3 n ! lim θ → 0 ( d n − 1 d θ n − 1 ( θ θ − sin ⁡ ( θ ) 3 ) n ) . {\displaystyle f^{-1}(y)=\sum _{n=1}^{\infty }{\frac {y^{n/3}}{n!}}\lim _{\theta \to 0}\left({\frac {\mathrm {d} ^{\,n-1}}{\mathrm {d} \theta ^{\,n-1}}}\left({\frac {\theta }{\sqrt[{3}]{\theta -\sin(\theta )}}}\right)^{n}\right).} == Properties == Since a function is a special type of binary relation, many of the properties of an inverse function correspond to properties of converse relations. === Uniqueness === If an inverse function exists for a given function f, then it is unique. 
This follows since the inverse function must be the converse relation, which is completely determined by f. === Symmetry === There is a symmetry between a function and its inverse. Specifically, if f is an invertible function with domain X and codomain Y, then its inverse f −1 has domain Y and image X, and the inverse of f −1 is the original function f. In symbols, for functions f:X → Y and f−1:Y → X, f − 1 ∘ f = id X {\displaystyle f^{-1}\circ f=\operatorname {id} _{X}} and f ∘ f − 1 = id Y . {\displaystyle f\circ f^{-1}=\operatorname {id} _{Y}.} This statement is a consequence of the implication that for f to be invertible it must be bijective. The involutory nature of the inverse can be concisely expressed by ( f − 1 ) − 1 = f . {\displaystyle \left(f^{-1}\right)^{-1}=f.} The inverse of a composition of functions is given by ( g ∘ f ) − 1 = f − 1 ∘ g − 1 . {\displaystyle (g\circ f)^{-1}=f^{-1}\circ g^{-1}.} Notice that the order of g and f have been reversed; to undo f followed by g, we must first undo g, and then undo f. For example, let f(x) = 3x and let g(x) = x + 5. Then the composition g ∘ f is the function that first multiplies by three and then adds five, ( g ∘ f ) ( x ) = 3 x + 5. {\displaystyle (g\circ f)(x)=3x+5.} To reverse this process, we must first subtract five, and then divide by three, ( g ∘ f ) − 1 ( x ) = 1 3 ( x − 5 ) . {\displaystyle (g\circ f)^{-1}(x)={\tfrac {1}{3}}(x-5).} This is the composition (f −1 ∘ g −1)(x). === Self-inverses === If X is a set, then the identity function on X is its own inverse: id X − 1 = id X . {\displaystyle {\operatorname {id} _{X}}^{-1}=\operatorname {id} _{X}.} More generally, a function f : X → X is equal to its own inverse, if and only if the composition f ∘ f is equal to idX. Such a function is called an involution. === Graph of the inverse === If f is invertible, then the graph of the function y = f − 1 ( x ) {\displaystyle y=f^{-1}(x)} is the same as the graph of the equation x = f ( y ) . 
{\displaystyle x=f(y).} This is identical to the equation y = f(x) that defines the graph of f, except that the roles of x and y have been reversed. Thus the graph of f −1 can be obtained from the graph of f by switching the positions of the x and y axes. This is equivalent to reflecting the graph across the line y = x. === Inverses and derivatives === By the inverse function theorem, a continuous function of a single variable f : A → R {\displaystyle f\colon A\to \mathbb {R} } (where A ⊆ R {\displaystyle A\subseteq \mathbb {R} } ) is invertible on its range (image) if and only if it is either strictly increasing or decreasing (with no local maxima or minima). For example, the function f ( x ) = x 3 + x {\displaystyle f(x)=x^{3}+x} is invertible, since the derivative f′(x) = 3x2 + 1 is always positive. If the function f is differentiable on an interval I and f′(x) ≠ 0 for each x ∈ I, then the inverse f −1 is differentiable on f(I). If y = f(x), the derivative of the inverse is given by the inverse function theorem, ( f − 1 ) ′ ( y ) = 1 f ′ ( x ) . {\displaystyle \left(f^{-1}\right)^{\prime }(y)={\frac {1}{f'\left(x\right)}}.} Using Leibniz's notation the formula above can be written as d x d y = 1 d y / d x . {\displaystyle {\frac {dx}{dy}}={\frac {1}{dy/dx}}.} This result follows from the chain rule (see the article on inverse functions and differentiation). The inverse function theorem can be generalized to functions of several variables. Specifically, a continuously differentiable multivariable function f : Rn → Rn is invertible in a neighborhood of a point p as long as the Jacobian matrix of f at p is invertible. In this case, the Jacobian of f −1 at f(p) is the matrix inverse of the Jacobian of f at p. 
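As a concrete numerical check of the one-variable formula above, the sketch below inverts f(x) = x³ + x (whose derivative 3x² + 1 is always positive) by bisection and compares a finite-difference derivative of the inverse against 1/f′(x). The helper names, search bracket, and tolerances are illustrative choices, not part of any standard API.

```python
# Sketch: verify (f^-1)'(y) = 1 / f'(x) numerically for f(x) = x^3 + x.
def f(x):
    return x**3 + x            # strictly increasing on all of R

def f_prime(x):
    return 3 * x**2 + 1        # always positive, so f is invertible

def f_inverse(y, lo=-100.0, hi=100.0, tol=1e-12):
    """Invert f by bisection; valid because f is strictly increasing."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = 2.0
y = f(x)                        # y = 10
h = 1e-6                        # step for a central difference
numeric = (f_inverse(y + h) - f_inverse(y - h)) / (2 * h)
exact = 1 / f_prime(x)          # 1/13, from the inverse function theorem
print(numeric, exact)           # both approximately 0.0769
```

Bisection is a deliberately simple (if slow) way to evaluate f −1 here; any root-finder that exploits the monotonicity of f would serve equally well.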
== Real-world examples == Let f be the function that converts a temperature in degrees Celsius to a temperature in degrees Fahrenheit, F = f ( C ) = 9 5 C + 32 ; {\displaystyle F=f(C)={\tfrac {9}{5}}C+32;} then its inverse function converts degrees Fahrenheit to degrees Celsius, C = f − 1 ( F ) = 5 9 ( F − 32 ) , {\displaystyle C=f^{-1}(F)={\tfrac {5}{9}}(F-32),} since f − 1 ( f ( C ) ) = f − 1 ( 9 5 C + 32 ) = 5 9 ( ( 9 5 C + 32 ) − 32 ) = C , for every value of C , and f ( f − 1 ( F ) ) = f ( 5 9 ( F − 32 ) ) = 9 5 ( 5 9 ( F − 32 ) ) + 32 = F , for every value of F . {\displaystyle {\begin{aligned}f^{-1}(f(C))={}&f^{-1}\left({\tfrac {9}{5}}C+32\right)={\tfrac {5}{9}}\left(({\tfrac {9}{5}}C+32)-32\right)=C,\\&{\text{for every value of }}C,{\text{ and }}\\[6pt]f\left(f^{-1}(F)\right)={}&f\left({\tfrac {5}{9}}(F-32)\right)={\tfrac {9}{5}}\left({\tfrac {5}{9}}(F-32)\right)+32=F,\\&{\text{for every value of }}F.\end{aligned}}} Suppose f assigns each child in a family its birth year. An inverse function would output which child was born in a given year. However, if the family has children born in the same year (for instance, twins or triplets, etc.) then the output cannot be known when the input is the common birth year. As well, if a year is given in which no child was born then a child cannot be named. But if each child was born in a separate year, and if we restrict attention to the three years in which a child was born, then we do have an inverse function. For example, f ( Allan ) = 2005 , f ( Brad ) = 2007 , f ( Cary ) = 2001 f − 1 ( 2005 ) = Allan , f − 1 ( 2007 ) = Brad , f − 1 ( 2001 ) = Cary {\displaystyle {\begin{aligned}f({\text{Allan}})&=2005,\quad &f({\text{Brad}})&=2007,\quad &f({\text{Cary}})&=2001\\f^{-1}(2005)&={\text{Allan}},\quad &f^{-1}(2007)&={\text{Brad}},\quad &f^{-1}(2001)&={\text{Cary}}\end{aligned}}} Let R be the function that leads to an x percentage rise of some quantity, and F be the function producing an x percentage fall. 
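The rise/fall pair can be made concrete with a short sketch; the function names R and F follow the definitions above, and the dollar amounts are illustrative.

```python
# x-percent rise and x-percent fall, as defined above (illustrative sketch).
def R(amount, x):
    return amount * (1 + x / 100)   # x percent rise

def F(amount, x):
    return amount * (1 - x / 100)   # x percent fall

after_rise = R(100.0, 10)           # ≈ 110
after_fall = F(after_rise, 10)      # ≈ 99, not 100: F does not undo R
print(after_fall)

# The genuine inverse of a 10% rise divides by the same factor, 1.1:
restored = R(100.0, 10) / 1.1       # ≈ 100
```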
Applied to $100 with x = 10%, we find that applying the first function followed by the second does not restore the original value of $100, demonstrating the fact that, despite appearances, these two functions are not inverses of each other. The formula to calculate the pH of a solution is pH = −log10[H+]. In many cases we need to find the concentration of acid from a pH measurement. The inverse function [H+] = 10−pH is used. == Generalizations == === Partial inverses === Even if a function f is not one-to-one, it may be possible to define a partial inverse of f by restricting the domain. For example, the function f ( x ) = x 2 {\displaystyle f(x)=x^{2}} is not one-to-one, since x2 = (−x)2. However, the function becomes one-to-one if we restrict to the domain x ≥ 0, in which case f − 1 ( y ) = y . {\displaystyle f^{-1}(y)={\sqrt {y}}.} (If we instead restrict to the domain x ≤ 0, then the inverse is the negative of the square root of y.) === Full inverses === Alternatively, there is no need to restrict the domain if we are content with the inverse being a multivalued function: f − 1 ( y ) = ± y . {\displaystyle f^{-1}(y)=\pm {\sqrt {y}}.} Sometimes, this multivalued inverse is called the full inverse of f, and the portions (such as √x and −√x) are called branches. The most important branch of a multivalued function (e.g. the positive square root) is called the principal branch, and its value at y is called the principal value of f −1(y). For a continuous function on the real line, one branch is required between each pair of local extrema. For example, the inverse of a cubic function with a local maximum and a local minimum has three branches (see the adjacent picture). === Trigonometric inverses === The above considerations are particularly important for defining the inverses of trigonometric functions. 
For example, the sine function is not one-to-one, since sin ⁡ ( x + 2 π ) = sin ⁡ ( x ) {\displaystyle \sin(x+2\pi )=\sin(x)} for every real x (and more generally sin(x + 2πn) = sin(x) for every integer n). However, the sine is one-to-one on the interval [−⁠π/2⁠, ⁠π/2⁠], and the corresponding partial inverse is called the arcsine. This is considered the principal branch of the inverse sine, so the principal value of the inverse sine is always between −⁠π/2⁠ and ⁠π/2⁠. The following table describes the principal branch of each inverse trigonometric function: === Left and right inverses === Function composition on the left and on the right need not coincide. In general, the conditions "There exists g such that g(f(x))=x" and "There exists g such that f(g(x))=x" imply different properties of f. For example, let f: R → [0, ∞) denote the squaring map, such that f(x) = x2 for all x in R, and let g: [0, ∞) → R denote the square root map, such that g(x) = √x for all x ≥ 0. Then f(g(x)) = x for all x in [0, ∞); that is, g is a right inverse to f. However, g is not a left inverse to f, since, e.g., g(f(−1)) = 1 ≠ −1. ==== Left inverses ==== If f: X → Y, a left inverse for f (or retraction of f ) is a function g: Y → X such that composing f with g from the left gives the identity function g ∘ f = id X ⁡ . {\displaystyle g\circ f=\operatorname {id} _{X}{\text{.}}} That is, the function g satisfies the rule If f(x)=y, then g(y)=x. The function g must equal the inverse of f on the image of f, but may take any values for elements of Y not in the image. A function f with nonempty domain is injective if and only if it has a left inverse. An elementary proof runs as follows: If g is the left inverse of f, and f(x) = f(y), then g(f(x)) = g(f(y)) = x = y. If nonempty f: X → Y is injective, construct a left inverse g: Y → X as follows: for all y ∈ Y, if y is in the image of f, then there exists x ∈ X such that f(x) = y. Let g(y) = x; this definition is unique because f is injective. 
Otherwise, let g(y) be an arbitrary element of X. For all x ∈ X, f(x) is in the image of f. By construction, g(f(x)) = x, the condition for a left inverse. In classical mathematics, every injective function f with a nonempty domain necessarily has a left inverse; however, this may fail in constructive mathematics. For instance, a left inverse of the inclusion {0,1} → R of the two-element set in the reals violates indecomposability by giving a retraction of the real line to the set {0,1}. ==== Right inverses ==== A right inverse for f (or section of f ) is a function h: Y → X such that f ∘ h = id Y . {\displaystyle f\circ h=\operatorname {id} _{Y}.} That is, the function h satisfies the rule If h ( y ) = x {\displaystyle \displaystyle h(y)=x} , then f ( x ) = y . {\displaystyle \displaystyle f(x)=y.} Thus, h(y) may be any of the elements of X that map to y under f. A function f has a right inverse if and only if it is surjective (though constructing such an inverse in general requires the axiom of choice). If h is the right inverse of f, then f is surjective. For all y ∈ Y {\displaystyle y\in Y} , there is x = h ( y ) {\displaystyle x=h(y)} such that f ( x ) = f ( h ( y ) ) = y {\displaystyle f(x)=f(h(y))=y} . If f is surjective, f has a right inverse h, which can be constructed as follows: for all y ∈ Y {\displaystyle y\in Y} , there is at least one x ∈ X {\displaystyle x\in X} such that f ( x ) = y {\displaystyle f(x)=y} (because f is surjective), so we choose one to be the value of h(y). ==== Two-sided inverses ==== An inverse that is both a left and right inverse (a two-sided inverse), if it exists, must be unique. In fact, if a function has a left inverse and a right inverse, they are both the same two-sided inverse, so it can be called the inverse.
If g {\displaystyle g} is a left inverse and h {\displaystyle h} a right inverse of f {\displaystyle f} , then for all y ∈ Y {\displaystyle y\in Y} , g ( y ) = g ( f ( h ( y ) ) ) = h ( y ) {\displaystyle g(y)=g(f(h(y)))=h(y)} . A function has a two-sided inverse if and only if it is bijective. A bijective function f is injective, so it has a left inverse (if f is the empty function, f : ∅ → ∅ {\displaystyle f\colon \varnothing \to \varnothing } is its own left inverse). f is surjective, so it has a right inverse. By the above, the left and right inverse are the same. If f has a two-sided inverse g, then g is a left inverse and right inverse of f, so f is injective and surjective. === Preimages === If f: X → Y is any function (not necessarily invertible), the preimage (or inverse image) of an element y ∈ Y is defined to be the set of all elements of X that map to y: f − 1 ( y ) = { x ∈ X : f ( x ) = y } . {\displaystyle f^{-1}(y)=\left\{x\in X:f(x)=y\right\}.} The preimage of y can be thought of as the image of y under the (multivalued) full inverse of the function f. The notion can be generalized to subsets of the range. Specifically, if S is any subset of Y, the preimage of S, denoted by f − 1 ( S ) {\displaystyle f^{-1}(S)} , is the set of all elements of X that map to S: f − 1 ( S ) = { x ∈ X : f ( x ) ∈ S } . {\displaystyle f^{-1}(S)=\left\{x\in X:f(x)\in S\right\}.} For example, take the function f: R → R; x ↦ x2. This function is not invertible as it is not bijective, but preimages may be defined for subsets of the codomain, e.g. f − 1 ( { 1 , 4 , 9 , 16 } ) = { − 4 , − 3 , − 2 , − 1 , 1 , 2 , 3 , 4 } {\displaystyle f^{-1}(\left\{1,4,9,16\right\})=\left\{-4,-3,-2,-1,1,2,3,4\right\}} . The original notion and its generalization are related by the identity f − 1 ( y ) = f − 1 ( { y } ) . {\displaystyle f^{-1}(y)=f^{-1}(\{y\}).} The preimage of a single element y ∈ Y – a singleton set {y} – is sometimes called the fiber of y.
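For a finite domain, the preimage of a subset can be computed by direct search. The sketch below reproduces the example f −1({1, 4, 9, 16}) using an integer stand-in for the real line; the helper `preimage` is our own, not a library function.

```python
def preimage(f, domain, S):
    """All elements of the (finite) domain that f maps into the set S."""
    return {x for x in domain if f(x) in S}

square = lambda x: x * x
domain = range(-10, 11)     # finite integer stand-in for the real line

print(preimage(square, domain, {1, 4, 9, 16}))
# {-4, -3, -2, -1, 1, 2, 3, 4}, matching the example above
```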
When Y is the set of real numbers, it is common to refer to f −1({y}) as a level set. == See also == Lagrange inversion theorem, gives the Taylor series expansion of the inverse function of an analytic function Integral of inverse functions Inverse Fourier transform Reversible computing == External links == "Inverse function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Invertible_function
Classical mechanics is a physical theory describing the motion of objects such as projectiles, parts of machinery, spacecraft, planets, stars, and galaxies. The development of classical mechanics involved substantial change in the methods and philosophy of physics. The qualifier classical distinguishes this type of mechanics from physics developed after the revolutions in physics of the early 20th century, all of which revealed limitations in classical mechanics. The earliest formulation of classical mechanics is often referred to as Newtonian mechanics. It consists of the physical concepts based on the 17th century foundational works of Sir Isaac Newton, and the mathematical methods invented by Newton, Gottfried Wilhelm Leibniz, Leonhard Euler and others to describe the motion of bodies under the influence of forces. Later, methods based on energy were developed by Euler, Joseph-Louis Lagrange, William Rowan Hamilton and others, leading to the development of analytical mechanics (which includes Lagrangian mechanics and Hamiltonian mechanics). These advances, made predominantly in the 18th and 19th centuries, extended beyond earlier works; they are, with some modification, used in all areas of modern physics. If the present state of an object that obeys the laws of classical mechanics is known, it is possible to determine how it will move in the future, and how it has moved in the past. Chaos theory shows that the long term predictions of classical mechanics are not reliable. Classical mechanics provides accurate results when studying objects that are not extremely massive and have speeds not approaching the speed of light. With objects about the size of an atom's diameter, it becomes necessary to use quantum mechanics. To describe velocities approaching the speed of light, special relativity is needed. In cases where objects become extremely massive, general relativity becomes applicable. 
Some modern sources include relativistic mechanics in classical physics, as representing the field in its most developed and accurate form. == Branches == === Traditional division === Classical mechanics was traditionally divided into three main branches. Statics is the branch of classical mechanics that is concerned with the analysis of force and torque acting on a physical system that does not experience an acceleration, but rather is in equilibrium with its environment. Kinematics describes the motion of points, bodies (objects), and systems of bodies (groups of objects) without considering the forces that cause them to move. Kinematics, as a field of study, is often referred to as the "geometry of motion" and is occasionally seen as a branch of mathematics. Dynamics goes beyond merely describing objects' behavior and also considers the forces which explain it. Some authors (for example, Taylor (2005) and Greenwood (1997)) include special relativity within classical dynamics. === Forces vs. energy === Another division is based on the choice of mathematical formalism. Classical mechanics can be mathematically presented in multiple different ways. The physical content of these different formulations is the same, but they provide different insights and facilitate different types of calculations. While the term "Newtonian mechanics" is sometimes used as a synonym for non-relativistic classical physics, it can also refer to a particular formalism based on Newton's laws of motion. Newtonian mechanics in this sense emphasizes force as a vector quantity. In contrast, analytical mechanics uses scalar properties of motion representing the system as a whole—usually its kinetic energy and potential energy. The equations of motion are derived from the scalar quantity by some underlying principle about the scalar's variation. 
Two dominant branches of analytical mechanics are Lagrangian mechanics, which uses generalized coordinates and corresponding generalized velocities in tangent bundle space (the tangent bundle of the configuration space and sometimes called "state space"), and Hamiltonian mechanics, which uses coordinates and corresponding momenta in phase space (the cotangent bundle of the configuration space). Both formulations are equivalent by a Legendre transformation on the generalized coordinates, velocities and momenta; therefore, both contain the same information for describing the dynamics of a system. There are other formulations such as Hamilton–Jacobi theory, Routhian mechanics, and Appell's equation of motion. All equations of motion for particles and fields, in any formalism, can be derived from the widely applicable result called the principle of least action. One result is Noether's theorem, a statement which connects conservation laws to their associated symmetries. === By region of application === Alternatively, a division can be made by region of application: Celestial mechanics, relating to stars, planets and other celestial bodies Continuum mechanics, for materials modelled as a continuum, e.g., solids and fluids (i.e., liquids and gases). Relativistic mechanics (i.e. including the special and general theories of relativity), for bodies whose speed is close to the speed of light. Statistical mechanics, which provides a framework for relating the microscopic properties of individual atoms and molecules to the macroscopic or bulk thermodynamic properties of materials. == Description of objects and their motion == For simplicity, classical mechanics often models real-world objects as point particles, that is, objects with negligible size. The motion of a point particle is determined by a small number of parameters: its position, mass, and the forces applied to it. Classical mechanics also describes the more complex motions of extended non-pointlike objects. 
Euler's laws provide extensions to Newton's laws in this area. The concepts of angular momentum rely on the same calculus used to describe one-dimensional motion. The rocket equation extends the notion of rate of change of an object's momentum to include the effects of an object "losing mass". (These generalizations/extensions are derived from Newton's laws, say, by decomposing a solid body into a collection of points.) In reality, the kind of objects that classical mechanics can describe always have a non-zero size. (The behavior of very small particles, such as the electron, is more accurately described by quantum mechanics.) Objects with non-zero size have more complicated behavior than hypothetical point particles, because of the additional degrees of freedom, e.g., a baseball can spin while it is moving. However, the results for point particles can be used to study such objects by treating them as composite objects, made of a large number of collectively acting point particles. The center of mass of a composite object behaves like a point particle. Classical mechanics assumes that matter and energy have definite, knowable attributes such as location in space and speed. Non-relativistic mechanics also assumes that forces act instantaneously (see also Action at a distance). === Kinematics === The position of a point particle is defined in relation to a coordinate system centered on an arbitrary fixed reference point in space called the origin O. A simple coordinate system might describe the position of a particle P with a vector notated by an arrow labeled r that points from the origin O to point P. In general, the point particle does not need to be stationary relative to O. In cases where P is moving relative to O, r is defined as a function of t, time. In pre-Einstein relativity (known as Galilean relativity), time is considered an absolute, i.e., the time interval that is observed to elapse between any given pair of events is the same for all observers. 
In addition to relying on absolute time, classical mechanics assumes Euclidean geometry for the structure of space. ==== Velocity and speed ==== The velocity, or the rate of change of displacement with time, is defined as the derivative of the position with respect to time: v = d r d t {\displaystyle \mathbf {v} ={\mathrm {d} \mathbf {r} \over \mathrm {d} t}\,\!} . In classical mechanics, velocities are directly additive and subtractive. For example, if one car travels east at 60 km/h and passes another car traveling in the same direction at 50 km/h, the slower car perceives the faster car as traveling east at 60 − 50 = 10 km/h. However, from the perspective of the faster car, the slower car is moving 10 km/h to the west, often denoted as −10 km/h where the sign implies opposite direction. Velocities are directly additive as vector quantities; they must be dealt with using vector analysis. Mathematically, if the velocity of the first object in the previous discussion is denoted by the vector u = ud and the velocity of the second object by the vector v = ve, where u is the speed of the first object, v is the speed of the second object, and d and e are unit vectors in the directions of motion of each object respectively, then the velocity of the first object as seen by the second object is: u ′ = u − v . {\displaystyle \mathbf {u} '=\mathbf {u} -\mathbf {v} \,.} Similarly, the first object sees the velocity of the second object as: v ′ = v − u . {\displaystyle \mathbf {v'} =\mathbf {v} -\mathbf {u} \,.} When both objects are moving in the same direction, this equation can be simplified to: u ′ = ( u − v ) d . {\displaystyle \mathbf {u} '=(u-v)\mathbf {d} \,.} Or, by ignoring direction, the difference can be given in terms of speed only: u ′ = u − v . 
{\displaystyle u'=u-v\,.} ==== Acceleration ==== The acceleration, or rate of change of velocity, is the derivative of the velocity with respect to time (the second derivative of the position with respect to time): a = d v d t = d 2 r d t 2 . {\displaystyle \mathbf {a} ={\mathrm {d} \mathbf {v} \over \mathrm {d} t}={\mathrm {d^{2}} \mathbf {r} \over \mathrm {d} t^{2}}.} Acceleration represents the velocity's change over time. Velocity can change in magnitude, direction, or both. Occasionally, a decrease in the magnitude of velocity "v" is referred to as deceleration, but generally any change in the velocity over time, including deceleration, is referred to as acceleration. ==== Frames of reference ==== While the position, velocity and acceleration of a particle can be described with respect to any observer in any state of motion, classical mechanics assumes the existence of a special family of reference frames in which the mechanical laws of nature take a comparatively simple form. These special reference frames are called inertial frames. An inertial frame is an idealized frame of reference within which an object with zero net force acting upon it moves with a constant velocity; that is, it is either at rest or moving uniformly in a straight line. In an inertial frame Newton's law of motion, F = m a {\displaystyle F=ma} , is valid.: 185  Non-inertial reference frames accelerate in relation to another inertial frame. A body rotating with respect to an inertial frame is not an inertial frame. When viewed from an inertial frame, particles in the non-inertial frame appear to move in ways not explained by forces from existing fields in the reference frame. Hence, it appears that there are other forces that enter the equations of motion solely as a result of the relative acceleration. These forces are referred to as fictitious forces, inertia forces, or pseudo-forces. Consider two reference frames S and S'. 
For observers in each of the reference frames an event has space-time coordinates of (x,y,z,t) in frame S and (x',y',z',t') in frame S'. Assuming time is measured the same in all reference frames, if we require x = x' when t = 0, then the relation between the space-time coordinates of the same event observed from the reference frames S' and S, which are moving at a relative velocity u in the x direction, is: x ′ = x − t u , y ′ = y , z ′ = z , t ′ = t . {\displaystyle {\begin{aligned}x'&=x-tu,\\y'&=y,\\z'&=z,\\t'&=t.\end{aligned}}} This set of formulas defines a group transformation known as the Galilean transformation (informally, the Galilean transform). This group is a limiting case of the Poincaré group used in special relativity. The limiting case applies when the velocity u is very small compared to c, the speed of light. The transformations have the following consequences: v′ = v − u (the velocity v′ of a particle from the perspective of S′ is slower by u than its velocity v from the perspective of S) a′ = a (the acceleration of a particle is the same in any inertial reference frame) F′ = F (the force on a particle is the same in any inertial reference frame) the speed of light is not a constant in classical mechanics, nor does the special position given to the speed of light in relativistic mechanics have a counterpart in classical mechanics. For some problems, it is convenient to use rotating coordinates (reference frames). Thereby one can either keep a mapping to a convenient inertial frame, or introduce additionally a fictitious centrifugal force and Coriolis force. == Newtonian mechanics == A force in physics is any action that causes an object's velocity to change; that is, to accelerate. A force originates from within a field, such as an electro-static field (caused by static electrical charges), electro-magnetic field (caused by moving charges), or gravitational field (caused by mass), among others. 
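The Galilean transformation given earlier in this section, and the velocity-addition rule v′ = v − u that follows from it, can be checked with a minimal sketch; the numbers and the helper name are illustrative.

```python
# Galilean transformation from frame S to frame S', which moves at
# velocity u along the x axis relative to S (illustrative sketch).
def galilean(event, u):
    """Map an event (x, y, z, t) in S to its coordinates in S'."""
    x, y, z, t = event
    return (x - u * t, y, z, t)   # t' = t: time is absolute

event = (100.0, 2.0, 0.0, 4.0)    # x = 100 m at t = 4 s in frame S
u = 20.0                           # S' moves at +20 m/s relative to S

print(galilean(event, u))          # (20.0, 2.0, 0.0, 4.0)

# Differentiating x' = x - u t gives the velocity seen from S':
v = 30.0                           # velocity in S
v_prime = v - u                    # 10.0 in S'
```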
Newton was the first to mathematically express the relationship between force and momentum. Some physicists interpret Newton's second law of motion as a definition of force and mass, while others consider it a fundamental postulate, a law of nature. Either interpretation has the same mathematical consequences, historically known as "Newton's second law": F = d p d t = d ( m v ) d t . {\displaystyle \mathbf {F} ={\mathrm {d} \mathbf {p} \over \mathrm {d} t}={\mathrm {d} (m\mathbf {v} ) \over \mathrm {d} t}.} The quantity mv is called the (canonical) momentum. The net force on a particle is thus equal to the rate of change of the momentum of the particle with time. Since the definition of acceleration is a = dv/dt, the second law can be written in the simplified and more familiar form: F = m a . {\displaystyle \mathbf {F} =m\mathbf {a} \,.} So long as the force acting on a particle is known, Newton's second law is sufficient to describe the motion of a particle. Once independent relations for each force acting on a particle are available, they can be substituted into Newton's second law to obtain an ordinary differential equation, which is called the equation of motion. As an example, assume that friction is the only force acting on the particle, and that it may be modeled as a function of the velocity of the particle, for example: F R = − λ v , {\displaystyle \mathbf {F} _{\rm {R}}=-\lambda \mathbf {v} \,,} where λ is a positive constant, the negative sign states that the force is opposite the sense of the velocity. Then the equation of motion is − λ v = m a = m d v d t . {\displaystyle -\lambda \mathbf {v} =m\mathbf {a} =m{\mathrm {d} \mathbf {v} \over \mathrm {d} t}\,.} This can be integrated to obtain v = v 0 e − λ t / m {\displaystyle \mathbf {v} =\mathbf {v} _{0}e^{{-\lambda t}/{m}}} where v0 is the initial velocity. This means that the velocity of this particle decays exponentially to zero as time progresses. 
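The exponential decay just derived can be confirmed by integrating the equation of motion directly. The sketch below uses a simple explicit Euler step with illustrative parameter values.

```python
import math

# m dv/dt = -lambda * v, integrated with explicit Euler (illustrative values).
m, lam, v0 = 2.0, 0.5, 10.0
dt, steps = 1e-4, 100_000          # integrate out to t = 10 s

v = v0
for _ in range(steps):
    a = -lam * v / m               # Newton's second law: a = F / m
    v += a * dt                    # Euler update

t = steps * dt
exact = v0 * math.exp(-lam * t / m)
print(v, exact)                    # both near 10 * e^(-2.5) ≈ 0.82
```

Shrinking dt makes the Euler result converge to the closed-form solution v = v0 e^(−λt/m).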
In this case, an equivalent viewpoint is that the kinetic energy of the particle is absorbed by friction (which converts it to heat energy in accordance with the conservation of energy), and the particle is slowing down. This expression can be further integrated to obtain the position r of the particle as a function of time. Important forces include the gravitational force and the Lorentz force for electromagnetism. In addition, Newton's third law can sometimes be used to deduce the forces acting on a particle: if it is known that particle A exerts a force F on another particle B, it follows that B must exert an equal and opposite reaction force, −F, on A. The strong form of Newton's third law requires that F and −F act along the line connecting A and B, while the weak form does not. Illustrations of the weak form of Newton's third law are often found for magnetic forces. === Work and energy === If a constant force F is applied to a particle that makes a displacement Δr, the work done by the force is defined as the scalar product of the force and displacement vectors: W = F ⋅ Δ r . {\displaystyle W=\mathbf {F} \cdot \Delta \mathbf {r} \,.} More generally, if the force varies as a function of position as the particle moves from r1 to r2 along a path C, the work done on the particle is given by the line integral W = ∫ C F ( r ) ⋅ d r . {\displaystyle W=\int _{C}\mathbf {F} (\mathbf {r} )\cdot \mathrm {d} \mathbf {r} \,.} If the work done in moving the particle from r1 to r2 is the same no matter what path is taken, the force is said to be conservative. Gravity is a conservative force, as is the force due to an idealized spring, as given by Hooke's law. The force due to friction is non-conservative. The kinetic energy Ek of a particle of mass m travelling at speed v is given by E k = 1 2 m v 2 . 
{\displaystyle E_{\mathrm {k} }={\tfrac {1}{2}}mv^{2}\,.} For extended objects composed of many particles, the kinetic energy of the composite body is the sum of the kinetic energies of the particles. The work–energy theorem states that for a particle of constant mass m, the total work W done on the particle as it moves from position r1 to r2 is equal to the change in kinetic energy Ek of the particle: W = Δ E k = E k 2 − E k 1 = 1 2 m ( v 2 2 − v 1 2 ) . {\displaystyle W=\Delta E_{\mathrm {k} }=E_{\mathrm {k_{2}} }-E_{\mathrm {k_{1}} }={\tfrac {1}{2}}m\left(v_{2}^{\,2}-v_{1}^{\,2}\right).} Conservative forces can be expressed as the gradient of a scalar function, known as the potential energy and denoted Ep: F = − ∇ E p . {\displaystyle \mathbf {F} =-\mathbf {\nabla } E_{\mathrm {p} }\,.} If all the forces acting on a particle are conservative, and Ep is the total potential energy (which is defined as a work of involved forces to rearrange mutual positions of bodies), obtained by summing the potential energies corresponding to each force F ⋅ Δ r = − ∇ E p ⋅ Δ r = − Δ E p . {\displaystyle \mathbf {F} \cdot \Delta \mathbf {r} =-\mathbf {\nabla } E_{\mathrm {p} }\cdot \Delta \mathbf {r} =-\Delta E_{\mathrm {p} }\,.} The decrease in the potential energy is equal to the increase in the kinetic energy − Δ E p = Δ E k ⇒ Δ ( E k + E p ) = 0 . {\displaystyle -\Delta E_{\mathrm {p} }=\Delta E_{\mathrm {k} }\Rightarrow \Delta (E_{\mathrm {k} }+E_{\mathrm {p} })=0\,.} This result is known as conservation of energy and states that the total energy, ∑ E = E k + E p , {\displaystyle \sum E=E_{\mathrm {k} }+E_{\mathrm {p} }\,,} is constant in time. It is often useful, because many commonly encountered forces are conservative. == Lagrangian mechanics == Lagrangian mechanics is a formulation of classical mechanics founded on the stationary-action principle (also known as the principle of least action). 
It was introduced by the Italian-French mathematician and astronomer Joseph-Louis Lagrange in his presentation to the Turin Academy of Science in 1760 culminating in his 1788 grand opus, Mécanique analytique. Lagrangian mechanics describes a mechanical system as a pair ( M , L ) {\textstyle (M,L)} consisting of a configuration space M {\textstyle M} and a smooth function L {\textstyle L} within that space called a Lagrangian. For many systems, L = T − V , {\textstyle L=T-V,} where T {\textstyle T} and V {\displaystyle V} are the kinetic and potential energy of the system, respectively. The stationary action principle requires that the action functional of the system derived from L {\textstyle L} must remain at a stationary point (a maximum, minimum, or saddle) throughout the time evolution of the system. This constraint allows the calculation of the equations of motion of the system using Lagrange's equations. == Hamiltonian mechanics == Hamiltonian mechanics emerged in 1833 as a reformulation of Lagrangian mechanics. Introduced by Sir William Rowan Hamilton, Hamiltonian mechanics replaces (generalized) velocities q ˙ i {\displaystyle {\dot {q}}^{i}} used in Lagrangian mechanics with (generalized) momenta. Both theories provide interpretations of classical mechanics and describe the same physical phenomena. Hamiltonian mechanics has a close relationship with geometry (notably, symplectic geometry and Poisson structures) and serves as a link between classical and quantum mechanics. In this formalism, the dynamics of a system are governed by Hamilton's equations, which express the time derivatives of position and momentum variables in terms of partial derivatives of a function called the Hamiltonian: d q d t = ∂ H ∂ p , d p d t = − ∂ H ∂ q . 
{\displaystyle {\frac {\mathrm {d} {\boldsymbol {q}}}{\mathrm {d} t}}={\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {p}}}},\quad {\frac {\mathrm {d} {\boldsymbol {p}}}{\mathrm {d} t}}=-{\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {q}}}}.} The Hamiltonian is the Legendre transform of the Lagrangian, and in many situations of physical interest it is equal to the total energy of the system. == Limits of validity == Many branches of classical mechanics are simplifications or approximations of more accurate forms; two of the most accurate are general relativity and relativistic statistical mechanics. Geometric optics is an approximation to the quantum theory of light, and does not have a superior "classical" form. When neither quantum mechanics nor classical mechanics applies, such as at the quantum level with many degrees of freedom, quantum field theory (QFT) is of use. QFT deals with small distances and large speeds with many degrees of freedom, as well as the possibility of any change in the number of particles throughout the interaction. When treating large degrees of freedom at the macroscopic level, statistical mechanics becomes useful. Statistical mechanics describes the behavior of large (but countable) numbers of particles and their interactions as a whole at the macroscopic level. Statistical mechanics is mainly used in thermodynamics for systems that lie outside the bounds of the assumptions of classical thermodynamics. In the case of high-velocity objects approaching the speed of light, classical mechanics is enhanced by special relativity. When objects become extremely heavy (i.e., when their Schwarzschild radius is not negligibly small for a given application), deviations from Newtonian mechanics become apparent and can be quantified by using the parameterized post-Newtonian formalism. In that case, general relativity (GR) becomes applicable.
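Hamilton's equations quoted above can be integrated numerically. The sketch below applies the semi-implicit (symplectic) Euler method to a unit-mass, unit-stiffness harmonic oscillator, H = p²/2 + q²/2; the Hamiltonian and step size are illustrative choices, not from the text.

```python
# Integrate Hamilton's equations dq/dt = dH/dp, dp/dt = -dH/dq
# for H = p^2/2 + q^2/2 using semi-implicit (symplectic) Euler.
def simulate(q0=1.0, p0=0.0, dt=1e-3, steps=10_000):
    q, p = q0, p0
    for _ in range(steps):
        p -= q * dt   # dp/dt = -dH/dq = -q  (momentum update first)
        q += p * dt   # dq/dt =  dH/dp =  p  (then position update)
    return q, p

q, p = simulate()
energy = 0.5 * (p * p + q * q)
# A symplectic scheme keeps the energy near its initial value of 0.5.
assert abs(energy - 0.5) < 1e-3
```

The symplectic update is chosen deliberately: a naive explicit Euler step would let the energy drift steadily, while this scheme keeps it bounded over long integrations.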
However, until now there is no theory of quantum gravity unifying GR and QFT in the sense that it could be used when objects become extremely small and heavy.[4][5] === Newtonian approximation to special relativity === In special relativity, the momentum of a particle is given by p = m v 1 − v 2 c 2 , {\displaystyle \mathbf {p} ={\frac {m\mathbf {v} }{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}\,,} where m is the particle's rest mass, v its velocity, v is the modulus of v, and c is the speed of light. If v is very small compared to c, v2/c2 is approximately zero, and so p ≈ m v . {\displaystyle \mathbf {p} \approx m\mathbf {v} \,.} Thus the Newtonian equation p = mv is an approximation of the relativistic equation for bodies moving with low speeds compared to the speed of light. For example, the relativistic cyclotron frequency of a cyclotron, gyrotron, or high voltage magnetron is given by f = f c m 0 m 0 + T c 2 , {\displaystyle f=f_{\mathrm {c} }{\frac {m_{0}}{m_{0}+{\frac {T}{c^{2}}}}}\,,} where fc is the classical frequency of an electron (or other charged particle) with kinetic energy T and (rest) mass m0 circling in a magnetic field. The (rest) mass of an electron is 511 keV. So the frequency correction is 1% for a magnetic vacuum tube with a 5.11 kV direct current accelerating voltage. === Classical approximation to quantum mechanics === The ray approximation of classical mechanics breaks down when the de Broglie wavelength is not much smaller than other dimensions of the system. For non-relativistic particles, this wavelength is λ = h p {\displaystyle \lambda ={\frac {h}{p}}} where h is the Planck constant and p is the momentum. Again, this happens with electrons before it happens with heavier particles. 
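Both numerical claims above are easy to verify in a few lines: the roughly 1% cyclotron-frequency correction at a 5.11 kV accelerating voltage, and the de Broglie wavelength of electrons accelerated through about 54 V (physical constants below are rounded to four figures).

```python
import math

# Cyclotron frequency correction: f/fc = m0 / (m0 + T/c^2).
# Working in energy units, T/c^2 relative to m0 is T / (m0 c^2).
rest_energy_keV = 511.0  # electron rest energy m0 c^2
T_keV = 5.11             # kinetic energy from a 5.11 kV accelerating voltage
ratio = rest_energy_keV / (rest_energy_keV + T_keV)
assert abs(ratio - 1 / 1.01) < 1e-9   # about a 1% downward correction

# De Broglie wavelength of a 54 eV electron: lambda = h / sqrt(2 m E).
h = 6.626e-34            # Planck constant, J s
m_e = 9.109e-31          # electron mass, kg
E = 54.0 * 1.602e-19     # kinetic energy, J
lam = h / math.sqrt(2 * m_e * E)
assert abs(lam - 0.167e-9) < 0.002e-9  # ~0.167 nm
```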
For example, the electrons used by Clinton Davisson and Lester Germer in 1927, accelerated by 54 V, had a wavelength of 0.167 nm, which was long enough to exhibit a single diffraction side lobe when reflecting from the face of a nickel crystal with atomic spacing of 0.215 nm. With a larger vacuum chamber, it would seem relatively easy to increase the angular resolution from around a radian to a milliradian and see quantum diffraction from the periodic patterns of integrated circuit computer memory. More practical examples of the failure of classical mechanics on an engineering scale are conduction by quantum tunneling in tunnel diodes and very narrow transistor gates in integrated circuits. Classical mechanics is the same extreme high-frequency approximation as geometric optics. It is more often accurate because it describes particles and bodies with rest mass. These have more momentum, and therefore shorter de Broglie wavelengths, than massless particles, such as light, with the same kinetic energies. == History == The study of the motion of bodies is an ancient one, making classical mechanics one of the oldest and largest subjects in science, engineering, and technology. The development of classical mechanics led to the development of many areas of mathematics.: 54  Some Greek philosophers of antiquity, among them Aristotle, founder of Aristotelian physics, may have been the first to maintain the idea that "everything happens for a reason" and that theoretical principles can assist in the understanding of nature. While to a modern reader many of these preserved ideas come forth as eminently reasonable, there is a conspicuous lack of both mathematical theory and controlled experiment as we know it. These later became decisive factors in forming modern science, and their early application came to be known as classical mechanics.
In his Elementa super demonstrationem ponderum, medieval mathematician Jordanus de Nemore introduced the concept of "positional gravity" and the use of component forces. The first published causal explanation of the motions of planets was Johannes Kepler's Astronomia nova, published in 1609. He concluded, based on Tycho Brahe's observations of the orbit of Mars, that the orbits of the planets were ellipses. This break with ancient thought was happening around the same time that Galileo was proposing abstract mathematical laws for the motion of objects. He may (or may not) have performed the famous experiment of dropping two cannonballs of different weights from the tower of Pisa, showing that they both hit the ground at the same time. The reality of that particular experiment is disputed, but he did carry out quantitative experiments by rolling balls on an inclined plane. His theory of accelerated motion was derived from the results of such experiments and forms a cornerstone of classical mechanics. In 1673 Christiaan Huygens described in his Horologium Oscillatorium the first two laws of motion. The work is also the first modern treatise in which a physical problem (the accelerated motion of a falling body) is idealized by a set of parameters and then analyzed mathematically, and it constitutes one of the seminal works of applied mathematics. Newton founded his principles of natural philosophy on three proposed laws of motion: the law of inertia, his second law of acceleration (mentioned above), and the law of action and reaction, and hence laid the foundations for classical mechanics. Both Newton's second and third laws were given the proper scientific and mathematical treatment in Newton's Philosophiæ Naturalis Principia Mathematica. Here they are distinguished from earlier attempts at explaining similar phenomena, which were either incomplete, incorrect, or given little accurate mathematical expression.
Newton also enunciated the principles of conservation of momentum and angular momentum. In mechanics, Newton was also the first to provide a correct scientific and mathematical formulation of gravity, in Newton's law of universal gravitation. The combination of Newton's laws of motion and gravitation provides the fullest and most accurate description of classical mechanics. He demonstrated that these laws apply to everyday objects as well as to celestial objects. In particular, he obtained a theoretical explanation of Kepler's laws of planetary motion. Newton had previously invented the calculus; however, the Principia was formulated entirely in terms of long-established geometric methods in emulation of Euclid. Newton, and most of his contemporaries, with the notable exception of Huygens, worked on the assumption that classical mechanics would be able to explain all phenomena, including light, in the form of geometric optics. Even when discovering the so-called Newton's rings (a wave interference phenomenon) he maintained his own corpuscular theory of light. After Newton, classical mechanics became a principal field of study in mathematics as well as physics. Mathematical formulations progressively allowed finding solutions to a far greater number of problems. The first notable mathematical treatment was in 1788 by Joseph Louis Lagrange. Lagrangian mechanics was in turn re-formulated in 1833 by William Rowan Hamilton. Some difficulties were discovered in the late 19th century that could only be resolved by more modern physics. Some of these difficulties related to compatibility with electromagnetic theory, and the famous Michelson–Morley experiment. The resolution of these problems led to the special theory of relativity, often still considered a part of classical mechanics. A second set of difficulties was related to thermodynamics.
When combined with thermodynamics, classical mechanics leads to the Gibbs paradox of classical statistical mechanics, in which entropy is not a well-defined quantity. Black-body radiation was not explained without the introduction of quanta. As experiments reached the atomic level, classical mechanics failed to explain, even approximately, such basic things as the energy levels and sizes of atoms and the photo-electric effect. The effort at resolving these problems led to the development of quantum mechanics. Since the end of the 20th century, classical mechanics in physics has no longer been an independent theory. Instead, classical mechanics is now considered an approximate theory to the more general quantum mechanics. Emphasis has shifted to understanding the fundamental forces of nature as in the Standard Model and its more modern extensions into a unified theory of everything. Classical mechanics is a theory useful for the study of the motion of non-quantum mechanical, low-energy particles in weak gravitational fields. == See also == == Notes == == References == == Further reading == Alonso, M.; Finn, J. (1992). Fundamental University Physics. Addison-Wesley. Feynman, Richard (1999). The Feynman Lectures on Physics. Perseus Publishing. ISBN 978-0-7382-0092-7. Feynman, Richard; Phillips, Richard (1998). Six Easy Pieces. Perseus Publishing. ISBN 978-0-201-32841-7. Goldstein, Herbert; Charles P. Poole; John L. Safko (2002). Classical Mechanics (3rd ed.). Addison Wesley. ISBN 978-0-201-65702-9. Kibble, Tom W.B.; Berkshire, Frank H. (2004). Classical Mechanics (5th ed.). Imperial College Press. ISBN 978-1-86094-424-6. Kleppner, D.; Kolenkow, R.J. (1973). An Introduction to Mechanics. McGraw-Hill. ISBN 978-0-07-035048-9. Landau, L.D.; Lifshitz, E.M. (1972). Course of Theoretical Physics, Vol. 1 – Mechanics. Franklin Book Company. ISBN 978-0-08-016739-8. Morin, David (2008). Introduction to Classical Mechanics: With Problems and Solutions (1st ed.). 
Cambridge: Cambridge University Press. ISBN 978-0-521-87622-3. Gerald Jay Sussman; Jack Wisdom (2001). Structure and Interpretation of Classical Mechanics. MIT Press. ISBN 978-0-262-19455-6. O'Donnell, Peter J. (2015). Essential Dynamics and Relativity. CRC Press. ISBN 978-1-4665-8839-4. Thornton, Stephen T.; Marion, Jerry B. (2003). Classical Dynamics of Particles and Systems (5th ed.). Brooks Cole. ISBN 978-0-534-40896-1. == External links == Crowell, Benjamin. Light and Matter (an introductory text, uses algebra with optional sections involving calculus) Fitzpatrick, Richard. Classical Mechanics (uses calculus) Hoiland, Paul (2004). Preferred Frames of Reference & Relativity Horbatsch, Marko, "Classical Mechanics Course Notes". Rosu, Haret C., "Classical Mechanics". Physics Education. 1999. [arxiv.org : physics/9909035] Shapiro, Joel A. (2003). Classical Mechanics Sussman, Gerald Jay & Wisdom, Jack & Mayer, Meinhard E. (2001). Structure and Interpretation of Classical Mechanics Tong, David. Classical Dynamics (Cambridge lecture notes on Lagrangian and Hamiltonian formalism) Kinematic Models for Design Digital Library (KMODDL) Movies and photos of hundreds of working mechanical-systems models at Cornell University. Also includes an e-book library of classic texts on mechanical design and engineering. MIT OpenCourseWare 8.01: Classical Mechanics Free videos of actual course lectures with links to lecture notes, assignments and exams. Alejandro A. Torassa, On Classical Mechanics
Wikipedia/Newtonian_physics
The reaction rate or rate of reaction is the speed at which a chemical reaction takes place, defined as proportional to the increase in the concentration of a product per unit time and to the decrease in the concentration of a reactant per unit time. Reaction rates can vary dramatically. For example, the oxidative rusting of iron under Earth's atmosphere is a slow reaction that can take many years, but the combustion of cellulose in a fire is a reaction that takes place in fractions of a second. For most reactions, the rate decreases as the reaction proceeds. A reaction's rate can be determined by measuring the changes in concentration over time. Chemical kinetics is the part of physical chemistry that concerns how rates of chemical reactions are measured and predicted, and how reaction-rate data can be used to deduce probable reaction mechanisms. The concepts of chemical kinetics are applied in many disciplines, such as chemical engineering, enzymology and environmental engineering. == Formal definition == Consider a typical balanced chemical reaction: a A + b B ⟶ p P + q Q {\displaystyle {\ce {{{\mathit {a}}A}+{{\mathit {b}}B}->{{\mathit {p}}P}+{{\mathit {q}}Q}}}} The lowercase letters (a, b, p, and q) represent stoichiometric coefficients, while the capital letters represent the reactants (A and B) and the products (P and Q). According to IUPAC's Gold Book definition the reaction rate ν {\displaystyle \nu } for a chemical reaction occurring in a closed system at constant volume, without a build-up of reaction intermediates, is defined as: ν = − 1 a d [ A ] d t = − 1 b d [ B ] d t = 1 p d [ P ] d t = 1 q d [ Q ] d t {\displaystyle \nu =-{\frac {1}{a}}{\frac {d[\mathrm {A} ]}{dt}}=-{\frac {1}{b}}{\frac {d[\mathrm {B} ]}{dt}}={\frac {1}{p}}{\frac {d[\mathrm {P} ]}{dt}}={\frac {1}{q}}{\frac {d[\mathrm {Q} ]}{dt}}} where [X] denotes the concentration of the substance X (= A, B, P or Q). The reaction rate thus defined has the units of mol/L/s. 
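The normalization by stoichiometric numbers in the definition above can be illustrated with a short sketch. The reaction (A + 3B → 2P) and the concentration slopes below are made-up sample values, not measured data.

```python
# Unique reaction rate for the hypothetical reaction A + 3B -> 2P,
# independent of which species is tracked.
a, b, p = 1, 3, 2        # stoichiometric coefficients
dA_dt = -0.010           # mol/(L s), assumed measured slope for A
dB_dt = -0.030           # B is consumed three times as fast as A
dP_dt = +0.020           # P appears at twice the rate A disappears

rate_from_A = -dA_dt / a
rate_from_B = -dB_dt / b
rate_from_P = dP_dt / p

# All three normalized rates agree: 0.010 mol/(L s).
assert abs(rate_from_A - rate_from_B) < 1e-12
assert abs(rate_from_A - rate_from_P) < 1e-12
```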
The rate of a reaction is always positive. A negative sign is present to indicate that the reactant concentration is decreasing. The IUPAC recommends that the unit of time should always be the second. The rate of reaction differs from the rate of increase of concentration of a product P by a constant factor (the reciprocal of its stoichiometric number) and for a reactant A by minus the reciprocal of the stoichiometric number. The stoichiometric numbers are included so that the defined rate is independent of which reactant or product species is chosen for measurement.: 349  For example, if a = 1 and b = 3 then B is consumed three times more rapidly than A, but ν = − d [ A ] d t = − 1 3 d [ B ] d t {\displaystyle \nu =-{\tfrac {d[\mathrm {A} ]}{dt}}=-{\tfrac {1}{3}}{\tfrac {d[\mathrm {B} ]}{dt}}} is uniquely defined. An additional advantage of this definition is that for an elementary and irreversible reaction, ν {\displaystyle \nu } is equal to the product of the probability of overcoming the transition state activation energy and the number of times per second the transition state is approached by reactant molecules. When so defined, for an elementary and irreversible reaction, ν {\displaystyle \nu } is the rate of successful chemical reaction events leading to the product. The above definition is only valid for a single reaction, in a closed system of constant volume. If water is added to a pot containing salty water, the concentration of salt decreases, although there is no chemical reaction. 
For an open system, the full mass balance must be taken into account: F A 0 − F A + ∫ 0 V ν d V = d N A d t in − out + ( generation − consumption ) = accumulation {\displaystyle {\begin{array}{ccccccc}F_{\mathrm {A} _{0}}&-&F_{\mathrm {A} }&+&\displaystyle \int _{0}^{V}\nu \,dV&=&\displaystyle {\frac {dN_{\mathrm {A} }}{dt}}\\{\text{in}}&-&{\text{out}}&+&\left({{\text{generation }}- \atop {\text{consumption}}}\right)&=&{\text{accumulation}}\end{array}}} where FA0 is the inflow rate of A in molecules per second; FA the outflow; ν {\displaystyle \nu } is the instantaneous reaction rate of A (in number concentration rather than molar) in a given differential volume, integrated over the entire system volume V at a given moment. When applied to the closed system at constant volume considered previously, this equation reduces to: ν = d [ A ] d t {\displaystyle \nu ={\frac {d[A]}{dt}}} where the concentration [A] is related to the number of molecules NA by [ A ] = N A N 0 V . {\displaystyle [\mathrm {A} ]={\tfrac {N_{\rm {A}}}{N_{0}V}}.} Here N0 is the Avogadro constant. For a single reaction in a closed system of varying volume the so-called rate of conversion can be used, in order to avoid handling concentrations. It is defined as the derivative of the extent of reaction with respect to time. ν = d ξ d t = 1 ν i d n i d t = 1 ν i d ( C i V ) d t = 1 ν i ( V d C i d t + C i d V d t ) {\displaystyle \nu ={\frac {d\xi }{dt}}={\frac {1}{\nu _{i}}}{\frac {dn_{i}}{dt}}={\frac {1}{\nu _{i}}}{\frac {d(C_{i}V)}{dt}}={\frac {1}{\nu _{i}}}\left(V{\frac {dC_{i}}{dt}}+C_{i}{\frac {dV}{dt}}\right)} Here νi is the stoichiometric coefficient for substance i, equal to a, b, p, and q in the typical reaction above. Also, V is the volume of reaction and Ci is the concentration of substance i. 
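The rate of conversion dξ/dt defined above can be sketched for the simple constant-volume batch case, where the dV/dt term vanishes. All numbers below (volume, stoichiometry, slopes) are illustrative assumptions.

```python
# Rate of conversion dxi/dt = (1/nu_i) dn_i/dt, checked for each species
# of aA + bB -> pP + qQ with a, b, p, q = 1, 2, 1, 1 (illustrative).
V = 2.0                                  # reaction volume, L (constant here)
nu = {"A": -1, "B": -2, "P": 1, "Q": 1}  # signed stoichiometric numbers
dC_dt = {"A": -0.005, "B": -0.010, "P": 0.005, "Q": 0.005}  # mol/(L s), assumed

# With dV/dt = 0, n_i = C_i V gives dxi/dt = (1/nu_i) * V * dC_i/dt.
rates = [V * dC_dt[i] / nu[i] for i in nu]

# Every species yields the same rate of conversion, 0.01 mol/s.
assert all(abs(r - rates[0]) < 1e-12 for r in rates)
```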
When side products or reaction intermediates are formed, the IUPAC recommends the use of the terms rate of increase of concentration and rate of decrease of concentration for products and reactants, respectively. Reaction rates may also be defined on a basis that is not the volume of the reactor. When a catalyst is used, the reaction rate may be stated on a catalyst weight (mol g−1 s−1) or surface area (mol m−2 s−1) basis. If the basis is a specific catalyst site that may be rigorously counted by a specified method, the rate is given in units of s−1 and is called a turnover frequency. == Influencing factors == Factors that influence the reaction rate are the nature of the reaction, concentration, pressure, reaction order, temperature, solvent, electromagnetic radiation, catalyst, isotopes, surface area, stirring, and diffusion limit. Some reactions are naturally faster than others. The number of reacting species, their physical state (the particles that form solids move much more slowly than those of gases or those in solution), the complexity of the reaction, and other factors can greatly influence the rate of a reaction. Reaction rate increases with concentration, as described by the rate law and explained by collision theory: as reactant concentration increases, the frequency of collisions increases. The rate of gaseous reactions increases with pressure, which is, in fact, equivalent to an increase in the concentration of the gas. The reaction rate increases in the direction where there are fewer moles of gas and decreases in the reverse direction. For condensed-phase reactions, the pressure dependence is weak. The order of the reaction controls how the reactant concentration (or pressure) affects the reaction rate. Usually, conducting a reaction at a higher temperature delivers more energy into the system and increases the reaction rate by causing more collisions between particles, as explained by collision theory.
However, the main reason that temperature increases the rate of reaction is that more of the colliding particles will have the necessary activation energy resulting in more successful collisions (when bonds are formed between reactants). The influence of temperature is described by the Arrhenius equation. For example, coal burns in a fireplace in the presence of oxygen, but it does not when it is stored at room temperature. The reaction is spontaneous at low and high temperatures but at room temperature, its rate is so slow that it is negligible. The increase in temperature, as created by a match, allows the reaction to start and then it heats itself because it is exothermic. That is valid for many other fuels, such as methane, butane, and hydrogen. Reaction rates can be independent of temperature (non-Arrhenius) or decrease with increasing temperature (anti-Arrhenius). Reactions without an activation barrier (for example, some radical reactions), tend to have anti-Arrhenius temperature dependence: the rate constant decreases with increasing temperature. Many reactions take place in solution and the properties of the solvent affect the reaction rate. The ionic strength also has an effect on the reaction rate. Electromagnetic radiation is a form of energy. As such, it may speed up the rate or even make a reaction spontaneous as it provides the particles of the reactants with more energy. This energy is in one way or another stored in the reacting particles (it may break bonds, and promote molecules to electronically or vibrationally excited states...) creating intermediate species that react easily. As the intensity of light increases, the particles absorb more energy and hence the rate of reaction increases. For example, when methane reacts with chlorine in the dark, the reaction rate is slow. It can be sped up when the mixture is put under diffused light. In bright sunlight, the reaction is explosive. 
The presence of a catalyst increases the reaction rate (in both the forward and reverse reactions) by providing an alternative pathway with a lower activation energy. For example, platinum catalyzes the combustion of hydrogen with oxygen at room temperature. The kinetic isotope effect consists of a different reaction rate for the same molecule if it has different isotopes, usually hydrogen isotopes, because of the relative mass difference between hydrogen and deuterium. In reactions on surfaces, which take place, for example, during heterogeneous catalysis, the rate of reaction increases as the surface area does. That is because more particles of the solid are exposed and can be hit by reactant molecules. Stirring can have a strong effect on the rate of reaction for heterogeneous reactions. Some reactions are limited by diffusion. All the factors that affect a reaction rate, except for concentration and reaction order, are taken into account in the reaction rate coefficient (the coefficient in the rate equation of the reaction). == Rate equation == For a chemical reaction aA + bB → pP + qQ, the rate equation or rate law is a mathematical expression used in chemical kinetics to link the rate of a reaction to the concentration of each reactant. For a closed system at constant volume, this is often of the form v = k [ A ] n [ B ] m − k r [ P ] i [ Q ] j . {\displaystyle v=k[\mathrm {A} ]^{n}[\mathrm {B} ]^{m}-k_{r}[\mathrm {P} ]^{i}[\mathrm {Q} ]^{j}.} For reactions that go to completion (which implies very small kr), or if only the initial rate is analyzed (with initial vanishing product concentrations), this simplifies to the commonly quoted form v = k ( T ) [ A ] n [ B ] m . {\displaystyle v=k(T)[\mathrm {A} ]^{n}[\mathrm {B} ]^{m}.} For gas-phase reactions, the rate equation is often expressed alternatively in terms of partial pressures.
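As a minimal sketch of how a rate law of this simplified form is evaluated in code (the rate constant, concentrations, and orders below are made up, not taken from any specific reaction):

```python
# Forward rate law v = k [A]^n [B]^m for a closed system at constant volume.
def rate(k: float, conc_A: float, conc_B: float, n: float, m: float) -> float:
    """Instantaneous forward reaction rate, mol/(L s)."""
    return k * conc_A**n * conc_B**m

# Hypothetical example: first order in A, second order in B (third order overall).
v = rate(k=0.5, conc_A=0.1, conc_B=0.2, n=1, m=2)
assert abs(v - 0.002) < 1e-12   # 0.5 * 0.1 * 0.2^2
```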
In these equations k(T) is the reaction rate coefficient or rate constant, although it is not really a constant, because it includes all the parameters that affect reaction rate, except for time and concentration. Of all the parameters influencing reaction rates, temperature is normally the most important one and is accounted for by the Arrhenius equation. The exponents n and m are called reaction orders and depend on the reaction mechanism. For an elementary (single-step) reaction, the order with respect to each reactant is equal to its stoichiometric coefficient. For complex (multistep) reactions, however, this is often not true and the rate equation is determined by the detailed mechanism, as illustrated below for the reaction of H2 and NO. For elementary reactions or reaction steps, the order and stoichiometric coefficient are both equal to the molecularity or number of molecules participating. For a unimolecular reaction or step, the rate is proportional to the concentration of molecules of reactant, so the rate law is first order. For a bimolecular reaction or step, the number of collisions is proportional to the product of the two reactant concentrations, or second order. A termolecular step is predicted to be third order, but also very slow as simultaneous collisions of three molecules are rare. By using the mass balance for the system in which the reaction occurs, an expression for the rate of change in concentration can be derived. For a closed system with constant volume, such an expression can look like d [ P ] d t = k ( T ) [ A ] n [ B ] m . {\displaystyle {\frac {d[\mathrm {P} ]}{dt}}=k(T)[\mathrm {A} ]^{n}[\mathrm {B} ]^{m}.} === Example of a complex reaction: hydrogen and nitric oxide === For the reaction 2 H 2 ( g ) + 2 NO ( g ) ⟶ N 2 ( g ) + 2 H 2 O ( g ) , {\displaystyle {\ce {2H2_{(g)}}}+{\ce {2NO_{(g)}-> N2_{(g)}}}+{\ce {2H2O_{(g)}}},} the observed rate equation (or rate expression) is v = k [ H 2 ] [ NO ] 2 . 
{\displaystyle v=k[{\ce {H2}}][{\ce {NO}}]^{2}.} As for many reactions, the experimental rate equation does not simply reflect the stoichiometric coefficients in the overall reaction: It is third order overall: first order in H2 and second order in NO, even though the stoichiometric coefficients of both reactants are equal to 2. In chemical kinetics, the overall reaction rate is often explained using a mechanism consisting of a number of elementary steps. Not all of these steps affect the rate of reaction; normally the slowest elementary step controls the reaction rate. For this example, a possible mechanism is 1 ) 2 NO ( g ) ↽ − − ⇀ N 2 O 2 ( g ) ( fast equilibrium ) 2 ) N 2 O 2 + H 2 ⟶ N 2 O + H 2 O ( slow ) 3 ) N 2 O + H 2 ⟶ N 2 + H 2 O ( fast ) . {\displaystyle {\begin{array}{rll}1)&\quad {\ce {2NO_{(g)}<=> N2O2_{(g)}}}&({\text{fast equilibrium}})\\2)&\quad {\ce {N2O2 + H2 -> N2O + H2O}}&({\text{slow}})\\3)&\quad {\ce {N2O + H2 -> N2 + H2O}}&({\text{fast}}).\end{array}}} Reactions 1 and 3 are very rapid compared to the second, so the slow reaction 2 is the rate-determining step. This is a bimolecular elementary reaction whose rate is given by the second-order equation v = k 2 [ H 2 ] [ N 2 O 2 ] , {\displaystyle v=k_{2}[{\ce {H2}}][{\ce {N2O2}}],} where k2 is the rate constant for the second step. However N2O2 is an unstable intermediate whose concentration is determined by the fact that the first step is in equilibrium, so that [ N 2 O 2 ] = K 1 [ NO ] 2 , {\displaystyle {\ce {[N2O2]={\mathit {K}}_{1}[NO]^{2}}},} where K1 is the equilibrium constant of the first step. Substitution of this equation in the previous equation leads to a rate equation expressed in terms of the original reactants v = k 2 K 1 [ H 2 ] [ NO ] 2 . {\displaystyle v=k_{2}K_{1}[{\ce {H2}}][{\ce {NO}}]^{2}\,.} This agrees with the form of the observed rate equation if it is assumed that k = k2K1. 
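The pre-equilibrium substitution just described can be checked numerically: evaluating the rate of the slow step with the intermediate concentration fixed by the fast equilibrium reproduces the observed third-order law with k = k2K1. All constants and concentrations below are made-up sample values.

```python
# Pre-equilibrium mechanism check for 2 H2 + 2 NO -> N2 + 2 H2O.
k2, K1 = 4.0, 2.5    # slow-step rate constant and step-1 equilibrium constant (assumed)
H2, NO = 0.10, 0.20  # sample concentrations, mol/L

N2O2 = K1 * NO**2              # fast equilibrium fixes the intermediate
v_mechanism = k2 * H2 * N2O2   # rate of the rate-determining bimolecular step
v_observed = (k2 * K1) * H2 * NO**2   # observed law with k = k2 * K1

assert abs(v_mechanism - v_observed) < 1e-15
```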
In practice the rate equation is used to suggest possible mechanisms which predict a rate equation in agreement with experiment. The second molecule of H2 does not appear in the rate equation because it reacts in the third step, which is a rapid step after the rate-determining step, so that it does not affect the overall reaction rate. == Temperature dependence == Each reaction rate coefficient k has a temperature dependency, which is usually given by the Arrhenius equation: k = A exp ⁡ ( − E a R T ) {\displaystyle k=A\exp \left(-{\frac {E_{\mathrm {a} }}{RT}}\right)} where A is the pre-exponential factor (or frequency factor), exp is the exponential function, Ea is the activation energy, and R is the gas constant. Since at temperature T the molecules have energies given by a Boltzmann distribution, one can expect the number of collisions with energy greater than Ea to be proportional to exp ⁡ ( − E a R T ) {\displaystyle \exp \left({\tfrac {-E_{\rm {a}}}{RT}}\right)} . The values for A and Ea are dependent on the reaction. More complex equations are also possible; they describe the temperature dependence of other rate constants that do not follow this pattern. Temperature is a measure of the average kinetic energy of the reactants. As temperature increases, the kinetic energy of the reactants increases; that is, the particles move faster. Because the reactants move faster, more collisions take place at greater speed, so the chance of reactants forming products increases, which in turn raises the rate of reaction. A rise of ten degrees Celsius results in approximately twice the reaction rate. The minimum kinetic energy required for a reaction to occur is called the activation energy and is denoted by Ea or ΔG‡. The transition state, or activated complex, represents the energy barrier that must be overcome when changing reactants into products.
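The Arrhenius expression above reproduces the familiar ten-degree doubling rule for activation energies of a typical magnitude. The sketch below assumes Ea = 50 kJ/mol near room temperature; this value is an illustrative assumption, not a property of any particular reaction.

```python
import math

R = 8.314       # gas constant, J/(mol K)
Ea = 50_000.0   # assumed activation energy, J/mol (typical order of magnitude)

def arrhenius(A: float, T: float) -> float:
    """Rate constant k = A exp(-Ea / (R T))."""
    return A * math.exp(-Ea / (R * T))

# Temperature coefficient Q10 = k(T + 10) / k(T); the prefactor A cancels.
q10 = arrhenius(1.0, 308.15) / arrhenius(1.0, 298.15)
assert 1.8 < q10 < 2.1   # roughly a doubling per ten degrees Celsius
```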
The molecules with an energy greater than this barrier have enough energy to react. For a successful collision to take place, the collision geometry must be right, meaning the reactant molecules must face the right way so the activated complex can be formed. A chemical reaction takes place only when the reacting particles collide. However, not all collisions are effective in causing the reaction. Products are formed only when the colliding particles possess a certain minimum energy called threshold energy. As a rule of thumb, reaction rates for many reactions double for every ten degrees Celsius increase in temperature. For a given reaction, the ratio of its rate constant at a higher temperature to its rate constant at a lower temperature is known as its temperature coefficient, (Q). Q10 is commonly used as the ratio of rate constants that are ten degrees Celsius apart. == Pressure dependence == The pressure dependence of the rate constant for condensed-phase reactions (that is, when reactants and products are solids or liquid) is usually sufficiently weak in the range of pressures normally encountered in industry that it is neglected in practice. The pressure dependence of the rate constant is associated with the activation volume. For the reaction proceeding through an activation-state complex: A + B ↽ − − ⇀ | A ⋯ B | ‡ ⟶ P {\displaystyle {\ce {A + B <=>}}\ |{\ce {A}}\cdots {\ce {B}}|^{\ddagger }\ {\ce {-> P}}} the activation volume, ΔV ‡, is: Δ V ‡ = V ¯ ‡ − V ¯ A − V ¯ B {\displaystyle \Delta V^{\ddagger }={\bar {V}}_{\ddagger }-{\bar {V}}_{\mathrm {A} }-{\bar {V}}_{\mathrm {B} }} where V̄ denotes the partial molar volume of a species and ‡ (a double dagger) indicates the activation-state complex. 
For the above reaction, one can expect the change of the reaction rate constant (based either on mole fraction or on molar concentration) with pressure at constant temperature to be: ( ∂ ln ⁡ k x ∂ P ) T = − Δ V ‡ R T {\displaystyle \left({\frac {\partial \ln k_{x}}{\partial P}}\right)_{T}=-{\frac {\Delta V^{\ddagger }}{RT}}} In practice, the matter can be complicated because the partial molar volumes and the activation volume can themselves be a function of pressure. Reactions can increase or decrease their rates with pressure, depending on the value of ΔV ‡. As an example of the possible magnitude of the pressure effect, some organic reactions were shown to double the reaction rate when the pressure was increased from atmospheric (0.1 MPa) to 50 MPa (which gives ΔV ‡ = −0.025 L/mol). == See also == Diffusion-controlled reaction Dilution (equation) Isothermal microcalorimetry Rate of solution Steady state approximation == Notes == == External links == Chemical kinetics, reaction rate, and order (needs flash player) Reaction kinetics, examples of important rate laws (lecture with audio). Rates of reaction Overview of Bimolecular Reactions (Reactions involving two reactants) pressure dependence Can. J. Chem.
Wikipedia/Reaction_rate
In stochastic processes, the Stratonovich integral or Fisk–Stratonovich integral (developed simultaneously by Ruslan Stratonovich and Donald Fisk) is a stochastic integral, the most common alternative to the Itô integral. Although the Itô integral is the usual choice in applied mathematics, the Stratonovich integral is frequently used in physics. In some circumstances, integrals in the Stratonovich definition are easier to manipulate. Unlike the Itô calculus, Stratonovich integrals are defined such that the chain rule of ordinary calculus holds. Perhaps the most common situation in which these are encountered is as the solution to Stratonovich stochastic differential equations (SDEs). These are equivalent to Itô SDEs and it is possible to convert between the two whenever one definition is more convenient. == Definition == The Stratonovich integral can be defined in a manner similar to the Riemann integral, that is as a limit of Riemann sums. Suppose that W : [ 0 , T ] × Ω → R {\displaystyle W:[0,T]\times \Omega \to \mathbb {R} } is a Wiener process and X : [ 0 , T ] × Ω → R {\displaystyle X:[0,T]\times \Omega \to \mathbb {R} } is a semimartingale adapted to the natural filtration of the Wiener process. Then the Stratonovich integral ∫ 0 T X t ∘ d W t {\displaystyle \int _{0}^{T}X_{t}\circ \mathrm {d} W_{t}} is a random variable : Ω → R {\displaystyle :\Omega \to \mathbb {R} } defined as the limit in mean square of ∑ i = 0 k − 1 X t i + 1 + X t i 2 ( W t i + 1 − W t i ) {\displaystyle \sum _{i=0}^{k-1}{\frac {X_{t_{i+1}}+X_{t_{i}}}{2}}\left(W_{t_{i+1}}-W_{t_{i}}\right)} as the mesh of the partition 0 = t 0 < t 1 < ⋯ < t k = T {\displaystyle 0=t_{0}<t_{1}<\dots <t_{k}=T} of [ 0 , T ] {\displaystyle [0,T]} tends to 0 (in the style of a Riemann–Stieltjes integral). The circle ∘ {\displaystyle \circ } is a notational device, used to distinguish this integral from the Itô integral. 
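The midpoint Riemann sums in this definition can be checked numerically (a minimal sketch, not part of the article; partition size and random seed are arbitrary). For X_t = W_t the Stratonovich sum telescopes to W_T²/2, as the ordinary chain rule suggests, while the left-endpoint (Itô) sum differs from it by roughly T/2:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 100_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments
W = np.concatenate(([0.0], np.cumsum(dW)))  # Wiener path on the partition

# Stratonovich sum: average of the endpoint values of X_t = W_t on each subinterval
strat = np.sum((W[1:] + W[:-1]) / 2 * dW)
# Itô sum: left-endpoint values
ito = np.sum(W[:-1] * dW)

# The Stratonovich sum telescopes to W_T**2 / 2 (ordinary chain rule);
# the Itô sum approaches (W_T**2 - T) / 2 as the mesh tends to zero.
print(strat - W[-1] ** 2 / 2)  # ≈ 0
print(strat - ito)             # ≈ T/2
```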
== Calculation == Many integration techniques of ordinary calculus can be used for the Stratonovich integral, e.g.: if f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } is a smooth function, then ∫ 0 T f ′ ( W t ) ∘ d W t = f ( W T ) − f ( W 0 ) {\displaystyle \int _{0}^{T}f'(W_{t})\circ \mathrm {d} W_{t}=f(W_{T})-f(W_{0})} and more generally, if f : R × R → R {\displaystyle f:\mathbb {R} \times \mathbb {R} \to \mathbb {R} } is a smooth function, then ∫ 0 T ∂ f ∂ W ( W t , t ) ∘ d W t + ∫ 0 T ∂ f ∂ t ( W t , t ) d t = f ( W T , T ) − f ( W 0 , 0 ) . {\displaystyle \int _{0}^{T}{\partial f \over \partial W}(W_{t},t)\circ \mathrm {d} W_{t}+\int _{0}^{T}{\partial f \over \partial t}(W_{t},t)\,\mathrm {d} t=f(W_{T},T)-f(W_{0},0).} This latter rule is akin to the chain rule of ordinary calculus. === Numerical methods === Stochastic integrals can rarely be solved in analytic form, making stochastic numerical integration an important topic in all uses of stochastic integrals. Various numerical approximations converge to the Stratonovich integral, and variations of these are used to solve Stratonovich SDEs (Kloeden & Platen 1992). Note however that the most widely used Euler scheme (the Euler–Maruyama method) for the numeric solution of Langevin equations requires the equation to be in Itô form. == Differential notation == If X t , Y t {\displaystyle X_{t},Y_{t}} , and Z t {\displaystyle Z_{t}} are stochastic processes such that X T − X 0 = ∫ 0 T Y t ∘ d W t + ∫ 0 T Z t d t {\displaystyle X_{T}-X_{0}=\int _{0}^{T}Y_{t}\circ \mathrm {d} W_{t}+\int _{0}^{T}Z_{t}\,\mathrm {d} t} for all T > 0 {\displaystyle T>0} , we also write d X = Y ∘ d W + Z d t . {\displaystyle \mathrm {d} X=Y\circ \mathrm {d} W+Z\,\mathrm {d} t.} This notation is often used to formulate stochastic differential equations (SDEs), which are really equations about stochastic integrals. 
It is compatible with the notation from ordinary calculus, for instance d ( t 2 W 3 ) = 3 t 2 W 2 ∘ d W + 2 t W 3 d t . {\displaystyle \mathrm {d} (t^{2}\,W^{3})=3t^{2}W^{2}\circ \mathrm {d} W+2tW^{3}\,\mathrm {d} t.} == Comparison with the Itô integral == The Itô integral of the process X {\displaystyle X} with respect to the Wiener process W {\displaystyle W} is denoted by ∫ 0 T X t d W t {\displaystyle \int _{0}^{T}X_{t}\,\mathrm {d} W_{t}} (without the circle). For its definition, the same procedure is used as above in the definition of the Stratonovich integral, except for choosing the value of the process X {\displaystyle X} at the left-hand endpoint of each subinterval, i.e., X t i {\displaystyle X_{t_{i}}} in place of X t i + 1 + X t i 2 {\displaystyle {\frac {X_{t_{i+1}}+X_{t_{i}}}{2}}} This integral does not obey the ordinary chain rule as the Stratonovich integral does; instead one has to use the slightly more complicated Itô's lemma. Conversion between Itô and Stratonovich integrals may be performed using the formula ∫ 0 T f ( W t , t ) ∘ d W t = 1 2 ∫ 0 T ∂ f ∂ W ( W t , t ) d t + ∫ 0 T f ( W t , t ) d W t , {\displaystyle \int _{0}^{T}f(W_{t},t)\circ \mathrm {d} W_{t}={\frac {1}{2}}\int _{0}^{T}{\frac {\partial f}{\partial W}}(W_{t},t)\,\mathrm {d} t+\int _{0}^{T}f(W_{t},t)\,\mathrm {d} W_{t},} where f {\displaystyle f} is any continuously differentiable function of two variables W {\displaystyle W} and t {\displaystyle t} and the last integral is an Itô integral (Kloeden & Platen 1992, p. 101). Langevin equations exemplify the importance of specifying the interpretation (Stratonovich or Itô) in a given problem. Suppose X t {\displaystyle X_{t}} is a time-homogeneous Itô diffusion with continuously differentiable diffusion coefficient σ {\displaystyle \sigma } , i.e. it satisfies the SDE d X t = μ ( X t ) d t + σ ( X t ) d W t {\displaystyle \mathrm {d} X_{t}=\mu (X_{t})\,\mathrm {d} t+\sigma (X_{t})\,\mathrm {d} W_{t}} . 
In order to get the corresponding Stratonovich version, the term σ ( X t ) d W t {\displaystyle \sigma (X_{t})\,\mathrm {d} W_{t}} (in Itô interpretation) should translate to σ ( X t ) ∘ d W t {\displaystyle \sigma (X_{t})\circ \mathrm {d} W_{t}} (in Stratonovich interpretation) as ∫ 0 T σ ( X t ) ∘ d W t = 1 2 ∫ 0 T d σ d x ( X t ) σ ( X t ) d t + ∫ 0 T σ ( X t ) d W t . {\displaystyle \int _{0}^{T}\sigma (X_{t})\circ \mathrm {d} W_{t}={\frac {1}{2}}\int _{0}^{T}{\frac {d\sigma }{dx}}(X_{t})\sigma (X_{t})\,\mathrm {d} t+\int _{0}^{T}\sigma (X_{t})\,\mathrm {d} W_{t}.} Obviously, if σ {\displaystyle \sigma } is independent of X t {\displaystyle X_{t}} , the two interpretations will lead to the same form for the Langevin equation. In that case, the noise term is called "additive" (since the noise term d W t {\displaystyle dW_{t}} is multiplied by only a fixed coefficient). Otherwise, if σ = σ ( X t ) {\displaystyle \sigma =\sigma (X_{t})} , the Langevin equation in Itô form may in general differ from that in Stratonovich form, in which case the noise term is called multiplicative (i.e., the noise d W t {\displaystyle dW_{t}} is multiplied by a function of X t {\displaystyle X_{t}} that is σ ( X t ) {\displaystyle \sigma (X_{t})} ). More generally, for any two semimartingales X {\displaystyle X} and Y {\displaystyle Y} ∫ 0 T X s − ∘ d Y s = ∫ 0 T X s − d Y s + 1 2 [ X , Y ] T c , {\displaystyle \int _{0}^{T}X_{s-}\circ \mathrm {d} Y_{s}=\int _{0}^{T}X_{s-}\,\mathrm {d} Y_{s}+{\frac {1}{2}}[X,Y]_{T}^{c},} where [ X , Y ] T c {\displaystyle [X,Y]_{T}^{c}} is the continuous part of the covariation. == Stratonovich integrals in applications == The Stratonovich integral lacks the important property of the Itô integral, which does not "look into the future". In many real-world applications, such as modelling stock prices, one only has information about past events, and hence the Itô interpretation is more natural. 
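The drift-correction formula above can be illustrated with a small simulation (an illustrative sketch with made-up parameters, not from the article). For σ(x) = x, the Stratonovich SDE dX = X ∘ dW and the Itô SDE dX = ½X dt + X dW share the exact solution X₀ exp(W_t); integrating both against the same noise path, using the Heun predictor-corrector scheme for the Stratonovich form and Euler-Maruyama for the Itô form, should give nearly identical endpoints:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 100_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)  # shared noise path

# Stratonovich SDE dX = X ∘ dW, Heun (predictor-corrector) scheme
x_strat = 1.0
for dw in dW:
    x_pred = x_strat + x_strat * dw             # Euler predictor
    x_strat = x_strat + 0.5 * (x_strat + x_pred) * dw  # trapezoidal corrector

# Equivalent Itô SDE dX = (1/2) X dt + X dW (drift correction (1/2) σ σ'),
# Euler-Maruyama scheme
x_ito = 1.0
for dw in dW:
    x_ito = x_ito + 0.5 * x_ito * dt + x_ito * dw

exact = np.exp(dW.sum())  # X_T = exp(W_T) for both interpretations, X_0 = 1
print(x_strat, x_ito, exact)  # the three values nearly agree
```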
In financial mathematics the Itô interpretation is usually used. In physics, however, stochastic integrals occur as the solutions of Langevin equations. A Langevin equation is a coarse-grained version of a more microscopic model (Risken 1996); depending on the problem under consideration, the Stratonovich or Itô interpretation, or even more exotic interpretations such as the isothermal interpretation, may be appropriate. The Stratonovich interpretation is the most frequently used interpretation within the physical sciences. The Wong–Zakai theorem states that physical systems with non-white noise spectrum characterized by a finite noise correlation time τ {\displaystyle \tau } can be approximated by a Langevin equation with white noise in the Stratonovich interpretation in the limit where τ {\displaystyle \tau } tends to zero. Because the Stratonovich calculus satisfies the ordinary chain rule, stochastic differential equations (SDEs) in the Stratonovich sense are more straightforward to define on differentiable manifolds, rather than just on R n {\displaystyle \mathbb {R} ^{n}} . The tricky chain rule of the Itô calculus makes it a more awkward choice for manifolds. == Stratonovich interpretation and supersymmetric theory of SDEs == In the supersymmetric theory of SDEs, one considers the evolution operator obtained by averaging the pullback induced on the exterior algebra of the phase space by the stochastic flow determined by an SDE. In this context, it is then natural to use the Stratonovich interpretation of SDEs. == Notes == == References == Øksendal, Bernt K. (2003). Stochastic Differential Equations: An Introduction with Applications. Springer, Berlin. ISBN 3-540-04758-1. Gardiner, Crispin W. (2004). Handbook of Stochastic Methods (3 ed.). Springer, Berlin Heidelberg. ISBN 3-540-20882-8. Jarrow, Robert; Protter, Philip (2004). "A short history of stochastic integration and mathematical finance: The early years, 1880–1970". IMS Lecture Notes Monograph. 45: 1–17.
CiteSeerX 10.1.1.114.632. Kloeden, Peter E.; Platen, Eckhard (1992). Numerical Solution of Stochastic Differential Equations. Applications of Mathematics. Berlin, New York: Springer-Verlag. ISBN 978-3-540-54062-5. Risken, Hannes (1996). The Fokker-Planck Equation. Springer Series in Synergetics. Berlin, Heidelberg: Springer-Verlag. ISBN 978-3-540-61530-9.
Wikipedia/Stratonovich_stochastic_calculus
In physics, Langevin dynamics is an approach to the mathematical modeling of the dynamics of molecular systems using the Langevin equation. It was originally developed by French physicist Paul Langevin. The approach is characterized by the use of simplified models while accounting for omitted degrees of freedom by the use of stochastic differential equations. Langevin dynamics simulations are a kind of Monte Carlo simulation. == Overview == Real-world molecular systems occur in air or solvents, rather than in isolation, in a vacuum. Jostling of solvent or air molecules causes friction, and the occasional high-velocity collision will perturb the system. Langevin dynamics attempts to extend molecular dynamics to allow for these effects. Also, Langevin dynamics allows temperature to be controlled as with a thermostat, thus approximating the canonical ensemble. Langevin dynamics mimics the viscous aspect of a solvent. It does not fully model an implicit solvent; specifically, the model does not account for electrostatic screening or for the hydrophobic effect. For denser solvents, hydrodynamic interactions are not captured via Langevin dynamics.
For a system of N {\displaystyle N} particles with masses M {\displaystyle M} , with coordinates X = X ( t ) {\displaystyle X=X(t)} that constitute a time-dependent random variable, the resulting Langevin equation is M X ¨ = − ∇ U ( X ) − γ M X ˙ + 2 M γ k B T R ( t ) , {\displaystyle M\,{\ddot {\mathbf {X} }}=-\mathbf {\nabla } U(\mathbf {X} )-\gamma \,M\,{\dot {\mathbf {X} }}+{\sqrt {2\,M\,\gamma \,k_{\rm {B}}T}}\,\mathbf {R} (t)\,,} where U ( X ) {\displaystyle U(\mathbf {X} )} is the particle interaction potential; ∇ {\displaystyle \nabla } is the gradient operator such that − ∇ U ( X ) {\displaystyle -\mathbf {\nabla } U(\mathbf {X} )} is the force calculated from the particle interaction potentials; the dot is a time derivative such that X ˙ {\displaystyle {\dot {\mathbf {X} }}} is the velocity and X ¨ {\displaystyle {\ddot {\mathbf {X} }}} is the acceleration; γ {\displaystyle \gamma } is the damping constant (units of reciprocal time), also known as the collision frequency; T {\displaystyle T} is the temperature, k B {\displaystyle k_{\rm {B}}} is the Boltzmann constant; and R ( t ) {\displaystyle \mathbf {R} (t)} is a delta-correlated stationary Gaussian process with zero-mean, called Gaussian white noise, satisfying ⟨ R ( t ) ⟩ = 0 {\displaystyle \left\langle \mathbf {R} (t)\right\rangle =0} ⟨ R ( t ) ⋅ R ( t ′ ) ⟩ = δ ( t − t ′ ) {\displaystyle \left\langle \mathbf {R} (t)\cdot \mathbf {R} (t')\right\rangle =\delta (t-t')} Here, δ {\displaystyle \delta } is the Dirac delta. 
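A minimal numerical check of this equation (not from the article; reduced units with M = γ = kBT = 1 are assumed) integrates the free-particle case U = 0 with the Euler–Maruyama method and verifies that the stationary velocity variance approaches kBT/M, as required by equipartition:

```python
import numpy as np

rng = np.random.default_rng(2)
gamma, kT, M = 1.0, 1.0, 1.0   # reduced units (assumed, not from the article)
dt, n_steps, burn = 0.01, 500_000, 10_000

# Euler-Maruyama for M dv = -γ M v dt + sqrt(2 M γ kB T) dW, free particle (U = 0)
amp = np.sqrt(2 * gamma * kT / M)            # noise amplitude on the velocity
noise = rng.normal(0.0, np.sqrt(dt), size=n_steps)
v, v2_sum = 0.0, 0.0
for i in range(n_steps):
    v += -gamma * v * dt + amp * noise[i]
    if i >= burn:                            # discard the initial transient
        v2_sum += v * v

v2_mean = v2_sum / (n_steps - burn)
print(v2_mean)  # ≈ kB*T / M = 1 by equipartition (up to O(dt) discretization bias)
```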
=== Stochastic Differential Formulation === Considering the covariance of the standard Brownian motion (Wiener process) W t {\displaystyle W_{t}} , we find that E ( W t W τ ) = min ( t , τ ) {\displaystyle \mathbb {E} (W_{t}W_{\tau })=\min(t,\tau )} The covariance of its formal derivative is then E ( W t ˙ W τ ˙ ) = ∂ ∂ t ∂ ∂ τ E ( W t W τ ) = ∂ ∂ t ∂ ∂ τ min ( t , τ ) = δ ( t − τ ) {\displaystyle \mathbb {E} ({\dot {W_{t}}}{\dot {W_{\tau }}})={\frac {\partial }{\partial t}}{\frac {\partial }{\partial \tau }}\mathbb {E} (W_{t}W_{\tau })={\frac {\partial }{\partial t}}{\frac {\partial }{\partial \tau }}\min(t,\tau )=\delta (t-\tau )} So, in the sense of covariance, we can write d W t = R ( t ) d t {\displaystyle {\rm {d}}W_{t}=\mathbf {R} (t){\rm {d}}t} Without loss of generality, let the mass M = 1 {\displaystyle M=1} and set σ = M γ k B T {\displaystyle \sigma ={\sqrt {M\gamma k_{\rm {B}}T}}} ; the original SDE then becomes d X ˙ = − ∇ U ( X ) d t − γ d X + 2 σ d W ( t ) {\displaystyle {\rm {d}}{\dot {\mathbf {X} }}=-\nabla U(\mathbf {X} ){\rm {d}}t-\gamma {\rm {d}}{\mathbf {X} }+{\sqrt {2}}\sigma {\rm {d}}\mathbf {W} (t)} === Overdamped Langevin dynamics === If the main objective is to control temperature, care should be exercised to use a small damping constant γ {\displaystyle \gamma } . As γ {\displaystyle \gamma } grows, the dynamics spans from the inertial all the way to the diffusive (Brownian) regime. The non-inertial limit of Langevin dynamics is commonly described as Brownian dynamics. Brownian dynamics can be considered as overdamped Langevin dynamics, i.e. Langevin dynamics where no average acceleration takes place.
In this limit we have d X ˙ = 0 {\displaystyle {\rm {d}}{\dot {X}}=0} , and the original SDE becomes d X = − 1 γ ∇ U ( X ) d t + 2 σ γ d W ( t ) {\displaystyle {\rm {d}}{\mathbf {X} }=-{\frac {1}{\gamma }}\nabla U(\mathbf {X} ){\rm {d}}t+{\frac {{\sqrt {2}}\sigma }{\gamma }}{\rm {d}}\mathbf {W} (t)} The translational Langevin equation can be solved using various numerical methods, which differ in the sophistication of analytical solutions, the allowed time-steps, time-reversibility (symplectic methods), behavior in the limit of zero friction, etc. The Langevin equation can be generalized to rotational dynamics of molecules, Brownian particles, etc. A standard way (according to NIST) is to leverage a quaternion-based description of the stochastic rotational motion. == Applications == === Langevin thermostat === The Langevin thermostat is a type of thermostat algorithm in molecular dynamics, used to simulate a canonical ensemble (NVT) at a desired temperature. It integrates the following Langevin equation of motion: M X ¨ = − ∇ U ( X ) − γ X ˙ + 2 γ k B T R ( t ) {\displaystyle M{\ddot {\mathbf {X} }}=-\nabla U(\mathbf {X} )-\gamma {\dot {\mathbf {X} }}+{\sqrt {2\gamma k_{B}T}}{\textbf {R}}(t)} − ∇ U ( X ) {\displaystyle -\nabla U(\mathbf {X} )} is the deterministic force term; γ {\displaystyle \gamma } is the friction coefficient and γ X ˙ {\displaystyle \gamma {\dot {X}}} is the friction or damping term; the last term is the random force term ( k B {\displaystyle k_{B}} : Boltzmann constant, T {\displaystyle T} : temperature). This equation allows the system to couple with an imaginary "heat bath": kinetic energy of the system is dissipated through the friction/damping term and replenished by the random force/fluctuation; the strength of the coupling is controlled by γ {\displaystyle \gamma } .
This equation can be simulated with SDE solvers such as the Euler–Maruyama method, where the random force term is replaced by a Gaussian random number in every integration step (variance σ 2 = 2 γ k B T / Δ t {\displaystyle \sigma ^{2}=2\gamma k_{B}T/\Delta t} , Δ t {\displaystyle \Delta t} : time step), or Langevin leapfrog integration, etc. Such a method is also known as a Langevin integrator. === Langevin Monte Carlo === The overdamped Langevin equation gives d x t = − D k B T ∇ x U ( x t ) d t + 2 D d W t {\displaystyle {\rm {d}}\mathbf {x} _{t}=-{\frac {D}{k_{B}T}}\nabla _{\mathbf {x} }U(\mathbf {x} _{t}){\rm {d}}t+{\sqrt {2D}}{\rm {d}}W_{t}} Here, D = k B T / γ {\displaystyle D=k_{B}T/\gamma } is the diffusion coefficient from the Einstein relation. As can be shown with the Fokker–Planck equation, under appropriate conditions the stationary distribution of x t {\displaystyle \mathbf {x} _{t}} is the Boltzmann distribution p ( x ) ∝ e − U ( x ) / k B T {\displaystyle p(\mathbf {x} )\propto e^{-U(\mathbf {x} )/k_{B}T}} . Since ∇ log ⁡ p ( x ) = − ∇ U ( x ) / k B T {\displaystyle \nabla \log p(\mathbf {x} )=-\nabla U(\mathbf {x} )/k_{B}T} , this equation is equivalent to the following form: d x t = ϵ ∇ x log ⁡ p ( x t ) d t + 2 ϵ d W t {\displaystyle {\rm {d}}\mathbf {x} _{t}=\epsilon \nabla _{\mathbf {x} }\log p(\mathbf {x} _{t}){\rm {d}}t+{\sqrt {2\epsilon }}{\rm {d}}W_{t}} And the distribution of x t ( t → ∞ ) {\displaystyle \mathbf {x} _{t}(t\to \infty )} follows p ( x ) {\displaystyle p(\mathbf {x} )} . In other words, Langevin dynamics drives particles towards a stationary distribution p ( x ) {\displaystyle p(\mathbf {x} )} along a gradient flow, due to the ∇ log ⁡ p ( x ) {\displaystyle \nabla \log p(\mathbf {x} )} term, while still allowing for some random fluctuations.
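As a concrete sketch (illustrative only; the step size, target distribution, and chain length are my choices, not from the article), the discretized overdamped dynamics can be used to sample a standard normal target p(x) ∝ exp(−x²/2), whose score is ∇log p(x) = −x:

```python
import numpy as np

rng = np.random.default_rng(3)

def grad_log_p(x):
    # Score of the target N(0, 1): ∇ log p(x) = -x
    return -x

eps, n_steps, burn = 0.01, 200_000, 20_000
x = 0.0
samples = np.empty(n_steps)
step_noise = rng.normal(0.0, 1.0, size=n_steps)
for i in range(n_steps):
    # x_{i+1} = x_i + ε ∇log p(x_i) + sqrt(2ε) z_i  (discretized overdamped Langevin)
    x = x + eps * grad_log_p(x) + np.sqrt(2 * eps) * step_noise[i]
    samples[i] = x

mean_est = samples[burn:].mean()
var_est = samples[burn:].var()
print(mean_est, var_est)  # ≈ 0 and ≈ 1 for the N(0,1) target
```

Because the update uses only the gradient of log p, the same iteration works for any target density known up to a normalizing constant.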
This provides a Markov chain Monte Carlo method that can be used to sample data x {\displaystyle \mathbf {x} } from a target distribution p ( x ) {\displaystyle p(\mathbf {x} )} , known as Langevin Monte Carlo. In many applications, we have a desired distribution p ( x ) {\displaystyle p(\mathbf {x} )} from which we would like to sample x {\displaystyle \mathbf {x} } , but direct sampling might be challenging or inefficient. Langevin Monte Carlo offers another way to sample x ∼ p ( x ) {\displaystyle \mathbf {x} \sim p(\mathbf {x} )} by sampling a Markov chain in accordance with the Langevin dynamics whose stationary state is p ( x ) {\displaystyle p(\mathbf {x} )} . The Metropolis-adjusted Langevin algorithm (MALA) is an example: given a current state x t {\displaystyle \mathbf {x} _{t}} , the MALA method proposes a new state x ~ t + 1 {\displaystyle {\tilde {x}}_{t+1}} using the Langevin dynamics above. The proposal is then accepted or rejected based on the Metropolis–Hastings algorithm. The incorporation of the Langevin dynamics in the choice of x ~ t + 1 {\displaystyle {\tilde {x}}_{t+1}} provides greater computational efficiency, since the dynamics drive the particles into regions of higher p ( x ) {\displaystyle p(\mathbf {x} )} probability, which are thus more likely to be accepted. Read more in Metropolis-adjusted Langevin algorithm. === Score-based generative model === Langevin dynamics is one of the foundations of score-based generative models. From (overdamped) Langevin dynamics, d x t = ϵ ∇ x log ⁡ p ( x t ) d t + 2 ϵ d W t {\displaystyle {\rm {d}}\mathbf {x} _{t}=\epsilon \nabla _{\mathbf {x} }\log p(\mathbf {x} _{t}){\rm {d}}t+{\sqrt {2\epsilon }}{\rm {d}}W_{t}} A generative model aims to generate samples that follow the (unknown) data distribution p ( x ) {\displaystyle p(\mathbf {x} )} .
To achieve that, a score-based model learns an approximate score function s θ ( x ) ≈ ∇ x log ⁡ p ( x ) {\displaystyle \mathbf {s} _{\theta }(\mathbf {x} )\approx \nabla _{\mathbf {x} }\log p(\mathbf {x} )} (a process called score matching). With access to a score function, samples are generated by the following iteration, x i + 1 ← x i + ϵ ∇ x log ⁡ p ( x i ) + 2 ϵ z i , i = 0 , 1 , ⋯ , K {\displaystyle \mathbf {x} _{i+1}\gets \mathbf {x} _{i}+\epsilon \nabla _{\mathbf {x} }\log p(\mathbf {x} _{i})+{\sqrt {2\epsilon }}\mathbf {z} _{i},\quad i=0,1,\cdots ,K} with z i ∼ N ( 0 , 1 ) {\displaystyle \mathbf {z} _{i}\sim N(0,1)} . As ϵ → 0 {\displaystyle \epsilon \to 0} and K → ∞ {\displaystyle K\to \infty } , the generated x K {\displaystyle \mathbf {x} _{K}} converges to the target distribution p ( x ) {\displaystyle p(\mathbf {x} )} . Score-based models use s θ ( x ) ≈ ∇ x log ⁡ p ( x ) {\displaystyle \mathbf {s} _{\theta }(\mathbf {x} )\approx \nabla _{\mathbf {x} }\log p(\mathbf {x} )} as an approximation. == Relation to Other Theories == === Klein–Kramers equation === As a stochastic differential equation (SDE), the Langevin dynamics equation has a corresponding partial differential equation (PDE), the Klein–Kramers equation, a special Fokker–Planck equation that governs the probability distribution of the particles in phase space. The original Langevin dynamics equation can be reformulated as the following first-order SDEs: d X = P d t {\displaystyle {\rm {d}}\mathbf {X} =\mathbf {P} {\rm {d}}t} d P = − γ P d t − ∇ U ( X ) d t + 2 σ d W ( t ) {\displaystyle {\rm {d}}\mathbf {P} =-\gamma \mathbf {P} {\rm {d}}t-\nabla U(\mathbf {X} ){\rm {d}}t+{\sqrt {2}}\sigma {\rm {d}}\mathbf {W} (t)} Now consider the following cases and the corresponding law of ( X , P ) {\displaystyle (\mathbf {X} ,\mathbf {P} )} : 1.
d X = P d t , d P = − γ P d t − ∇ U ( X ) d t + 2 σ d W ( t ) {\displaystyle \mathbf {{\rm {d}}{X}} =\mathbf {P} {\rm {d}}t,\mathbf {{\rm {d}}{P}} =-\gamma \mathbf {P} {\rm {d}}t-\nabla U(\mathbf {X} ){\rm {d}}t+{\sqrt {2}}\sigma {\rm {d}}\mathbf {W} (t)} with ( X 0 , P 0 ) ∼ ρ 0 {\displaystyle (\mathbf {X} _{0},\mathbf {P} _{0})\sim \rho _{0}} 2. ∂ ρ ∂ t = − P ∇ X ρ + ∇ P ( γ P ρ + ∇ X U ( X ) ρ ) + ∇ P 2 ( σ T 2 ρ ) {\displaystyle {\frac {\partial \rho }{\partial t}}=-\mathbf {P} \nabla _{\mathbf {X} }\rho +\nabla _{\mathbf {P} }(\gamma \mathbf {P} \rho +\nabla _{\mathbf {X} }U(\mathbf {X} )\rho )+\nabla _{\mathbf {P} }^{2}(\sigma _{T}^{2}\rho )} with ρ ( t = 0 , X , P ) = ρ 0 {\displaystyle \rho (t=0,\mathbf {X} ,\mathbf {P} )=\rho _{0}} Consider a general function of momentum and position Ψ t = Ψ ( X , P ) {\displaystyle \Psi _{t}=\Psi (\mathbf {X} ,\mathbf {P} )} . The expectation value of the function is E [ Ψ t ] = ∫ ρ ( t , X , P ) Ψ ( X , P ) d P d X {\displaystyle \mathbb {E} [\Psi _{t}]=\int \rho (t,\mathbf {X} ,\mathbf {P} )\Psi (\mathbf {X} ,\mathbf {P} ){\rm {d}}\mathbf {P} {\rm {d}}\mathbf {X} } Taking the derivative with respect to time t {\displaystyle t} and applying Itô's formula, we have E [ d d t Ψ ( X , P ) ] = E [ ∇ X Ψ d X d t + ∇ P Ψ d P d t + σ T 2 ∇ P 2 Ψ 1 d t ( d W ( t ) ) 2 ] {\displaystyle \mathbb {E} [{\frac {\rm {d}}{{\rm {d}}t}}\Psi (\mathbf {X} ,\mathbf {P} )]=\mathbb {E} [\nabla _{\mathbf {X} }\Psi {\frac {{\rm {d}}\mathbf {X} }{{\rm {d}}t}}+\nabla _{\mathbf {P} }\Psi {\frac {{\rm {d}}\mathbf {P} }{{\rm {d}}t}}+\sigma _{T}^{2}\nabla _{\mathbf {P} }^{2}\Psi {\frac {1}{{\rm {d}}t}}({\rm {d}}\mathbf {W} (t))^{2}]} which can be simplified to ∫ ( ∂ ∂ t ρ ) Ψ ( X , P ) d X d P = E [ ( ∇ X Ψ ) P + ∇ P Ψ ( − γ P − ∇ X U ( X ) ) + σ T 2 ∇ P 2 Ψ ] {\displaystyle \int ({\frac {\partial }{\partial t}}\rho )\Psi (\mathbf {X} ,\mathbf {P} ){\rm {d}}\mathbf {X} {\rm {d}}\mathbf {P} =\mathbb {E} [(\nabla _{\mathbf {X} }\Psi )\mathbf {P} +\nabla _{\mathbf {P} }\Psi (-\gamma \mathbf {P} -\nabla _{\mathbf {X} }U(\mathbf {X} ))+\sigma _{T}^{2}\nabla _{\mathbf {P} }^{2}\Psi ]} Integrating by parts on the right-hand side, and using that the density vanishes for infinite momentum or velocity, we have ∫ ( ∂ ∂ t ρ ) Ψ ( X , P ) d X d P = ∫ ( − P ∇ X ρ + ∇ P ( γ P ρ + ∇ X U ( X ) ρ ) + ∇ P 2 ( σ T 2 ρ ) ) Ψ ( X , P ) d X d P {\displaystyle \int ({\frac {\partial }{\partial t}}\rho )\Psi (\mathbf {X} ,\mathbf {P} ){\rm {d}}\mathbf {X} {\rm {d}}\mathbf {P} =\int (-\mathbf {P} \nabla _{\mathbf {X} }\rho +\nabla _{\mathbf {P} }(\gamma \mathbf {P} \rho +\nabla _{\mathbf {X} }U(\mathbf {X} )\rho )+\nabla _{\mathbf {P} }^{2}(\sigma _{T}^{2}\rho ))\Psi (\mathbf {X} ,\mathbf {P} ){\rm {d}}\mathbf {X} {\rm {d}}\mathbf {P} } This equation holds for arbitrary Ψ {\displaystyle \Psi } , so we require the density to satisfy ∂ ρ ∂ t = − P ∇ X ρ + ∇ P ( γ P ρ + ∇ X U ( X ) ρ ) + ∇ P 2 ( σ T 2 ρ ) {\displaystyle {\frac {\partial \rho }{\partial t}}=-\mathbf {P} \nabla _{\mathbf {X} }\rho +\nabla _{\mathbf {P} }(\gamma \mathbf {P} \rho +\nabla _{\mathbf {X} }U(\mathbf {X} )\rho )+\nabla _{\mathbf {P} }^{2}(\sigma _{T}^{2}\rho )} This equation is called the Klein–Kramers equation, a special version of the Fokker–Planck equation. It is a partial differential equation that describes the evolution of the probability density of the system in phase space. === Fokker–Planck equation === For the overdamped limit, we have d P = 0 {\displaystyle {\rm {d}}\mathbf {P} =0} , so the evolution of the system can be reduced to the position subspace.
Following similar logic, one can show that the SDE for position, d X = − 1 γ ∇ U ( X ) d t + 2 σ γ R ( t ) d t {\displaystyle {\rm {d}}\mathbf {X} =-{\frac {1}{\gamma }}\nabla U(\mathbf {X} ){\rm {d}}t+{\sqrt {2}}{\frac {\sigma }{\gamma }}\mathbf {R} (t){\rm {d}}t} corresponds to the Fokker–Planck equation for the probability density ∂ ρ ( t , X ) ∂ t = ∇ X ( 1 γ ∇ X U ( X ) ρ ( t , X ) ) + Δ X ( σ 2 γ 2 ρ ( t , X ) ) {\displaystyle {\frac {\partial \rho (t,\mathbf {X} )}{\partial t}}=\nabla _{\mathbf {X} }({\frac {1}{\gamma }}\nabla _{\mathbf {X} }U(\mathbf {X} )\rho (t,\mathbf {X} ))+\Delta _{\mathbf {X} }({\frac {\sigma ^{2}}{\gamma ^{2}}}\rho (t,\mathbf {X} ))} === Fluctuation-dissipation theorem === Consider the Langevin dynamics of a free particle (i.e. U ( X ) = 0 {\displaystyle U(\mathbf {X} )=0} ); the equation for the momentum then becomes d P = − γ P d t + 2 σ d W t {\displaystyle {\rm {d}}\mathbf {P} =-\gamma \mathbf {P} {\rm {d}}t+{\sqrt {2}}\sigma {\rm {d}}\mathbf {W} _{t}} The analytical solution to this SDE is P = P 0 e − γ t + 2 σ ∫ 0 t e − γ ( t − t ′ ) d W t ′ {\displaystyle \mathbf {P} =\mathbf {P} _{0}{\rm {e}}^{-\gamma t}+{\sqrt {2}}\sigma \int _{0}^{t}{\rm {e}}^{-\gamma (t-t')}{\rm {d}}\mathbf {W} _{t'}} thus the second moment of the momentum becomes (applying the Itô isometry) E ( P 2 ) = P 0 2 e − 2 γ t + σ 2 γ ( 1 − e − 2 γ t ) → t → ∞ σ 2 γ {\displaystyle \mathbb {E} (\mathbf {P} ^{2})=\mathbf {P} _{0}^{2}{\rm {e}}^{-2\gamma t}+{\frac {\sigma ^{2}}{\gamma }}(1-{\rm {e}}^{-2\gamma t}){\overset {t\to \infty }{\to }}{\frac {\sigma ^{2}}{\gamma }}} That is, in the long-time limit, the momentum fluctuation of the system is related to its energy dissipation (the friction parameter γ {\displaystyle \gamma } ).
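This can be checked by direct simulation (an illustrative sketch; all parameters are arbitrary choices). Integrating the free-particle momentum SDE dP = −γP dt + √2 σ dW from the first-order reformulation above with Euler–Maruyama over many independent paths, the empirical second moment should match P₀²e^(−2γt) + (σ²/γ)(1 − e^(−2γt)):

```python
import numpy as np

rng = np.random.default_rng(6)
gamma, sigma, p0 = 1.0, 1.0, 2.0
t_end, dt, n_paths = 0.5, 0.001, 20_000
n = int(t_end / dt)

# Euler-Maruyama for dP = -γ P dt + sqrt(2) σ dW, vectorized over paths
P = np.full(n_paths, p0)
for _ in range(n):
    P += -gamma * P * dt + np.sqrt(2) * sigma * np.sqrt(dt) * rng.normal(size=n_paths)

predicted = p0 ** 2 * np.exp(-2 * gamma * t_end) + (sigma ** 2 / gamma) * (
    1 - np.exp(-2 * gamma * t_end)
)
second_moment = (P ** 2).mean()
print(second_moment, predicted)  # the two values nearly agree
```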
Combining this result with the equipartition theorem, which relates the average kinetic energy of the particles to the temperature via ⟨ v 2 ⟩ = k B T {\displaystyle \langle v^{2}\rangle =k_{\rm {B}}T} , we can determine the value of the noise amplitude σ {\displaystyle \sigma } in applications like the Langevin thermostat: σ 2 / γ = k B T → σ = k B T γ {\displaystyle \sigma ^{2}/\gamma =k_{B}T\to \sigma ={\sqrt {k_{\rm {B}}T\gamma }}} This is consistent with the original definition assuming M = 1 {\displaystyle M=1} . === Path Integral === The path integral formulation originates in quantum mechanics, but for a Langevin SDE we can also derive a corresponding path integral. Consider the following overdamped Langevin equation, where without loss of generality we take γ = σ = 1 {\displaystyle \gamma =\sigma =1} : d X = − ∇ U ( X ) d t + 2 d W t {\displaystyle {\rm {d}}{X}=-\nabla U({X}){\rm {d}}t+{\sqrt {2}}{\rm {d}}W_{t}} Discretizing with t n = n Δ t {\displaystyle t_{n}=n\Delta t} , we get X n + 1 − X n + ∇ U ( X ) Δ t = 2 ( W t n − W t n − 1 ) ∼ N ( 0 , 2 Δ t ) {\displaystyle {X}_{n+1}-{X}_{n}+\nabla U({X})\Delta t={\sqrt {2}}(W_{t_{n}}-W_{t_{n-1}})\sim {\mathcal {N}}(0,2\Delta t)} Therefore, the propagation probability is P ( X n + 1 | X n ) = ∫ d ξ 1 2 π Δ t e − ξ 2 4 Δ t δ ( X n + 1 − X n + ∇ U ( X ) Δ t − ξ ) {\displaystyle P({X}_{n+1}|{X}_{n})=\int {\rm {d}}\xi {\frac {1}{2{\sqrt {\pi \Delta t}}}}{\rm {e}}^{-{\frac {\xi ^{2}}{4\Delta t}}}\delta ({X}_{n+1}-{X}_{n}+\nabla U({X})\Delta t-\xi )} Applying the Fourier transform of the delta function, we get P = ∫ d k 2 π e i k ( X n + 1 − X n + ∇ U ( X ) Δ t ) ∫ d ξ 1 2 π Δ t e − ξ 2 4 Δ t e − i k ξ {\displaystyle P=\int {\frac {{\rm {d}}k}{2\pi }}{\rm {e}}^{{\rm {i}}k({X}_{n+1}-{X}_{n}+\nabla U({X})\Delta t)}\int {\rm {d}}\xi {\frac {1}{2{\sqrt {\pi \Delta t}}}}{\rm {e}}^{-{\frac {\xi ^{2}}{4\Delta t}}}{\rm {e}}^{-{\rm {i}}k\xi }} The second factor is a Gaussian integral, which yields P = ∫ d k 2 π e i k ( X n + 1 − X n + ∇ U ( X ) Δ t ) e − k 2 Δ t {\displaystyle P=\int {\frac {{\rm {d}}k}{2\pi }}{\rm {e}}^{{\rm {i}}k({X}_{n+1}-{X}_{n}+\nabla U({X})\Delta t)}{\rm {e}}^{-k^{2}\Delta t}} Now consider the probability of going from the initial X 0 {\displaystyle X_{0}} to the final X n {\displaystyle X_{n}} : P ( X n | X 0 ) = ∫ 1 2 π ∏ i N − 1 d k i e ( i k i ( X ˙ + ∇ U ( X ) ) − k i 2 ) Δ t {\displaystyle P(\mathbf {X} _{n}|\mathbf {X} _{0})=\int {\frac {1}{2\pi }}\prod _{i}^{N-1}{\rm {d}}k_{i}{\rm {e}}^{({\rm {i}}k_{i}({\dot {X}}+\nabla U(X))-k_{i}^{2})\Delta t}} Taking the limit Δ t → 0 {\displaystyle \Delta t\to 0} , we get P ( X n | X 0 ) = ∫ D [ k ] e ∫ 0 t n ( i k ( X ˙ + ∇ U ( X ) ) − k 2 ) d t {\displaystyle P(\mathbf {X} _{n}|\mathbf {X} _{0})=\int {\mathcal {D}}[k]{\rm {e}}^{\int _{0}^{t_{n}}({\rm {i}}k({\dot {X}}+\nabla U(X))-k^{2}){\rm {d}}t}} == See also == Hamiltonian mechanics Statistical mechanics Implicit solvation Stochastic differential equations Langevin equation Langevin Monte Carlo Klein–Kramers equation == References == == External links == Langevin Dynamics (LD) Simulation
Wikipedia/Langevin_dynamics
In mathematics of stochastic systems, the Runge–Kutta method is a technique for the approximate numerical solution of a stochastic differential equation. It is a generalisation of the Runge–Kutta method for ordinary differential equations to stochastic differential equations (SDEs). Importantly, the method does not involve knowing derivatives of the coefficient functions in the SDEs. == Most basic scheme == Consider the Itō diffusion X {\displaystyle X} satisfying the following Itō stochastic differential equation d X t = a ( X t ) d t + b ( X t ) d W t , {\displaystyle dX_{t}=a(X_{t})\,dt+b(X_{t})\,dW_{t},} with initial condition X 0 = x 0 {\displaystyle X_{0}=x_{0}} , where W t {\displaystyle W_{t}} stands for the Wiener process, and suppose that we wish to solve this SDE on some interval of time [ 0 , T ] {\displaystyle [0,T]} . Then the basic Runge–Kutta approximation to the true solution X {\displaystyle X} is the Markov chain Y {\displaystyle Y} defined as follows: partition the interval [ 0 , T ] {\displaystyle [0,T]} into N {\displaystyle N} subintervals of width δ = T / N > 0 {\displaystyle \delta =T/N>0} : 0 = τ 0 < τ 1 < ⋯ < τ N = T ; {\displaystyle 0=\tau _{0}<\tau _{1}<\dots <\tau _{N}=T;} set Y 0 := x 0 {\displaystyle Y_{0}:=x_{0}} ; recursively compute Y n {\displaystyle Y_{n}} for 1 ≤ n ≤ N {\displaystyle 1\leq n\leq N} by Y n + 1 := Y n + a ( Y n ) δ + b ( Y n ) Δ W n + 1 2 ( b ( Υ ^ n ) − b ( Y n ) ) ( ( Δ W n ) 2 − δ ) δ − 1 / 2 , {\displaystyle Y_{n+1}:=Y_{n}+a(Y_{n})\delta +b(Y_{n})\Delta W_{n}+{\frac {1}{2}}\left(b({\hat {\Upsilon }}_{n})-b(Y_{n})\right)\left((\Delta W_{n})^{2}-\delta \right)\delta ^{-1/2},} where Δ W n = W τ n + 1 − W τ n {\displaystyle \Delta W_{n}=W_{\tau _{n+1}}-W_{\tau _{n}}} and Υ ^ n = Y n + a ( Y n ) δ + b ( Y n ) δ 1 / 2 . 
{\displaystyle {\hat {\Upsilon }}_{n}=Y_{n}+a(Y_{n})\delta +b(Y_{n})\delta ^{1/2}.} The random variables Δ W n {\displaystyle \Delta W_{n}} are independent and identically distributed normal random variables with expected value zero and variance δ {\displaystyle \delta } . This scheme has strong order 1, meaning that, at a fixed time, the error between the approximation and the actual solution scales with the time step δ {\displaystyle \delta } . It also has weak order 1, meaning that the error on the statistics of the solution scales with the time step δ {\displaystyle \delta } . See the references for complete and exact statements. The functions a {\displaystyle a} and b {\displaystyle b} can be time-varying without any complication. The method can be generalized to the case of several coupled equations; the principle is the same but the equations become longer. == Variation of the Improved Euler is flexible == A newer Runge–Kutta scheme, also of strong order 1, straightforwardly reduces to the improved Euler scheme for deterministic ODEs. Consider the vector stochastic process X → ( t ) ∈ R n {\displaystyle {\vec {X}}(t)\in \mathbb {R} ^{n}} that satisfies the general Ito SDE d X → = a → ( t , X → ) d t + b → ( t , X → ) d W , {\displaystyle d{\vec {X}}={\vec {a}}(t,{\vec {X}})\,dt+{\vec {b}}(t,{\vec {X}})\,dW,} where drift a → {\displaystyle {\vec {a}}} and volatility b → {\displaystyle {\vec {b}}} are sufficiently smooth functions of their arguments.
Given time step h {\displaystyle h} , and given the value X → ( t k ) = X → k {\displaystyle {\vec {X}}(t_{k})={\vec {X}}_{k}} , estimate X → ( t k + 1 ) {\displaystyle {\vec {X}}(t_{k+1})} by X → k + 1 {\displaystyle {\vec {X}}_{k+1}} for time t k + 1 = t k + h {\displaystyle t_{k+1}=t_{k}+h} via K → 1 = h a → ( t k , X → k ) + ( Δ W k − S k h ) b → ( t k , X → k ) , K → 2 = h a → ( t k + 1 , X → k + K → 1 ) + ( Δ W k + S k h ) b → ( t k + 1 , X → k + K → 1 ) , X → k + 1 = X → k + 1 2 ( K → 1 + K → 2 ) , {\displaystyle {\begin{array}{l}{\vec {K}}_{1}=h{\vec {a}}(t_{k},{\vec {X}}_{k})+(\Delta W_{k}-S_{k}{\sqrt {h}}){\vec {b}}(t_{k},{\vec {X}}_{k}),\\{\vec {K}}_{2}=h{\vec {a}}(t_{k+1},{\vec {X}}_{k}+{\vec {K}}_{1})+(\Delta W_{k}+S_{k}{\sqrt {h}}){\vec {b}}(t_{k+1},{\vec {X}}_{k}+{\vec {K}}_{1}),\\{\vec {X}}_{k+1}={\vec {X}}_{k}+{\frac {1}{2}}({\vec {K}}_{1}+{\vec {K}}_{2}),\end{array}}} where Δ W k = h Z k {\displaystyle \Delta W_{k}={\sqrt {h}}Z_{k}} for normal random Z k ∼ N ( 0 , 1 ) {\displaystyle Z_{k}\sim N(0,1)} ; and where S k = ± 1 {\displaystyle S_{k}=\pm 1} , each alternative chosen with probability 1 / 2 {\displaystyle 1/2} . The above describes only one time step. Repeat this time step ( t m − t 0 ) / h {\displaystyle (t_{m}-t_{0})/h} times in order to integrate the SDE from time t = t 0 {\displaystyle t=t_{0}} to t = t m {\displaystyle t=t_{m}} . The scheme integrates Stratonovich SDEs to O ( h ) {\displaystyle O(h)} provided one sets S k = 0 {\displaystyle S_{k}=0} throughout (instead of choosing ± 1 {\displaystyle \pm 1} ). == Higher order Runge-Kutta schemes == Higher-order schemes also exist, but become increasingly complex. Rößler developed many schemes for Ito SDEs, whereas Komori developed schemes for Stratonovich SDEs. 
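As a concrete illustration, the stochastic improved-Euler step above can be sketched for a scalar SDE in Python. This is a minimal sketch under stated assumptions: the function names (`rk_sde_step`, `integrate_sde`) and the test problem are illustrative, not part of the scheme's original presentation.

```python
import math
import random

def rk_sde_step(a, b, t, x, h, rng):
    """One step of the stochastic improved-Euler (Runge-Kutta) scheme,
    strong order 1, for a scalar Ito SDE dX = a(t,X) dt + b(t,X) dW."""
    dW = math.sqrt(h) * rng.gauss(0.0, 1.0)   # Wiener increment ~ N(0, h)
    S = rng.choice((-1.0, 1.0))               # S_k = +1 or -1, each with probability 1/2
    k1 = h * a(t, x) + (dW - S * math.sqrt(h)) * b(t, x)
    k2 = h * a(t + h, x + k1) + (dW + S * math.sqrt(h)) * b(t + h, x + k1)
    return x + 0.5 * (k1 + k2)

def integrate_sde(a, b, x0, t0, tm, n_steps, seed=0):
    """Repeat the step (tm - t0)/h times to integrate from t0 to tm."""
    rng = random.Random(seed)
    h = (tm - t0) / n_steps
    t, x = t0, x0
    for _ in range(n_steps):
        x = rk_sde_step(a, b, t, x, h, rng)
        t += h
    return x
```

With b ≡ 0 the noise terms cancel and the step reduces exactly to the deterministic improved Euler (Heun) method; for instance, integrating dx/dt = −x from x(0) = 1 to t = 1 converges to e⁻¹.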
Rackauckas extended these schemes to allow for adaptive time stepping via Rejection Sampling with Memory (RSwM), resulting in orders-of-magnitude efficiency increases in practical biological models, along with coefficient optimization for improved stability. == References ==
Wikipedia/Runge–Kutta_method_(SDE)
Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. The name comes from the Monte Carlo Casino in Monaco, where the primary developer of the method, mathematician Stanisław Ulam, was inspired by his uncle's gambling habits. Monte Carlo methods are mainly used in three distinct problem classes: optimization, numerical integration, and generating draws from a probability distribution. They can also be used to model phenomena with significant uncertainty in inputs, such as calculating the risk of a nuclear power plant failure. Monte Carlo methods are often implemented using computer simulations, and they can provide approximate solutions to problems that are otherwise intractable or too complex to analyze mathematically. Monte Carlo methods are widely used in various fields of science, engineering, and mathematics, such as physics, chemistry, biology, statistics, artificial intelligence, finance, and cryptography. They have also been applied to social sciences, such as sociology, psychology, and political science. Monte Carlo methods have been recognized as one of the most important and influential ideas of the 20th century, and they have enabled many scientific and technological breakthroughs. Monte Carlo methods also have some limitations and challenges, such as the trade-off between accuracy and computational cost, the curse of dimensionality, the reliability of random number generators, and the verification and validation of the results. == Overview == Monte Carlo methods vary, but tend to follow a particular pattern: Define a domain of possible inputs. Generate inputs randomly from a probability distribution over the domain. Perform a deterministic computation of the outputs. Aggregate the results. 
For example, consider a quadrant (circular sector) inscribed in a unit square. Given that the ratio of their areas is ⁠π/4⁠, the value of π can be approximated using the Monte Carlo method: Draw a square, then inscribe a quadrant within it. Uniformly scatter a given number of points over the square. Count the number of points inside the quadrant, i.e. having a distance from the origin of less than 1. The ratio of the inside-count and the total-sample-count is an estimate of the ratio of the two areas, ⁠π/4⁠. Multiply the result by 4 to estimate π. In this procedure, the domain of inputs is the square that circumscribes the quadrant. One can generate random inputs by scattering grains over the square, then performing a computation on each input to test whether it falls within the quadrant. Aggregating the results yields our final result, the approximation of π. There are two important considerations: If the points are not uniformly distributed, the approximation will be poor. The approximation improves as more points are randomly placed in the whole square. Uses of Monte Carlo methods require large amounts of random numbers, and their use benefitted greatly from pseudorandom number generators, which are far quicker to use than the tables of random numbers that had been previously employed. == Application == Monte Carlo methods are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution. In physics-related problems, Monte Carlo methods are useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model, interacting particle systems, McKean–Vlasov processes, kinetic models of gases). 
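The quadrant-sampling procedure described above is short enough to sketch directly. This is a minimal sketch in Python; the function name is illustrative.

```python
import random

def estimate_pi(n_points, seed=0):
    """Approximate pi by scattering points uniformly over the unit square
    and counting the fraction that falls inside the inscribed quadrant."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_points):
        x, y = rng.random(), rng.random()
        if x * x + y * y < 1.0:   # distance from the origin less than 1
            inside += 1
    return 4.0 * inside / n_points   # ratio of areas is pi/4
```

The error of the estimate shrinks roughly as 1/√n, which illustrates the trade-off between accuracy and sample count noted above.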
Other examples include modeling phenomena with significant uncertainty in inputs such as the calculation of risk in business and, in mathematics, evaluation of multidimensional definite integrals with complicated boundary conditions. In application to systems engineering problems (space, oil exploration, aircraft design, etc.), Monte Carlo–based predictions of failure, cost overruns and schedule overruns are routinely better than human intuition or alternative "soft" methods. In principle, Monte Carlo methods can be used to solve any problem having a probabilistic interpretation. By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean (a.k.a. the 'sample mean') of independent samples of the variable. When the probability distribution of the variable is parameterized, mathematicians often use a Markov chain Monte Carlo (MCMC) sampler. The central idea is to design a judicious Markov chain model with a prescribed stationary probability distribution. That is, in the limit, the samples being generated by the MCMC method will be samples from the desired (target) distribution. By the ergodic theorem, the stationary distribution is approximated by the empirical measures of the random states of the MCMC sampler. In other problems, the objective is generating draws from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability distributions can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states (see McKean–Vlasov processes, nonlinear filtering equation). In other instances, a flow of probability distributions with an increasing level of sampling complexity arise (path spaces models with an increasing time horizon, Boltzmann–Gibbs measures associated with decreasing temperature parameters, and many others). 
These models can also be seen as the evolution of the law of the random states of a nonlinear Markov chain. A natural way to simulate these sophisticated nonlinear Markov processes is to sample multiple copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and MCMC methodologies, these mean-field particle techniques rely on sequential interacting samples. The terminology mean field reflects the fact that each of the samples (a.k.a. particles, individuals, walkers, agents, creatures, or phenotypes) interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes. == Simple Monte Carlo == Suppose one wants to know the expected value μ {\displaystyle \mu } of a population (and knows that μ {\displaystyle \mu } exists), but does not have a formula available to compute it. The simple Monte Carlo method gives an estimate for μ {\displaystyle \mu } by running n {\displaystyle n} simulations and averaging the simulations' results. It has no restrictions on the probability distribution of the inputs to the simulations, requiring only that the inputs are randomly generated and are independent of each other and that μ {\displaystyle \mu } exists. A sufficiently large n {\displaystyle n} will produce a value for m {\displaystyle m} that is arbitrarily close to μ {\displaystyle \mu } ; more formally, by the law of large numbers, for any ϵ > 0 {\displaystyle \epsilon >0} , the probability that | μ − m | ≤ ϵ {\displaystyle |\mu -m|\leq \epsilon } tends to one as n {\displaystyle n} grows.
Typically, the algorithm to obtain m {\displaystyle m} is

s = 0;
for i = 1 to n do
 run the simulation for the ith time, giving result ri;
 s = s + ri;
repeat
m = s / n;

=== An example === Suppose we want to know how many times we should expect to throw three eight-sided dice for the total of the dice throws to be at least T {\displaystyle T} . We know the expected value exists. The dice throws are randomly distributed and independent of each other. So simple Monte Carlo is applicable:

s = 0;
for i = 1 to n do
 throw the three dice until T is met or first exceeded;
 ri = the number of throws;
 s = s + ri;
repeat
m = s / n;

If n {\displaystyle n} is large enough then, for any fixed ϵ > 0 {\displaystyle \epsilon >0} , m {\displaystyle m} will be within ϵ {\displaystyle \epsilon } of μ {\displaystyle \mu } with high probability. === Determining a sufficiently large n === ==== General formula ==== Let ϵ = | μ − m | > 0 {\displaystyle \epsilon =|\mu -m|>0} . Choose the desired confidence level – the percent chance that, when the Monte Carlo algorithm completes, m {\displaystyle m} is indeed within ϵ {\displaystyle \epsilon } of μ {\displaystyle \mu } . Let z {\displaystyle z} be the z {\displaystyle z} -score corresponding to that confidence level. Let s 2 {\displaystyle s^{2}} be the estimated variance, sometimes called the “sample” variance; it is the variance of the results obtained from a relatively small number k {\displaystyle k} of “sample” simulations. Choose a k {\displaystyle k} ; Driels and Shin observe that “even for sample sizes an order of magnitude lower than the number required, the calculation of that number is quite stable.”
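The dice example can be turned into runnable code. This is a minimal sketch in Python; the function names are illustrative.

```python
import random

def throws_needed(T, rng):
    """Throw three eight-sided dice repeatedly until the running total of
    all throws is at least T; return how many throws that took."""
    total, throws = 0, 0
    while total < T:
        total += sum(rng.randint(1, 8) for _ in range(3))  # one throw of three dice
        throws += 1
    return throws

def simple_monte_carlo(n, T, seed=0):
    """Average the results of n independent simulations, as in the
    pseudocode above."""
    rng = random.Random(seed)
    return sum(throws_needed(T, rng) for _ in range(n)) / n
```

Since one throw of three eight-sided dice averages 13.5, for T = 100 the estimate settles near 8 throws.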
The following algorithm computes s 2 {\displaystyle s^{2}} in one pass while minimizing the possibility that accumulated numerical error produces erroneous results:

s1 = 0;
run the simulation for the first time, producing result r1;
m1 = r1; // mi is the mean of the first i simulations
for i = 2 to k do
 run the simulation for the ith time, producing result ri;
 δi = ri - mi-1;
 mi = mi-1 + (1/i)δi;
 si = si-1 + ((i - 1)/i)(δi)²;
repeat
s² = sk/(k - 1);

Note that, when the algorithm completes, m k {\displaystyle m_{k}} is the mean of the k {\displaystyle k} results. The value n {\displaystyle n} is sufficiently large when n ≥ s 2 z 2 / ϵ 2 . {\displaystyle n\geq s^{2}z^{2}/\epsilon ^{2}.} If n ≤ k {\displaystyle n\leq k} , then m k = m {\displaystyle m_{k}=m} ; sufficient sample simulations were done to ensure that m k {\displaystyle m_{k}} is within ϵ {\displaystyle \epsilon } of μ {\displaystyle \mu } . If n > k {\displaystyle n>k} , then n {\displaystyle n} simulations can be run “from scratch,” or, since k {\displaystyle k} simulations have already been done, one can just run n − k {\displaystyle n-k} more simulations and add their results into those from the sample simulations:

s = mk * k;
for i = k + 1 to n do
 run the simulation for the ith time, giving result ri;
 s = s + ri;
repeat
m = s / n;

==== A formula when simulations' results are bounded ==== An alternative formula can be used in the special case where all simulation results are bounded above and below. Choose a value for ϵ {\displaystyle \epsilon } that is twice the maximum allowed difference between μ {\displaystyle \mu } and m {\displaystyle m} . Let 0 < δ < 100 {\displaystyle 0<\delta <100} be the desired confidence level, expressed as a percentage. Let every simulation result r 1 , r 2 , … , r i , … , r n {\displaystyle r_{1},r_{2},\ldots ,r_{i},\ldots ,r_{n}} be such that a ≤ r i ≤ b {\displaystyle a\leq r_{i}\leq b} for finite a {\displaystyle a} and b {\displaystyle b} .
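The one-pass variance recursion and the general-formula criterion n ≥ s²z²/ϵ² can be sketched together in Python. This is a minimal sketch; `one_pass_variance` and `required_n` are illustrative helper names.

```python
import math

def one_pass_variance(results):
    """One-pass (Welford-style) mean and unbiased sample variance,
    mirroring the recursion in the algorithm above."""
    mean, s = 0.0, 0.0
    for i, r in enumerate(results, start=1):
        delta = r - mean                  # delta_i = r_i - m_(i-1)
        mean += delta / i                 # m_i = m_(i-1) + (1/i) delta_i
        s += (i - 1) / i * delta * delta  # s_i = s_(i-1) + ((i-1)/i) delta_i^2
    return mean, s / (len(results) - 1)   # s^2 = s_k / (k - 1)

def required_n(sample_variance, z, eps):
    """Smallest integer n satisfying n >= s^2 z^2 / eps^2."""
    return math.ceil(sample_variance * z * z / (eps * eps))
```

For example, with sample variance 4, the 95% confidence z-score of 1.96, and ϵ = 0.1, the criterion gives n = 1537.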
To have confidence of at least δ {\displaystyle \delta } that | μ − m | < ϵ / 2 {\displaystyle |\mu -m|<\epsilon /2} , use a value for n {\displaystyle n} such that: n ≥ 2 ( b − a ) 2 ln ⁡ ( 2 / ( 1 − ( δ / 100 ) ) ) / ϵ 2 {\displaystyle n\geq 2(b-a)^{2}\ln(2/(1-(\delta /100)))/\epsilon ^{2}} For example, if δ = 99 % {\displaystyle \delta =99\%} , then n ≥ 2 ( b − a ) 2 ln ⁡ ( 2 / 0.01 ) / ϵ 2 ≈ 10.6 ( b − a ) 2 / ϵ 2 {\displaystyle n\geq 2(b-a)^{2}\ln(2/0.01)/\epsilon ^{2}\approx 10.6(b-a)^{2}/\epsilon ^{2}} . == Computational costs == Despite its conceptual and algorithmic simplicity, the computational cost associated with a Monte Carlo simulation can be staggeringly high. In general the method requires many samples to get a good approximation, which may incur an arbitrarily large total runtime if the processing time of a single sample is high. Although this is a severe limitation in very complex problems, the embarrassingly parallel nature of the algorithm allows this large cost to be reduced (perhaps to a feasible level) through parallel computing strategies in local processors, clusters, cloud computing, GPU, FPGA, etc. == History == Before the Monte Carlo method was developed, simulations tested a previously understood deterministic problem, and statistical sampling was used to estimate uncertainties in the simulations. Monte Carlo simulations invert this approach, solving deterministic problems using probabilistic metaheuristics (see simulated annealing). An early variant of the Monte Carlo method was devised to solve the Buffon's needle problem, in which π can be estimated by dropping needles on a floor made of parallel equidistant strips. In the 1930s, Enrico Fermi first experimented with the Monte Carlo method while studying neutron diffusion, but he did not publish this work. 
In the late 1940s, Stanisław Ulam invented the modern version of the Markov Chain Monte Carlo method while he was working on nuclear weapons projects at the Los Alamos National Laboratory. In 1946, nuclear weapons physicists at Los Alamos were investigating neutron diffusion in the core of a nuclear weapon. Despite having most of the necessary data, such as the average distance a neutron would travel in a substance before it collided with an atomic nucleus and how much energy the neutron was likely to give off following a collision, the Los Alamos physicists were unable to solve the problem using conventional, deterministic mathematical methods. Ulam proposed using random experiments. He recounts his inspiration as follows: The first thoughts and attempts I made to practice [the Monte Carlo Method] were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than "abstract thinking" might not be to lay it out say one hundred times and simply observe and count the number of successful plays. This was already possible to envisage with the beginning of the new era of fast computers, and I immediately thought of problems of neutron diffusion and other questions of mathematical physics, and more generally how to change processes described by certain differential equations into an equivalent form interpretable as a succession of random operations. Later [in 1946], I described the idea to John von Neumann, and we began to plan actual calculations. Being secret, the work of von Neumann and Ulam required a code name. 
A colleague of von Neumann and Ulam, Nicholas Metropolis, suggested using the name Monte Carlo, which refers to the Monte Carlo Casino in Monaco where Ulam's uncle would borrow money from relatives to gamble. Monte Carlo methods were central to the simulations required for further postwar development of nuclear weapons, including the design of the H-bomb, though severely limited by the computational tools at the time. Von Neumann, Nicholas Metropolis and others programmed the ENIAC computer to perform the first fully automated Monte Carlo calculations, of a fission weapon core, in the spring of 1948. In the 1950s Monte Carlo methods were used at Los Alamos for the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The Rand Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and they began to find a wide application in many different fields. The theory of more sophisticated mean-field type particle Monte Carlo methods had certainly started by the mid-1960s, with the work of Henry P. McKean Jr. on Markov interpretations of a class of nonlinear parabolic partial differential equations arising in fluid mechanics. An earlier pioneering article by Theodore E. Harris and Herman Kahn, published in 1951, used mean-field genetic-type Monte Carlo methods for estimating particle transmission energies. Mean-field genetic type Monte Carlo methodologies are also used as heuristic natural search algorithms (a.k.a. metaheuristic) in evolutionary computing. The origins of these mean-field computational techniques can be traced to 1950 and 1954 with the work of Alan Turing on genetic type mutation-selection learning machines and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey. 
Quantum Monte Carlo, and more specifically diffusion Monte Carlo methods, can also be interpreted as a mean-field particle Monte Carlo approximation of Feynman–Kac path integrals. The origins of Quantum Monte Carlo methods are often attributed to Enrico Fermi and Robert Richtmyer who developed in 1948 a mean-field particle interpretation of neutron-chain reactions, but the first heuristic-like and genetic type particle algorithm (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984. In molecular chemistry, the use of genetic heuristic-like particle methodologies (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth. The use of Sequential Monte Carlo in advanced signal processing and Bayesian inference is more recent. It was in 1993 that Gordon et al. published, in their seminal work, the first application of a Monte Carlo resampling algorithm in Bayesian statistical inference. The authors named their algorithm 'the bootstrap filter', and demonstrated that, compared to other filtering methods, their bootstrap algorithm does not require any assumption about the state-space or the noise of the system. Another pioneering article in this field was Genshiro Kitagawa's, on a related "Monte Carlo filter", and the ones by Pierre Del Moral and Himilcon Carvalho, Pierre Del Moral, André Monin and Gérard Salut on particle filters published in the mid-1990s. Particle filters were also developed in signal processing in 1989–1992 by P. Del Moral, J. C. Noyer, G. Rigal, and G. Salut in the LAAS-CNRS in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales), the IT company DIGILOG, and the LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems) on radar/sonar and GPS signal processing problems.
These Sequential Monte Carlo methodologies can be interpreted as an acceptance-rejection sampler equipped with an interacting recycling mechanism. From 1950 to 1996, all the publications on Sequential Monte Carlo methodologies, including the pruning and resample Monte Carlo methods introduced in computational physics and molecular chemistry, present natural and heuristic-like algorithms applied to different situations without a single proof of their consistency, nor a discussion on the bias of the estimates and on genealogical and ancestral tree based algorithms. The mathematical foundations and the first rigorous analysis of these particle algorithms were written by Pierre Del Moral in 1996. Branching type particle methodologies with varying population sizes were also developed in the end of the 1990s by Dan Crisan, Jessica Gaines and Terry Lyons, and by Dan Crisan, Pierre Del Moral and Terry Lyons. Further developments in this field were described in 1999 to 2001 by P. Del Moral, A. Guionnet and L. Miclo. == Definitions == There is no consensus on how Monte Carlo should be defined. For example, Ripley defines most probabilistic modeling as stochastic simulation, with Monte Carlo being reserved for Monte Carlo integration and Monte Carlo statistical tests. Sawilowsky distinguishes between a simulation, a Monte Carlo method, and a Monte Carlo simulation: a simulation is a fictitious representation of reality, a Monte Carlo method is a technique that can be used to solve a mathematical or statistical problem, and a Monte Carlo simulation uses repeated sampling to obtain the statistical properties of some phenomenon (or behavior). Here are some examples: Simulation: Drawing one pseudo-random uniform variable from the interval [0,1] can be used to simulate the tossing of a coin: If the value is less than or equal to 0.50 designate the outcome as heads, but if the value is greater than 0.50 designate the outcome as tails. 
This is a simulation, but not a Monte Carlo simulation. Monte Carlo method: Pouring out a box of coins on a table, and then computing the ratio of coins that land heads versus tails is a Monte Carlo method of determining the behavior of repeated coin tosses, but it is not a simulation. Monte Carlo simulation: Drawing a large number of pseudo-random uniform variables from the interval [0,1] at one time, or once at many different times, and assigning values less than or equal to 0.50 as heads and greater than 0.50 as tails, is a Monte Carlo simulation of the behavior of repeatedly tossing a coin. Kalos and Whitlock point out that such distinctions are not always easy to maintain. For example, the emission of radiation from atoms is a natural stochastic process. It can be simulated directly, or its average behavior can be described by stochastic equations that can themselves be solved using Monte Carlo methods. "Indeed, the same computer code can be viewed simultaneously as a 'natural simulation' or as a solution of the equations by natural sampling." Convergence of the Monte Carlo simulation can be checked with the Gelman–Rubin statistic. === Monte Carlo and random numbers === The main idea behind this method is that the results are computed based on repeated random sampling and statistical analysis. Monte Carlo simulation is, in essence, random experimentation, used when the results of the experiments are not well known in advance. Monte Carlo simulations are typically characterized by many unknown parameters, many of which are difficult to obtain experimentally. Monte Carlo simulation methods do not always require truly random numbers to be useful (although, for some applications such as primality testing, unpredictability is vital). Many of the most useful techniques use deterministic, pseudorandom sequences, making it easy to test and re-run simulations.
The only quality usually necessary to make good simulations is for the pseudo-random sequence to appear "random enough" in a certain sense. What this means depends on the application, but typically they should pass a series of statistical tests. Testing that the numbers are uniformly distributed or follow another desired distribution when a large enough number of elements of the sequence are considered is one of the simplest and most common ones. Weak correlations between successive samples are also often desirable/necessary. Sawilowsky lists the characteristics of a high-quality Monte Carlo simulation: the (pseudo-random) number generator has certain characteristics (e.g. a long "period" before the sequence repeats) the (pseudo-random) number generator produces values that pass tests for randomness there are enough samples to ensure accurate results the proper sampling technique is used the algorithm used is valid for what is being modeled it simulates the phenomenon in question. Pseudo-random number sampling algorithms are used to transform uniformly distributed pseudo-random numbers into numbers that are distributed according to a given probability distribution. Low-discrepancy sequences are often used instead of random sampling from a space as they ensure even coverage and normally have a faster order of convergence than Monte Carlo simulations using random or pseudorandom sequences. Methods based on their use are called quasi-Monte Carlo methods. In an effort to assess the impact of random number quality on Monte Carlo simulation outcomes, astrophysical researchers tested cryptographically secure pseudorandom numbers generated via Intel's RDRAND instruction set, as compared to those derived from algorithms, like the Mersenne Twister, in Monte Carlo simulations of radio flares from brown dwarfs. 
No statistically significant difference was found between models generated with typical pseudorandom number generators and RDRAND for trials consisting of the generation of 107 random numbers. === Monte Carlo simulation versus "what if" scenarios === There are ways of using probabilities that are definitely not Monte Carlo simulations – for example, deterministic modeling using single-point estimates. Each uncertain variable within a model is assigned a "best guess" estimate. Scenarios (such as best, worst, or most likely case) for each input variable are chosen and the results recorded. By contrast, Monte Carlo simulations sample from a probability distribution for each variable to produce hundreds or thousands of possible outcomes. The results are analyzed to get probabilities of different outcomes occurring. For example, a comparison of a spreadsheet cost construction model run using traditional "what if" scenarios, and then running the comparison again with Monte Carlo simulation and triangular probability distributions shows that the Monte Carlo analysis has a narrower range than the "what if" analysis. This is because the "what if" analysis gives equal weight to all scenarios (see quantifying uncertainty in corporate finance), while the Monte Carlo method hardly samples in the very low probability regions. The samples in such regions are called "rare events". == Applications == Monte Carlo methods are especially useful for simulating phenomena with significant uncertainty in inputs and systems with many coupled degrees of freedom. Areas of application include: === Physical sciences === Monte Carlo methods are very important in computational physics, physical chemistry, and related applied fields, and have diverse applications from complicated quantum chromodynamics calculations to designing heat shields and aerodynamic forms as well as in modeling radiation transport for radiation dosimetry calculations. 
In statistical physics, Monte Carlo molecular modeling is an alternative to computational molecular dynamics, and Monte Carlo methods are used to compute statistical field theories of simple particle and polymer systems. Quantum Monte Carlo methods solve the many-body problem for quantum systems. In radiation materials science, the binary collision approximation for simulating ion implantation is usually based on a Monte Carlo approach to select the next colliding atom. In experimental particle physics, Monte Carlo methods are used for designing detectors, understanding their behavior and comparing experimental data to theory. In astrophysics, they are used in such diverse manners as to model both galaxy evolution and microwave radiation transmission through a rough planetary surface. Monte Carlo methods are also used in the ensemble models that form the basis of modern weather forecasting. === Engineering === Monte Carlo methods are widely used in engineering for sensitivity analysis and quantitative probabilistic analysis in process design. The need arises from the interactive, co-linear and non-linear behavior of typical process simulations. For example, In microelectronics engineering, Monte Carlo methods are applied to analyze correlated and uncorrelated variations in analog and digital integrated circuits. In geostatistics and geometallurgy, Monte Carlo methods underpin the design of mineral processing flowsheets and contribute to quantitative risk analysis. In fluid dynamics, in particular rarefied gas dynamics, where the Boltzmann equation is solved for finite Knudsen number fluid flows using the direct simulation Monte Carlo method in combination with highly efficient computational algorithms. In autonomous robotics, Monte Carlo localization can determine the position of a robot. It is often applied to stochastic filters such as the Kalman filter or particle filter that forms the heart of the SLAM (simultaneous localization and mapping) algorithm. 
In telecommunications, when planning a wireless network, the design must be proven to work for a wide variety of scenarios that depend mainly on the number of users, their locations and the services they want to use. Monte Carlo methods are typically used to generate these users and their states. The network performance is then evaluated and, if results are not satisfactory, the network design goes through an optimization process. In reliability engineering, Monte Carlo simulation is used to compute system-level response given the component-level response. In signal processing and Bayesian inference, particle filters and sequential Monte Carlo techniques are a class of mean-field particle methods for sampling and computing the posterior distribution of a signal process given some noisy and partial observations using interacting empirical measures. === Climate change and radiative forcing === The Intergovernmental Panel on Climate Change relies on Monte Carlo methods in probability density function analysis of radiative forcing. === Computational biology === Monte Carlo methods are used in various fields of computational biology, for example for Bayesian inference in phylogeny, or for studying biological systems such as genomes, proteins, or membranes. The systems can be studied in the coarse-grained or ab initio frameworks depending on the desired accuracy. Computer simulations allow monitoring of the local environment of a particular molecule to see if some chemical reaction is happening for instance. In cases where it is not feasible to conduct a physical experiment, thought experiments can be conducted (for instance: breaking bonds, introducing impurities at specific sites, changing the local/global structure, or introducing external fields). === Computer graphics === Path tracing, occasionally referred to as Monte Carlo ray tracing, renders a 3D scene by randomly tracing samples of possible light paths. 
Repeated sampling of any given pixel will eventually cause the average of the samples to converge on the correct solution of the rendering equation, making it one of the most physically accurate 3D graphics rendering methods in existence. === Applied statistics === The standards for Monte Carlo experiments in statistics were set by Sawilowsky. In applied statistics, Monte Carlo methods may be used for at least four purposes: To compare competing statistics for small samples under realistic data conditions. Although type I error and power properties of statistics can be calculated for data drawn from classical theoretical distributions (e.g., normal curve, Cauchy distribution) for asymptotic conditions (i.e., infinite sample size and infinitesimally small treatment effect), real data often do not have such distributions. To provide implementations of hypothesis tests that are more efficient than exact tests such as permutation tests (which are often impossible to compute) while being more accurate than critical values for asymptotic distributions. To provide a random sample from the posterior distribution in Bayesian inference. This sample then approximates and summarizes all the essential features of the posterior. To provide efficient random estimates of the Hessian matrix of the negative log-likelihood function that may be averaged to form an estimate of the Fisher information matrix. Monte Carlo methods are also a compromise between approximate randomization and permutation tests. An approximate randomization test is based on a specified subset of all permutations (which entails potentially enormous housekeeping of which permutations have been considered). The Monte Carlo approach is based on a specified number of randomly drawn permutations (exchanging a minor loss in precision if a permutation is drawn twice—or more frequently—for the efficiency of not having to track which permutations have already been selected). 
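The Monte Carlo permutation test just described can be sketched in a few lines of Python. This is a minimal illustration with made-up samples, not a production implementation; note that, as in the text, random relabelings may repeat, so no bookkeeping of already-used permutations is needed:

```python
import random

def monte_carlo_permutation_test(x, y, n_perm=10_000, seed=0):
    """Two-sample test of a difference in means: instead of enumerating
    all permutations, draw n_perm random relabelings of the pooled data
    (duplicates allowed, so no tracking of used permutations)."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        xs, ys = pooled[:len(x)], pooled[len(x):]
        if abs(sum(xs) / len(xs) - sum(ys) / len(ys)) >= observed:
            extreme += 1
    # The +1 correction keeps the estimate a valid p-value.
    return (extreme + 1) / (n_perm + 1)

# Clearly separated samples give a small p-value; identical samples do not.
p_small = monte_carlo_permutation_test([1, 2, 3, 4], [10, 11, 12, 13])
p_large = monte_carlo_permutation_test([1, 2, 3, 4], [1, 2, 3, 4])
```

The precision lost to occasionally repeated permutations shrinks as `n_perm` grows, which is the trade-off described above.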
=== Artificial intelligence for games === Monte Carlo methods have been developed into a technique called Monte Carlo tree search that is useful for searching for the best move in a game. Possible moves are organized in a search tree and many random simulations are used to estimate the long-term potential of each move. A black box simulator represents the opponent's moves. The Monte Carlo tree search (MCTS) method has four steps: Starting at the root node of the tree, select optimal child nodes until a leaf node is reached. Expand the leaf node and choose one of its children. Play a simulated game starting with that node. Use the results of that simulated game to update the node and its ancestors. The net effect, over the course of many simulated games, is that the value of a node representing a move will go up or down, hopefully corresponding to whether or not that node represents a good move. Monte Carlo tree search has been used successfully to play games such as Go, Tantrix, Battleship, Havannah, and Arimaa. === Design and visuals === Monte Carlo methods are also efficient in solving coupled integral differential equations of radiation fields and energy transport, and thus these methods have been used in global illumination computations that produce photo-realistic images of virtual 3D models, with applications in video games, architecture, design, computer generated films, and cinematic special effects. === Search and rescue === The US Coast Guard utilizes Monte Carlo methods within its computer modeling software SAROPS in order to calculate the probable locations of vessels during search and rescue operations. Each simulation can generate as many as ten thousand data points that are randomly distributed based upon provided variables. 
Search patterns are then generated based upon extrapolations of these data in order to optimize the probability of containment (POC) and the probability of detection (POD), which together will equal an overall probability of success (POS). Ultimately this serves as a practical application of probability distribution in order to provide the swiftest and most expedient method of rescue, saving both lives and resources. === Finance and business === Monte Carlo simulation is commonly used to evaluate the risk and uncertainty that would affect the outcome of different decision options. Monte Carlo simulation allows the business risk analyst to incorporate the total effects of uncertainty in variables like sales volume, commodity and labor prices, interest and exchange rates, as well as the effect of distinct risk events like the cancellation of a contract or the change of a tax law. Monte Carlo methods in finance are often used to evaluate investments in projects at a business unit or corporate level, or other financial valuations. They can be used to model project schedules, where simulations aggregate estimates for worst-case, best-case, and most likely durations for each task to determine outcomes for the overall project. Monte Carlo methods are also used in option pricing and default risk analysis. Additionally, they can be used to estimate the financial impact of medical interventions. === Law === A Monte Carlo approach was used for evaluating the potential value of a proposed program to help female petitioners in Wisconsin be successful in their applications for harassment and domestic abuse restraining orders. It was proposed to help women succeed in their petitions by providing them with greater advocacy, thereby potentially reducing the risk of rape and physical assault. 
However, there were many variables in play that could not be estimated perfectly, including the effectiveness of restraining orders, the success rate of petitioners both with and without advocacy, and many others. The study ran trials that varied these variables to come up with an overall estimate of the success level of the proposed program as a whole. === Library science === A Monte Carlo approach has also been used to simulate the number of book publications based on book genre in Malaysia. The Monte Carlo simulation utilized previously published national book publication data and book prices by genre in the local market. The Monte Carlo results were used to determine which book genres Malaysians are fond of and to compare book publications between Malaysia and Japan. === Other === Nassim Nicholas Taleb writes about Monte Carlo generators in his 2001 book Fooled by Randomness as a real instance of the reverse Turing test: a human can be declared unintelligent if their writing cannot be told apart from a generated one. == Use in mathematics == In general, the Monte Carlo methods are used in mathematics to solve various problems by generating suitable random numbers (see also Random number generation) and observing that fraction of the numbers that obeys some property or properties. The method is useful for obtaining numerical solutions to problems too complicated to solve analytically. The most common application of the Monte Carlo method is Monte Carlo integration. === Integration === Deterministic numerical integration algorithms work well in a small number of dimensions, but encounter two problems when the functions have many variables. First, the number of function evaluations needed increases rapidly with the number of dimensions. For example, if 10 evaluations provide adequate accuracy in one dimension, then 10^100 points are needed for 100 dimensions—far too many to be computed. This is called the curse of dimensionality. 
Second, the boundary of a multidimensional region may be very complicated, so it may not be feasible to reduce the problem to an iterated integral. 100 dimensions is by no means unusual, since in many physical problems, a "dimension" is equivalent to a degree of freedom. Monte Carlo methods provide a way out of this exponential increase in computation time. As long as the function in question is reasonably well-behaved, it can be estimated by randomly selecting points in 100-dimensional space, and taking some kind of average of the function values at these points. By the central limit theorem, this method displays 1 / N {\displaystyle \scriptstyle 1/{\sqrt {N}}} convergence—i.e., quadrupling the number of sampled points halves the error, regardless of the number of dimensions. A refinement of this method, known as importance sampling in statistics, involves sampling the points randomly, but more frequently where the integrand is large. To do this precisely one would have to already know the integral, but one can approximate the integral by an integral of a similar function or use adaptive routines such as stratified sampling, recursive stratified sampling, adaptive umbrella sampling or the VEGAS algorithm. A similar approach, the quasi-Monte Carlo method, uses low-discrepancy sequences. These sequences "fill" the area better and sample the most important points more frequently, so quasi-Monte Carlo methods can often converge on the integral more quickly. Another class of methods for sampling points in a volume is to simulate random walks over it (Markov chain Monte Carlo). Such methods include the Metropolis–Hastings algorithm, Gibbs sampling, Wang and Landau algorithm, and interacting type MCMC methodologies such as the sequential Monte Carlo samplers. === Simulation and optimization === Another powerful and very popular application for random numbers in numerical simulation is in numerical optimization. 
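The 1/√N convergence of Monte Carlo integration described above can be checked directly with a short simulation. The following sketch (plain Python, with a hypothetical smooth integrand whose exact value is known) estimates a 10-dimensional integral at sample sizes N and 4N:

```python
import math
import random

def mc_integrate(f, dim, n, seed=0):
    """Plain Monte Carlo estimate of the integral of f over the unit cube
    [0,1]^dim, together with the standard error of the estimate."""
    rng = random.Random(seed)
    values = [f([rng.random() for _ in range(dim)]) for _ in range(n)]
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return mean, math.sqrt(var / n)

# Integrand f(x) = x_1^2 + ... + x_10^2; the exact integral is 10/3.
f = lambda x: sum(t * t for t in x)
est_n, se_n = mc_integrate(f, dim=10, n=20_000, seed=1)
est_4n, se_4n = mc_integrate(f, dim=10, n=80_000, seed=2)
# Quadrupling the number of samples roughly halves the standard error,
# regardless of the 10 dimensions of the integration region.
```

The same ten samples per dimension that a deterministic grid would demand are not needed here; the error estimate depends only on the total sample count.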
The problem is to minimize (or maximize) functions of some vector that often has many dimensions. Many problems can be phrased in this way: for example, a computer chess program could be seen as trying to find the set of, say, 10 moves that produces the best evaluation function at the end. In the traveling salesman problem the goal is to minimize distance traveled. There are also applications to engineering design, such as multidisciplinary design optimization. It has been applied with quasi-one-dimensional models to solve particle dynamics problems by efficiently exploring large configuration space. A comprehensive review of many issues related to simulation and optimization can be found in the literature. The traveling salesman problem is what is called a conventional optimization problem. That is, all the facts (distances between each destination point) needed to determine the optimal path to follow are known with certainty and the goal is to run through the possible travel choices to come up with the one with the lowest total distance. If the goal is not to minimize the total distance traveled to visit each desired destination but rather to minimize the total time needed to reach each destination, the problem goes beyond conventional optimization, since travel time is inherently uncertain (traffic jams, time of day, etc.). As a result, determining the optimal path requires a combination of simulation and optimization: first understand the range of potential times it could take to go from one point to another (represented by a probability distribution in this case rather than a specific distance), and then optimize the travel decisions to identify the best path to follow, taking that uncertainty into account. === Inverse problems === Probabilistic formulation of inverse problems leads to the definition of a probability distribution in the model space. 
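The travel-time version of the routing problem described above can be sketched as a Monte Carlo comparison of candidate routes. All numbers below are hypothetical; leg times are drawn from triangular distributions standing in for uncertain traffic conditions:

```python
import random

# Hypothetical example: two candidate routes between the same endpoints.
# Each leg's travel time (minutes) is uncertain and is modeled as a
# triangular(low, high, mode) distribution rather than a fixed distance.
routes = {
    "highway": [(8, 40, 11), (7, 35, 9)],        # fast, but jam-prone
    "back roads": [(19, 23, 21), (19, 23, 21)],  # slower, but predictable
}

def simulate_route(legs, n=20_000, seed=0):
    """Monte Carlo estimate of a route's total-travel-time distribution;
    returns the mean and the 95th percentile."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.triangular(lo, hi, mode) for lo, hi, mode in legs)
        for _ in range(n)
    )
    return sum(totals) / n, totals[n * 95 // 100]

stats = {name: simulate_route(legs) for name, legs in routes.items()}
# With these numbers the highway is best on expected time, but a traveler
# optimizing the 95th percentile (a hard deadline) picks the back roads.
```

The optimization step then runs over distributions rather than fixed distances: which route is "best" depends on which statistic of the simulated travel-time distribution is minimized.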
This probability distribution combines prior information with new information obtained by measuring some observable parameters (data). As, in the general case, the theory linking data with model parameters is nonlinear, the posterior probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.). When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as normally information on the resolution power of the data is desired. In the general case many parameters are modeled, and an inspection of the marginal probability densities of interest may be impractical, or even useless. But it is possible to pseudorandomly generate a large collection of models according to the posterior probability distribution and to analyze and display the models in such a way that information on the relative likelihoods of model properties is conveyed to the viewer. This can be accomplished by means of an efficient Monte Carlo method, even in cases where no explicit formula for the a priori distribution is available. The best-known importance sampling method, the Metropolis algorithm, can be generalized, and this gives a method that allows analysis of (possibly highly nonlinear) inverse problems with complex a priori information and data with an arbitrary noise distribution. === Philosophy === A popular exposition of the Monte Carlo method was given by McCracken. The method's general philosophy was discussed by Elishakoff, and by Grüne-Yanoff and Weirich. == See also == == References == === Citations === === Sources === == External links ==
Wikipedia/Monte_Carlo_Method
In mathematics, a filtered algebra is a generalization of the notion of a graded algebra. Examples appear in many branches of mathematics, especially in homological algebra and representation theory. A filtered algebra over the field k {\displaystyle k} is an algebra ( A , ⋅ ) {\displaystyle (A,\cdot )} over k {\displaystyle k} that has an increasing sequence { 0 } ⊆ F 0 ⊆ F 1 ⊆ ⋯ ⊆ F i ⊆ ⋯ ⊆ A {\displaystyle \{0\}\subseteq F_{0}\subseteq F_{1}\subseteq \cdots \subseteq F_{i}\subseteq \cdots \subseteq A} of subspaces of A {\displaystyle A} such that A = ⋃ i ∈ N F i {\displaystyle A=\bigcup _{i\in \mathbb {N} }F_{i}} and that is compatible with the multiplication in the following sense: ∀ m , n ∈ N , F m ⋅ F n ⊆ F n + m . {\displaystyle \forall m,n\in \mathbb {N} ,\quad F_{m}\cdot F_{n}\subseteq F_{n+m}.} == Associated graded algebra == In general, there is the following construction that produces a graded algebra out of a filtered algebra. If A {\displaystyle A} is a filtered algebra, then the associated graded algebra G ( A ) {\displaystyle {\mathcal {G}}(A)} is defined as follows: as a vector space, G ( A ) = ⨁ n ∈ N G n {\displaystyle {\mathcal {G}}(A)=\bigoplus _{n\in \mathbb {N} }G_{n}} , where G 0 = F 0 {\displaystyle G_{0}=F_{0}} and G n = F n / F n − 1 {\displaystyle G_{n}=F_{n}/F_{n-1}} for n > 0 {\displaystyle n>0} , and the multiplication is defined on representatives by ( x + F m − 1 ) ⋅ ( y + F n − 1 ) = x y + F m + n − 1 {\displaystyle (x+F_{m-1})\cdot (y+F_{n-1})=xy+F_{m+n-1}} for x ∈ F m {\displaystyle x\in F_{m}} and y ∈ F n {\displaystyle y\in F_{n}} . The multiplication is well-defined and endows G ( A ) {\displaystyle {\mathcal {G}}(A)} with the structure of a graded algebra, with gradation { G n } n ∈ N . {\displaystyle \{G_{n}\}_{n\in \mathbb {N} }.} Furthermore if A {\displaystyle A} is associative then so is G ( A ) {\displaystyle {\mathcal {G}}(A)} . Also, if A {\displaystyle A} is unital, such that the unit lies in F 0 {\displaystyle F_{0}} , then G ( A ) {\displaystyle {\mathcal {G}}(A)} will be unital as well. As algebras A {\displaystyle A} and G ( A ) {\displaystyle {\mathcal {G}}(A)} are distinct (with the exception of the trivial case that A {\displaystyle A} is graded) but as vector spaces they are isomorphic. (One can prove by induction that ⨁ i = 0 n G i {\displaystyle \bigoplus _{i=0}^{n}G_{i}} is isomorphic to F n {\displaystyle F_{n}} as vector spaces). 
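A concrete illustration of the associated graded construction, sketched in LaTeX (the standard example of the polynomial algebra filtered by degree):

```latex
% Let A = k[x] with the filtration F_n = {polynomials of degree <= n}.
% Then G_0 = F_0 = k and, for n > 0, the quotient G_n = F_n / F_{n-1}
% is the line spanned by the class of x^n, so
\mathcal{G}(A) \;=\; \bigoplus_{n \in \mathbb{N}} F_n / F_{n-1}
  \;\cong\; \bigoplus_{n \in \mathbb{N}} k\,\overline{x^{\,n}}
  \;\cong\; k[x],
% with the induced multiplication
%   \overline{x^{\,m}} \cdot \overline{x^{\,n}} = \overline{x^{\,m+n}},
% recovering the usual grading of k[x] by polynomial degree.
```

Here the filtered and graded algebras happen to coincide; the Clifford and universal enveloping algebra examples below show cases where they genuinely differ.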
== Examples == Any graded algebra graded by N {\displaystyle \mathbb {N} } , for example A = ⨁ n ∈ N A n {\textstyle A=\bigoplus _{n\in \mathbb {N} }A_{n}} , has a filtration given by F n = ⨁ i = 0 n A i {\textstyle F_{n}=\bigoplus _{i=0}^{n}A_{i}} . An example of a filtered algebra is the Clifford algebra Cliff ⁡ ( V , q ) {\displaystyle \operatorname {Cliff} (V,q)} of a vector space V {\displaystyle V} endowed with a quadratic form q . {\displaystyle q.} The associated graded algebra is ⋀ V {\displaystyle \bigwedge V} , the exterior algebra of V . {\displaystyle V.} The symmetric algebra on the dual of an affine space is a filtered algebra of polynomials; on a vector space, one instead obtains a graded algebra. The universal enveloping algebra of a Lie algebra g {\displaystyle {\mathfrak {g}}} is also naturally filtered. The PBW theorem states that the associated graded algebra is simply S y m ( g ) {\displaystyle \mathrm {Sym} ({\mathfrak {g}})} . Scalar differential operators on a manifold M {\displaystyle M} form a filtered algebra where the filtration is given by the degree of differential operators. The associated graded algebra is the commutative algebra of smooth functions on the cotangent bundle T ∗ M {\displaystyle T^{*}M} which are polynomial along the fibers of the projection π : T ∗ M → M {\displaystyle \pi \colon T^{*}M\rightarrow M} . The group algebra of a group with a length function is a filtered algebra. == See also == Filtration (mathematics) Length function == References == Abe, Eiichi (1980). Hopf Algebras. Cambridge: Cambridge University Press. ISBN 0-521-22240-0. This article incorporates material from Filtered algebra on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia/Filtration_(abstract_algebra)
In statistical mechanics and information theory, the Fokker–Planck equation is a partial differential equation that describes the time evolution of the probability density function of the velocity of a particle under the influence of drag forces and random forces, as in Brownian motion. The equation can be generalized to other observables as well. The Fokker–Planck equation has multiple applications in information theory, graph theory, data science, finance, economics, etc. It is named after Adriaan Fokker and Max Planck, who described it in 1914 and 1917. It is also known as the Kolmogorov forward equation, after Andrey Kolmogorov, who independently discovered it in 1931. When applied to particle position distributions, it is better known as the Smoluchowski equation (after Marian Smoluchowski), and in this context it is equivalent to the convection–diffusion equation. When applied to particle position and momentum distributions, it is known as the Klein–Kramers equation. The case with zero diffusion is the continuity equation. The Fokker–Planck equation is obtained from the master equation through the Kramers–Moyal expansion. The first consistent microscopic derivation of the Fokker–Planck equation in the single scheme of classical and quantum mechanics was performed by Nikolay Bogoliubov and Nikolay Krylov. 
== One dimension == In one spatial dimension x, for an Itô process driven by the standard Wiener process W t {\displaystyle W_{t}} and described by the stochastic differential equation (SDE) d X t = μ ( X t , t ) d t + σ ( X t , t ) d W t {\displaystyle dX_{t}=\mu (X_{t},t)\,dt+\sigma (X_{t},t)\,dW_{t}} with drift μ ( X t , t ) {\displaystyle \mu (X_{t},t)} and diffusion coefficient D ( X t , t ) = σ 2 ( X t , t ) / 2 {\displaystyle D(X_{t},t)=\sigma ^{2}(X_{t},t)/2} , the Fokker–Planck equation for the probability density p ( x , t ) {\displaystyle p(x,t)} of the random variable X t {\displaystyle X_{t}} is ∂ ∂ t p ( x , t ) = − ∂ ∂ x [ μ ( x , t ) p ( x , t ) ] + ∂ 2 ∂ x 2 [ D ( x , t ) p ( x , t ) ] . {\displaystyle {\frac {\partial }{\partial t}}p(x,t)=-{\frac {\partial }{\partial x}}\left[\mu (x,t)\,p(x,t)\right]+{\frac {\partial ^{2}}{\partial x^{2}}}\left[D(x,t)\,p(x,t)\right].} While the Fokker–Planck equation is used with problems where the initial distribution is known, if the problem is to know the distribution at previous times, the Feynman–Kac formula can be used, which is a consequence of the Kolmogorov backward equation. The stochastic process defined above in the Itô sense can be rewritten within the Stratonovich convention as a Stratonovich SDE: d X t = [ μ ( X t , t ) − 1 2 ∂ ∂ X t D ( X t , t ) ] d t + 2 D ( X t , t ) ∘ d W t . {\displaystyle dX_{t}=\left[\mu (X_{t},t)-{\frac {1}{2}}{\frac {\partial }{\partial X_{t}}}D(X_{t},t)\right]\,dt+{\sqrt {2D(X_{t},t)}}\circ dW_{t}.} It includes an added noise-induced drift term due to diffusion gradient effects if the noise is state-dependent. This convention is more often used in physical applications. Indeed, it is well known that any solution to the Stratonovich SDE is a solution to the Itô SDE. The zero-drift equation with constant diffusion can be considered as a model of classical Brownian motion: ∂ ∂ t p ( x , t ) = D 0 ∂ 2 ∂ x 2 [ p ( x , t ) ] . {\displaystyle {\frac {\partial }{\partial t}}p(x,t)=D_{0}{\frac {\partial ^{2}}{\partial x^{2}}}\left[p(x,t)\right].} This model has a discrete spectrum of solutions if the condition of fixed boundaries is added for { 0 ≤ x ≤ L } {\displaystyle \{0\leq x\leq L\}} : p ( 0 , t ) = p ( L , t ) = 0 , p ( x , 0 ) = p 0 ( x ) . 
{\displaystyle {\begin{aligned}p(0,t)&=p(L,t)=0,\\p(x,0)&=p_{0}(x).\end{aligned}}} It has been shown that in this case an analytical spectrum of solutions allows deriving a local uncertainty relation for the coordinate-velocity phase volume: Δ x Δ v ≥ D 0 . {\displaystyle \Delta x\,\Delta v\geq D_{0}.} Here D 0 {\displaystyle D_{0}} is a minimal value of a corresponding diffusion spectrum D j {\displaystyle D_{j}} , while Δ x {\displaystyle \Delta x} and Δ v {\displaystyle \Delta v} represent the uncertainty of coordinate–velocity definition. == Higher dimensions == More generally, if d X t = μ ( X t , t ) d t + σ ( X t , t ) d W t , {\displaystyle d\mathbf {X} _{t}={\boldsymbol {\mu }}(\mathbf {X} _{t},t)\,dt+{\boldsymbol {\sigma }}(\mathbf {X} _{t},t)\,d\mathbf {W} _{t},} where X t {\displaystyle \mathbf {X} _{t}} and μ ( X t , t ) {\displaystyle {\boldsymbol {\mu }}(\mathbf {X} _{t},t)} are N-dimensional vectors, σ ( X t , t ) {\displaystyle {\boldsymbol {\sigma }}(\mathbf {X} _{t},t)} is an N × M {\displaystyle N\times M} matrix and W t {\displaystyle \mathbf {W} _{t}} is an M-dimensional standard Wiener process, the probability density p ( x , t ) {\displaystyle p(\mathbf {x} ,t)} for X t {\displaystyle \mathbf {X} _{t}} satisfies the Fokker–Planck equation ∂ p ( x , t ) ∂ t = − ∑ i = 1 N ∂ ∂ x i [ μ i ( x , t ) p ( x , t ) ] + ∑ i = 1 N ∑ j = 1 N ∂ 2 ∂ x i ∂ x j [ D i j ( x , t ) p ( x , t ) ] , {\displaystyle {\frac {\partial p(\mathbf {x} ,t)}{\partial t}}=-\sum _{i=1}^{N}{\frac {\partial }{\partial x_{i}}}\left[\mu _{i}(\mathbf {x} ,t)\,p(\mathbf {x} ,t)\right]+\sum _{i=1}^{N}\sum _{j=1}^{N}{\frac {\partial ^{2}}{\partial x_{i}\,\partial x_{j}}}\left[D_{ij}(\mathbf {x} ,t)\,p(\mathbf {x} ,t)\right],} with drift vector μ = ( μ 1 , … , μ N ) {\displaystyle {\boldsymbol {\mu }}=(\mu _{1},\ldots ,\mu _{N})} and diffusion tensor D = 1 2 σ σ T {\textstyle \mathbf {D} ={\frac {1}{2}}{\boldsymbol {\sigma \sigma }}^{\mathsf {T}}} , i.e. D i j ( x , t ) = 1 2 ∑ k = 1 M σ i k ( x , t ) σ j k ( x , t ) . 
{\displaystyle D_{ij}(\mathbf {x} ,t)={\frac {1}{2}}\sum _{k=1}^{M}\sigma _{ik}(\mathbf {x} ,t)\sigma _{jk}(\mathbf {x} ,t).} If instead of an Itô SDE, a Stratonovich SDE is considered, d X t = μ ( X t , t ) d t + σ ( X t , t ) ∘ d W t , {\displaystyle d\mathbf {X} _{t}={\boldsymbol {\mu }}(\mathbf {X} _{t},t)\,dt+{\boldsymbol {\sigma }}(\mathbf {X} _{t},t)\circ d\mathbf {W} _{t},} the Fokker–Planck equation will read: ∂ p ( x , t ) ∂ t = − ∑ i = 1 N ∂ ∂ x i [ μ i ( x , t ) p ( x , t ) ] + 1 2 ∑ k = 1 M ∑ i = 1 N ∂ ∂ x i { σ i k ( x , t ) ∑ j = 1 N ∂ ∂ x j [ σ j k ( x , t ) p ( x , t ) ] } {\displaystyle {\frac {\partial p(\mathbf {x} ,t)}{\partial t}}=-\sum _{i=1}^{N}{\frac {\partial }{\partial x_{i}}}\left[\mu _{i}(\mathbf {x} ,t)\,p(\mathbf {x} ,t)\right]+{\frac {1}{2}}\sum _{k=1}^{M}\sum _{i=1}^{N}{\frac {\partial }{\partial x_{i}}}\left\{\sigma _{ik}(\mathbf {x} ,t)\sum _{j=1}^{N}{\frac {\partial }{\partial x_{j}}}\left[\sigma _{jk}(\mathbf {x} ,t)\,p(\mathbf {x} ,t)\right]\right\}} == Generalization == In general, the Fokker–Planck equations are a special case of the general Kolmogorov forward equation ∂ t ρ = A ∗ ρ {\displaystyle \partial _{t}\rho ={\mathcal {A}}^{*}\rho } where the linear operator A ∗ {\displaystyle {\mathcal {A}}^{*}} is the Hermitian adjoint of the infinitesimal generator for the Markov process. == Examples == === Wiener process === A standard scalar Wiener process is generated by the stochastic differential equation d X t = d W t . {\displaystyle dX_{t}=dW_{t}.} Here the drift term is zero and the diffusion coefficient is 1/2. Thus the corresponding Fokker–Planck equation is ∂ p ( x , t ) ∂ t = 1 2 ∂ 2 p ( x , t ) ∂ x 2 , {\displaystyle {\frac {\partial p(x,t)}{\partial t}}={\frac {1}{2}}{\frac {\partial ^{2}p(x,t)}{\partial x^{2}}},} which is the simplest form of a diffusion equation. If the initial condition is p ( x , 0 ) = δ ( x ) {\displaystyle p(x,0)=\delta (x)} , the solution is p ( x , t ) = 1 2 π t e − x 2 / ( 2 t ) . 
{\displaystyle p(x,t)={\frac {1}{\sqrt {2\pi t}}}e^{-{x^{2}}/({2t})}.} === Boltzmann distribution at the thermodynamic equilibrium === The overdamped Langevin equation d x t = − 1 k B T ( ∇ x U ) d t + d W t {\displaystyle dx_{t}=-{\frac {1}{k_{\text{B}}T}}(\nabla _{x}U)dt+dW_{t}} gives ∂ t p = 1 2 ∇ ⋅ ( p k B T ∇ U + ∇ p ) {\textstyle \partial _{t}p={\frac {1}{2}}\nabla \cdot \left({\frac {p}{k_{\text{B}}T}}\nabla U+\nabla p\right)} . The Boltzmann distribution p ( x ) ∝ e − U ( x ) / k B T {\displaystyle p(x)\propto e^{-U(x)/k_{\text{B}}T}} is an equilibrium distribution, and assuming U {\displaystyle U} grows sufficiently rapidly (that is, the potential well is deep enough to confine the particle), the Boltzmann distribution is the unique equilibrium. === Ornstein–Uhlenbeck process === The Ornstein–Uhlenbeck process is a process defined as d X t = − a X t d t + σ d W t . {\displaystyle dX_{t}=-aX_{t}\,dt+\sigma \,dW_{t}.} with a > 0 {\displaystyle a>0} . Physically, this equation can be motivated as follows: a particle of mass m {\displaystyle m} with velocity V t {\displaystyle V_{t}} moving in a medium, e.g., a fluid, will experience a friction force which resists motion, whose magnitude can be approximated as being proportional to the particle's velocity − a V t {\displaystyle -aV_{t}} with a = c o n s t a n t {\displaystyle a=\mathrm {constant} } . Other particles in the medium will randomly kick the particle as they collide with it, and this effect can be approximated by a white noise term σ ( d W t / d t ) {\displaystyle \sigma (dW_{t}/dt)} . Newton's second law is written as m d V t d t = − a V t + σ d W t d t . {\displaystyle m{\frac {dV_{t}}{dt}}=-aV_{t}+\sigma {\frac {dW_{t}}{dt}}.} Taking m = 1 {\displaystyle m=1} for simplicity and changing the notation as V t → X t {\displaystyle V_{t}\rightarrow X_{t}} leads to the familiar form d X t = − a X t d t + σ d W t {\displaystyle dX_{t}=-aX_{t}dt+\sigma dW_{t}} . 
The corresponding Fokker–Planck equation is ∂ p ( x , t ) ∂ t = a ∂ ∂ x ( x p ( x , t ) ) + σ 2 2 ∂ 2 p ( x , t ) ∂ x 2 , {\displaystyle {\frac {\partial p(x,t)}{\partial t}}=a{\frac {\partial }{\partial x}}\left(x\,p(x,t)\right)+{\frac {\sigma ^{2}}{2}}{\frac {\partial ^{2}p(x,t)}{\partial x^{2}}},} The stationary solution ( ∂ t p = 0 {\displaystyle \partial _{t}p=0} ) is p s s ( x ) = a π σ 2 e − a x 2 / σ 2 . {\displaystyle p_{ss}(x)={\sqrt {\frac {a}{\pi \sigma ^{2}}}}e^{-{ax^{2}}/{\sigma ^{2}}}.} === Plasma physics === In plasma physics, the distribution function for a particle species s {\displaystyle s} , p s ( x , v , t ) {\displaystyle p_{s}(\mathbf {x} ,\mathbf {v} ,t)} , takes the place of the probability density function. The corresponding Boltzmann equation is given by ∂ p s ∂ t + v ⋅ ∇ p s + Z s e m s ( E + v × B ) ⋅ ∇ v p s = − ∂ ∂ v i ( p s ⟨ Δ v i ⟩ ) + 1 2 ∂ 2 ∂ v i ∂ v j ( p s ⟨ Δ v i Δ v j ⟩ ) , {\displaystyle {\frac {\partial p_{s}}{\partial t}}+\mathbf {v} \cdot {\boldsymbol {\nabla }}p_{s}+{\frac {Z_{s}e}{m_{s}}}\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right)\cdot {\boldsymbol {\nabla }}_{v}p_{s}=-{\frac {\partial }{\partial v_{i}}}\left(p_{s}\langle \Delta v_{i}\rangle \right)+{\frac {1}{2}}{\frac {\partial ^{2}}{\partial v_{i}\,\partial v_{j}}}\left(p_{s}\langle \Delta v_{i}\,\Delta v_{j}\rangle \right),} where the third term includes the particle acceleration due to the Lorentz force and the Fokker–Planck term at the right-hand side represents the effects of particle collisions. The quantities ⟨ Δ v i ⟩ {\displaystyle \langle \Delta v_{i}\rangle } and ⟨ Δ v i Δ v j ⟩ {\displaystyle \langle \Delta v_{i}\,\Delta v_{j}\rangle } are the average change in velocity a particle of type s {\displaystyle s} experiences due to collisions with all other particle species in unit time. Expressions for these quantities are given elsewhere. If collisions are ignored, the Boltzmann equation reduces to the Vlasov equation. 
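The Ornstein–Uhlenbeck stationary solution given above can be checked numerically. A minimal sketch (Euler–Maruyama integration in plain Python, with hypothetical parameter values) collects long-run samples and compares their second moment with σ²/(2a), the variance of the stationary Gaussian p_ss:

```python
import math
import random

def ou_samples(a, sigma, n_samples, dt=0.01, burn_in=1000, thin=50, seed=0):
    """Euler-Maruyama integration of the Ornstein-Uhlenbeck SDE
    dX_t = -a X_t dt + sigma dW_t, collecting samples after a burn-in
    (thinning reduces correlation between successive samples)."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for step in range(burn_in + n_samples * thin):
        x += -a * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if step >= burn_in and (step - burn_in) % thin == 0:
            out.append(x)
    return out

a, sigma = 1.0, 0.5
xs = ou_samples(a, sigma, n_samples=20_000)
mean = sum(xs) / len(xs)
second_moment = sum(x * x for x in xs) / len(xs)
# p_ss is a zero-mean Gaussian with variance sigma^2 / (2a) = 0.125 here,
# up to a small discretization bias of order dt.
```

The explicit Euler–Maruyama step introduces a bias of order `dt` in the stationary variance, which is why a small time step is used.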
== Smoluchowski diffusion equation == Consider an overdamped Brownian particle under external force F ( r ) {\displaystyle F(r)} : m r ¨ = − γ r ˙ + F ( r ) + σ ξ ( t ) {\displaystyle m{\ddot {r}}=-\gamma {\dot {r}}+F(r)+\sigma \xi (t)} where the m r ¨ {\displaystyle m{\ddot {r}}} term is negligible (the meaning of "overdamped"). Thus, it is just γ d r = F ( r ) d t + σ d W t {\displaystyle \gamma \,dr=F(r)\,dt+\sigma \,dW_{t}} . The Fokker–Planck equation for this particle is the Smoluchowski diffusion equation: ∂ t P ( r , t | r 0 , t 0 ) = ∇ ⋅ [ D ( ∇ − β F ( r ) ) P ( r , t | r 0 , t 0 ) ] {\displaystyle \partial _{t}P(r,t|r_{0},t_{0})=\nabla \cdot [D(\nabla -\beta F(r))P(r,t|r_{0},t_{0})]} where D {\displaystyle D} is the diffusion constant and β = 1 k B T {\displaystyle \beta ={\frac {1}{k_{\text{B}}T}}} . The importance of this equation is that it allows for both the inclusion of the effect of temperature on the system of particles and a spatially dependent diffusion constant. == Computational considerations == Brownian motion follows the Langevin equation, which can be solved for many different stochastic forcings with results being averaged (canonical ensemble in molecular dynamics). However, instead of this computationally intensive approach, one can use the Fokker–Planck equation and consider the probability p ( v , t ) d v {\displaystyle p(\mathbf {v} ,t)\,d\mathbf {v} } of the particle having a velocity in the interval ( v , v + d v ) {\displaystyle (\mathbf {v} ,\mathbf {v} +d\mathbf {v} )} when it starts its motion with v 0 {\displaystyle \mathbf {v} _{0}} at time 0. === 1-D linear potential example === Brownian dynamics in one dimension is simple. 
==== Theory ==== Starting with a linear potential of the form U ( x ) = c x {\displaystyle U(x)=cx} the corresponding Smoluchowski equation becomes, ∂ t P ( x , t | x 0 , t 0 ) = ∂ x D ( ∂ x + β c ) P ( x , t | x 0 , t 0 ) {\displaystyle \partial _{t}P(x,t|x_{0},t_{0})=\partial _{x}D(\partial _{x}+\beta c)P(x,t|x_{0},t_{0})} Where the diffusion constant, D {\displaystyle D} , is constant over space and time. The boundary conditions are such that the probability vanishes at x → ± ∞ {\displaystyle x\rightarrow \pm \infty } with an initial condition of the ensemble of particles starting in the same place, P ( x , t = t 0 | x 0 , t 0 ) = δ ( x − x 0 ) {\displaystyle P(x,t=t_{0}|x_{0},t_{0})=\delta (x-x_{0})} . Defining τ = D t {\displaystyle \tau =Dt} and b = β c {\displaystyle b=\beta c} and applying the coordinate transformation, y = x + τ b , y 0 = x 0 + τ 0 b {\displaystyle y=x+\tau b,\ \ \ y_{0}=x_{0}+\tau _{0}b} With P ( x , t | x 0 , t 0 ) = q ( y , τ | y 0 , τ 0 ) {\displaystyle P(x,t|x_{0},t_{0})=q(y,\tau |y_{0},\tau _{0})} the Smoluchowski equation becomes, ∂ τ q ( y , τ | y 0 , τ 0 ) = ∂ y 2 q ( y , τ | y 0 , τ 0 ) {\displaystyle \partial _{\tau }q(y,\tau |y_{0},\tau _{0})=\partial _{y}^{2}q(y,\tau |y_{0},\tau _{0})} Which is the free diffusion equation with solution, q ( y , τ | y 0 , τ 0 ) = 1 4 π ( τ − τ 0 ) e − ( y − y 0 ) 2 4 ( τ − τ 0 ) {\displaystyle q(y,\tau |y_{0},\tau _{0})={\frac {1}{\sqrt {4\pi (\tau -\tau _{0})}}}e^{-{\frac {(y-y_{0})^{2}}{4(\tau -\tau _{0})}}}} And after transforming back to the original coordinates, P ( x , t | x 0 , t 0 ) = 1 4 π D ( t − t 0 ) exp ⁡ [ − ( x − x 0 + D β c ( t − t 0 ) ) 2 4 D ( t − t 0 ) ] {\displaystyle P(x,t|x_{0},t_{0})={\frac {1}{\sqrt {4\pi D(t-t_{0})}}}\exp {\left[{-{\frac {(x-x_{0}+D\beta c(t-t_{0}))^{2}}{4D(t-t_{0})}}}\right]}} ==== Simulation ==== The simulation on the right was completed using a Brownian dynamics simulation. 
Starting with a Langevin equation for the system, m x ¨ = − γ x ˙ − c + σ ξ ( t ) {\displaystyle m{\ddot {x}}=-\gamma {\dot {x}}-c+\sigma \xi (t)} where γ {\displaystyle \gamma } is the friction term, ξ {\displaystyle \xi } is a fluctuating force on the particle, and σ {\displaystyle \sigma } is the amplitude of the fluctuation. In the overdamped limit the frictional force is much greater than the inertial force, | γ x ˙ | ≫ | m x ¨ | {\displaystyle \left|\gamma {\dot {x}}\right|\gg \left|m{\ddot {x}}\right|} . Therefore, the Langevin equation becomes, γ x ˙ = − c + σ ξ ( t ) {\displaystyle \gamma {\dot {x}}=-c+\sigma \xi (t)} For the Brownian dynamics simulation the fluctuating force ξ ( t ) {\displaystyle \xi (t)} is assumed to be Gaussian, with an amplitude that depends on the temperature of the system: σ = 2 γ k B T {\textstyle \sigma ={\sqrt {2\gamma k_{\text{B}}T}}} . Rewriting the Langevin equation, d x d t = − D β c + 2 D ξ ( t ) {\displaystyle {\frac {dx}{dt}}=-D\beta c+{\sqrt {2D}}\xi (t)} where D = k B T γ {\textstyle D={\frac {k_{\text{B}}T}{\gamma }}} by the Einstein relation. The integration of this equation was done using the Euler–Maruyama method to numerically approximate the path of this Brownian particle. == Solution == Being a partial differential equation, the Fokker–Planck equation can be solved analytically only in special cases. A formal analogy of the Fokker–Planck equation with the Schrödinger equation allows the use of advanced operator techniques known from quantum mechanics for its solution in a number of cases. Furthermore, in the case of overdamped dynamics, when the Fokker–Planck equation contains second partial derivatives with respect to all spatial variables, the equation can be written in the form of a master equation that can easily be solved numerically.
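The Brownian dynamics scheme just described can be sketched in a few lines (a sketch, assuming NumPy; the values of D, βc and the step counts are illustrative). Each particle is advanced with the Euler–Maruyama step x ← x − Dβc Δt + √(2DΔt) ξ, and the ensemble statistics are compared with the drifting Gaussian solution derived in the Theory subsection:

```python
import numpy as np

rng = np.random.default_rng(0)
D, beta_c = 1.0, 0.5                 # illustrative values for D and beta*c
dt, n_steps, n_paths = 1e-3, 1000, 20000
x = np.zeros(n_paths)                # every particle starts at x0 = 0

# Euler–Maruyama step: x_{k+1} = x_k - D*beta*c*dt + sqrt(2*D*dt) * N(0, 1)
for _ in range(n_steps):
    x += -D * beta_c * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n_paths)

t = n_steps * dt                     # t = 1.0
# Analytic solution: Gaussian with mean x0 - D*beta*c*t and variance 2*D*t
print(x.mean())                      # ≈ -0.5
print(x.var())                       # ≈ 2.0
```

With 20,000 paths the sample mean and variance match the analytic values to within a few percent.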
In many applications, one is only interested in the steady-state probability distribution p 0 ( x ) {\displaystyle p_{0}(x)} , which can be found from ∂ p ( x , t ) ∂ t = 0 {\textstyle {\frac {\partial p(x,t)}{\partial t}}=0} . The computation of mean first passage times and splitting probabilities can be reduced to the solution of an ordinary differential equation which is intimately related to the Fokker–Planck equation. == Particular cases with known solution and inversion == In mathematical finance for volatility smile modeling of options via local volatility, one has the problem of deriving a diffusion coefficient σ ( X t , t ) {\displaystyle {\sigma }(\mathbf {X} _{t},t)} consistent with a probability density obtained from market option quotes. The problem is therefore an inversion of the Fokker–Planck equation: given the density f(x,t) of the option underlying X deduced from the option market, one aims at finding the local volatility σ ( X t , t ) {\displaystyle {\sigma }(\mathbf {X} _{t},t)} consistent with f. This is an inverse problem that has been solved in general by Dupire (1994, 1997) with a non-parametric solution. Brigo and Mercurio (2002, 2003) propose a solution in parametric form via a particular local volatility σ ( X t , t ) {\displaystyle {\sigma }(\mathbf {X} _{t},t)} consistent with a solution of the Fokker–Planck equation given by a mixture model. More information is also available in Fengler (2008), Gatheral (2008), and Musiela and Rutkowski (2008). == Fokker–Planck equation and path integral == Every Fokker–Planck equation is equivalent to a path integral. The path integral formulation is an excellent starting point for the application of field theory methods. This is used, for instance, in critical dynamics. A derivation of the path integral is possible in much the same way as in quantum mechanics. The derivation for a Fokker–Planck equation with one variable x {\displaystyle x} is as follows.
Start by inserting a delta function and then integrating by parts: ∂ ∂ t p ( x ′ , t ) = − ∂ ∂ x ′ [ D 1 ( x ′ , t ) p ( x ′ , t ) ] + ∂ 2 ∂ x ′ 2 [ D 2 ( x ′ , t ) p ( x ′ , t ) ] = ∫ − ∞ ∞ d x [ ( D 1 ( x , t ) ∂ ∂ x + D 2 ( x , t ) ∂ 2 ∂ x 2 ) δ ( x ′ − x ) ] p ( x , t ) . {\displaystyle {\begin{aligned}{\frac {\partial }{\partial t}}p{\left(x',t\right)}&=-{\frac {\partial }{\partial x'}}\left[D_{1}(x',t)p(x',t)\right]+{\frac {\partial ^{2}}{\partial {x'}^{2}}}\left[D_{2}(x',t)p(x',t)\right]\\[1ex]&=\int _{-\infty }^{\infty }dx\left[\left(D_{1}{\left(x,t\right)}{\frac {\partial }{\partial x}}+D_{2}{\left(x,t\right)}{\frac {\partial ^{2}}{\partial x^{2}}}\right)\delta {\left(x'-x\right)}\right]p(x,t).\end{aligned}}} The x {\displaystyle x} -derivatives here only act on the δ {\displaystyle \delta } -function, not on p ( x , t ) {\displaystyle p(x,t)} . Integrate over a time interval ε {\displaystyle \varepsilon } , p ( x ′ , t + ε ) = ∫ − ∞ ∞ d x ( ( 1 + ε [ D 1 ( x , t ) ∂ ∂ x + D 2 ( x , t ) ∂ 2 ∂ x 2 ] ) δ ( x ′ − x ) ) p ( x , t ) + O ( ε 2 ) . {\displaystyle p(x',t+\varepsilon )=\int _{-\infty }^{\infty }\,\mathrm {d} x\left(\left(1+\varepsilon \left[D_{1}(x,t){\frac {\partial }{\partial x}}+D_{2}(x,t){\frac {\partial ^{2}}{\partial x^{2}}}\right]\right)\delta (x'-x)\right)p(x,t)+O(\varepsilon ^{2}).} Insert the Fourier integral δ ( x ′ − x ) = ∫ − i ∞ i ∞ d x ~ 2 π i e x ~ ( x − x ′ ) {\displaystyle \delta {\left(x'-x\right)}=\int _{-i\infty }^{i\infty }{\frac {\mathrm {d} {\tilde {x}}}{2\pi i}}e^{{\tilde {x}}{\left(x-x'\right)}}} for the δ {\displaystyle \delta } -function, p ( x ′ , t + ε ) = ∫ − ∞ ∞ d x ∫ − i ∞ i ∞ d x ~ 2 π i ( 1 + ε [ x ~ D 1 ( x , t ) + x ~ 2 D 2 ( x , t ) ] ) e x ~ ( x − x ′ ) p ( x , t ) + O ( ε 2 ) = ∫ − ∞ ∞ d x ∫ − i ∞ i ∞ d x ~ 2 π i exp ⁡ ( ε [ − x ~ ( x ′ − x ) ε + x ~ D 1 ( x , t ) + x ~ 2 D 2 ( x , t ) ] ) p ( x , t ) + O ( ε 2 ) . 
{\displaystyle {\begin{aligned}p(x',t+\varepsilon )&=\int _{-\infty }^{\infty }\mathrm {d} x\int _{-i\infty }^{i\infty }{\frac {\mathrm {d} {\tilde {x}}}{2\pi i}}\left(1+\varepsilon \left[{\tilde {x}}D_{1}(x,t)+{\tilde {x}}^{2}D_{2}(x,t)\right]\right)e^{{\tilde {x}}(x-x')}p(x,t)+O(\varepsilon ^{2})\\[5pt]&=\int _{-\infty }^{\infty }\mathrm {d} x\int _{-i\infty }^{i\infty }{\frac {\mathrm {d} {\tilde {x}}}{2\pi i}}\exp \left(\varepsilon \left[-{\tilde {x}}{\frac {(x'-x)}{\varepsilon }}+{\tilde {x}}D_{1}(x,t)+{\tilde {x}}^{2}D_{2}(x,t)\right]\right)p(x,t)+O(\varepsilon ^{2}).\end{aligned}}} This equation expresses p ( x ′ , t + ε ) {\displaystyle p(x',t+\varepsilon )} as a functional of p ( x , t ) {\displaystyle p(x,t)} . Iterating ( t ′ − t ) / ε {\displaystyle (t'-t)/\varepsilon } times and taking the limit ε → 0 {\displaystyle \varepsilon \rightarrow 0} gives a path integral with action S = ∫ d t [ x ~ D 1 ( x , t ) + x ~ 2 D 2 ( x , t ) − x ~ ∂ x ∂ t ] . {\displaystyle S=\int \mathrm {d} t\left[{\tilde {x}}D_{1}(x,t)+{\tilde {x}}^{2}D_{2}(x,t)-{\tilde {x}}{\frac {\partial x}{\partial t}}\right].} The variables x ~ {\displaystyle {\tilde {x}}} conjugate to x {\displaystyle x} are called "response variables". Although formally equivalent, different problems may be solved more easily in the Fokker–Planck equation or the path integral formulation. The equilibrium distribution, for instance, may be obtained more directly from the Fokker–Planck equation. == See also == == Notes and references == == Further reading == Frank, Till Daniel (2005). Nonlinear Fokker–Planck Equations: Fundamentals and Applications. Springer Series in Synergetics. Springer. ISBN 3-540-21264-7. Gardiner, Crispin (2009). Stochastic Methods (4th ed.). Springer. ISBN 978-3-540-70712-7. Pavliotis, Grigorios A. (2014). Stochastic Processes and Applications: Diffusion Processes, the Fokker–Planck and Langevin Equations. Springer Texts in Applied Mathematics. Springer. ISBN 978-1-4939-1322-0.
Risken, Hannes (1996). The Fokker–Planck Equation: Methods of Solutions and Applications. Springer Series in Synergetics (2nd ed.). Springer. ISBN 3-540-61530-X.
Wikipedia/Fokker–Planck_equation
The Bachelier model is a model of an asset price under Brownian motion presented by Louis Bachelier in his PhD thesis The Theory of Speculation (Théorie de la spéculation, published 1900). It is also called the "normal model", as opposed to the "log-normal model" of Black–Scholes. One early criticism of the Bachelier model is that the probability distribution he chose to describe stock prices allowed for negative prices. (His doctoral dissertation was graded down because of that feature.) The (much) later Black–Scholes–(Merton) model addresses that issue by positing stock prices as following a log-normal distribution, which does not allow negative values; this, in turn, implies that log-returns follow a normal distribution. On April 8, 2020, the CME Group posted the note CME Clearing Plan to Address the Potential of a Negative Underlying in Certain Energy Options Contracts, saying that after a threshold on price, it would change its standard energy options model from one based on geometric Brownian motion and the Black–Scholes model to the Bachelier model. On April 20, 2020, oil futures reached negative values for the first time in history, and the Bachelier model took on an important role in option pricing and risk management. The European analytic formula for this model based on a risk neutral argument is derived in Analytic Formula for the European Normal Black Scholes Formula (Kazuhiro Iwasawa, New York University, December 2, 2001). The implied volatility under the Bachelier model can be obtained by an accurate numerical approximation. For an extensive review of the Bachelier model, see the review paper, A Black-Scholes User's Guide to the Bachelier Model, which summarizes the results on volatility conversion, risk management, stochastic volatility, and barrier options pricing to facilitate the model transition. The paper also connects the Black–Scholes and Bachelier models by using the displaced Black–Scholes model as a model family.
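Under the Bachelier model the forward price F_T is normally distributed, F_T ~ N(F, σ²T), with a normal ("basis-point") volatility σ, and the undiscounted European call has the standard closed form C = (F − K)Φ(d) + σ√T φ(d), d = (F − K)/(σ√T). A sketch of this formula (parameter values are illustrative):

```python
from math import erf, exp, pi, sqrt

def bachelier_call(F, K, sigma, T):
    """Undiscounted European call under the Bachelier (normal) model."""
    d = (F - K) / (sigma * sqrt(T))
    Phi = 0.5 * (1.0 + erf(d / sqrt(2.0)))    # standard normal CDF
    phi = exp(-0.5 * d * d) / sqrt(2.0 * pi)  # standard normal PDF
    return (F - K) * Phi + sigma * sqrt(T) * phi

def bachelier_put(F, K, sigma, T):
    # put-call parity on undiscounted forward prices: C - P = F - K
    return bachelier_call(F, K, sigma, T) - (F - K)

# Unlike Black-Scholes, negative strikes or forwards pose no problem:
print(bachelier_call(2.0, -5.0, 20.0, 0.25))
```

At the money (F = K) the formula collapses to C = σ√(T/2π), a convenient sanity check.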
== References ==
Wikipedia/Bachelier_model
In mathematics, the Milstein method is a technique for the approximate numerical solution of a stochastic differential equation. It is named after Grigori Milstein who first published it in 1974. == Description == Consider the autonomous Itō stochastic differential equation: d X t = a ( X t ) d t + b ( X t ) d W t {\displaystyle \mathrm {d} X_{t}=a(X_{t})\,\mathrm {d} t+b(X_{t})\,\mathrm {d} W_{t}} with initial condition X 0 = x 0 {\displaystyle X_{0}=x_{0}} , where W t {\displaystyle W_{t}} denotes the Wiener process, and suppose that we wish to solve this SDE on some interval of time [ 0 , T ] {\displaystyle [0,T]} . Then the Milstein approximation to the true solution X {\displaystyle X} is the Markov chain Y {\displaystyle Y} defined as follows: Partition the interval [ 0 , T ] {\displaystyle [0,T]} into N {\displaystyle N} equal subintervals of width Δ t > 0 {\displaystyle \Delta t>0} : 0 = τ 0 < τ 1 < ⋯ < τ N = T with τ n := n Δ t and Δ t = T N {\displaystyle 0=\tau _{0}<\tau _{1}<\dots <\tau _{N}=T{\text{ with }}\tau _{n}:=n\Delta t{\text{ and }}\Delta t={\frac {T}{N}}} Set Y 0 = x 0 ; {\displaystyle Y_{0}=x_{0};} Recursively define Y n {\displaystyle Y_{n}} for 1 ≤ n ≤ N {\displaystyle 1\leq n\leq N} by: Y n + 1 = Y n + a ( Y n ) Δ t + b ( Y n ) Δ W n + 1 2 b ( Y n ) b ′ ( Y n ) ( ( Δ W n ) 2 − Δ t ) {\displaystyle Y_{n+1}=Y_{n}+a(Y_{n})\Delta t+b(Y_{n})\Delta W_{n}+{\frac {1}{2}}b(Y_{n})b'(Y_{n})\left((\Delta W_{n})^{2}-\Delta t\right)} where b ′ {\displaystyle b'} denotes the derivative of b ( x ) {\displaystyle b(x)} with respect to x {\displaystyle x} and: Δ W n = W τ n + 1 − W τ n {\displaystyle \Delta W_{n}=W_{\tau _{n+1}}-W_{\tau _{n}}} are independent and identically distributed normal random variables with expected value zero and variance Δ t {\displaystyle \Delta t} . 
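The recursion above can be sketched directly (a sketch, assuming NumPy). Geometric Brownian motion, with a(x) = μx, b(x) = σx and b′(x) = σ, is used here because its exact solution is available for comparison along the same Brownian path:

```python
import numpy as np

def milstein(a, b, db, x0, T, N, dW):
    """One path of the Milstein scheme for dX = a(X) dt + b(X) dW."""
    dt = T / N
    Y = np.empty(N + 1)
    Y[0] = x0
    for n in range(N):
        Y[n + 1] = (Y[n] + a(Y[n]) * dt + b(Y[n]) * dW[n]
                    + 0.5 * b(Y[n]) * db(Y[n]) * (dW[n] ** 2 - dt))
    return Y

# Geometric Brownian motion: a(x) = mu*x, b(x) = sigma*x, b'(x) = sigma
mu, sigma, x0, T, N = 0.1, 0.3, 1.0, 1.0, 1000
rng = np.random.default_rng(42)
dW = rng.normal(0.0, np.sqrt(T / N), size=N)   # i.i.d. N(0, dt) increments

Y = milstein(lambda x: mu * x, lambda x: sigma * x, lambda x: sigma,
             x0, T, N, dW)

# Exact GBM solution driven by the same Brownian path, for comparison
X_T = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum())
print(abs(Y[-1] - X_T))   # small; the strong error is O(dt)
```

Halving the step size should roughly halve the pathwise error, reflecting the strong order Δt discussed below.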
Then Y n {\displaystyle Y_{n}} will approximate X τ n {\displaystyle X_{\tau _{n}}} for 0 ≤ n ≤ N {\displaystyle 0\leq n\leq N} , and increasing N {\displaystyle N} will yield a better approximation. Note that when b ′ ( Y n ) = 0 {\displaystyle b'(Y_{n})=0} (i.e. the diffusion term does not depend on X t {\displaystyle X_{t}} ) this method is equivalent to the Euler–Maruyama method. The Milstein scheme has both weak and strong order of convergence Δ t {\displaystyle \Delta t} , which is superior to the Euler–Maruyama method: the latter has the same weak order of convergence Δ t {\displaystyle \Delta t} but an inferior strong order of convergence Δ t {\displaystyle {\sqrt {\Delta t}}} . == Intuitive derivation == For this derivation, we will only look at geometric Brownian motion (GBM), the stochastic differential equation of which is given by: d X t = μ X t d t + σ X t d W t {\displaystyle \mathrm {d} X_{t}=\mu X_{t}\,\mathrm {d} t+\sigma X_{t}\,\mathrm {d} W_{t}} with real constants μ {\displaystyle \mu } and σ {\displaystyle \sigma } .
Using Itō's lemma we get: d ln ⁡ X t = ( μ − 1 2 σ 2 ) d t + σ d W t {\displaystyle \mathrm {d} \ln X_{t}=\left(\mu -{\frac {1}{2}}\sigma ^{2}\right)\mathrm {d} t+\sigma \mathrm {d} W_{t}} Thus, the solution to the GBM SDE is: X t + Δ t = X t exp ⁡ { ∫ t t + Δ t ( μ − 1 2 σ 2 ) d t + ∫ t t + Δ t σ d W u } ≈ X t ( 1 + μ Δ t − 1 2 σ 2 Δ t + σ Δ W t + 1 2 σ 2 ( Δ W t ) 2 ) = X t + a ( X t ) Δ t + b ( X t ) Δ W t + 1 2 b ( X t ) b ′ ( X t ) ( ( Δ W t ) 2 − Δ t ) {\displaystyle {\begin{aligned}X_{t+\Delta t}&=X_{t}\exp \left\{\int _{t}^{t+\Delta t}\left(\mu -{\frac {1}{2}}\sigma ^{2}\right)\mathrm {d} t+\int _{t}^{t+\Delta t}\sigma \mathrm {d} W_{u}\right\}\\&\approx X_{t}\left(1+\mu \Delta t-{\frac {1}{2}}\sigma ^{2}\Delta t+\sigma \Delta W_{t}+{\frac {1}{2}}\sigma ^{2}(\Delta W_{t})^{2}\right)\\&=X_{t}+a(X_{t})\Delta t+b(X_{t})\Delta W_{t}+{\frac {1}{2}}b(X_{t})b'(X_{t})((\Delta W_{t})^{2}-\Delta t)\end{aligned}}} where a ( x ) = μ x , b ( x ) = σ x {\displaystyle a(x)=\mu x,~b(x)=\sigma x} . The numerical solution is presented in the graphic for three different trajectories. === Computer implementation === The Milstein method can be implemented in a short Python program and used to solve the SDE describing geometric Brownian motion defined by { d Y t = μ Y d t + σ Y d W t Y 0 = Y init {\displaystyle {\begin{cases}dY_{t}=\mu Y\,{\mathrm {d} }t+\sigma Y\,{\mathrm {d} }W_{t}\\Y_{0}=Y_{\text{init}}\end{cases}}} == See also == Euler–Maruyama method == References == == Further reading == Kloeden, P. E.; Platen, E. (1999). Numerical Solution of Stochastic Differential Equations. Springer, Berlin. ISBN 3-540-54062-8.
Wikipedia/Milstein_method
A backward stochastic differential equation (BSDE) is a stochastic differential equation with a terminal condition in which the solution is required to be adapted with respect to an underlying filtration. BSDEs naturally arise in various applications such as stochastic control, mathematical finance, and the nonlinear Feynman–Kac formula. == Background == Backward stochastic differential equations were introduced by Jean-Michel Bismut in 1973 in the linear case and by Étienne Pardoux and Shige Peng in 1990 in the nonlinear case. == Mathematical framework == Fix a terminal time T > 0 {\displaystyle T>0} and a probability space ( Ω , F , P ) {\displaystyle (\Omega ,{\mathcal {F}},\mathbb {P} )} . Let ( B t ) t ∈ [ 0 , T ] {\displaystyle (B_{t})_{t\in [0,T]}} be a Brownian motion with natural filtration ( F t ) t ∈ [ 0 , T ] {\displaystyle ({\mathcal {F}}_{t})_{t\in [0,T]}} . A backward stochastic differential equation is an integral equation of the type Y t = ξ + ∫ t T f ( s , Y s , Z s ) d s − ∫ t T Z s d B s ( 1 ) {\displaystyle Y_{t}=\xi +\int _{t}^{T}f(s,Y_{s},Z_{s})\,\mathrm {d} s-\int _{t}^{T}Z_{s}\,\mathrm {d} B_{s}\qquad (1)} where f : [ 0 , T ] × R × R → R {\displaystyle f:[0,T]\times \mathbb {R} \times \mathbb {R} \to \mathbb {R} } is called the generator of the BSDE, the terminal condition ξ {\displaystyle \xi } is an F T {\displaystyle {\mathcal {F}}_{T}} -measurable random variable, and the solution ( Y t , Z t ) t ∈ [ 0 , T ] {\displaystyle (Y_{t},Z_{t})_{t\in [0,T]}} consists of stochastic processes ( Y t ) t ∈ [ 0 , T ] {\displaystyle (Y_{t})_{t\in [0,T]}} and ( Z t ) t ∈ [ 0 , T ] {\displaystyle (Z_{t})_{t\in [0,T]}} which are adapted to the filtration ( F t ) t ∈ [ 0 , T ] {\displaystyle ({\mathcal {F}}_{t})_{t\in [0,T]}} .
=== Example === In the case f ≡ 0 {\displaystyle f\equiv 0} , the BSDE (1) reduces to Y t = ξ − ∫ t T Z s d B s ( 2 ) {\displaystyle Y_{t}=\xi -\int _{t}^{T}Z_{s}\,\mathrm {d} B_{s}\qquad (2)} If ξ ∈ L 2 ( Ω , P ) {\displaystyle \xi \in L^{2}(\Omega ,\mathbb {P} )} , then it follows from the martingale representation theorem that there exists a unique stochastic process ( Z t ) t ∈ [ 0 , T ] {\displaystyle (Z_{t})_{t\in [0,T]}} such that Y t = E [ ξ | F t ] {\displaystyle Y_{t}=\mathbb {E} [\xi |{\mathcal {F}}_{t}]} and Z t {\displaystyle Z_{t}} satisfy the BSDE (2). == Numerical Method == The deep backward stochastic differential equation method is a numerical method that combines deep learning with backward stochastic differential equations (BSDEs). This method is particularly useful for solving high-dimensional problems in financial mathematics. By leveraging the powerful function approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods in high-dimensional settings. Specifically, traditional methods like finite difference methods or Monte Carlo simulations often struggle with the curse of dimensionality, where computational cost increases exponentially with the number of dimensions. Deep BSDE methods, however, employ deep neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden. == See also == Martingale representation theorem Stochastic control Stochastic differential equation == References == == Further reading == Pardoux, Etienne; Răşcanu, Aurel (2014). Stochastic Differential Equations, Backward SDEs, Partial Differential Equations. Stochastic modeling and applied probability. Springer International Publishing Switzerland. Zhang, Jianfeng (2017). Backward stochastic differential equations. Probability theory and stochastic modeling. Springer New York, NY.
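Continuing the f ≡ 0 example, take T = 1 and ξ = B_T². Then Y_t = E[B_T² | F_t] = B_t² + (T − t) and, by Itô's formula, Z_t = 2B_t. A small Monte Carlo sketch (assuming NumPy; path and step counts illustrative) checks both Y_0 = E[ξ] and the terminal identity ξ = Y_0 + ∫₀ᵀ Z_s dB_s up to discretization error:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, n_paths = 1.0, 1000, 5000
dt = T / N

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, N))
B = np.cumsum(dB, axis=1)                                # paths at t_1 .. t_N
B_left = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])  # left endpoints t_0 .. t_{N-1}

xi = B[:, -1] ** 2                   # terminal condition xi = B_T^2
Y0 = T                               # closed form: Y_0 = E[B_T^2] = T
stoch_int = np.sum(2.0 * B_left * dB, axis=1)   # Ito integral of Z_s = 2 B_s

print(xi.mean())                                # ≈ 1.0, i.e. Y_0
print(np.mean(np.abs(xi - (Y0 + stoch_int))))   # small discretization error
```

Note the Itô integral must be discretized with left endpoints; a midpoint rule would converge to the Stratonovich integral instead.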
Wikipedia/Backward_stochastic_differential_equation
In statistical mechanics and information theory, the Fokker–Planck equation is a partial differential equation that describes the time evolution of the probability density function of the velocity of a particle under the influence of drag forces and random forces, as in Brownian motion. The equation can be generalized to other observables as well. The Fokker–Planck equation has multiple applications in information theory, graph theory, data science, finance, and economics. It is named after Adriaan Fokker and Max Planck, who described it in 1914 and 1917, respectively. It is also known as the Kolmogorov forward equation, after Andrey Kolmogorov, who independently discovered it in 1931. When applied to particle position distributions, it is better known as the Smoluchowski equation (after Marian Smoluchowski), and in this context it is equivalent to the convection–diffusion equation. When applied to particle position and momentum distributions, it is known as the Klein–Kramers equation. The case with zero diffusion is the continuity equation. The Fokker–Planck equation is obtained from the master equation through the Kramers–Moyal expansion. The first consistent microscopic derivation of the Fokker–Planck equation in the single scheme of classical and quantum mechanics was performed by Nikolay Bogoliubov and Nikolay Krylov.
== One dimension == In one spatial dimension x, for an Itô process driven by the standard Wiener process W t {\displaystyle W_{t}} and described by the stochastic differential equation (SDE) d X t = μ ( X t , t ) d t + σ ( X t , t ) d W t {\displaystyle dX_{t}=\mu (X_{t},t)\,dt+\sigma (X_{t},t)\,dW_{t}} with drift μ ( X t , t ) {\displaystyle \mu (X_{t},t)} and diffusion coefficient D ( X t , t ) = σ 2 ( X t , t ) / 2 {\displaystyle D(X_{t},t)=\sigma ^{2}(X_{t},t)/2} , the Fokker–Planck equation for the probability density p ( x , t ) {\displaystyle p(x,t)} of the random variable X t {\displaystyle X_{t}} is ∂ ∂ t p ( x , t ) = − ∂ ∂ x [ μ ( x , t ) p ( x , t ) ] + ∂ 2 ∂ x 2 [ D ( x , t ) p ( x , t ) ] . {\displaystyle {\frac {\partial }{\partial t}}p(x,t)=-{\frac {\partial }{\partial x}}\left[\mu (x,t)\,p(x,t)\right]+{\frac {\partial ^{2}}{\partial x^{2}}}\left[D(x,t)\,p(x,t)\right].} While the Fokker–Planck equation is used with problems where the initial distribution is known, if the problem is to know the distribution at previous times, the Feynman–Kac formula can be used, which is a consequence of the Kolmogorov backward equation. The stochastic process defined above in the Itô sense can be rewritten within the Stratonovich convention as a Stratonovich SDE: d X t = [ μ ( X t , t ) − 1 2 ∂ ∂ X t D ( X t , t ) ] d t + 2 D ( X t , t ) ∘ d W t . {\displaystyle dX_{t}=\left[\mu (X_{t},t)-{\frac {1}{2}}{\frac {\partial }{\partial X_{t}}}D(X_{t},t)\right]\,dt+{\sqrt {2D(X_{t},t)}}\circ dW_{t}.} It includes an added noise-induced drift term due to diffusion gradient effects if the noise is state-dependent. This convention is more often used in physical applications. Indeed, any solution of this Stratonovich SDE is also a solution of the original Itô SDE. The zero-drift equation with constant diffusion can be considered as a model of classical Brownian motion: ∂ ∂ t p ( x , t ) = D 0 ∂ 2 ∂ x 2 [ p ( x , t ) ] . {\displaystyle {\frac {\partial }{\partial t}}p(x,t)=D_{0}{\frac {\partial ^{2}}{\partial x^{2}}}\left[p(x,t)\right].} This model has a discrete spectrum of solutions if the condition of fixed boundaries is added for { 0 ≤ x ≤ L } {\displaystyle \{0\leq x\leq L\}} : p ( 0 , t ) = p ( L , t ) = 0 , p ( x , 0 ) = p 0 ( x ) .
{\displaystyle {\begin{aligned}p(0,t)&=p(L,t)=0,\\p(x,0)&=p_{0}(x).\end{aligned}}} It has been shown that in this case an analytical spectrum of solutions allows deriving a local uncertainty relation for the coordinate-velocity phase volume: Δ x Δ v ≥ D 0 . {\displaystyle \Delta x\,\Delta v\geq D_{0}.} Here D 0 {\displaystyle D_{0}} is a minimal value of a corresponding diffusion spectrum D j {\displaystyle D_{j}} , while Δ x {\displaystyle \Delta x} and Δ v {\displaystyle \Delta v} represent the uncertainty of coordinate–velocity definition. == Higher dimensions == More generally, if d X t = μ ( X t , t ) d t + σ ( X t , t ) d W t , {\displaystyle d\mathbf {X} _{t}={\boldsymbol {\mu }}(\mathbf {X} _{t},t)\,dt+{\boldsymbol {\sigma }}(\mathbf {X} _{t},t)\,d\mathbf {W} _{t},} where X t {\displaystyle \mathbf {X} _{t}} and μ ( X t , t ) {\displaystyle {\boldsymbol {\mu }}(\mathbf {X} _{t},t)} are N-dimensional vectors, σ ( X t , t ) {\displaystyle {\boldsymbol {\sigma }}(\mathbf {X} _{t},t)} is an N × M {\displaystyle N\times M} matrix and W t {\displaystyle \mathbf {W} _{t}} is an M-dimensional standard Wiener process, the probability density p ( x , t ) {\displaystyle p(\mathbf {x} ,t)} for X t {\displaystyle \mathbf {X} _{t}} satisfies the Fokker–Planck equation ∂ p ( x , t ) ∂ t = − ∑ i = 1 N ∂ ∂ x i [ μ i ( x , t ) p ( x , t ) ] + ∑ i = 1 N ∑ j = 1 N ∂ 2 ∂ x i ∂ x j [ D i j ( x , t ) p ( x , t ) ] {\displaystyle {\frac {\partial p(\mathbf {x} ,t)}{\partial t}}=-\sum _{i=1}^{N}{\frac {\partial }{\partial x_{i}}}\left[\mu _{i}(\mathbf {x} ,t)\,p(\mathbf {x} ,t)\right]+\sum _{i=1}^{N}\sum _{j=1}^{N}{\frac {\partial ^{2}}{\partial x_{i}\,\partial x_{j}}}\left[D_{ij}(\mathbf {x} ,t)\,p(\mathbf {x} ,t)\right]} with drift vector μ = ( μ 1 , … , μ N ) {\displaystyle {\boldsymbol {\mu }}=(\mu _{1},\ldots ,\mu _{N})} and diffusion tensor D = 1 2 σ σ T {\textstyle \mathbf {D} ={\frac {1}{2}}{\boldsymbol {\sigma \sigma }}^{\mathsf {T}}} , i.e. D i j ( x , t ) = 1 2 ∑ k = 1 M σ i k ( x , t ) σ j k ( x , t ) .
{\displaystyle D_{ij}(\mathbf {x} ,t)={\frac {1}{2}}\sum _{k=1}^{M}\sigma _{ik}(\mathbf {x} ,t)\sigma _{jk}(\mathbf {x} ,t).} If instead of an Itô SDE, a Stratonovich SDE is considered, d X t = μ ( X t , t ) d t + σ ( X t , t ) ∘ d W t , {\displaystyle d\mathbf {X} _{t}={\boldsymbol {\mu }}(\mathbf {X} _{t},t)\,dt+{\boldsymbol {\sigma }}(\mathbf {X} _{t},t)\circ d\mathbf {W} _{t},} the Fokker–Planck equation will read: ∂ p ( x , t ) ∂ t = − ∑ i = 1 N ∂ ∂ x i [ μ i ( x , t ) p ( x , t ) ] + 1 2 ∑ k = 1 M ∑ i = 1 N ∂ ∂ x i { σ i k ( x , t ) ∑ j = 1 N ∂ ∂ x j [ σ j k ( x , t ) p ( x , t ) ] } {\displaystyle {\frac {\partial p(\mathbf {x} ,t)}{\partial t}}=-\sum _{i=1}^{N}{\frac {\partial }{\partial x_{i}}}\left[\mu _{i}(\mathbf {x} ,t)\,p(\mathbf {x} ,t)\right]+{\frac {1}{2}}\sum _{k=1}^{M}\sum _{i=1}^{N}{\frac {\partial }{\partial x_{i}}}\left\{\sigma _{ik}(\mathbf {x} ,t)\sum _{j=1}^{N}{\frac {\partial }{\partial x_{j}}}\left[\sigma _{jk}(\mathbf {x} ,t)\,p(\mathbf {x} ,t)\right]\right\}} == Generalization == In general, the Fokker–Planck equations are a special case of the general Kolmogorov forward equation ∂ t ρ = A ∗ ρ {\displaystyle \partial _{t}\rho ={\mathcal {A}}^{*}\rho } where the linear operator A ∗ {\displaystyle {\mathcal {A}}^{*}} is the Hermitian adjoint of the infinitesimal generator for the Markov process. == Examples == === Wiener process === A standard scalar Wiener process is generated by the stochastic differential equation d X t = d W t . {\displaystyle dX_{t}=dW_{t}.} Here the drift term is zero and the diffusion coefficient is 1/2. Thus the corresponding Fokker–Planck equation is ∂ p ( x , t ) ∂ t = 1 2 ∂ 2 p ( x , t ) ∂ x 2 , {\displaystyle {\frac {\partial p(x,t)}{\partial t}}={\frac {1}{2}}{\frac {\partial ^{2}p(x,t)}{\partial x^{2}}},} which is the simplest form of a diffusion equation. If the initial condition is p ( x , 0 ) = δ ( x ) {\displaystyle p(x,0)=\delta (x)} , the solution is p ( x , t ) = 1 2 π t e − x 2 / ( 2 t ) .
{\displaystyle p(x,t)={\frac {1}{\sqrt {2\pi t}}}e^{-{x^{2}}/({2t})}.} === Boltzmann distribution at the thermodynamic equilibrium === The overdamped Langevin equation d x t = − 1 2 k B T ( ∇ x U ) d t + d W t {\displaystyle dx_{t}=-{\frac {1}{2k_{\text{B}}T}}(\nabla _{x}U)dt+dW_{t}} gives ∂ t p = 1 2 ∇ ⋅ ( p k B T ∇ U + ∇ p ) {\textstyle \partial _{t}p={\frac {1}{2}}\nabla \cdot \left({\frac {p}{k_{\text{B}}T}}\nabla U+\nabla p\right)} . The Boltzmann distribution p ( x ) ∝ e − U ( x ) / k B T {\displaystyle p(x)\propto e^{-U(x)/k_{\text{B}}T}} is an equilibrium distribution, and assuming U {\displaystyle U} grows sufficiently rapidly (that is, the potential well is deep enough to confine the particle), the Boltzmann distribution is the unique equilibrium. === Ornstein–Uhlenbeck process === The Ornstein–Uhlenbeck process is a process defined as d X t = − a X t d t + σ d W t . {\displaystyle dX_{t}=-aX_{t}\,dt+\sigma \,dW_{t}.} with a > 0 {\displaystyle a>0} . Physically, this equation can be motivated as follows: a particle of mass m {\displaystyle m} with velocity V t {\displaystyle V_{t}} moving in a medium, e.g., a fluid, will experience a friction force which resists motion, whose magnitude can be approximated as proportional to the particle's velocity, − a V t {\displaystyle -aV_{t}} with a = c o n s t a n t {\displaystyle a=\mathrm {constant} } . Other particles in the medium will randomly kick the particle as they collide with it, and this effect can be approximated by a white noise term, σ ( d W t / d t ) {\displaystyle \sigma (dW_{t}/dt)} . Newton's second law is written as m d V t d t = − a V t + σ d W t d t . {\displaystyle m{\frac {dV_{t}}{dt}}=-aV_{t}+\sigma {\frac {dW_{t}}{dt}}.} Taking m = 1 {\displaystyle m=1} for simplicity and changing the notation as V t → X t {\displaystyle V_{t}\rightarrow X_{t}} leads to the familiar form d X t = − a X t d t + σ d W t {\displaystyle dX_{t}=-aX_{t}dt+\sigma dW_{t}} .
The corresponding Fokker–Planck equation is ∂ p ( x , t ) ∂ t = a ∂ ∂ x ( x p ( x , t ) ) + σ 2 2 ∂ 2 p ( x , t ) ∂ x 2 , {\displaystyle {\frac {\partial p(x,t)}{\partial t}}=a{\frac {\partial }{\partial x}}\left(x\,p(x,t)\right)+{\frac {\sigma ^{2}}{2}}{\frac {\partial ^{2}p(x,t)}{\partial x^{2}}},} The stationary solution ( ∂ t p = 0 {\displaystyle \partial _{t}p=0} ) is p s s ( x ) = a π σ 2 e − a x 2 / σ 2 . {\displaystyle p_{ss}(x)={\sqrt {\frac {a}{\pi \sigma ^{2}}}}e^{-{ax^{2}}/{\sigma ^{2}}}.} === Plasma physics === In plasma physics, the distribution function for a particle species s {\displaystyle s} , p s ( x , v , t ) {\displaystyle p_{s}(\mathbf {x} ,\mathbf {v} ,t)} , takes the place of the probability density function. The corresponding Boltzmann equation is given by ∂ p s ∂ t + v ⋅ ∇ p s + Z s e m s ( E + v × B ) ⋅ ∇ v p s = − ∂ ∂ v i ( p s ⟨ Δ v i ⟩ ) + 1 2 ∂ 2 ∂ v i ∂ v j ( p s ⟨ Δ v i Δ v j ⟩ ) , {\displaystyle {\frac {\partial p_{s}}{\partial t}}+\mathbf {v} \cdot {\boldsymbol {\nabla }}p_{s}+{\frac {Z_{s}e}{m_{s}}}\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right)\cdot {\boldsymbol {\nabla }}_{v}p_{s}=-{\frac {\partial }{\partial v_{i}}}\left(p_{s}\langle \Delta v_{i}\rangle \right)+{\frac {1}{2}}{\frac {\partial ^{2}}{\partial v_{i}\,\partial v_{j}}}\left(p_{s}\langle \Delta v_{i}\,\Delta v_{j}\rangle \right),} where the third term includes the particle acceleration due to the Lorentz force and the Fokker–Planck term at the right-hand side represents the effects of particle collisions. The quantities ⟨ Δ v i ⟩ {\displaystyle \langle \Delta v_{i}\rangle } and ⟨ Δ v i Δ v j ⟩ {\displaystyle \langle \Delta v_{i}\,\Delta v_{j}\rangle } are the average change in velocity a particle of type s {\displaystyle s} experiences due to collisions with all other particle species in unit time. Expressions for these quantities are given elsewhere. If collisions are ignored, the Boltzmann equation reduces to the Vlasov equation. 
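Returning to the Ornstein–Uhlenbeck example, the stationary solution is a Gaussian with variance σ²/(2a); a long Euler–Maruyama run (a sketch, assuming NumPy; parameter values illustrative) reproduces it:

```python
import numpy as np

rng = np.random.default_rng(7)
a, sigma = 1.0, 1.0                   # illustrative OU parameters
dt, n_steps, burn_in = 1e-2, 200_000, 10_000

noise = sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
samples = np.empty(n_steps)
x = 0.0
for k in range(n_steps):
    x += -a * x * dt + noise[k]       # Euler–Maruyama step for dX = -aX dt + sigma dW
    samples[k] = x

# Stationary density is N(0, sigma**2 / (2*a)); the sample variance should be ~0.5
print(samples[burn_in:].var())
```

Because successive samples are correlated on the time scale 1/a, the burn-in and a long run are needed before the empirical variance settles near σ²/(2a).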
== Smoluchowski diffusion equation == Consider an overdamped Brownian particle under external force F ( r ) {\displaystyle F(r)} : m r ¨ = − γ r ˙ + F ( r ) + σ ξ ( t ) {\displaystyle m{\ddot {r}}=-\gamma {\dot {r}}+F(r)+\sigma \xi (t)} where the m r ¨ {\displaystyle m{\ddot {r}}} term is negligible (the meaning of "overdamped"). Thus, it is just γ d r = F ( r ) d t + σ d W t {\displaystyle \gamma \,dr=F(r)\,dt+\sigma \,dW_{t}} . The Fokker–Planck equation for this particle is the Smoluchowski diffusion equation: ∂ t P ( r , t | r 0 , t 0 ) = ∇ ⋅ [ D ( ∇ − β F ( r ) ) P ( r , t | r 0 , t 0 ) ] {\displaystyle \partial _{t}P(r,t|r_{0},t_{0})=\nabla \cdot [D(\nabla -\beta F(r))P(r,t|r_{0},t_{0})]} Where D {\displaystyle D} is the diffusion constant and β = 1 k B T {\displaystyle \beta ={\frac {1}{k_{\text{B}}T}}} . The importance of this equation is it allows for both the inclusion of the effect of temperature on the system of particles and a spatially dependent diffusion constant. == Computational considerations == Brownian motion follows the Langevin equation, which can be solved for many different stochastic forcings with results being averaged (canonical ensemble in molecular dynamics). However, instead of this computationally intensive approach, one can use the Fokker–Planck equation and consider the probability p ( v , t ) d v {\displaystyle p(\mathbf {v} ,t)\,d\mathbf {v} } of the particle having a velocity in the interval ( v , v + d v ) {\displaystyle (\mathbf {v} ,\mathbf {v} +d\mathbf {v} )} when it starts its motion with v 0 {\displaystyle \mathbf {v} _{0}} at time 0. === 1-D linear potential example === Brownian dynamics in one dimension is simple. 
==== Theory ==== Starting with a linear potential of the form U ( x ) = c x {\displaystyle U(x)=cx} the corresponding Smoluchowski equation becomes, ∂ t P ( x , t | x 0 , t 0 ) = ∂ x D ( ∂ x + β c ) P ( x , t | x 0 , t 0 ) {\displaystyle \partial _{t}P(x,t|x_{0},t_{0})=\partial _{x}D(\partial _{x}+\beta c)P(x,t|x_{0},t_{0})} Where the diffusion constant, D {\displaystyle D} , is constant over space and time. The boundary conditions are such that the probability vanishes at x → ± ∞ {\displaystyle x\rightarrow \pm \infty } with an initial condition of the ensemble of particles starting in the same place, P ( x , t = t 0 | x 0 , t 0 ) = δ ( x − x 0 ) {\displaystyle P(x,t=t_{0}|x_{0},t_{0})=\delta (x-x_{0})} . Defining τ = D t {\displaystyle \tau =Dt} and b = β c {\displaystyle b=\beta c} and applying the coordinate transformation, y = x + τ b , y 0 = x 0 + τ 0 b {\displaystyle y=x+\tau b,\ \ \ y_{0}=x_{0}+\tau _{0}b} With P ( x , t , | x 0 , t 0 ) = q ( y , τ | y 0 , τ 0 ) {\displaystyle P(x,t,|x_{0},t_{0})=q(y,\tau |y_{0},\tau _{0})} the Smoluchowki equation becomes, ∂ τ q ( y , τ | y 0 , τ 0 ) = ∂ y 2 q ( y , τ | y 0 , τ 0 ) {\displaystyle \partial _{\tau }q(y,\tau |y_{0},\tau _{0})=\partial _{y}^{2}q(y,\tau |y_{0},\tau _{0})} Which is the free diffusion equation with solution, q ( y , τ | y 0 , τ 0 ) = 1 4 π ( τ − τ 0 ) e − ( y − y 0 ) 2 4 ( τ − τ 0 ) {\displaystyle q(y,\tau |y_{0},\tau _{0})={\frac {1}{\sqrt {4\pi (\tau -\tau _{0})}}}e^{-{\frac {(y-y_{0})^{2}}{4(\tau -\tau _{0})}}}} And after transforming back to the original coordinates, P ( x , t | x 0 , t 0 ) = 1 4 π D ( t − t 0 ) exp ⁡ [ − ( x − x 0 + D β c ( t − t 0 ) ) 2 4 D ( t − t 0 ) ] {\displaystyle P(x,t|x_{0},t_{0})={\frac {1}{\sqrt {4\pi D(t-t_{0})}}}\exp {\left[{-{\frac {(x-x_{0}+D\beta c(t-t_{0}))^{2}}{4D(t-t_{0})}}}\right]}} ==== Simulation ==== The simulation on the right was completed using a Brownian dynamics simulation. 
Starting with a Langevin equation for the system, m x ¨ = − γ x ˙ − c + σ ξ ( t ) {\displaystyle m{\ddot {x}}=-\gamma {\dot {x}}-c+\sigma \xi (t)} where γ {\displaystyle \gamma } is the friction term, ξ {\displaystyle \xi } is a fluctuating force on the particle, and σ {\displaystyle \sigma } is the amplitude of the fluctuation. At equilibrium the frictional force is much greater than the inertial force, | γ x ˙ | ≫ | m x ¨ | {\displaystyle \left|\gamma {\dot {x}}\right|\gg \left|m{\ddot {x}}\right|} . Therefore, the Langevin equation becomes, γ x ˙ = − c + σ ξ ( t ) {\displaystyle \gamma {\dot {x}}=-c+\sigma \xi (t)} For the Brownian dynamics simulation the fluctuation force ξ ( t ) {\displaystyle \xi (t)} is assumed to be Gaussian, with the amplitude dependent on the temperature of the system, σ = 2 γ k B T {\textstyle \sigma ={\sqrt {2\gamma k_{\text{B}}T}}} . Rewriting the Langevin equation, d x d t = − D β c + 2 D ξ ( t ) {\displaystyle {\frac {dx}{dt}}=-D\beta c+{\sqrt {2D}}\xi (t)} where D = k B T γ {\textstyle D={\frac {k_{\text{B}}T}{\gamma }}} by the Einstein relation. The integration of this equation was done using the Euler–Maruyama method to numerically approximate the path of this Brownian particle. == Solution == Being a partial differential equation, the Fokker–Planck equation can be solved analytically only in special cases. A formal analogy of the Fokker–Planck equation with the Schrödinger equation allows the use of advanced operator techniques known from quantum mechanics for its solution in a number of cases. Furthermore, in the case of overdamped dynamics when the Fokker–Planck equation contains second partial derivatives with respect to all spatial variables, the equation can be written in the form of a master equation that can easily be solved numerically.
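The Euler–Maruyama integration described above can be sketched as follows. The parameter values are illustrative assumptions; the final ensemble statistics are compared against the analytic mean −Dβc·t and variance 2Dt from the closed-form solution:

```python
import math
import random

random.seed(0)

D, beta, c = 1.0, 1.0, 1.0       # illustrative parameters
dt, n_steps, n_particles = 0.01, 100, 5000
drift = -D * beta * c            # overdamped Langevin: dx = -D*beta*c dt + sqrt(2D) dW

final = []
for _ in range(n_particles):
    x = 0.0
    for _ in range(n_steps):
        # Euler-Maruyama step: deterministic drift plus a Gaussian increment
        x += drift * dt + math.sqrt(2.0 * D * dt) * random.gauss(0.0, 1.0)
    final.append(x)

t = n_steps * dt
mean_x = sum(final) / n_particles
var_x = sum((x - mean_x) ** 2 for x in final) / n_particles
print(mean_x, var_x)   # expected: mean ~ -D*beta*c*t, variance ~ 2*D*t
```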
In many applications, one is only interested in the steady-state probability distribution p 0 ( x ) {\displaystyle p_{0}(x)} , which can be found from ∂ p ( x , t ) ∂ t = 0 {\textstyle {\frac {\partial p(x,t)}{\partial t}}=0} . The computation of mean first passage times and splitting probabilities can be reduced to the solution of an ordinary differential equation which is intimately related to the Fokker–Planck equation. == Particular cases with known solution and inversion == In mathematical finance for volatility smile modeling of options via local volatility, one has the problem of deriving a diffusion coefficient σ ( X t , t ) {\displaystyle {\sigma }(\mathbf {X} _{t},t)} consistent with a probability density obtained from market option quotes. The problem is therefore an inversion of the Fokker–Planck equation: Given the density f(x,t) of the option underlying X deduced from the option market, one aims at finding the local volatility σ ( X t , t ) {\displaystyle {\sigma }(\mathbf {X} _{t},t)} consistent with f. This is an inverse problem that has been solved in general by Dupire (1994, 1997) with a non-parametric solution. Brigo and Mercurio (2002, 2003) propose a solution in parametric form via a particular local volatility σ ( X t , t ) {\displaystyle {\sigma }(\mathbf {X} _{t},t)} consistent with a solution of the Fokker–Planck equation given by a mixture model. More information is available also in Fengler (2008), Gatheral (2008), and Musiela and Rutkowski (2008). == Fokker–Planck equation and path integral == Every Fokker–Planck equation is equivalent to a path integral. The path integral formulation is an excellent starting point for the application of field theory methods. This is used, for instance, in critical dynamics. A derivation of the path integral is possible in a similar way as in quantum mechanics. The derivation for a Fokker–Planck equation with one variable x {\displaystyle x} is as follows. 
Start by inserting a delta function and then integrating by parts: ∂ ∂ t p ( x ′ , t ) = − ∂ ∂ x ′ [ D 1 ( x ′ , t ) p ( x ′ , t ) ] + ∂ 2 ∂ x ′ 2 [ D 2 ( x ′ , t ) p ( x ′ , t ) ] = ∫ − ∞ ∞ d x [ ( D 1 ( x , t ) ∂ ∂ x + D 2 ( x , t ) ∂ 2 ∂ x 2 ) δ ( x ′ − x ) ] p ( x , t ) . {\displaystyle {\begin{aligned}{\frac {\partial }{\partial t}}p{\left(x',t\right)}&=-{\frac {\partial }{\partial x'}}\left[D_{1}(x',t)p(x',t)\right]+{\frac {\partial ^{2}}{\partial {x'}^{2}}}\left[D_{2}(x',t)p(x',t)\right]\\[1ex]&=\int _{-\infty }^{\infty }dx\left[\left(D_{1}{\left(x,t\right)}{\frac {\partial }{\partial x}}+D_{2}{\left(x,t\right)}{\frac {\partial ^{2}}{\partial x^{2}}}\right)\delta {\left(x'-x\right)}\right]p(x,t).\end{aligned}}} The x {\displaystyle x} -derivatives here only act on the δ {\displaystyle \delta } -function, not on p ( x , t ) {\displaystyle p(x,t)} . Integrate over a time interval ε {\displaystyle \varepsilon } , p ( x ′ , t + ε ) = ∫ − ∞ ∞ d x ( ( 1 + ε [ D 1 ( x , t ) ∂ ∂ x + D 2 ( x , t ) ∂ 2 ∂ x 2 ] ) δ ( x ′ − x ) ) p ( x , t ) + O ( ε 2 ) . {\displaystyle p(x',t+\varepsilon )=\int _{-\infty }^{\infty }\,\mathrm {d} x\left(\left(1+\varepsilon \left[D_{1}(x,t){\frac {\partial }{\partial x}}+D_{2}(x,t){\frac {\partial ^{2}}{\partial x^{2}}}\right]\right)\delta (x'-x)\right)p(x,t)+O(\varepsilon ^{2}).} Insert the Fourier integral δ ( x ′ − x ) = ∫ − i ∞ i ∞ d x ~ 2 π i e x ~ ( x − x ′ ) {\displaystyle \delta {\left(x'-x\right)}=\int _{-i\infty }^{i\infty }{\frac {\mathrm {d} {\tilde {x}}}{2\pi i}}e^{{\tilde {x}}{\left(x-x'\right)}}} for the δ {\displaystyle \delta } -function, p ( x ′ , t + ε ) = ∫ − ∞ ∞ d x ∫ − i ∞ i ∞ d x ~ 2 π i ( 1 + ε [ x ~ D 1 ( x , t ) + x ~ 2 D 2 ( x , t ) ] ) e x ~ ( x − x ′ ) p ( x , t ) + O ( ε 2 ) = ∫ − ∞ ∞ d x ∫ − i ∞ i ∞ d x ~ 2 π i exp ⁡ ( ε [ − x ~ ( x ′ − x ) ε + x ~ D 1 ( x , t ) + x ~ 2 D 2 ( x , t ) ] ) p ( x , t ) + O ( ε 2 ) . 
{\displaystyle {\begin{aligned}p(x',t+\varepsilon )&=\int _{-\infty }^{\infty }\mathrm {d} x\int _{-i\infty }^{i\infty }{\frac {\mathrm {d} {\tilde {x}}}{2\pi i}}\left(1+\varepsilon \left[{\tilde {x}}D_{1}(x,t)+{\tilde {x}}^{2}D_{2}(x,t)\right]\right)e^{{\tilde {x}}(x-x')}p(x,t)+O(\varepsilon ^{2})\\[5pt]&=\int _{-\infty }^{\infty }\mathrm {d} x\int _{-i\infty }^{i\infty }{\frac {\mathrm {d} {\tilde {x}}}{2\pi i}}\exp \left(\varepsilon \left[-{\tilde {x}}{\frac {(x'-x)}{\varepsilon }}+{\tilde {x}}D_{1}(x,t)+{\tilde {x}}^{2}D_{2}(x,t)\right]\right)p(x,t)+O(\varepsilon ^{2}).\end{aligned}}} This equation expresses p ( x ′ , t + ε ) {\displaystyle p(x',t+\varepsilon )} as a functional of p ( x , t ) {\displaystyle p(x,t)} . Iterating ( t ′ − t ) / ε {\displaystyle (t'-t)/\varepsilon } times and performing the limit ε → 0 {\displaystyle \varepsilon \rightarrow 0} gives a path integral with action S = ∫ d t [ x ~ D 1 ( x , t ) + x ~ 2 D 2 ( x , t ) − x ~ ∂ x ∂ t ] . {\displaystyle S=\int \mathrm {d} t\left[{\tilde {x}}D_{1}(x,t)+{\tilde {x}}^{2}D_{2}(x,t)-{\tilde {x}}{\frac {\partial x}{\partial t}}\right].} The variables x ~ {\displaystyle {\tilde {x}}} conjugate to x {\displaystyle x} are called "response variables". Although formally equivalent, different problems may be solved more easily in the Fokker–Planck equation or the path integral formulation. The equilibrium distribution, for instance, may be obtained more directly from the Fokker–Planck equation. == See also == == Notes and references == == Further reading == Frank, Till Daniel (2005). Nonlinear Fokker–Planck Equations: Fundamentals and Applications. Springer Series in Synergetics. Springer. ISBN 3-540-21264-7. Gardiner, Crispin (2009). Stochastic Methods (4th ed.). Springer. ISBN 978-3-540-70712-7. Pavliotis, Grigorios A. (2014). Stochastic Processes and Applications: Diffusion Processes, the Fokker–Planck and Langevin Equations. Springer Texts in Applied Mathematics. Springer. ISBN 978-1-4939-1322-0.
Risken, Hannes (1996). The Fokker–Planck Equation: Methods of Solutions and Applications. Springer Series in Synergetics (2nd ed.). Springer. ISBN 3-540-61530-X.
In statistics, econometrics, and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it can be used to describe certain time-varying processes in nature, economics, behavior, etc. The autoregressive model specifies that the output variable depends linearly on its own previous values and on a stochastic term (an imperfectly predictable term); thus the model is in the form of a stochastic difference equation (or recurrence relation) which should not be confused with a differential equation. Together with the moving-average (MA) model, it is a special case and key component of the more general autoregressive–moving-average (ARMA) and autoregressive integrated moving average (ARIMA) models of time series, which have a more complicated stochastic structure; it is also a special case of the vector autoregressive model (VAR), which consists of a system of more than one interlocking stochastic difference equation in more than one evolving random variable. Unlike the moving-average (MA) model, the autoregressive model is not always stationary, because it may contain a unit root. Large language models are called autoregressive, but they are not a classical autoregressive model in this sense because they are not linear. == Definition == The notation A R ( p ) {\displaystyle AR(p)} indicates an autoregressive model of order p. The AR(p) model is defined as X t = ∑ i = 1 p φ i X t − i + ε t {\displaystyle X_{t}=\sum _{i=1}^{p}\varphi _{i}X_{t-i}+\varepsilon _{t}} where φ 1 , … , φ p {\displaystyle \varphi _{1},\ldots ,\varphi _{p}} are the parameters of the model, and ε t {\displaystyle \varepsilon _{t}} is white noise. 
This can be equivalently written using the backshift operator B as X t = ∑ i = 1 p φ i B i X t + ε t {\displaystyle X_{t}=\sum _{i=1}^{p}\varphi _{i}B^{i}X_{t}+\varepsilon _{t}} so that, moving the summation term to the left side and using polynomial notation, we have ϕ [ B ] X t = ε t {\displaystyle \phi [B]X_{t}=\varepsilon _{t}} An autoregressive model can thus be viewed as the output of an all-pole infinite impulse response filter whose input is white noise. Some parameter constraints are necessary for the model to remain weak-sense stationary. For example, processes in the AR(1) model with | φ 1 | ≥ 1 {\displaystyle |\varphi _{1}|\geq 1} are not stationary. More generally, for an AR(p) model to be weak-sense stationary, the roots of the polynomial Φ ( z ) := 1 − ∑ i = 1 p φ i z i {\displaystyle \Phi (z):=\textstyle 1-\sum _{i=1}^{p}\varphi _{i}z^{i}} must lie outside the unit circle, i.e., each (complex) root z i {\displaystyle z_{i}} must satisfy | z i | > 1 {\displaystyle |z_{i}|>1} (see pages 89,92 ). == Intertemporal effect of shocks == In an AR process, a one-time shock affects values of the evolving variable infinitely far into the future. For example, consider the AR(1) model X t = φ 1 X t − 1 + ε t {\displaystyle X_{t}=\varphi _{1}X_{t-1}+\varepsilon _{t}} . A non-zero value for ε t {\displaystyle \varepsilon _{t}} at say time t=1 affects X 1 {\displaystyle X_{1}} by the amount ε 1 {\displaystyle \varepsilon _{1}} . Then by the AR equation for X 2 {\displaystyle X_{2}} in terms of X 1 {\displaystyle X_{1}} , this affects X 2 {\displaystyle X_{2}} by the amount φ 1 ε 1 {\displaystyle \varphi _{1}\varepsilon _{1}} . Then by the AR equation for X 3 {\displaystyle X_{3}} in terms of X 2 {\displaystyle X_{2}} , this affects X 3 {\displaystyle X_{3}} by the amount φ 1 2 ε 1 {\displaystyle \varphi _{1}^{2}\varepsilon _{1}} . 
Continuing this process shows that the effect of ε 1 {\displaystyle \varepsilon _{1}} never ends, although if the process is stationary then the effect diminishes toward zero in the limit. Because each shock affects X values infinitely far into the future from when they occur, any given value Xt is affected by shocks occurring infinitely far into the past. This can also be seen by rewriting the autoregression ϕ ( B ) X t = ε t {\displaystyle \phi (B)X_{t}=\varepsilon _{t}\,} (where the constant term has been suppressed by assuming that the variable has been measured as deviations from its mean) as X t = 1 ϕ ( B ) ε t . {\displaystyle X_{t}={\frac {1}{\phi (B)}}\varepsilon _{t}\,.} When the polynomial division on the right side is carried out, the polynomial in the backshift operator applied to ε t {\displaystyle \varepsilon _{t}} has an infinite order—that is, an infinite number of lagged values of ε t {\displaystyle \varepsilon _{t}} appear on the right side of the equation. == Characteristic polynomial == The autocorrelation function of an AR(p) process can be expressed as ρ ( τ ) = ∑ k = 1 p a k y k − | τ | , {\displaystyle \rho (\tau )=\sum _{k=1}^{p}a_{k}y_{k}^{-|\tau |},} where y k {\displaystyle y_{k}} are the roots of the polynomial ϕ ( B ) = 1 − ∑ k = 1 p φ k B k {\displaystyle \phi (B)=1-\sum _{k=1}^{p}\varphi _{k}B^{k}} where B is the backshift operator, where ϕ ( ⋅ ) {\displaystyle \phi (\cdot )} is the function defining the autoregression, and where φ k {\displaystyle \varphi _{k}} are the coefficients in the autoregression. The formula is valid only if all the roots have multiplicity 1. The autocorrelation function of an AR(p) process is a sum of decaying exponentials. Each real root contributes a component to the autocorrelation function that decays exponentially. Similarly, each pair of complex conjugate roots contributes an exponentially damped oscillation. 
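The stationarity condition stated earlier (all roots of Φ(z) = 1 − φ1 z − ⋯ − φp z^p outside the unit circle) can be checked directly. A minimal sketch for the AR(2) case, with coefficients chosen purely for illustration, computes the two roots with the quadratic formula and tests their moduli:

```python
import cmath

def ar2_roots(phi1, phi2):
    """Roots of Phi(z) = 1 - phi1*z - phi2*z**2, i.e. of phi2*z**2 + phi1*z - 1 = 0."""
    disc = cmath.sqrt(phi1 ** 2 + 4.0 * phi2)   # complex sqrt handles complex roots too
    z1 = (-phi1 + disc) / (2.0 * phi2)
    z2 = (-phi1 - disc) / (2.0 * phi2)
    return z1, z2

def is_stationary(phi1, phi2):
    """Weak-sense stationarity: every root must lie strictly outside the unit circle."""
    return all(abs(z) > 1.0 for z in ar2_roots(phi1, phi2))

print(is_stationary(0.5, 0.3))   # roots outside the unit circle -> stationary
print(is_stationary(0.5, 0.5))   # phi1 + phi2 = 1 gives a unit root -> not stationary
```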
== Graphs of AR(p) processes == The simplest AR process is AR(0), which has no dependence between the terms. Only the error/innovation/noise term contributes to the output of the process, so in the figure, AR(0) corresponds to white noise. For an AR(1) process with a positive φ {\displaystyle \varphi } , only the previous term in the process and the noise term contribute to the output. If φ {\displaystyle \varphi } is close to 0, then the process still looks like white noise, but as φ {\displaystyle \varphi } approaches 1, the output gets a larger contribution from the previous term relative to the noise. This results in a "smoothing" or integration of the output, similar to a low pass filter. For an AR(2) process, the previous two terms and the noise term contribute to the output. If both φ 1 {\displaystyle \varphi _{1}} and φ 2 {\displaystyle \varphi _{2}} are positive, the output will resemble a low pass filter, with the high frequency part of the noise decreased. If φ 1 {\displaystyle \varphi _{1}} is positive while φ 2 {\displaystyle \varphi _{2}} is negative, then the process favors changes in sign between terms of the process. The output oscillates. This can be linked to edge detection or detection of change in direction. == Example: An AR(1) process == An AR(1) process is given by: X t = φ X t − 1 + ε t {\displaystyle X_{t}=\varphi X_{t-1}+\varepsilon _{t}\,} where ε t {\displaystyle \varepsilon _{t}} is a white noise process with zero mean and constant variance σ ε 2 {\displaystyle \sigma _{\varepsilon }^{2}} . (Note: The subscript on φ 1 {\displaystyle \varphi _{1}} has been dropped.) The process is weak-sense stationary if | φ | < 1 {\displaystyle |\varphi |<1} since it is obtained as the output of a stable filter whose input is white noise. 
(If φ = 1 {\displaystyle \varphi =1} then the variance of X t {\displaystyle X_{t}} depends on time lag t, so that the variance of the series diverges to infinity as t goes to infinity, and the series is therefore not weak-sense stationary.) Assuming | φ | < 1 {\displaystyle |\varphi |<1} , the mean E ⁡ ( X t ) {\displaystyle \operatorname {E} (X_{t})} is identical for all values of t by the very definition of weak sense stationarity. If the mean is denoted by μ {\displaystyle \mu } , it follows from E ⁡ ( X t ) = φ E ⁡ ( X t − 1 ) + E ⁡ ( ε t ) , {\displaystyle \operatorname {E} (X_{t})=\varphi \operatorname {E} (X_{t-1})+\operatorname {E} (\varepsilon _{t}),} that μ = φ μ + 0 , {\displaystyle \mu =\varphi \mu +0,} and hence μ = 0. {\displaystyle \mu =0.} The variance is var ( X t ) = E ⁡ ( X t 2 ) − μ 2 = σ ε 2 1 − φ 2 , {\displaystyle {\textrm {var}}(X_{t})=\operatorname {E} (X_{t}^{2})-\mu ^{2}={\frac {\sigma _{\varepsilon }^{2}}{1-\varphi ^{2}}},} where σ ε {\displaystyle \sigma _{\varepsilon }} is the standard deviation of ε t {\displaystyle \varepsilon _{t}} . This can be shown by noting that var ( X t ) = φ 2 var ( X t − 1 ) + σ ε 2 , {\displaystyle {\textrm {var}}(X_{t})=\varphi ^{2}{\textrm {var}}(X_{t-1})+\sigma _{\varepsilon }^{2},} and then by noticing that the quantity above is a stable fixed point of this relation. The autocovariance is given by B n = E ⁡ ( X t + n X t ) − μ 2 = σ ε 2 1 − φ 2 φ | n | . {\displaystyle B_{n}=\operatorname {E} (X_{t+n}X_{t})-\mu ^{2}={\frac {\sigma _{\varepsilon }^{2}}{1-\varphi ^{2}}}\,\,\varphi ^{|n|}.} It can be seen that the autocovariance function decays exponentially with a decay time (also called time constant) of τ = − 1 / ln ⁡ φ {\displaystyle \tau =-1/\ln \varphi } , so that φ | n | = e − | n | / τ {\displaystyle \varphi ^{|n|}=e^{-|n|/\tau }} . The spectral density function is the Fourier transform of the autocovariance function. In discrete terms this will be the discrete-time Fourier transform: Φ ( ω ) = 1 2 π ∑ n = − ∞ ∞ B n e − i ω n = 1 2 π ( σ ε 2 1 + φ 2 − 2 φ cos ⁡ ( ω ) ) .
{\displaystyle \Phi (\omega )={\frac {1}{\sqrt {2\pi }}}\,\sum _{n=-\infty }^{\infty }B_{n}e^{-i\omega n}={\frac {1}{\sqrt {2\pi }}}\,\left({\frac {\sigma _{\varepsilon }^{2}}{1+\varphi ^{2}-2\varphi \cos(\omega )}}\right).} This expression is periodic due to the discrete nature of the X j {\displaystyle X_{j}} , which is manifested as the cosine term in the denominator. If we assume that the sampling time ( Δ t = 1 {\displaystyle \Delta t=1} ) is much smaller than the decay time ( τ {\displaystyle \tau } ), then we can use a continuum approximation to B n {\displaystyle B_{n}} : B ( t ) ≈ σ ε 2 1 − φ 2 φ | t | {\displaystyle B(t)\approx {\frac {\sigma _{\varepsilon }^{2}}{1-\varphi ^{2}}}\,\,\varphi ^{|t|}} which yields a Lorentzian profile for the spectral density: Φ ( ω ) = 1 2 π σ ε 2 1 − φ 2 γ π ( γ 2 + ω 2 ) {\displaystyle \Phi (\omega )={\frac {1}{\sqrt {2\pi }}}\,{\frac {\sigma _{\varepsilon }^{2}}{1-\varphi ^{2}}}\,{\frac {\gamma }{\pi (\gamma ^{2}+\omega ^{2})}}} where γ = 1 / τ {\displaystyle \gamma =1/\tau } is the angular frequency associated with the decay time τ {\displaystyle \tau } . An alternative expression for X t {\displaystyle X_{t}} can be derived by first substituting φ X t − 2 + ε t − 1 {\displaystyle \varphi X_{t-2}+\varepsilon _{t-1}} for X t − 1 {\displaystyle X_{t-1}} in the defining equation. Continuing this process N times yields X t = φ N X t − N + ∑ k = 0 N − 1 φ k ε t − k . {\displaystyle X_{t}=\varphi ^{N}X_{t-N}+\sum _{k=0}^{N-1}\varphi ^{k}\varepsilon _{t-k}.} For N approaching infinity, φ N {\displaystyle \varphi ^{N}} will approach zero and: X t = ∑ k = 0 ∞ φ k ε t − k . {\displaystyle X_{t}=\sum _{k=0}^{\infty }\varphi ^{k}\varepsilon _{t-k}.} It is seen that X t {\displaystyle X_{t}} is white noise convolved with the φ k {\displaystyle \varphi ^{k}} kernel plus the constant mean. If the white noise ε t {\displaystyle \varepsilon _{t}} is a Gaussian process then X t {\displaystyle X_{t}} is also a Gaussian process. 
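Two of the AR(1) facts above can be verified numerically: the variance recursion var(Xt) = φ² var(Xt−1) + σε² converges to the stated fixed point, and the recursive definition agrees with the truncated MA(∞) representation. A sketch with illustrative parameters:

```python
import random

phi, sigma2 = 0.6, 1.0           # illustrative AR(1) parameters, |phi| < 1

# 1) The variance recursion var(X_t) = phi^2 var(X_{t-1}) + sigma^2 converges
#    geometrically to the stable fixed point sigma^2 / (1 - phi^2).
v = 0.0
for _ in range(200):
    v = phi ** 2 * v + sigma2
fixed_point = sigma2 / (1.0 - phi ** 2)

# 2) The recursion X_t = phi*X_{t-1} + eps_t agrees with the MA(infinity) form
#    X_t = sum_k phi^k eps_{t-k}, truncated at N lags (truncation error O(phi^N)).
random.seed(1)
eps = [random.gauss(0.0, 1.0) for _ in range(500)]
x = [0.0]
for e in eps:
    x.append(phi * x[-1] + e)

N, t = 60, len(eps) - 1
ma = sum(phi ** k * eps[t - k] for k in range(N + 1))
print(abs(v - fixed_point), abs(x[-1] - ma))   # both differences are tiny
```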
In other cases, the central limit theorem indicates that X t {\displaystyle X_{t}} will be approximately normally distributed when φ {\displaystyle \varphi } is close to one. For ε t = 0 {\displaystyle \varepsilon _{t}=0} , the process X t = φ X t − 1 {\displaystyle X_{t}=\varphi X_{t-1}} will be a geometric progression (exponential growth or decay). In this case, the solution can be found analytically: X t = a φ t {\displaystyle X_{t}=a\varphi ^{t}} where a {\displaystyle a} is an unknown constant (initial condition). === Explicit mean/difference form of AR(1) process === The AR(1) model is the discrete-time analog of the continuous Ornstein–Uhlenbeck process. It is therefore sometimes useful to understand the properties of the AR(1) model cast in an equivalent form. In this form, the AR(1) model, with process parameter θ ∈ R {\displaystyle \theta \in \mathbb {R} } , is given by X t + 1 = X t + ( 1 − θ ) ( μ − X t ) + ε t + 1 {\displaystyle X_{t+1}=X_{t}+(1-\theta )(\mu -X_{t})+\varepsilon _{t+1}} , where | θ | < 1 {\displaystyle |\theta |<1\,} , μ := E ( X ) {\displaystyle \mu :=E(X)} is the model mean, and { ϵ t } {\displaystyle \{\epsilon _{t}\}} is a white-noise process with zero mean and constant variance σ 2 {\displaystyle \sigma ^{2}} . By rewriting this as X t + 1 = θ X t + ( 1 − θ ) μ + ε t + 1 {\displaystyle X_{t+1}=\theta X_{t}+(1-\theta )\mu +\varepsilon _{t+1}} and then deriving (by induction) X t + n = θ n X t + ( 1 − θ n ) μ + Σ i = 1 n ( θ n − i ϵ t + i ) {\displaystyle X_{t+n}=\theta ^{n}X_{t}+(1-\theta ^{n})\mu +\Sigma _{i=1}^{n}\left(\theta ^{n-i}\epsilon _{t+i}\right)} , one can show that E ⁡ ( X t + n | X t ) = μ [ 1 − θ n ] + X t θ n {\displaystyle \operatorname {E} (X_{t+n}|X_{t})=\mu \left[1-\theta ^{n}\right]+X_{t}\theta ^{n}} and Var ⁡ ( X t + n | X t ) = σ 2 1 − θ 2 n 1 − θ 2 {\displaystyle \operatorname {Var} (X_{t+n}|X_{t})=\sigma ^{2}{\frac {1-\theta ^{2n}}{1-\theta ^{2}}}} .
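The conditional-mean formula just derived can be checked by iterating the noise-free recursion; the parameter values below are illustrative:

```python
theta, mu = 0.7, 2.0    # illustrative parameters, |theta| < 1
x_t, n = 5.0, 10

# Iterate the noise-free recursion X_{t+1} = theta*X_t + (1 - theta)*mu ...
x = x_t
for _ in range(n):
    x = theta * x + (1.0 - theta) * mu

# ... and compare with the closed form E(X_{t+n} | X_t) = mu*(1 - theta^n) + X_t*theta^n.
closed_form = mu * (1.0 - theta ** n) + x_t * theta ** n
print(x, closed_form)   # agree to floating-point accuracy
```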
== Choosing the maximum lag == The partial autocorrelation of an AR(p) process equals zero at lags larger than p, so the appropriate maximum lag p is the one after which the partial autocorrelations are all zero. == Calculation of the AR parameters == There are many ways to estimate the coefficients, such as the ordinary least squares procedure or method of moments (through Yule–Walker equations). The AR(p) model is given by the equation X t = ∑ i = 1 p φ i X t − i + ε t . {\displaystyle X_{t}=\sum _{i=1}^{p}\varphi _{i}X_{t-i}+\varepsilon _{t}.\,} It is based on parameters φ i {\displaystyle \varphi _{i}} where i = 1, ..., p. There is a direct correspondence between these parameters and the covariance function of the process, and this correspondence can be inverted to determine the parameters from the autocorrelation function (which is itself obtained from the covariances). This is done using the Yule–Walker equations. === Yule–Walker equations === The Yule–Walker equations, named for Udny Yule and Gilbert Walker, are the following set of equations. γ m = ∑ k = 1 p φ k γ m − k + σ ε 2 δ m , 0 , {\displaystyle \gamma _{m}=\sum _{k=1}^{p}\varphi _{k}\gamma _{m-k}+\sigma _{\varepsilon }^{2}\delta _{m,0},} where m = 0, …, p, yielding p + 1 equations. Here γ m {\displaystyle \gamma _{m}} is the autocovariance function of Xt, σ ε {\displaystyle \sigma _{\varepsilon }} is the standard deviation of the input noise process, and δ m , 0 {\displaystyle \delta _{m,0}} is the Kronecker delta function. 
Because the last part of an individual equation is non-zero only if m = 0, the set of equations can be solved by representing the equations for m > 0 in matrix form, thus getting the equation [ γ 1 γ 2 γ 3 ⋮ γ p ] = [ γ 0 γ − 1 γ − 2 ⋯ γ 1 γ 0 γ − 1 ⋯ γ 2 γ 1 γ 0 ⋯ ⋮ ⋮ ⋮ ⋱ γ p − 1 γ p − 2 γ p − 3 ⋯ ] [ φ 1 φ 2 φ 3 ⋮ φ p ] {\displaystyle {\begin{bmatrix}\gamma _{1}\\\gamma _{2}\\\gamma _{3}\\\vdots \\\gamma _{p}\\\end{bmatrix}}={\begin{bmatrix}\gamma _{0}&\gamma _{-1}&\gamma _{-2}&\cdots \\\gamma _{1}&\gamma _{0}&\gamma _{-1}&\cdots \\\gamma _{2}&\gamma _{1}&\gamma _{0}&\cdots \\\vdots &\vdots &\vdots &\ddots \\\gamma _{p-1}&\gamma _{p-2}&\gamma _{p-3}&\cdots \\\end{bmatrix}}{\begin{bmatrix}\varphi _{1}\\\varphi _{2}\\\varphi _{3}\\\vdots \\\varphi _{p}\\\end{bmatrix}}} which can be solved for all { φ m ; m = 1 , 2 , … , p } . {\displaystyle \{\varphi _{m};m=1,2,\dots ,p\}.} The remaining equation for m = 0 is γ 0 = ∑ k = 1 p φ k γ − k + σ ε 2 , {\displaystyle \gamma _{0}=\sum _{k=1}^{p}\varphi _{k}\gamma _{-k}+\sigma _{\varepsilon }^{2},} which, once { φ m ; m = 1 , 2 , … , p } {\displaystyle \{\varphi _{m};m=1,2,\dots ,p\}} are known, can be solved for σ ε 2 . {\displaystyle \sigma _{\varepsilon }^{2}.} An alternative formulation is in terms of the autocorrelation function. The AR parameters are determined by the first p+1 elements ρ ( τ ) {\displaystyle \rho (\tau )} of the autocorrelation function. 
The full autocorrelation function can then be derived by recursively calculating ρ ( τ ) = ∑ k = 1 p φ k ρ ( k − τ ) {\displaystyle \rho (\tau )=\sum _{k=1}^{p}\varphi _{k}\rho (k-\tau )} Examples for some low-order AR(p) processes: p = 1: γ 1 = φ 1 γ 0 {\displaystyle \gamma _{1}=\varphi _{1}\gamma _{0}} Hence ρ 1 = γ 1 / γ 0 = φ 1 {\displaystyle \rho _{1}=\gamma _{1}/\gamma _{0}=\varphi _{1}} p = 2: The Yule–Walker equations for an AR(2) process are γ 1 = φ 1 γ 0 + φ 2 γ − 1 {\displaystyle \gamma _{1}=\varphi _{1}\gamma _{0}+\varphi _{2}\gamma _{-1}} γ 2 = φ 1 γ 1 + φ 2 γ 0 {\displaystyle \gamma _{2}=\varphi _{1}\gamma _{1}+\varphi _{2}\gamma _{0}} Remember that γ − k = γ k {\displaystyle \gamma _{-k}=\gamma _{k}} . Using the first equation yields ρ 1 = γ 1 / γ 0 = φ 1 1 − φ 2 {\displaystyle \rho _{1}=\gamma _{1}/\gamma _{0}={\frac {\varphi _{1}}{1-\varphi _{2}}}} Using the recursion formula yields ρ 2 = γ 2 / γ 0 = φ 1 2 − φ 2 2 + φ 2 1 − φ 2 {\displaystyle \rho _{2}=\gamma _{2}/\gamma _{0}={\frac {\varphi _{1}^{2}-\varphi _{2}^{2}+\varphi _{2}}{1-\varphi _{2}}}} === Estimation of AR parameters === The above equations (the Yule–Walker equations) provide several routes to estimating the parameters of an AR(p) model, by replacing the theoretical covariances with estimated values. Some of these variants can be described as follows: Estimation of autocovariances or autocorrelations. Here each of these terms is estimated separately, using conventional estimates. There are different ways of doing this and the choice between these affects the properties of the estimation scheme. For example, negative estimates of the variance can be produced by some choices. Formulation as a least squares regression problem in which an ordinary least squares prediction problem is constructed, basing prediction of values of Xt on the p previous values of the same series. This can be thought of as a forward-prediction scheme.
The normal equations for this problem can be seen to correspond to an approximation of the matrix form of the Yule–Walker equations in which each appearance of an autocovariance of the same lag is replaced by a slightly different estimate. Formulation as an extended form of ordinary least squares prediction problem. Here two sets of prediction equations are combined into a single estimation scheme and a single set of normal equations. One set is the set of forward-prediction equations and the other is a corresponding set of backward prediction equations, relating to the backward representation of the AR model: X t = ∑ i = 1 p φ i X t + i + ε t ∗ . {\displaystyle X_{t}=\sum _{i=1}^{p}\varphi _{i}X_{t+i}+\varepsilon _{t}^{*}\,.} Here predicted values of Xt would be based on the p future values of the same series. This way of estimating the AR parameters is due to John Parker Burg, and is called the Burg method: Burg and later authors called these particular estimates "maximum entropy estimates", but the reasoning behind this applies to the use of any set of estimated AR parameters. Compared to the estimation scheme using only the forward prediction equations, different estimates of the autocovariances are produced, and the estimates have different stability properties. Burg estimates are particularly associated with maximum entropy spectral estimation. Other possible approaches to estimation include maximum likelihood estimation. Two distinct variants of maximum likelihood are available: in one (broadly equivalent to the forward prediction least squares scheme) the likelihood function considered is that corresponding to the conditional distribution of later values in the series given the initial p values in the series; in the second, the likelihood function considered is that corresponding to the unconditional joint distribution of all the values in the observed series. 
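The p = 2 worked example from the Yule–Walker discussion above can be verified numerically; the coefficients are illustrative:

```python
phi1, phi2 = 0.5, 0.2    # illustrative stationary AR(2) coefficients

# Yule-Walker with rho_0 = 1 and gamma_{-k} = gamma_k gives
# rho_1 = phi1 / (1 - phi2); the recursion then yields rho_2.
rho1 = phi1 / (1.0 - phi2)
rho2_recursion = phi1 * rho1 + phi2 * 1.0     # rho_2 = phi1*rho_1 + phi2*rho_0

# Closed form quoted in the text: rho_2 = (phi1^2 - phi2^2 + phi2) / (1 - phi2).
rho2_formula = (phi1 ** 2 - phi2 ** 2 + phi2) / (1.0 - phi2)
print(rho1, rho2_recursion, rho2_formula)   # the two rho_2 values coincide
```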
Substantial differences in the results of these approaches can occur if the observed series is short, or if the process is close to non-stationarity. == Spectrum == The power spectral density (PSD) of an AR(p) process with noise variance V a r ( Z t ) = σ Z 2 {\displaystyle \mathrm {Var} (Z_{t})=\sigma _{Z}^{2}} is S ( f ) = σ Z 2 | 1 − ∑ k = 1 p φ k e − i 2 π f k | 2 . {\displaystyle S(f)={\frac {\sigma _{Z}^{2}}{|1-\sum _{k=1}^{p}\varphi _{k}e^{-i2\pi fk}|^{2}}}.} === AR(0) === For white noise (AR(0)) S ( f ) = σ Z 2 . {\displaystyle S(f)=\sigma _{Z}^{2}.} === AR(1) === For AR(1) S ( f ) = σ Z 2 | 1 − φ 1 e − 2 π i f | 2 = σ Z 2 1 + φ 1 2 − 2 φ 1 cos ⁡ 2 π f {\displaystyle S(f)={\frac {\sigma _{Z}^{2}}{|1-\varphi _{1}e^{-2\pi if}|^{2}}}={\frac {\sigma _{Z}^{2}}{1+\varphi _{1}^{2}-2\varphi _{1}\cos 2\pi f}}} If φ 1 > 0 {\displaystyle \varphi _{1}>0} there is a single spectral peak at f = 0 {\displaystyle f=0} , often referred to as red noise. As φ 1 {\displaystyle \varphi _{1}} approaches 1, there is stronger power at low frequencies, i.e. at larger time lags. This then acts as a low-pass filter; applied to full-spectrum light, everything except the red light would be filtered out. If φ 1 < 0 {\displaystyle \varphi _{1}<0} there is a minimum at f = 0 {\displaystyle f=0} , often referred to as blue noise. This similarly acts as a high-pass filter; everything except the blue light would be filtered out. === AR(2) === The behavior of an AR(2) process is determined entirely by the roots of its characteristic equation, which is expressed in terms of the lag operator as: 1 − φ 1 B − φ 2 B 2 = 0 , {\displaystyle 1-\varphi _{1}B-\varphi _{2}B^{2}=0,} or equivalently by the poles of its transfer function, which is defined in the Z domain by: H z = ( 1 − φ 1 z − 1 − φ 2 z − 2 ) − 1 .
{\displaystyle H_{z}=(1-\varphi _{1}z^{-1}-\varphi _{2}z^{-2})^{-1}.} It follows that the poles are values of z satisfying: 1 − φ 1 z − 1 − φ 2 z − 2 = 0 {\displaystyle 1-\varphi _{1}z^{-1}-\varphi _{2}z^{-2}=0} , which yields: z 1 , z 2 = 1 2 φ 2 ( φ 1 ± φ 1 2 + 4 φ 2 ) {\displaystyle z_{1},z_{2}={\frac {1}{2\varphi _{2}}}\left(\varphi _{1}\pm {\sqrt {\varphi _{1}^{2}+4\varphi _{2}}}\right)} . z 1 {\displaystyle z_{1}} and z 2 {\displaystyle z_{2}} are the reciprocals of the characteristic roots, as well as the eigenvalues of the temporal update matrix: [ φ 1 φ 2 1 0 ] {\displaystyle {\begin{bmatrix}\varphi _{1}&\varphi _{2}\\1&0\end{bmatrix}}} AR(2) processes can be split into three groups depending on the characteristics of their roots/poles: When φ 1 2 + 4 φ 2 < 0 {\displaystyle \varphi _{1}^{2}+4\varphi _{2}<0} , the process has a pair of complex-conjugate poles, creating a mid-frequency peak at: f ∗ = 1 2 π cos − 1 ⁡ ( φ 1 2 − φ 2 ) , {\displaystyle f^{*}={\frac {1}{2\pi }}\cos ^{-1}\left({\frac {\varphi _{1}}{2{\sqrt {-\varphi _{2}}}}}\right),} with bandwidth about the peak inversely proportional to the moduli of the poles: | z 1 | = | z 2 | = − φ 2 . {\displaystyle |z_{1}|=|z_{2}|={\sqrt {-\varphi _{2}}}.} The terms involving square roots are all real in the case of complex poles since they exist only when φ 2 < 0 {\displaystyle \varphi _{2}<0} . Otherwise the process has real roots, and: When φ 1 > 0 {\displaystyle \varphi _{1}>0} it acts as a low-pass filter on the white noise with a spectral peak at f = 0 {\displaystyle f=0} When φ 1 < 0 {\displaystyle \varphi _{1}<0} it acts as a high-pass filter on the white noise with a spectral peak at f = 1 / 2 {\displaystyle f=1/2} . The process is non-stationary when the poles are on or outside the unit circle, or equivalently when the characteristic roots are on or inside the unit circle. 
The process is stable when the poles are strictly within the unit circle (roots strictly outside the unit circle), or equivalently when the coefficients are in the triangle − 1 ≤ φ 2 ≤ 1 − | φ 1 | {\displaystyle -1\leq \varphi _{2}\leq 1-|\varphi _{1}|} . The full PSD function can be expressed in real form as: S ( f ) = σ Z 2 1 + φ 1 2 + φ 2 2 − 2 φ 1 ( 1 − φ 2 ) cos ⁡ ( 2 π f ) − 2 φ 2 cos ⁡ ( 4 π f ) {\displaystyle S(f)={\frac {\sigma _{Z}^{2}}{1+\varphi _{1}^{2}+\varphi _{2}^{2}-2\varphi _{1}(1-\varphi _{2})\cos(2\pi f)-2\varphi _{2}\cos(4\pi f)}}} == Implementations in statistics packages == R – the stats package includes the ar function; the astsa package includes the sarima function to fit various models including AR. MATLAB – the Econometrics Toolbox and System Identification Toolbox include AR models. MATLAB and Octave – the TSA toolbox contains several estimation functions for uni-variate, multivariate, and adaptive AR models. PyMC3 – the Bayesian statistics and probabilistic programming framework supports AR models with p lags. bayesloop – supports parameter inference and model selection for the AR(1) process with time-varying parameters. Python – the statsmodels package implements AR models.
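As a numerical check of the spectrum formulas in the previous section, the sketch below (coefficients chosen for illustration, in the complex-pole regime φ1² + 4φ2 < 0) confirms that the closed real form of the AR(2) PSD matches the |transfer function|⁻² definition and evaluates the approximate peak frequency f*:

```python
import cmath
import math

sigma2 = 1.0

def psd_ar2(phi1, phi2, f):
    """S(f) = sigma^2 / |1 - phi1 e^{-2 pi i f} - phi2 e^{-4 pi i f}|^2 (complex form)."""
    z = cmath.exp(-2j * math.pi * f)
    return sigma2 / abs(1.0 - phi1 * z - phi2 * z * z) ** 2

def psd_ar2_real(phi1, phi2, f):
    """The equivalent closed real form quoted in the text."""
    w = 2.0 * math.pi * f
    return sigma2 / (1.0 + phi1 ** 2 + phi2 ** 2
                     - 2.0 * phi1 * (1.0 - phi2) * math.cos(w)
                     - 2.0 * phi2 * math.cos(2.0 * w))

phi1, phi2 = 0.3, -0.5           # illustrative; phi1^2 + 4*phi2 < 0 -> complex poles
f_star = math.acos(phi1 / (2.0 * math.sqrt(-phi2))) / (2.0 * math.pi)

# The two PSD expressions agree, and the spectrum peaks near f_star.
for f in (0.0, 0.1, f_star, 0.4, 0.5):
    print(round(f, 3), round(psd_ar2(phi1, phi2, f), 4), round(psd_ar2_real(phi1, phi2, f), 4))

# AR(1) is the special case phi2 = 0: sigma^2 / (1 + phi^2 - 2 phi cos 2 pi f),
# with a red-noise peak at f = 0 when phi > 0.
print(round(psd_ar2(0.6, 0.0, 0.0), 4))
```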
First use t to refer to the first period for which data is not yet available; substitute the known preceding values Xt-i for i=1, ..., p into the autoregressive equation while setting the error term ε t {\displaystyle \varepsilon _{t}} equal to zero (because we forecast Xt to equal its expected value, and the expected value of the unobserved error term is zero). The output of the autoregressive equation is the forecast for the first unobserved period. Next, use t to refer to the next period for which data is not yet available; again the autoregressive equation is used to make the forecast, with one difference: the value of X one period prior to the one now being forecast is not known, so its expected value—the predicted value arising from the previous forecasting step—is used instead. Then for future periods the same procedure is used, each time using one more forecast value on the right side of the predictive equation until, after p predictions, all p right-side values are predicted values from preceding steps. There are four sources of uncertainty regarding predictions obtained in this manner: (1) uncertainty as to whether the autoregressive model is the correct model; (2) uncertainty about the accuracy of the forecasted values that are used as lagged values in the right side of the autoregressive equation; (3) uncertainty about the true values of the autoregressive coefficients; and (4) uncertainty about the value of the error term ε t {\displaystyle \varepsilon _{t}\,} for the period being predicted. Each of the last three can be quantified and combined to give a confidence interval for the n-step-ahead predictions; the confidence interval will become wider as n increases because of the use of an increasing number of estimated values for the right-side variables. 
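The recursion just described can be sketched in a few lines (an illustrative implementation; the function name and argument layout are my own, not from any statistics package):

```python
import numpy as np

def forecast_ar(phi, history, n_steps):
    """n-step-ahead forecasts for X_t = sum_i phi_i X_{t-i} + eps_t.

    phi: coefficients (phi_1, ..., phi_p); history: observed values with the
    most recent last (must contain at least p values). Each unobserved error
    term is set to zero (its expected value), and each forecast feeds into
    the right-hand side of the next step, exactly as in the text."""
    phi = np.asarray(phi, dtype=float)
    p = len(phi)
    values = list(history[-p:])          # the p most recent known values
    forecasts = []
    for _ in range(n_steps):
        # Dot the coefficients with the p most recent values
        # (observed or previously forecast), most recent first.
        x_next = float(np.dot(phi, values[::-1][:p]))
        forecasts.append(x_next)
        values.append(x_next)
    return forecasts
```

After p steps, every value on the right-hand side is itself a forecast, which is why the confidence interval widens with the horizon.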
== See also == Moving average model Linear difference equation Predictive analytics Linear predictive coding Resonance Levinson recursion Ornstein–Uhlenbeck process Infinite impulse response == Notes == == References == Mills, Terence C. (1990). Time Series Techniques for Economists. Cambridge University Press. ISBN 9780521343398. Percival, Donald B.; Walden, Andrew T. (1993). Spectral Analysis for Physical Applications. Cambridge University Press. Bibcode:1993sapa.book.....P. Pandit, Sudhakar M.; Wu, Shien-Ming (1983). Time Series and System Analysis with Applications. John Wiley & Sons. == External links == AutoRegression Analysis (AR) by Paul Bourke Econometrics lecture (topic: Autoregressive models) on YouTube by Mark Thoma
Wikipedia/Stochastic_difference_equation
The holographic principle is a property of string theories and a supposed property of quantum gravity that states that the description of a volume of space can be thought of as encoded on a lower-dimensional boundary to the region – such as a light-like boundary like a gravitational horizon. First proposed by Gerard 't Hooft, it was given a precise string theoretic interpretation by Leonard Susskind, who combined his ideas with previous ones of 't Hooft and Charles Thorn. Susskind said, "The three-dimensional world of ordinary experience—the universe filled with galaxies, stars, planets, houses, boulders, and people—is a hologram, an image of reality coded on a distant two-dimensional surface." As pointed out by Raphael Bousso, Thorn observed in 1978 that string theory admits a lower-dimensional description in which gravity emerges from it in what would now be called a holographic way. The prime example of holography is the AdS/CFT correspondence. The holographic principle was inspired by the Bekenstein bound of black hole thermodynamics, which conjectures that the maximum entropy in any region scales with the radius squared, rather than cubed as might be expected. In the case of a black hole, the insight was that the information content of all the objects that have fallen into the hole might be entirely contained in surface fluctuations of the event horizon. The holographic principle resolves the black hole information paradox within the framework of string theory. However, there exist classical solutions to the Einstein equations that allow values of the entropy larger than those allowed by an area law (radius squared), hence in principle larger than those of a black hole. These are the so-called "Wheeler's bags of gold". The existence of such solutions conflicts with the holographic interpretation, and their effects in a quantum theory of gravity including the holographic principle are not yet fully understood. 
== High-level summary == The physical universe is widely seen to be composed of "matter" and "energy". In his 2003 article published in Scientific American magazine, Jacob Bekenstein speculatively summarized a current trend started by John Archibald Wheeler, which suggests scientists may "regard the physical world as made of information, with energy and matter as incidentals". Bekenstein asks "Could we, as William Blake memorably penned, 'see a world in a grain of sand', or is that idea no more than 'poetic license'?", referring to the holographic principle. === Unexpected connection === Bekenstein's topical overview "A Tale of Two Entropies" describes potentially profound implications of Wheeler's trend, in part by noting a previously unexpected connection between the world of information theory and classical physics. This connection was first described shortly after the seminal 1948 papers of American applied mathematician Claude Shannon introduced today's most widely used measure of information content, now known as Shannon entropy. As an objective measure of the quantity of information, Shannon entropy has been enormously useful, as the design of all modern communications and data storage devices, from cellular phones to modems to hard disk drives and DVDs, relies on Shannon entropy. In thermodynamics (the branch of physics dealing with heat), entropy is popularly described as a measure of the "disorder" in a physical system of matter and energy. In 1877, Austrian physicist Ludwig Boltzmann described it more precisely in terms of the number of distinct microscopic states that the particles composing a macroscopic "chunk" of matter could be in, while still "looking" like the same macroscopic "chunk". As an example, for the air in a room, its thermodynamic entropy would equal the logarithm of the count of all the ways that the individual gas molecules could be distributed in the room and all the ways they could be moving. 
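Boltzmann's description can be written compactly in conventional notation (W is the count of microstates compatible with the macroscopic state, and k_B is Boltzmann's constant, which carries the thermodynamic units):

```latex
S = k_{\mathrm{B}} \ln W
```

For W equally likely microstates the Shannon information needed to single one out is \log_2 W bits, so the two entropies coincide up to the unit-carrying factor k_{\mathrm{B}} \ln 2.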
=== Energy, matter, and information equivalence === Shannon's efforts to find a way to quantify the information contained in, for example, a telegraph message, led him unexpectedly to a formula with the same form as Boltzmann's. In an article in the August 2003 issue of Scientific American titled "Information in the Holographic Universe", Bekenstein summarizes that "Thermodynamic entropy and Shannon entropy are conceptually equivalent: the number of arrangements that are counted by Boltzmann entropy reflects the amount of Shannon information one would need to implement any particular arrangement" of matter and energy. The only salient difference between the thermodynamic entropy of physics and Shannon's entropy of information is in the units of measure; the former is expressed in units of energy divided by temperature, the latter in essentially dimensionless "bits" of information. The holographic principle states that the entropy of ordinary mass (not just black holes) is also proportional to surface area and not volume; that volume itself is illusory and the universe is really a hologram which is isomorphic to the information "inscribed" on the surface of its boundary. == The AdS/CFT correspondence == The anti-de Sitter/conformal field theory correspondence, sometimes called Maldacena duality or gauge/gravity duality, is a conjectured relationship between two kinds of physical theories. On one side are anti-de Sitter spaces (AdS) which are used in theories of quantum gravity, formulated in terms of string theory or M-theory. On the other side of the correspondence are conformal field theories (CFT) which are quantum field theories, including theories similar to the Yang–Mills theories that describe elementary particles. The duality represents a major advance in understanding of string theory and quantum gravity. 
This is because it provides a non-perturbative formulation of string theory with certain boundary conditions and because it is the most successful realization of the holographic principle. It also provides a powerful toolkit for studying strongly coupled quantum field theories. Much of the usefulness of the duality results from a strong-weak duality: when the fields of the quantum field theory are strongly interacting, the ones in the gravitational theory are weakly interacting and thus more mathematically tractable. This fact has been used to study many aspects of nuclear and condensed matter physics by translating problems in those subjects into more mathematically tractable problems in string theory. The AdS/CFT correspondence was first proposed by Juan Maldacena in late 1997. Important aspects of the correspondence were elaborated in articles by Steven Gubser, Igor Klebanov, and Alexander Markovich Polyakov, and by Edward Witten. By 2015, Maldacena's article had over 10,000 citations, becoming the most highly cited article in the field of high energy physics. == Black hole entropy == An object with relatively high entropy is microscopically random, like a hot gas. A known configuration of classical fields has zero entropy: there is nothing random about electric and magnetic fields, or gravitational waves. Since black holes are exact solutions of Einstein's equations, they were thought not to have any entropy. But Jacob Bekenstein noted that this leads to a violation of the second law of thermodynamics. If one throws a hot gas with entropy into a black hole, once it crosses the event horizon, the entropy would disappear. The random properties of the gas would no longer be seen once the black hole had absorbed the gas and settled down. One way of salvaging the second law is if black holes are in fact random objects with an entropy that increases by an amount greater than the entropy of the consumed gas. 
Given a fixed volume, a black hole whose event horizon encompasses that volume should be the object with the highest amount of entropy. Otherwise, imagine something with a larger entropy, then by throwing more mass into that something, we obtain a black hole with less entropy, violating the second law. In a sphere of radius R, the entropy in a relativistic gas increases as the energy increases. The only known limit is gravitational; when there is too much energy, the gas collapses into a black hole. Bekenstein used this to put an upper bound on the entropy in a region of space, and the bound was proportional to the area of the region. He concluded that the black hole entropy is directly proportional to the area of the event horizon. Gravitational time dilation causes time, from the perspective of a remote observer, to stop at the event horizon. Due to the natural limit on maximum speed of motion, this prevents falling objects from crossing the event horizon no matter how close they get to it. Since any change in quantum state requires time to flow, all objects and their quantum information state stay imprinted on the event horizon. Bekenstein concluded that from the perspective of any remote observer, the black hole entropy is directly proportional to the area of the event horizon. Stephen Hawking had shown earlier that the total horizon area of a collection of black holes always increases with time. The horizon is a boundary defined by light-like geodesics; it is those light rays that are just barely unable to escape. If neighboring geodesics start moving toward each other they eventually collide, at which point their extension is inside the black hole. So the geodesics are always moving apart, and the number of geodesics which generate the boundary, the area of the horizon, always increases. Hawking's result was called the second law of black hole thermodynamics, by analogy with the law of entropy increase. 
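In its usual form, the bound Bekenstein arrived at reads, for a system of total mass–energy E enclosed in a sphere of radius R:

```latex
S \;\le\; \frac{2\pi k_{\mathrm{B}} R E}{\hbar c}
```

A Schwarzschild black hole (R = 2GM/c², E = Mc²) saturates this bound, reproducing an entropy proportional to the horizon area rather than the enclosed volume.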
At first, Hawking did not take the analogy too seriously. He argued that the black hole must have zero temperature, since black holes do not radiate and therefore cannot be in thermal equilibrium with any black body of positive temperature. Then he discovered that black holes do radiate. When heat is added to a thermal system, the change in entropy is the increase in mass–energy divided by temperature: d S = δ M c 2 T . {\displaystyle {\rm {d}}S={\frac {{\rm {\delta }}M\ c^{2}}{T}}.} (Here the term δM c² is substituted for the thermal energy added to the system, generally by non-integrable random processes, in contrast to dS, which is a function of a few "state variables" only, i.e. in conventional thermodynamics only of the Kelvin temperature T and a few additional state variables, such as the pressure.) If black holes have a finite entropy, they should also have a finite temperature. In particular, they would come to equilibrium with a thermal gas of photons. This means that black holes would not only absorb photons, but they would also have to emit them in the right amount to maintain detailed balance. Time-independent solutions to field equations do not emit radiation, because a time-independent background conserves energy. Based on this principle, Hawking set out to show that black holes do not radiate. But, to his surprise, a careful analysis convinced him that they do, and in just the right way to come to equilibrium with a gas at a finite temperature. Hawking's calculation fixed the constant of proportionality at 1/4; the entropy of a black hole is one quarter its horizon area in Planck units. The entropy is proportional to the logarithm of the number of microstates, the enumerated ways a system can be configured microscopically while leaving the macroscopic description unchanged. 
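In conventional notation, the temperature and entropy that Hawking's calculation produced for a black hole of mass M and horizon area A are:

```latex
T_{\mathrm{H}} = \frac{\hbar c^{3}}{8\pi G M k_{\mathrm{B}}},
\qquad
S_{\mathrm{BH}} = \frac{k_{\mathrm{B}} c^{3} A}{4 G \hbar}
               = k_{\mathrm{B}}\,\frac{A}{4\,\ell_{\mathrm{P}}^{2}}
```

where \ell_{\mathrm{P}}^{2} = G\hbar/c^{3} is the Planck area; setting G = \hbar = c = k_{\mathrm{B}} = 1 recovers the statement that the entropy is one quarter of the horizon area in Planck units.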
Black hole entropy is deeply puzzling – it says that the logarithm of the number of states of a black hole is proportional to the area of the horizon, not the volume in the interior. Later, Raphael Bousso came up with a covariant version of the bound based upon null sheets. == Black hole information paradox == Hawking's calculation suggested that the radiation which black holes emit is not related in any way to the matter that they absorb. The outgoing light rays start exactly at the edge of the black hole and spend a long time near the horizon, while the infalling matter only reaches the horizon much later. The infalling and outgoing mass/energy interact only when they cross. It is implausible that the outgoing state would be completely determined by some tiny residual scattering. Hawking interpreted this to mean that when black holes absorb some photons in a pure state described by a wave function, they re-emit new photons in a thermal mixed state described by a density matrix. This would mean that quantum mechanics would have to be modified because, in quantum mechanics, states which are superpositions with probability amplitudes never become states which are probabilistic mixtures of different possibilities. Troubled by this paradox, Gerard 't Hooft analyzed the emission of Hawking radiation in more detail. He noted that when Hawking radiation escapes, there is a way in which incoming particles can modify the outgoing particles. Their gravitational field would deform the horizon of the black hole, and the deformed horizon could produce different outgoing particles than the undeformed horizon. When a particle falls into a black hole, it is boosted relative to an outside observer, and its gravitational field assumes a universal form. 't Hooft showed that this field makes a logarithmic tent-pole shaped bump on the horizon of a black hole, and like a shadow, the bump is an alternative description of the particle's location and mass. 
For a four-dimensional spherical uncharged black hole, the deformation of the horizon is similar to the type of deformation which describes the emission and absorption of particles on a string-theory world sheet. Since the deformations on the surface are the only imprint of the incoming particle, and since these deformations would have to completely determine the outgoing particles, 't Hooft believed that the correct description of the black hole would be by some form of string theory. This idea was made more precise by Leonard Susskind, who had also been developing holography, largely independently. Susskind argued that the oscillation of the horizon of a black hole is a complete description of both the infalling and outgoing matter, because the world-sheet theory of string theory was just such a holographic description. While short strings have zero entropy, he could identify long highly excited string states with ordinary black holes. This was a deep advance because it revealed that strings have a classical interpretation in terms of black holes. This work showed that the black hole information paradox is resolved when quantum gravity is described in an unusual string-theoretic way assuming the string-theoretical description is complete, unambiguous and non-redundant. The space-time in quantum gravity would emerge as an effective description of the theory of oscillations of a lower-dimensional black-hole horizon, and suggest that any black hole with appropriate properties, not just strings, would serve as a basis for a description of string theory. In 1995, Susskind, along with collaborators Tom Banks, Willy Fischler, and Stephen Shenker, presented a formulation of the new M-theory using a holographic description in terms of charged point black holes, the D0 branes of type IIA string theory. The matrix theory they proposed was first suggested as a description of two branes in eleven-dimensional supergravity by Bernard de Wit, Jens Hoppe, and Hermann Nicolai. 
The later authors reinterpreted the same matrix models as a description of the dynamics of point black holes in particular limits. Holography allowed them to conclude that the dynamics of these black holes give a complete non-perturbative formulation of M-theory. In 1997, Juan Maldacena gave the first holographic descriptions of a higher-dimensional object, the 3+1-dimensional type IIB membrane, which resolved a long-standing problem of finding a string description which describes a gauge theory. These developments simultaneously explained how string theory is related to some forms of supersymmetric quantum field theories. == Limit on information density == Information content is defined as the logarithm of the reciprocal of the probability that a system is in a specific microstate, and the information entropy of a system is the expected value of the system's information content. This definition of entropy is equivalent to the standard Gibbs entropy used in classical physics. Applying this definition to a physical system leads to the conclusion that, for a given energy in a given volume, there is an upper limit to the density of information (the Bekenstein bound) about the whereabouts of all the particles which compose matter in that volume. In particular, a given volume has an upper limit of information it can contain, at which it will collapse into a black hole. This suggests that matter itself cannot be subdivided infinitely many times and there must be an ultimate level of fundamental particles. As the degrees of freedom of a particle are the product of all the degrees of freedom of its sub-particles, were a particle to have infinite subdivisions into lower-level particles, the degrees of freedom of the original particle would be infinite, violating the maximal limit of entropy density. The holographic principle thus implies that the subdivisions must stop at some level. 
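The definition of information content and entropy used above can be sketched in a few lines (illustrative Python; base-2 logarithms give entropy in bits):

```python
import math

def information_content(p):
    """Information content (surprisal) of a state with probability p, in bits.

    Defined as the logarithm of the reciprocal of the probability."""
    return -math.log2(p)

def entropy(probs):
    """Information entropy: the expected value of the information content
    over all states of the system (zero-probability states contribute 0)."""
    return sum(p * information_content(p) for p in probs if p > 0)
```

A fair coin has entropy 1 bit; a deterministic system (one state with probability 1) has entropy 0, matching the zero-entropy classical field configurations mentioned earlier.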
The most rigorous realization of the holographic principle is the AdS/CFT correspondence by Juan Maldacena. However, J. David Brown and Marc Henneaux had rigorously proved in 1986 that the asymptotic symmetry of 2+1 dimensional gravity gives rise to a Virasoro algebra, whose corresponding quantum theory is a 2-dimensional conformal field theory. == Experimental tests == The Fermilab physicist Craig Hogan claims that the holographic principle would imply quantum fluctuations in spatial position that would lead to apparent background noise or "holographic noise" measurable at gravitational wave detectors, in particular GEO 600. However these claims have not been widely accepted, or cited, among quantum gravity researchers and appear to be in direct conflict with string theory calculations. Analyses in 2011 of measurements of gamma ray burst GRB 041219A in 2004 by the INTEGRAL space observatory launched in 2002 by the European Space Agency, show that Craig Hogan's noise is absent down to a scale of 10⁻⁴⁸ meters, as opposed to the scale of 10⁻³⁵ meters predicted by Hogan, and the scale of 10⁻¹⁶ meters found in measurements of the GEO 600 instrument. Research continued at Fermilab under Hogan as of 2013. Jacob Bekenstein claimed to have found a way to test the holographic principle with a tabletop photon experiment. == See also == == Notes == == References == Citations Sources Bousso, Raphael (2002). "The holographic principle". Reviews of Modern Physics. 74 (3): 825–874. arXiv:hep-th/0203101. Bibcode:2002RvMP...74..825B. doi:10.1103/RevModPhys.74.825. S2CID 55096624. 't Hooft, Gerard (1993). "Dimensional Reduction in Quantum Gravity". arXiv:gr-qc/9310026. 't Hooft's original paper. == External links == Alfonso V. Ramallo: Introduction to the AdS/CFT correspondence, arXiv:1310.4319, a pedagogical lecture. For the holographic principle: see especially Fig. 1. UC Berkeley's Raphael Bousso gives an introductory lecture on the holographic principle – Video. 
Scientific American article on holographic principle by Jacob Bekenstein O'Dowd, Matt (10 April 2019). "The Holographic Universe Explained". PBS Space Time. Archived from the original on 11 December 2021 – via YouTube.
Wikipedia/Holographic_principle
In general relativity, an exact solution is a (typically closed form) solution of the Einstein field equations whose derivation does not invoke simplifying approximations of the equations, though the starting point for that derivation may be an idealized case like a perfectly spherical shape of matter. Mathematically, finding an exact solution means finding a Lorentzian manifold equipped with tensor fields modeling states of ordinary matter, such as a fluid, or classical non-gravitational fields such as the electromagnetic field. == Background and definition == These tensor fields should obey any relevant physical laws (for example, any electromagnetic field must satisfy Maxwell's equations). Following a standard recipe which is widely used in mathematical physics, these tensor fields should also give rise to specific contributions to the stress–energy tensor T α β {\displaystyle T^{\alpha \beta }} . (A field is described by a Lagrangian, varying with respect to the field should give the field equations and varying with respect to the metric should give the stress-energy contribution due to the field.) Finally, when all the contributions to the stress–energy tensor are added up, the result must be a solution of the Einstein field equations G α β = κ T α β . {\displaystyle G^{\alpha \beta }=\kappa \,T^{\alpha \beta }.} In the above field equations, G α β {\displaystyle G^{\alpha \beta }} is the Einstein tensor, computed uniquely from the metric tensor which is part of the definition of a Lorentzian manifold. 
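In conventional (SI) units, the Einstein gravitational constant appearing on the right-hand side of the field equations is

```latex
\kappa = \frac{8\pi G}{c^{4}}
```

where G is Newton's gravitational constant; in geometrized units (G = c = 1) this reduces to κ = 8π.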
Since giving the Einstein tensor does not fully determine the Riemann tensor, but leaves the Weyl tensor unspecified (see the Ricci decomposition), the Einstein equation may be considered a kind of compatibility condition: the spacetime geometry must be consistent with the amount and motion of any matter or non-gravitational fields, in the sense that the immediate presence "here and now" of non-gravitational energy–momentum causes a proportional amount of Ricci curvature "here and now". Moreover, taking covariant derivatives of the field equations and applying the Bianchi identities, it is found that a suitably varying amount/motion of non-gravitational energy–momentum can cause ripples in curvature to propagate as gravitational radiation, even across vacuum regions, which contain no matter or non-gravitational fields. == Difficulties with the definition == Any Lorentzian manifold is a solution of the Einstein field equation for some right hand side. This is illustrated by the following procedure: take any Lorentzian manifold, compute its Einstein tensor G α β {\displaystyle G^{\alpha \beta }} , which is a purely mathematical operation divide by the Einstein gravitational constant κ {\displaystyle \kappa } declare the resulting symmetric second rank tensor field to be the stress–energy tensor T α β {\displaystyle T^{\alpha \beta }} . This shows that there are two complementary ways to use general relativity: One can fix the form of the stress–energy tensor (from some physical reasons, say) and study the solutions of the Einstein equations with such right hand side (for example, if the stress–energy tensor is chosen to be that of the perfect fluid, a spherically symmetric solution can serve as a stellar model) Alternatively, one can fix some geometrical properties of a spacetime and look for a matter source that could provide these properties. 
This is what cosmologists have done since the 2000s: they assume that the Universe is homogeneous, isotropic, and accelerating and try to realize what matter (called dark energy) can support such a structure. Within the first approach the alleged stress–energy tensor must arise in the standard way from a "reasonable" matter distribution or non-gravitational field. In practice, this notion is pretty clear, especially if we restrict the admissible non-gravitational fields to the only one known in 1916, the electromagnetic field. But ideally we would like to have some mathematical characterization that states some purely mathematical test which we can apply to any putative "stress–energy tensor", which passes everything which might arise from a "reasonable" physical scenario, and rejects everything else. No such characterization is known. Instead, we have crude tests known as the energy conditions, which are similar to placing restrictions on the eigenvalues and eigenvectors of a linear operator. On the one hand, these conditions are far too permissive: they would admit "solutions" which almost no-one believes are physically reasonable. On the other, they may be far too restrictive: the most popular energy conditions are apparently violated by the Casimir effect. Einstein also recognized another element of the definition of an exact solution: it should be a Lorentzian manifold (meeting additional criteria), i.e. a smooth manifold. But in working with general relativity, it turns out to be very useful to admit solutions which are not everywhere smooth; examples include many solutions created by matching a perfect fluid interior solution to a vacuum exterior solution, and impulsive plane waves. Once again, the creative tension between elegance and convenience, respectively, has proven difficult to resolve satisfactorily. 
In addition to such local objections, we have the far more challenging problem that there are very many exact solutions which are locally unobjectionable, but globally exhibit causally suspect features such as closed timelike curves or structures with points of separation ("trouser worlds"). Some of the best known exact solutions, in fact, have globally a strange character. == Types of exact solution == Many well-known exact solutions belong to one of several types, depending upon the intended physical interpretation of the stress–energy tensor: Vacuum solutions: T α β = 0 {\displaystyle T^{\alpha \beta }=0} ; these describe regions in which no matter or non-gravitational fields are present, Electrovacuum solutions: T α β {\displaystyle T^{\alpha \beta }} must arise entirely from an electromagnetic field which solves the source-free Maxwell equations on the given curved Lorentzian manifold; this means that the only source for the gravitational field is the field energy (and momentum) of the electromagnetic field, Null dust solutions: T α β {\displaystyle T^{\alpha \beta }} must correspond to a stress–energy tensor which can be interpreted as arising from incoherent electromagnetic radiation, without necessarily solving the Maxwell field equations on the given Lorentzian manifold, Fluid solutions: T α β {\displaystyle T^{\alpha \beta }} must arise entirely from the stress–energy tensor of a fluid (often taken to be a perfect fluid); the only source for the gravitational field is the energy, momentum, and stress (pressure and shear stress) of the matter comprising the fluid. 
In addition to such well established phenomena as fluids or electromagnetic waves, one can contemplate models in which the gravitational field is produced entirely by the field energy of various exotic hypothetical fields: Scalar field solutions: T α β {\displaystyle T^{\alpha \beta }} must arise entirely from a scalar field (often a massless scalar field); these can arise in classical field theory treatments of meson beams, or as quintessence, Lambdavacuum solutions (not a standard term, but a standard concept for which no name yet exists): T α β {\displaystyle T^{\alpha \beta }} arises entirely from a nonzero cosmological constant. One possibility which has received little attention (perhaps because the mathematics is so challenging) is the problem of modeling an elastic solid. Presently, it seems that no exact solutions for this specific type are known. Below we have sketched a classification by physical interpretation. Solutions can also be organized using the Segre classification of the possible algebraic symmetries of the Ricci tensor: non-null electrovacuums have Segre type { ( 1 , 1 ) ( 11 ) } {\displaystyle \{\,(1,1)(11)\}} and isotropy group SO(1,1) x SO(2), null electrovacuums and null dusts have Segre type { ( 2 , 11 ) } {\displaystyle \{\,(2,11)\}} and isotropy group E(2), perfect fluids have Segre type { 1 , ( 111 ) } {\displaystyle \{\,1,(111)\}} and isotropy group SO(3), Lambda vacuums have Segre type { ( 1 , 111 ) } {\displaystyle \{\,(1,111)\}} and isotropy group SO(1,3). The remaining Segre types have no particular physical interpretation and most of them cannot correspond to any known type of contribution to the stress–energy tensor. === Examples === Noteworthy examples of vacuum solutions, electrovacuum solutions, and so forth, are listed in specialized articles (see below). These solutions contain at most one contribution to the energy–momentum tensor, due to a specific kind of matter or field. 
However, there are some notable exact solutions which contain two or three contributions, including: NUT-Kerr–Newman–de Sitter solution contains contributions from an electromagnetic field and a positive vacuum energy, as well as a kind of vacuum perturbation of the Kerr vacuum which is specified by the so-called NUT parameter, Gödel dust contains contributions from a pressureless perfect fluid (dust) and from a positive vacuum energy. == Constructing solutions == The Einstein field equations are a system of coupled, nonlinear partial differential equations. In general, this makes them hard to solve. Nonetheless, several effective techniques for obtaining exact solutions have been established. The simplest involves imposing symmetry conditions on the metric tensor, such as stationarity (symmetry under time translation) or axisymmetry (symmetry under rotation about some symmetry axis). With sufficiently clever assumptions of this sort, it is often possible to reduce the Einstein field equation to a much simpler system of equations, even a single partial differential equation (as happens in the case of stationary axisymmetric vacuum solutions, which are characterized by the Ernst equation) or a system of ordinary differential equations (as happens in the case of the Schwarzschild vacuum). This naive approach usually works best if one uses a frame field rather than a coordinate basis. A related idea involves imposing algebraic symmetry conditions on the Weyl tensor, Ricci tensor, or Riemann tensor. These are often stated in terms of the Petrov classification of the possible symmetries of the Weyl tensor, or the Segre classification of the possible symmetries of the Ricci tensor. As will be apparent from the discussion above, such Ansätze often do have some physical content, although this might not be apparent from their mathematical form. 
This second kind of symmetry approach has often been used with the Newman–Penrose formalism, which uses spinorial quantities for more efficient bookkeeping. Even after such symmetry reductions, the reduced system of equations is often difficult to solve. For example, the Ernst equation is a nonlinear partial differential equation somewhat resembling the nonlinear Schrödinger equation (NLS). But recall that the conformal group on Minkowski spacetime is the symmetry group of the Maxwell equations. Recall too that solutions of the heat equation can be found by assuming a scaling Ansatz. These notions are merely special cases of Sophus Lie's notion of the point symmetry of a differential equation (or system of equations), and as Lie showed, this can provide an avenue of attack upon any differential equation which has a nontrivial symmetry group. Indeed, both the Ernst equation and the NLS have nontrivial symmetry groups, and some solutions can be found by taking advantage of their symmetries. These symmetry groups are often infinite dimensional, but this is not always a useful feature. Emmy Noether showed that a slight but profound generalization of Lie's notion of symmetry can result in an even more powerful method of attack. This turns out to be closely related to the discovery that some equations, which are said to be completely integrable, enjoy an infinite sequence of conservation laws. Quite remarkably, both the Ernst equation (which arises several ways in the studies of exact solutions) and the NLS turn out to be completely integrable. They are therefore susceptible to solution by techniques resembling the inverse scattering transform which was originally developed to solve the Korteweg-de Vries (KdV) equation, a nonlinear partial differential equation which arises in the theory of solitons, and which is also completely integrable. Unfortunately, the solutions obtained by these methods are often not as nice as one would like. 
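The role of complete integrability can be illustrated on the KdV equation itself (a sketch assuming sympy, and one common sign convention, u_t + 6uu_x + u_xxx = 0); the single soliton below is the seed from which multi-soliton solutions are built:

```python
import sympy as sp

# Single-soliton solution of the KdV equation u_t + 6 u u_x + u_xxx = 0
# (one common sign convention); c > 0 is the wave speed.
x, t = sp.symbols('x t', real=True)
c = sp.symbols('c', positive=True)
u = c/2 * sp.sech(sp.sqrt(c)/2 * (x - c*t))**2

# Residual of the PDE; it vanishes identically, checked here numerically
# at a few sample points (cv, xv, tv).
residual = sp.diff(u, t) + 6*u*sp.diff(u, x) + sp.diff(u, x, 3)
samples = [(2, 0.3, 0.1), (1, -1.2, 0.5), (4, 0.7, -0.4)]
assert all(abs(float(residual.subs({c: cv, x: xv, t: tv}))) < 1e-9
           for cv, xv, tv in samples)
```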
For example, in a manner analogous to the way that one obtains a multiple soliton solution of the KdV from the single soliton solution (which can be found from Lie's notion of point symmetry), one can obtain a multiple Kerr object solution, but unfortunately, this has some features which make it physically implausible. There are also various transformations (see Belinski-Zakharov transform) which can transform (for example) a vacuum solution found by other means into a new vacuum solution, or into an electrovacuum solution, or a fluid solution. These are analogous to the Bäcklund transformations known from the theory of certain partial differential equations, including some famous examples of soliton equations. This is no coincidence, since this phenomenon is also related to the notions of Noether and Lie regarding symmetry. Unfortunately, even when applied to a "well understood", globally admissible solution, these transformations often yield solutions which are poorly understood, and their general interpretation is still unknown. == Existence of solutions == Given the difficulty of constructing explicit small families of solutions, much less presenting something like a "general" solution to the Einstein field equation, or even a "general" solution to the vacuum field equation, a very reasonable approach is to try to find qualitative properties which hold for all solutions, or at least for all vacuum solutions. One of the most basic questions one can ask is: do solutions exist, and if so, how many? To get started, we should adopt a suitable initial value formulation of the field equation, which gives two new systems of equations, one giving a constraint on the initial data, and the other giving a procedure for evolving this initial data into a solution. Then, one can prove that solutions exist at least locally, using ideas not terribly dissimilar from those encountered in studying other differential equations.
To get some idea of "how many" solutions we might optimistically expect, we can appeal to Einstein's constraint counting method. A typical conclusion from this style of argument is that a generic vacuum solution to the Einstein field equation can be specified by giving four arbitrary functions of three variables and six arbitrary functions of two variables. These functions specify initial data, from which a unique vacuum solution can be evolved. (In contrast, the Ernst vacuums, the family of all stationary axisymmetric vacuum solutions, are specified by giving just two functions of two variables, which are not even arbitrary, but must satisfy a system of two coupled nonlinear partial differential equations. This may give some idea of just how tiny a typical "large" family of exact solutions really is, in the grand scheme of things.) However, this crude analysis falls far short of the much more difficult question of global existence of solutions. The global existence results which are known so far turn out to involve another idea. == Global stability theorems == We can imagine "disturbing" the gravitational field outside some isolated massive object by "sending in some radiation from infinity". We can ask: what happens as the incoming radiation interacts with the ambient field? In the approach of classical perturbation theory, we can start with Minkowski vacuum (or another very simple solution, such as the de Sitter lambdavacuum), introduce very small metric perturbations, and retain only terms up to some order in a suitable perturbation expansion—somewhat like evaluating a kind of Taylor series for the geometry of our spacetime. This approach is essentially the idea behind the post-Newtonian approximations used in constructing models of a gravitating system such as a binary pulsar. However, perturbation expansions are generally not reliable for questions of long-term existence and stability, in the case of nonlinear equations.
The full field equation is highly nonlinear, so we really want to prove that the Minkowski vacuum is stable under small perturbations which are treated using the fully nonlinear field equation. This requires the introduction of many new ideas. The desired result, sometimes expressed by the slogan that the Minkowski vacuum is nonlinearly stable, was finally proven by Demetrios Christodoulou and Sergiu Klainerman only in 1993. Analogous results are known for lambdavac perturbations of the de Sitter lambdavacuum (Helmut Friedrich) and for electrovacuum perturbations of the Minkowski vacuum (Nina Zipser). In contrast, anti-de Sitter spacetime is known to be unstable under certain conditions. == The positive energy theorem == Another issue we might worry about is whether the net mass-energy of an isolated concentration of positive mass-energy density (and momentum) always yields a well-defined (and non-negative) net mass. This result, known as the positive energy theorem, was finally proven by Richard Schoen and Shing-Tung Yau in 1979, who made an additional technical assumption about the nature of the stress–energy tensor. The original proof is very difficult; Edward Witten soon presented a much shorter "physicist's proof", which has been justified by mathematicians—using further very difficult arguments. Roger Penrose and others have also offered alternative arguments for variants of the original positive energy theorem. == See also == List of spacetimes Friedmann–Lemaître–Robertson–Walker metric Petrov classification, for algebraic symmetries of the Weyl tensor == References == == Further reading == Krasiński, A. (1997). Inhomogeneous Cosmological Models. Cambridge University Press. ISBN 0-521-48180-5. MacCallum, M. A. H. (2006). "Finding and using exact solutions of the Einstein equations". AIP Conference Proceedings. Vol. 841. pp. 129–143. arXiv:gr-qc/0601102. Bibcode:2006AIPC..841..129M. doi:10.1063/1.2218172.
An up-to-date review article, but too brief, compared to the review articles by Bičák 2000 or Bonnor, Griffiths & MacCallum 1994. MacCallum, Malcolm A.H. (2013). "Exact Solutions of Einstein's equations". Scholarpedia. 8 (12): 8584. Bibcode:2013SchpJ...8.8584M. doi:10.4249/scholarpedia.8584. Rendall, Alan D. (27 September 2002). "Local and Global Existence Theorems for the Einstein Equations". Living Reviews in Relativity. 5 (1): 6. doi:10.12942/lrr-2002-6. PMC 5255525. PMID 28163637. A thorough and up-to-date review article. Friedrich, Helmut (2005). "Is general relativity 'essentially understood'?". Annalen der Physik. 15 (1–2): 84–108. arXiv:gr-qc/0508016. Bibcode:2006AnP...518...84F. doi:10.1002/andp.200510173. S2CID 37236624. An excellent and more concise review. Bičák, Jiří (2000). "Selected Solutions of Einstein's Field Equations: Their Role in General Relativity and Astrophysics". Einstein's Field Equations and Their Physical Implications. Lecture Notes in Physics. Vol. 540. pp. 1–126. arXiv:gr-qc/0004016. doi:10.1007/3-540-46580-4_1. ISBN 978-3-540-67073-5. S2CID 119449917. An excellent modern survey. Bonnor, W.B.; Griffiths, J.B.; MacCallum, M.A.H. (1994). "Physical interpretation of vacuum solutions of Einstein's equations. Part II. Time-dependent solutions". Gen. Rel. Grav. 26 (7): 687–729. Bibcode:1994GReGr..26..687B. doi:10.1007/BF02116958. S2CID 189835151. Bonnor, W. B. (1992). "Physical interpretation of vacuum solutions of Einstein's equations. Part I. Time-independent solutions". Gen. Rel. Grav. 24 (5): 551–573. Bibcode:1992GReGr..24..551B. doi:10.1007/BF00760137. S2CID 122301194. A wise review, first of two parts. Griffiths, J. B. (1991). Colliding Plane Waves in General Relativity. Clarendon Press. ISBN 0-19-853209-1. Archived from the original on 2007-06-10. The definitive resource on colliding plane waves, but also useful to anyone interested in other exact solutions. Hoenselaers, C.; Dietz, W. (1985).
Solutions of Einstein's Equations: Techniques and Results. Springer. ISBN 3-540-13366-6. Ehlers, Jürgen; Kundt, Wolfgang (1962). "Exact solutions of the gravitational field equations". In Witten, L. (ed.). Gravitation: An Introduction to Current Research. Wiley. pp. 49–101. hdl:11858/00-001M-0000-0013-5F17-4. OCLC 504779224. A classic survey, including important original work such as the symmetry classification of vacuum pp-wave spacetimes. Stephani, Hans; Kramer, Dietrich; MacCallum, Malcolm; Hoenselaers, Cornelius; Herlt, Eduard (2009) [2003]. Exact Solutions of Einstein's Field Equations (2nd ed.). Cambridge: Cambridge University Press. ISBN 978-0-521-46702-5. == External links ==
Wikipedia/Exact_solutions_in_general_relativity
In differential geometry, the Ricci curvature tensor, named after Gregorio Ricci-Curbastro, is a geometric object which is determined by a choice of Riemannian or pseudo-Riemannian metric on a manifold. It can be considered, broadly, as a measure of the degree to which the geometry of a given metric tensor differs locally from that of ordinary Euclidean space or pseudo-Euclidean space. The Ricci tensor can be characterized by measurement of how a shape is deformed as one moves along geodesics in the space. In general relativity, which involves the pseudo-Riemannian setting, this is reflected by the presence of the Ricci tensor in the Raychaudhuri equation. Partly for this reason, the Einstein field equations propose that spacetime can be described by a pseudo-Riemannian metric, with a strikingly simple relationship between the Ricci tensor and the matter content of the universe. Like the metric tensor, the Ricci tensor assigns to each tangent space of the manifold a symmetric bilinear form (Besse 1987, p. 43). Broadly, one could analogize the role of the Ricci curvature in Riemannian geometry to that of the Laplacian in the analysis of functions; in this analogy, the Riemann curvature tensor, of which the Ricci curvature is a natural by-product, would correspond to the full matrix of second derivatives of a function. However, there are other ways to draw the same analogy. For three-dimensional manifolds, the Ricci tensor contains all of the information which in higher dimensions is encoded by the more complicated Riemann curvature tensor. In part, this simplicity allows for the application of many geometric and analytic tools, which led to the solution of the Poincaré conjecture through the work of Richard S. Hamilton and Grigori Perelman. In differential geometry, the determination of lower bounds on the Ricci tensor on a Riemannian manifold would allow one to extract global geometric and topological information by comparison (cf. 
comparison theorem) with the geometry of a constant curvature space form. This is because lower bounds on the Ricci tensor can be successfully used in studying the length functional in Riemannian geometry, as first shown in 1941 via Myers's theorem. One common source of the Ricci tensor is that it arises whenever one commutes the covariant derivative with the tensor Laplacian. This, for instance, explains its presence in the Bochner formula, which is used ubiquitously in Riemannian geometry. For example, this formula explains why the gradient estimates due to Shing-Tung Yau (and their developments such as the Cheng-Yau and Li-Yau inequalities) nearly always depend on a lower bound for the Ricci curvature. In 2007, John Lott, Karl-Theodor Sturm, and Cédric Villani demonstrated decisively that lower bounds on Ricci curvature can be understood entirely in terms of the metric space structure of a Riemannian manifold, together with its volume form. This established a deep link between Ricci curvature and Wasserstein geometry and optimal transport, which is presently the subject of much research. == Definition == Suppose that ( M , g ) {\displaystyle \left(M,g\right)} is an n {\displaystyle n} -dimensional Riemannian or pseudo-Riemannian manifold, equipped with its Levi-Civita connection ∇ {\displaystyle \nabla } . The Riemann curvature of M {\displaystyle M} is a map which takes smooth vector fields X {\displaystyle X} , Y {\displaystyle Y} , and Z {\displaystyle Z} , and returns the vector field R ( X , Y ) Z := ∇ X ∇ Y Z − ∇ Y ∇ X Z − ∇ [ X , Y ] Z {\displaystyle R(X,Y)Z:=\nabla _{X}\nabla _{Y}Z-\nabla _{Y}\nabla _{X}Z-\nabla _{[X,Y]}Z} . Since R {\displaystyle R} is a tensor field, for each point p ∈ M {\displaystyle p\in M} , it gives rise to a (multilinear) map: R p : T p M × T p M × T p M → T p M .
{\displaystyle \operatorname {R} _{p}:T_{p}M\times T_{p}M\times T_{p}M\to T_{p}M.} Define for each point p ∈ M {\displaystyle p\in M} the map Ric p : T p M × T p M → R {\displaystyle \operatorname {Ric} _{p}:T_{p}M\times T_{p}M\to \mathbb {R} } by Ric p ⁡ ( Y , Z ) := tr ⁡ ( X ↦ R p ⁡ ( X , Y ) Z ) . {\displaystyle \operatorname {Ric} _{p}(Y,Z):=\operatorname {tr} {\big (}X\mapsto \operatorname {R} _{p}(X,Y)Z{\big )}.} That is, having fixed Y {\displaystyle Y} and Z {\displaystyle Z} , then for any orthonormal basis v 1 , … , v n {\displaystyle v_{1},\ldots ,v_{n}} of the vector space T p M {\displaystyle T_{p}M} , one has Ric p ⁡ ( Y , Z ) = ∑ i = 1 n ⟨ R p ⁡ ( v i , Y ) Z , v i ⟩ . {\displaystyle \operatorname {Ric} _{p}(Y,Z)=\sum _{i=1}^{n}\langle \operatorname {R} _{p}(v_{i},Y)Z,v_{i}\rangle .} It is a standard exercise of (multi)linear algebra to verify that this definition does not depend on the choice of the basis v 1 , … , v n {\displaystyle v_{1},\ldots ,v_{n}} . In abstract index notation, R i c a b = R c b c a = R c a c b . {\displaystyle \mathrm {Ric} _{ab}=\mathrm {R} ^{c}{}_{bca}=\mathrm {R} ^{c}{}_{acb}.} Sign conventions. Note that some sources define R ( X , Y ) Z {\displaystyle R(X,Y)Z} to be what would here be called − R ( X , Y ) Z ; {\displaystyle -R(X,Y)Z;} they would then define Ric p {\displaystyle \operatorname {Ric} _{p}} as − tr ⁡ ( X ↦ R p ⁡ ( X , Y ) Z ) . {\displaystyle -\operatorname {tr} (X\mapsto \operatorname {R} _{p}(X,Y)Z).} Although sign conventions differ about the Riemann tensor, they do not differ about the Ricci tensor. === Definition via local coordinates on a smooth manifold === Let ( M , g ) {\displaystyle \left(M,g\right)} be a smooth Riemannian or pseudo-Riemannian n {\displaystyle n} -manifold.
Given a smooth chart ( U , φ ) {\displaystyle \left(U,\varphi \right)} one then has functions g i j : φ ( U ) → R {\displaystyle g_{ij}:\varphi (U)\rightarrow \mathbb {R} } and g i j : φ ( U ) → R {\displaystyle g^{ij}:\varphi (U)\rightarrow \mathbb {R} } for each i , j = 1 , … , n {\displaystyle i,j=1,\ldots ,n} which satisfy ∑ k = 1 n g i k ( x ) g k j ( x ) = δ j i = { 1 i = j 0 i ≠ j {\displaystyle \sum _{k=1}^{n}g^{ik}(x)g_{kj}(x)=\delta _{j}^{i}={\begin{cases}1&i=j\\0&i\neq j\end{cases}}} for all x ∈ φ ( U ) {\displaystyle x\in \varphi (U)} . The latter shows that, expressed as matrices, g i j ( x ) = ( g − 1 ) i j ( x ) {\displaystyle g^{ij}(x)=(g^{-1})_{ij}(x)} . The functions g i j {\displaystyle g_{ij}} are defined by evaluating g {\displaystyle g} on coordinate vector fields, while the functions g i j {\displaystyle g^{ij}} are defined so that, as a matrix-valued function, they provide an inverse to the matrix-valued function x ↦ g i j ( x ) {\displaystyle x\mapsto g_{ij}(x)} . Now define, for each a {\displaystyle a} , b {\displaystyle b} , c {\displaystyle c} , i {\displaystyle i} , and j {\displaystyle j} between 1 and n {\displaystyle n} , the functions Γ a b c := 1 2 ∑ d = 1 n ( ∂ g b d ∂ x a + ∂ g a d ∂ x b − ∂ g a b ∂ x d ) g c d R i j := ∑ a = 1 n ∂ Γ i j a ∂ x a − ∑ a = 1 n ∂ Γ a i a ∂ x j + ∑ a = 1 n ∑ b = 1 n ( Γ a b a Γ i j b − Γ i b a Γ a j b ) {\displaystyle {\begin{aligned}\Gamma _{ab}^{c}&:={\frac {1}{2}}\sum _{d=1}^{n}\left({\frac {\partial g_{bd}}{\partial x^{a}}}+{\frac {\partial g_{ad}}{\partial x^{b}}}-{\frac {\partial g_{ab}}{\partial x^{d}}}\right)g^{cd}\\R_{ij}&:=\sum _{a=1}^{n}{\frac {\partial \Gamma _{ij}^{a}}{\partial x^{a}}}-\sum _{a=1}^{n}{\frac {\partial \Gamma _{ai}^{a}}{\partial x^{j}}}+\sum _{a=1}^{n}\sum _{b=1}^{n}\left(\Gamma _{ab}^{a}\Gamma _{ij}^{b}-\Gamma _{ib}^{a}\Gamma _{aj}^{b}\right)\end{aligned}}} as maps φ ( U ) → R {\displaystyle \varphi (U)\rightarrow \mathbb {R} } .
Now let ( U , φ ) {\displaystyle \left(U,\varphi \right)} and ( V , ψ ) {\displaystyle \left(V,\psi \right)} be two smooth charts with U ∩ V ≠ ∅ {\displaystyle U\cap V\neq \emptyset } . Let R i j : φ ( U ) → R {\displaystyle R_{ij}:\varphi (U)\rightarrow \mathbb {R} } be the functions computed as above via the chart ( U , φ ) {\displaystyle \left(U,\varphi \right)} and let r i j : ψ ( V ) → R {\displaystyle r_{ij}:\psi (V)\rightarrow \mathbb {R} } be the functions computed as above via the chart ( V , ψ ) {\displaystyle \left(V,\psi \right)} . Then one can check by a calculation with the chain rule and the product rule that R i j ( x ) = ∑ k , l = 1 n r k l ( ψ ∘ φ − 1 ( x ) ) D i | x ( ψ ∘ φ − 1 ) k D j | x ( ψ ∘ φ − 1 ) l . {\displaystyle R_{ij}(x)=\sum _{k,l=1}^{n}r_{kl}\left(\psi \circ \varphi ^{-1}(x)\right)D_{i}{\Big |}_{x}\left(\psi \circ \varphi ^{-1}\right)^{k}D_{j}{\Big |}_{x}\left(\psi \circ \varphi ^{-1}\right)^{l}.} where D i {\displaystyle D_{i}} is the first derivative along i {\displaystyle i} th direction of R n {\displaystyle \mathbb {R} ^{n}} . This shows that the following definition does not depend on the choice of ( U , φ ) {\displaystyle \left(U,\varphi \right)} . For any p ∈ U {\displaystyle p\in U} , define a bilinear map Ric p : T p M × T p M → R {\displaystyle \operatorname {Ric} _{p}:T_{p}M\times T_{p}M\rightarrow \mathbb {R} } by ( X , Y ) ∈ T p M × T p M ↦ Ric p ⁡ ( X , Y ) = ∑ i , j = 1 n R i j ( φ ( p ) ) X i ( p ) Y j ( p ) , {\displaystyle (X,Y)\in T_{p}M\times T_{p}M\mapsto \operatorname {Ric} _{p}(X,Y)=\sum _{i,j=1}^{n}R_{ij}(\varphi (p))X^{i}(p)Y^{j}(p),} where X 1 , … , X n {\displaystyle X^{1},\ldots ,X^{n}} and Y 1 , … , Y n {\displaystyle Y^{1},\ldots ,Y^{n}} are the components of the tangent vectors at p {\displaystyle p} in X {\displaystyle X} and Y {\displaystyle Y} relative to the coordinate vector fields of ( U , φ ) {\displaystyle \left(U,\varphi \right)} .
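To make the coordinate formulas above concrete, here is a minimal sketch (assuming the Python library sympy) that evaluates Γ^c_ab and R_ij in the chart (θ, φ) on the unit round 2-sphere, where one expects Ric = (n − 1)g = g:

```python
import sympy as sp

# Unit 2-sphere in the chart (theta, phi): g = diag(1, sin^2 theta).
th, ph = sp.symbols('theta phi')
X = [th, ph]
g = sp.diag(1, sp.sin(th)**2)
ginv = g.inv()
n = 2

# Christoffel symbols Gamma^c_{ab}, following the formula in the text
Gamma = [[[sum(ginv[c, d]*(sp.diff(g[b, d], X[a]) + sp.diff(g[a, d], X[b])
               - sp.diff(g[a, b], X[d]))/2 for d in range(n))
           for b in range(n)] for a in range(n)] for c in range(n)]

# R_ij = d_a Gamma^a_ij - d_j Gamma^a_ai + Gamma^a_ab Gamma^b_ij - Gamma^a_ib Gamma^b_aj
def ricci(i, j):
    return sp.simplify(sum(
        sp.diff(Gamma[a][i][j], X[a]) - sp.diff(Gamma[a][a][i], X[j])
        + sum(Gamma[a][a][b]*Gamma[b][i][j] - Gamma[a][i][b]*Gamma[b][a][j]
              for b in range(n))
        for a in range(n)))

Ric = sp.Matrix(n, n, lambda i, j: ricci(i, j))
# for the unit sphere, Ric = (n - 1) g = g
assert sp.simplify(Ric - g) == sp.zeros(2, 2)
```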
It is common to abbreviate the above formal presentation in informal index notation; in that notation, the demonstration that the bilinear map Ric is well-defined is much easier to write out. === Comparison of the definitions === The two above definitions are identical. The formulas defining Γ i j k {\displaystyle \Gamma _{ij}^{k}} and R i j {\displaystyle R_{ij}} in the coordinate approach have an exact parallel in the formulas defining the Levi-Civita connection, and the Riemann curvature via the Levi-Civita connection. Arguably, the definitions directly using local coordinates are preferable, since the "crucial property" of the Riemann tensor mentioned above requires M {\displaystyle M} to be Hausdorff in order to hold. By contrast, the local coordinate approach only requires a smooth atlas. It is also somewhat easier to connect the "invariance" philosophy underlying the local approach with the methods of constructing more exotic geometric objects, such as spinor fields. The complicated formula defining R i j {\displaystyle R_{ij}} in the introductory section is the same as that in the following section. The only difference is that terms have been grouped so that it is easy to see that R i j = R j i . {\displaystyle R_{ij}=R_{ji}.} == Properties == As can be seen from the symmetries of the Riemann curvature tensor, the Ricci tensor of a Riemannian manifold is symmetric, in the sense that Ric ⁡ ( X , Y ) = Ric ⁡ ( Y , X ) {\displaystyle \operatorname {Ric} (X,Y)=\operatorname {Ric} (Y,X)} for all X , Y ∈ T p M . {\displaystyle X,Y\in T_{p}M.} It thus follows linear-algebraically that the Ricci tensor is completely determined by knowing the quantity Ric ⁡ ( X , X ) {\displaystyle \operatorname {Ric} (X,X)} for all vectors X {\displaystyle X} of unit length. This function on the set of unit tangent vectors is often also called the Ricci curvature, since knowing it is equivalent to knowing the Ricci curvature tensor.
The Ricci curvature is determined by the sectional curvatures of a Riemannian manifold, but generally contains less information. Indeed, if ξ {\displaystyle \xi } is a vector of unit length on a Riemannian n {\displaystyle n} -manifold, then Ric ⁡ ( ξ , ξ ) {\displaystyle \operatorname {Ric} (\xi ,\xi )} is precisely ( n − 1 ) {\displaystyle (n-1)} times the average value of the sectional curvature, taken over all the 2-planes containing ξ {\displaystyle \xi } . There is an ( n − 2 ) {\displaystyle (n-2)} -dimensional family of such 2-planes, and so only in dimensions 2 and 3 does the Ricci tensor determine the full curvature tensor. A notable exception is when the manifold is given a priori as a hypersurface of Euclidean space. The second fundamental form, which determines the full curvature via the Gauss–Codazzi equation, is itself determined by the Ricci tensor and the principal directions of the hypersurface are also the eigendirections of the Ricci tensor. The tensor was introduced by Ricci for this reason. As can be seen from the second Bianchi identity, one has div ⁡ Ric = 1 2 d R , {\displaystyle \operatorname {div} \operatorname {Ric} ={\frac {1}{2}}dR,} where R {\displaystyle R} is the scalar curvature, defined in local coordinates as g i j R i j . {\displaystyle g^{ij}R_{ij}.} This is often called the contracted second Bianchi identity. == Direct geometric meaning == Near any point p {\displaystyle p} in a Riemannian manifold ( M , g ) {\displaystyle \left(M,g\right)} , one can define preferred local coordinates, called geodesic normal coordinates. These are adapted to the metric so that geodesics through p {\displaystyle p} correspond to straight lines through the origin, in such a manner that the geodesic distance from p {\displaystyle p} corresponds to the Euclidean distance from the origin. In these coordinates, the metric tensor is well-approximated by the Euclidean metric, in the precise sense that g i j = δ i j + O ( | x | 2 ) . 
{\displaystyle g_{ij}=\delta _{ij}+O\left(|x|^{2}\right).} In fact, by taking the Taylor expansion of the metric applied to a Jacobi field along a radial geodesic in the normal coordinate system, one has g i j = δ i j − 1 3 R i k j l x k x l + O ( | x | 3 ) . {\displaystyle g_{ij}=\delta _{ij}-{\frac {1}{3}}R_{ikjl}x^{k}x^{l}+O\left(|x|^{3}\right).} In these coordinates, the metric volume element then has the following expansion at p: d μ g = [ 1 − 1 6 R j k x j x k + O ( | x | 3 ) ] d μ Euclidean , {\displaystyle d\mu _{g}=\left[1-{\frac {1}{6}}R_{jk}x^{j}x^{k}+O\left(|x|^{3}\right)\right]d\mu _{\text{Euclidean}},} which follows by expanding the square root of the determinant of the metric. Thus, if the Ricci curvature Ric ⁡ ( ξ , ξ ) {\displaystyle \operatorname {Ric} (\xi ,\xi )} is positive in the direction of a vector ξ {\displaystyle \xi } , the conical region in M {\displaystyle M} swept out by a tightly focused family of geodesic segments of length ε {\displaystyle \varepsilon } emanating from p {\displaystyle p} , with initial velocity inside a small cone about ξ {\displaystyle \xi } , will have smaller volume than the corresponding conical region in Euclidean space, at least provided that ε {\displaystyle \varepsilon } is sufficiently small. Similarly, if the Ricci curvature is negative in the direction of a given vector ξ {\displaystyle \xi } , such a conical region in the manifold will instead have larger volume than it would in Euclidean space. The Ricci curvature is essentially an average of curvatures in the planes including ξ {\displaystyle \xi } . Thus if a cone emitted with an initially circular (or spherical) cross-section becomes distorted into an ellipse (ellipsoid), it is possible for the volume distortion to vanish if the distortions along the principal axes counteract one another. The Ricci curvature would then vanish along ξ {\displaystyle \xi } . 
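The volume expansion above can be checked in the simplest nontrivial case (a sketch assuming sympy): on the unit 2-sphere in geodesic normal coordinates centred at p, the volume density is sin(r)/r, while Ric at p is the identity, so the predicted expansion is 1 − (1/6)R_jk x^j x^k = 1 − r²/6:

```python
import sympy as sp

# Geodesic normal coordinates on the unit 2-sphere: the volume density
# sqrt(det g) equals sin(r)/r, where r is the geodesic distance from p.
r = sp.symbols('r', positive=True)
density = sp.sin(r)/r

# Taylor-expand and compare with 1 - (1/6) R_jk x^j x^k = 1 - r**2/6,
# since Ric_p is the identity matrix for the unit sphere.
expansion = sp.series(density, r, 0, 4).removeO()
assert expansion == 1 - r**2/6
```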
In physical applications, the presence of a nonvanishing sectional curvature does not necessarily indicate the presence of any mass locally; if an initially circular cross-section of a cone of worldlines later becomes elliptical, without changing its volume, then this is due to tidal effects from a mass at some other location. == Applications == Ricci curvature plays an important role in general relativity, where it is the key term in the Einstein field equations. Ricci curvature also appears in the Ricci flow equation, first introduced by Richard S. Hamilton in 1982, where certain one-parameter families of Riemannian metrics are singled out as solutions of a geometrically-defined partial differential equation. In harmonic local coordinates the Ricci tensor can be expressed as (Chow & Knopf 2004, Lemma 3.32): R i j = − 1 2 Δ ( g i j ) + lower-order terms , {\displaystyle R_{ij}=-{\frac {1}{2}}\Delta \left(g_{ij}\right)+{\text{lower-order terms}},} where g i j {\displaystyle g_{ij}} are the components of the metric tensor and Δ {\displaystyle \Delta } is the Laplace–Beltrami operator. This fact motivates the introduction of the Ricci flow equation as a natural extension of the heat equation for the metric. Since heat tends to spread through a solid until the body reaches an equilibrium state of constant temperature, if one is given a manifold, the Ricci flow may be hoped to produce an 'equilibrium' Riemannian metric which is Einstein or of constant curvature. However, such a clean "convergence" picture cannot be achieved since many manifolds cannot support such metrics. A detailed study of the nature of solutions of the Ricci flow, due principally to Hamilton and Grigori Perelman, shows that the types of "singularities" that occur along a Ricci flow, corresponding to the failure of convergence, encode deep information about 3-dimensional topology.
The culmination of this work was a proof of the geometrization conjecture first proposed by William Thurston in the 1970s, which can be thought of as a classification of compact 3-manifolds. On a Kähler manifold, the Ricci curvature determines the first Chern class of the manifold (mod torsion). However, the Ricci curvature has no analogous topological interpretation on a generic Riemannian manifold. == Global geometry and topology == Here is a short list of global results concerning manifolds with positive Ricci curvature; see also classical theorems of Riemannian geometry. Briefly, positive Ricci curvature of a Riemannian manifold has strong topological consequences, while (for dimension at least 3), negative Ricci curvature has no topological implications. (The Ricci curvature is said to be positive if the Ricci curvature function Ric ⁡ ( ξ , ξ ) {\displaystyle \operatorname {Ric} (\xi ,\xi )} is positive on the set of non-zero tangent vectors ξ {\displaystyle \xi } .) Some results are also known for pseudo-Riemannian manifolds. Myers' theorem (1941) states that if the Ricci curvature is bounded from below on a complete Riemannian n-manifold by ( n − 1 ) k > 0 {\displaystyle (n-1)k>0} , then the manifold has diameter ≤ π / k {\displaystyle \leq \pi /{\sqrt {k}}} . By a covering-space argument, it follows that any compact manifold of positive Ricci curvature must have finite fundamental group. Cheng (1975) showed that, in this setting, equality in the diameter inequality occurs if and only if the manifold is isometric to a sphere of constant curvature k {\displaystyle k} . The Bishop–Gromov inequality states that if a complete n {\displaystyle n} -dimensional Riemannian manifold has non-negative Ricci curvature, then the volume of a geodesic ball is less than or equal to the volume of a geodesic ball of the same radius in Euclidean n {\displaystyle n} -space.
Moreover, if v p ( R ) {\displaystyle v_{p}(R)} denotes the volume of the ball with center p {\displaystyle p} and radius R {\displaystyle R} in the manifold and V ( R ) = c n R n {\displaystyle V(R)=c_{n}R^{n}} denotes the volume of the ball of radius R {\displaystyle R} in Euclidean n {\displaystyle n} -space then the function v p ( R ) / V ( R ) {\displaystyle v_{p}(R)/V(R)} is nonincreasing. (This can be generalized to any lower bound on the Ricci curvature (not just nonnegativity), and is the key point in the proof of Gromov's compactness theorem.) The Cheeger–Gromoll splitting theorem states that if a complete Riemannian manifold ( M , g ) {\displaystyle \left(M,g\right)} with Ric ≥ 0 {\displaystyle \operatorname {Ric} \geq 0} contains a line, meaning a geodesic γ : R → M {\displaystyle \gamma :\mathbb {R} \to M} such that d ( γ ( u ) , γ ( v ) ) = | u − v | {\displaystyle d(\gamma (u),\gamma (v))=\left|u-v\right|} for all u , v ∈ R {\displaystyle u,v\in \mathbb {R} } , then it is isometric to a product space R × L {\displaystyle \mathbb {R} \times L} . Consequently, a complete manifold of positive Ricci curvature can have at most one topological end. The theorem is also true under some additional hypotheses for complete Lorentzian manifolds (of metric signature ( + − − … ) {\displaystyle \left(+--\ldots \right)} ) with non-negative Ricci tensor (Galloway 2000). Hamilton's first convergence theorem for Ricci flow has, as a corollary, that the only compact 3-manifolds which have Riemannian metrics of positive Ricci curvature are the quotients of the 3-sphere by discrete subgroups of SO(4) which act properly discontinuously. He later extended this to allow for nonnegative Ricci curvature. In particular, the only simply-connected possibility is the 3-sphere itself. These results, particularly Myers' and Hamilton's, show that positive Ricci curvature has strong topological consequences.
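As a small numerical sketch of the Bishop–Gromov inequality above (plain Python, on the unit 2-sphere, where Ric = g ≥ 0): the geodesic ball of radius R has area 2π(1 − cos R), and its ratio to the Euclidean value πR² should be nonincreasing and at most 1:

```python
import math

# Ratio of the geodesic-ball area on the unit 2-sphere to the Euclidean
# disc area; Bishop-Gromov predicts this is nonincreasing in R.
def ball_ratio(R):
    return (2*math.pi*(1 - math.cos(R))) / (math.pi*R**2)

radii = [0.1*k for k in range(1, 32)]      # sample radii up to ~pi
ratios = [ball_ratio(R) for R in radii]
assert all(a >= b for a, b in zip(ratios, ratios[1:]))   # nonincreasing
assert ratios[0] < 1.0                                   # below the Euclidean value
```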
By contrast, excluding the case of surfaces, negative Ricci curvature is now known to have no topological implications; Lohkamp (1994) has shown that any manifold of dimension greater than two admits a complete Riemannian metric of negative Ricci curvature. In the case of two-dimensional manifolds, negativity of the Ricci curvature is synonymous with negativity of the Gaussian curvature, which has very clear topological implications. There are very few two-dimensional manifolds which fail to admit Riemannian metrics of negative Gaussian curvature. == Behavior under conformal rescaling == If the metric g {\displaystyle g} is changed by multiplying it by a conformal factor e 2 f {\displaystyle e^{2f}} , the Ricci tensor of the new, conformally-related metric g ~ = e 2 f g {\displaystyle {\tilde {g}}=e^{2f}g} is given (Besse 1987, p. 59) by Ric ~ = Ric + ( 2 − n ) [ ∇ d f − d f ⊗ d f ] + [ Δ f − ( n − 2 ) ‖ d f ‖ 2 ] g , {\displaystyle {\widetilde {\operatorname {Ric} }}=\operatorname {Ric} +(2-n)\left[\nabla df-df\otimes df\right]+\left[\Delta f-(n-2)\|df\|^{2}\right]g,} where Δ = ∗ d ∗ d {\displaystyle \Delta =*d*d} is the (positive spectrum) Hodge Laplacian, i.e., the opposite of the usual trace of the Hessian. In particular, given a point p {\displaystyle p} in a Riemannian manifold, it is always possible to find metrics conformal to the given metric g {\displaystyle g} for which the Ricci tensor vanishes at p {\displaystyle p} . Note, however, that this is only a pointwise assertion; it is usually impossible to make the Ricci curvature vanish identically on the entire manifold by a conformal rescaling. For two dimensional manifolds, the above formula shows that if f {\displaystyle f} is a harmonic function, then the conformal scaling g ↦ e 2 f g {\displaystyle g\mapsto e^{2f}g} does not change the Ricci tensor (although it still changes its trace with respect to the metric unless f = 0 {\displaystyle f=0} ).
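The two-dimensional remark above can be verified directly (a sketch assuming sympy, with the hypothetical harmonic choice f = x² − y²): the conformally flat metric e^{2f}(dx² + dy²) remains Ricci-flat, just like the flat metric itself:

```python
import sympy as sp

# Conformally flat 2-d metric e^{2f} (dx^2 + dy^2) with f harmonic.
x, y = sp.symbols('x y')
X = [x, y]
f = x**2 - y**2                    # hypothetical harmonic function: f_xx + f_yy = 0
g = sp.exp(2*f) * sp.eye(2)
ginv = g.inv()
n = 2

# Christoffel symbols Gamma^c_{ab}
Gamma = [[[sum(ginv[c, d]*(sp.diff(g[b, d], X[a]) + sp.diff(g[a, d], X[b])
               - sp.diff(g[a, b], X[d]))/2 for d in range(n))
           for b in range(n)] for a in range(n)] for c in range(n)]

# Ricci tensor in coordinates
def ricci(i, j):
    return sp.simplify(sum(
        sp.diff(Gamma[a][i][j], X[a]) - sp.diff(Gamma[a][a][i], X[j])
        + sum(Gamma[a][a][b]*Gamma[b][i][j] - Gamma[a][i][b]*Gamma[b][a][j]
              for b in range(n))
        for a in range(n)))

# the rescaled metric is still Ricci-flat, as the n = 2 formula predicts
assert all(ricci(i, j) == 0 for i in range(n) for j in range(n))
```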
== Trace-free Ricci tensor == In Riemannian geometry and pseudo-Riemannian geometry, the trace-free Ricci tensor (also called traceless Ricci tensor) of a Riemannian or pseudo-Riemannian n {\displaystyle n} -manifold ( M , g ) {\displaystyle \left(M,g\right)} is the tensor defined by Z = Ric − 1 n R g , {\displaystyle Z=\operatorname {Ric} -{\frac {1}{n}}Rg,} where Ric {\displaystyle \operatorname {Ric} } and R {\displaystyle R} denote the Ricci curvature and scalar curvature of g {\displaystyle g} . The name of this object reflects the fact that its trace automatically vanishes: tr g ⁡ Z ≡ g a b Z a b = 0. {\displaystyle \operatorname {tr} _{g}Z\equiv g^{ab}Z_{ab}=0.} However, it is quite an important tensor since it reflects an "orthogonal decomposition" of the Ricci tensor. === The orthogonal decomposition of the Ricci tensor === Directly from the definition, one has the decomposition Ric = Z + 1 n R g . {\displaystyle \operatorname {Ric} =Z+{\frac {1}{n}}Rg.} It is less immediately obvious that the two terms on the right hand side are orthogonal to each other: ⟨ Z , 1 n R g ⟩ g ≡ R n g a b ( R a b − 1 n R g a b ) = 0. {\displaystyle \left\langle Z,{\frac {1}{n}}Rg\right\rangle _{g}\equiv {\frac {R}{n}}g^{ab}\left(R_{ab}-{\frac {1}{n}}Rg_{ab}\right)=0.} An identity which is intimately connected with this (but which could be proved directly) is that | Ric | g 2 = | Z | g 2 + 1 n R 2 . {\displaystyle \left|\operatorname {Ric} \right|_{g}^{2}=|Z|_{g}^{2}+{\frac {1}{n}}R^{2}.} === The trace-free Ricci tensor and Einstein metrics === By taking a divergence, and using the contracted Bianchi identity, one sees that Z = 0 {\displaystyle Z=0} implies 1 2 d R − 1 n d R = 0 {\textstyle {\frac {1}{2}}dR-{\frac {1}{n}}dR=0} . So, provided that n ≥ 3 and M {\displaystyle M} is connected, the vanishing of Z {\displaystyle Z} implies that the scalar curvature is constant.
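The orthogonality and norm identities above are pure (multi)linear algebra at a point, so they can be sanity-checked numerically with a random metric and a random symmetric tensor standing in for Ric. This is an illustrative sketch (assuming NumPy), with no actual manifold involved:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random SPD "metric" g_ab and a random symmetric 2-tensor standing in for Ric_ab;
# the orthogonal-decomposition identities hold for any symmetric 2-tensor.
A = rng.standard_normal((n, n))
g = A @ A.T + n * np.eye(n)      # symmetric positive definite g_ab
ginv = np.linalg.inv(g)          # g^{ab}
B = rng.standard_normal((n, n))
Ric = (B + B.T) / 2              # symmetric stand-in for the Ricci tensor

R = np.trace(ginv @ Ric)         # "scalar curvature" R = g^{ab} R_ab
Z = Ric - (R / n) * g            # trace-free part

def inner(S, T):
    """Metric inner product <S,T>_g = g^{ac} g^{bd} S_ab T_cd for symmetric S, T."""
    return np.trace(ginv @ S @ ginv @ T)

assert abs(np.trace(ginv @ Z)) < 1e-10                        # tr_g Z = 0
assert abs(inner(Z, (R / n) * g)) < 1e-10                     # orthogonality
assert abs(inner(Ric, Ric) - inner(Z, Z) - R**2 / n) < 1e-10  # |Ric|^2 = |Z|^2 + R^2/n
```

The key fact used here is that |g|²_g = g^{ac}g^{bd}g_ab g_cd = n, which is why the pure-trace part contributes exactly R²/n.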
One can then see that the following are equivalent: Z = 0 {\displaystyle Z=0} Ric = λ g {\displaystyle \operatorname {Ric} =\lambda g} for some number λ {\displaystyle \lambda } Ric = 1 n R g {\displaystyle \operatorname {Ric} ={\frac {1}{n}}Rg} In the Riemannian setting, the above orthogonal decomposition shows that R 2 = n | Ric | 2 {\displaystyle R^{2}=n|\operatorname {Ric} |^{2}} is also equivalent to these conditions. In the pseudo-Riemannian setting, by contrast, the condition | Z | g 2 = 0 {\displaystyle |Z|_{g}^{2}=0} does not necessarily imply Z = 0 , {\displaystyle Z=0,} so the most that one can say is that these conditions imply R 2 = n | Ric | g 2 . {\displaystyle R^{2}=n\left|\operatorname {Ric} \right|_{g}^{2}.} In particular, the vanishing of the trace-free Ricci tensor characterizes Einstein manifolds, as defined by the condition Ric = λ g {\displaystyle \operatorname {Ric} =\lambda g} for a number λ . {\displaystyle \lambda .} In general relativity, this equation states that ( M , g ) {\displaystyle \left(M,g\right)} is a solution of Einstein's vacuum field equations with cosmological constant. == Kähler manifolds == On a Kähler manifold X {\displaystyle X} , the Ricci curvature determines the curvature form of the canonical line bundle (Moroianu 2007, Chapter 12). The canonical line bundle is the top exterior power of the bundle of holomorphic Kähler differentials: κ = ⋀ n Ω X . {\displaystyle \kappa ={\textstyle \bigwedge }^{n}~\Omega _{X}.} The Levi-Civita connection corresponding to the metric on X {\displaystyle X} gives rise to a connection on κ {\displaystyle \kappa } . The curvature of this connection is the 2-form defined by ρ ( X , Y ) = def Ric ⁡ ( J X , Y ) {\displaystyle \rho (X,Y)\;{\stackrel {\text{def}}{=}}\;\operatorname {Ric} (JX,Y)} where J {\displaystyle J} is the complex structure map on the tangent bundle determined by the structure of the Kähler manifold. The Ricci form is a closed 2-form.
Its cohomology class is, up to a real constant factor, the first Chern class of the canonical bundle, and is therefore a topological invariant of X {\displaystyle X} (for compact X {\displaystyle X} ) in the sense that it depends only on the topology of X {\displaystyle X} and the homotopy class of the complex structure. Conversely, the Ricci form determines the Ricci tensor by Ric ⁡ ( X , Y ) = ρ ( X , J Y ) . {\displaystyle \operatorname {Ric} (X,Y)=\rho (X,JY).} In local holomorphic coordinates z α {\displaystyle z^{\alpha }} , the Ricci form is given by ρ = − i ∂ ∂ ¯ log ⁡ det ( g α β ¯ ) {\displaystyle \rho =-i\partial {\overline {\partial }}\log \det \left(g_{\alpha {\overline {\beta }}}\right)} where ∂ is the Dolbeault operator and g α β ¯ = g ( ∂ ∂ z α , ∂ ∂ z ¯ β ) . {\displaystyle g_{\alpha {\overline {\beta }}}=g\left({\frac {\partial }{\partial z^{\alpha }}},{\frac {\partial }{\partial {\overline {z}}^{\beta }}}\right).} If the Ricci tensor vanishes, then the canonical bundle is flat, so the structure group can be locally reduced to a subgroup of the special linear group S L ( n ; C ) {\displaystyle SL(n;\mathbb {C} )} . However, Kähler manifolds already possess holonomy in U ( n ) {\displaystyle U(n)} , and so the (restricted) holonomy of a Ricci-flat Kähler manifold is contained in S U ( n ) {\displaystyle SU(n)} . Conversely, if the (restricted) holonomy of a 2 n {\displaystyle 2n} -dimensional Riemannian manifold is contained in S U ( n ) {\displaystyle SU(n)} , then the manifold is a Ricci-flat Kähler manifold (Kobayashi & Nomizu 1996, IX, §4). == Generalization to affine connections == The Ricci tensor can also be generalized to arbitrary affine connections, where it is an invariant that plays an especially important role in the study of projective geometry (geometry associated to unparameterized geodesics) (Nomizu & Sasaki 1994).
If ∇ {\displaystyle \nabla } denotes an affine connection, then the curvature tensor R {\displaystyle R} is the (1,3)-tensor defined by R ( X , Y ) Z = ∇ X ∇ Y Z − ∇ Y ∇ X Z − ∇ [ X , Y ] Z {\displaystyle R(X,Y)Z=\nabla _{X}\nabla _{Y}Z-\nabla _{Y}\nabla _{X}Z-\nabla _{[X,Y]}Z} for any vector fields X , Y , Z {\displaystyle X,Y,Z} . The Ricci tensor is defined to be the trace: ric ⁡ ( X , Y ) = tr ⁡ ( Z ↦ R ( Z , X ) Y ) . {\displaystyle \operatorname {ric} (X,Y)=\operatorname {tr} {\big (}Z\mapsto R(Z,X)Y{\big )}.} In this more general situation, the Ricci tensor is symmetric if and only if there exists locally a parallel volume form for the connection. == Discrete Ricci curvature == Notions of Ricci curvature on discrete manifolds have been defined on graphs and networks, where they quantify local divergence properties of edges. Ollivier's Ricci curvature is defined using optimal transport theory. A different (and earlier) notion, Forman's Ricci curvature, is based on topological arguments. == See also == == Footnotes == == References == Besse, A.L. (1987), Einstein manifolds, Springer, ISBN 978-3-540-15279-8. Chow, Bennett; Knopf, Dan (2004), The Ricci Flow: an introduction, American Mathematical Society, ISBN 0-8218-3515-7. Eisenhart, L.P. (1949), Riemannian geometry, Princeton Univ. Press. Forman (2003), "Bochner's Method for Cell Complexes and Combinatorial Ricci Curvature", Discrete & Computational Geometry, 29 (3): 323–374. doi:10.1007/s00454-002-0743-x. ISSN 1432-0444 Galloway, Gregory (2000), "Maximum Principles for Null Hypersurfaces and Null Splitting Theorems", Annales de l'Institut Henri Poincaré A, 1 (3): 543–567, arXiv:math/9909158, Bibcode:2000AnHP....1..543G, doi:10.1007/s000230050006, S2CID 9619157. Kobayashi, S.; Nomizu, K. (1963), Foundations of Differential Geometry, Volume 1, Interscience. Kobayashi, Shoshichi; Nomizu, Katsumi (1996), Foundations of Differential Geometry, Vol. 2, Wiley-Interscience, ISBN 978-0-471-15732-8.
Lohkamp, Joachim (1994), "Metrics of negative Ricci curvature", Annals of Mathematics, Second Series, 140 (3), Annals of Mathematics: 655–683, doi:10.2307/2118620, ISSN 0003-486X, JSTOR 2118620, MR 1307899. Moroianu, Andrei (2007), Lectures on Kähler geometry, London Mathematical Society Student Texts, vol. 69, Cambridge University Press, arXiv:math/0402223, doi:10.1017/CBO9780511618666, ISBN 978-0-521-68897-0, MR 2325093, S2CID 209824092 Nomizu, Katsumi; Sasaki, Takeshi (1994), Affine differential geometry, Cambridge University Press, ISBN 978-0-521-44177-3. Ollivier, Yann (2009), "Ricci curvature of Markov chains on metric spaces", Journal of Functional Analysis 256 (3): 810–864. doi:10.1016/j.jfa.2008.11.001. ISSN 0022-1236 Ricci, G. (1903–1904), "Direzioni e invarianti principali in una varietà qualunque", Atti R. Inst. Veneto, 63 (2): 1233–1239. L.A. Sidorov (2001) [1994], "Ricci tensor", Encyclopedia of Mathematics, EMS Press L.A. Sidorov (2001) [1994], "Ricci curvature", Encyclopedia of Mathematics, EMS Press Najman, Laurent and Romon, Pascal (2017): Modern approaches to discrete curvature, Springer (Cham), Lecture notes in mathematics == External links == Z. Shen, C. Sormani "The Topology of Open Manifolds with Nonnegative Ricci Curvature" (a survey) G. Wei, "Manifolds with A Lower Ricci Curvature Bound" (a survey)
A Treatise on the Analytical Dynamics of Particles and Rigid Bodies is a treatise and textbook on analytical dynamics by British mathematician Sir Edmund Taylor Whittaker. Initially published in 1904 by the Cambridge University Press, the book focuses heavily on the three-body problem and has since gone through four editions and has been translated into German and Russian. Considered a landmark book in English mathematics and physics, the treatise presented what was the state-of-the-art at the time of publication and, having remained in print for more than a hundred years, is regarded as a classic textbook in the subject. In addition to the original editions published in 1904, 1917, 1927, and 1937, a reprint of the fourth edition was released in 1989 with a new foreword by William Hunter McCrea. The book was very successful and received many positive reviews. A 2014 "biography" of the book's development wrote that it had "remarkable longevity" and noted that the book remains more than historically influential. Among many others, G. H. Bryan, E. B. Wilson, P. Jourdain, G. D. Birkhoff, T. M. Cherry, and R. Thiele have reviewed the book. The 1905 review of the first edition by G. H. Bryan, who wrote reviews for the first two editions, sparked controversy among Cambridge University professors related to the use of Cambridge Tripos problems in textbooks. The book is mentioned in other textbooks as well, including Classical Mechanics, where Herbert Goldstein argued in 1980 that, although the book is outdated, it remains "a practically unique source for the discussion of many specialized topics." == Background == Whittaker was 31 years old and working as a lecturer at Trinity College, Cambridge when the book was first published, less than ten years after he graduated from Cambridge University in 1895.
Whittaker was ranked Second Wrangler in his Cambridge Tripos examination upon graduation in 1895 and elected as a Fellow of Trinity College, Cambridge the next year, where he remained as a lecturer until 1906. Whittaker published his first major work, the celebrated mathematics textbook A Course of Modern Analysis, in 1902, just two years before Analytical Dynamics. Following the success of these works, Whittaker was appointed Royal Astronomer of Ireland in 1906, which came with the role of Andrews Professor of Astronomy at Trinity College, Dublin. The second half of the treatise is an expanded version of a report Whittaker completed on the three-body problem at the turn of the century at the request of the British Science Association (then called the British Association for the Advancement of Science). In 1898, the council of the British Association passed a resolution that "Mr E. T. Whittaker be requested to draw up a report on the planetary theory". A year later, Whittaker delivered his report, titled “Report on the progress of the solution of the problem of three bodies”, in a lecture to the Association, who published it in 1900. He changed the name from the original "report on the planetary theory" to, in his own words, show "more definitely the aim of the Report", which covered the advances in theoretical astronomy that occurred between 1868 and 1898. == Content == The book is a thorough treatment of analytical dynamics, covering topics in Hamiltonian mechanics and celestial mechanics and the three-body problem. It has been noted that the book can be divided naturally into two parts: Part one, consisting of the first twelve chapters, covers the basic principles of dynamics, giving a "state-of-the-art introduction to the principles of dynamics as they stood in the first years of the twentieth century", while part two, consisting of the final four chapters, is based on Whittaker's report on the three-body problem.
While the first part remained mostly constant throughout the book's multiple editions, the second part was expanded considerably in the second and third editions. === History === The book's structure remained constant throughout its development, with sixteen total chapters, though the second and third editions added new sections throughout. Among other changes to the book, Whittaker expanded chapters fifteen and sixteen considerably and renamed chapters nine and sixteen. The title of chapter nine, The Principles of Least Action and Least Curvature, was The principles of Hamilton and Gauss before being renamed in the second edition and the title of chapter sixteen, Integration by series, was Integration by trigonometric series before being renamed for the third edition. The first edition had 188 total consecutively numbered sections, which increased in the second and third editions of the book. Among the most heavily altered, chapter fifteen went from fourteen sections to twenty-two while chapter sixteen doubled its section count from nine to eighteen. Most of the differences between the second and third editions consisted of added outlines of, and references to, works published after the book's second edition. The third edition included a major rewrite of chapters fifteen and sixteen to update the book considering developments that had occurred in the eleven years since the publication of the second edition. The first fourteen chapters of the third edition were photolithographically reproduced from the second edition, with some corrections and added references. The new material contained a section on Synge’s geometry of dynamics and tensor analysis. The fourth edition, published in 1937, differed from the third edition only in correcting some errors and supplying references to works published after the previous edition; aside from a new foreword by William Hunter McCrea in a 1989 reprint, the volume represented the book in its ultimate form.
=== Synopsis === Part I of the book has been said to give a "state-of-the-art introduction to the principles of dynamics as they were understood in the first years of the twentieth century". The first chapter, on kinematic preliminaries, discusses the mathematical formalism required for describing the motion of rigid bodies. The second chapter begins the advanced study of mechanics, with topics beginning with relatively simple concepts such as motion and rest, frame of reference, mass, force, and work before discussing kinetic energy, introducing Lagrangian mechanics, and discussing impulsive motions. Chapter three discusses the integration of equations of motion at length, the conservation of energy and its role in reducing degrees of freedom, and separation of variables. Chapters one through three focus only on systems of point masses. The first concrete examples of dynamic systems, including the pendulum, central forces, and motion on a surface, are introduced in chapter four, where the methods of the previous chapters are employed in solving problems. Chapter five introduces the moment of inertia and angular momentum to prepare for the study of the dynamics of rigid bodies. Chapter six focuses on the solutions of problems in rigid body dynamics, with exercises including "motion of a rod on which an insect is crawling" and the motion of a spinning top. Chapter seven covers the theory of vibrations, a standard component of mechanics textbooks. Chapter eight introduces dissipative and nonholonomic systems, up to which point all the systems discussed were holonomic and conservative. Chapter nine discusses action principles, such as the principle of least action and the principle of least curvature. Chapters ten through twelve, the final three chapters of part one, discuss Hamiltonian dynamics at length. 
Chapter thirteen begins part two and focuses on the applications of the material in part one to the three-body problem, where he introduces both the general problem and several restricted examples. Chapter fourteen includes a proof of Bruns's theorem and a similar proof of a theorem by Henri Poincaré on "the non-existence of a certain type of integrals in the problem of three bodies". Chapter fifteen, The General Theory of Orbits, describes two-dimensional mechanics of a particle subject to conservative forces and discusses special-case solutions of the three-body problem. The last chapter includes discussions of solutions of the problems of previous chapters by integration of series, particularly trigonometric series. == Reception == Receiving generally positive reviews throughout, the book has gone through four editions, each with multiple reviews. A reviewer of the first edition noted that the book contains "the outlines of a long series of researches for which hitherto it has been necessary to consult English, French, German, and Italian transactions". One of those first edition reviews, by George H. Bryan in 1905, began a controversy among Cambridge University professors related to the use of Cambridge Tripos problems in textbooks. In 1980, Herbert Goldstein mentioned the book in his famous textbook Classical Mechanics where he noted that it was outdated, but remained a useful reference for some specialised topics. While it is a historic textbook on the subject, presenting what was the state-of-the-art at the time of publication, a 2014 "biography" of the book's development pointed out that the book remains influential for more than historical purposes. === First edition === The first edition of the book received several reviews, including George H. Bryan in 1905 and Edwin Bidwell Wilson in 1906, as well as German reviews by Gustav Herglotz, also in 1906, and Emil Lampe in 1918.
Lampe called the treatise an "excellent work" and states that Cambridge's treatment of analytical dynamics "has had, as a consequence, that the English student is directed with great energy towards the study of mechanics in which he displays excellent performance, as can be gauged from the many, and not at all easy, problems appended at the end of each chapter of this book." Bryan's initial book review, published in 1905, was a review of three books published by the Cambridge University Press at around the same time. Bryan opens the review by writing that, though he does not care for the "University Presses competing with private firms", he believes "there can only be one opinion as to the series of standard treatises on higher mathematics emanating at the present time from Cambridge". He then noted that England's "lack of national interest in higher scientific research, particularly mathematical research, stands far behind most other important civilised countries" and thus it was necessary for the "University Press to publish advanced mathematical works." He went on to write: "We may take it as certain that the present volumes will be keenly read in Germany and America, and will be taken as proofs that England contains good mathematicians." Bryan criticised chapter four, The Soluble Problems of Analytical Dynamics, for "mostly [representing] things which have no existence". Sparking a controversy published under the title "Fictitious Problems in Mathematics", Bryan goes on to write: "It is impossible for a particle to move on a smooth curve or surface because, in the first place, there is no such thing as a particle, and in the second place there is no such thing as a smooth curve or surface." Bryan went on to write that the book is "essentially mathematical and advanced" and "written mainly for the advanced mathematician".
Wilson's review was published in 1906 and began with an expression of distaste for the "imminent encroachment by pure mathematics of territory that traditionally belonged to applied mathematics", but then quickly states that at that time "there seems no immediate danger" as three recent books published by the Cambridge University Press were "highly important volumes" that "exhibit great mathematical power and attainments directed firmly and unerringly along the direction of physical research". Noting the novelty of many of the sections in the book, Wilson wrote that the book "breaks the barricade and opens the way to fruitful advance". He then noted that the book is advanced and, though it is self-contained, it is not for a beginning student. He elaborated by writing that "the book is mathematical in nature, written with a precision and developed with a logic sure to appeal to mathematicians" and the "diversity of method taken with the compact style makes the book hard reading for any but the somewhat advanced student". Wilson also expressed a desire to have topics such as statistical mechanics added to the textbook. === Fictitious Problems in Mathematics === The review George H. Bryan published in Nature on 27 April 1905 sparked controversy among Cambridge professors at the time. The review received several notable responses from Whittaker's colleagues, although Whittaker himself never publicly spoke of it. The main actors in the polemic, other than Whittaker and Bryan, are an anonymous professor referred to only as "An Old Average College Don", Alfred Barnard Basset, Edward Routh, and Charles Baron Clarke. The controversy revolved around Bryan's claim that many of the problems included in the book are "fictitious", similar to those used in the Cambridge Tripos examinations. 
Of particular contention was Bryan's statement that a "perfectly rough body placed on a perfectly smooth surface forms as interesting a subject for speculation as the well-known irresistible body meeting the impenetrable obstacle" and that "[w]hat the average college don forgets is that roughness or smoothness are matters which concern two surfaces, not one body". The controversy stretched from 18 May to 22 June with letters on the dispute published in five issues of Nature. A reviewer later wrote that "100 years after they were written, it is difficult not to view the whole polemic as prompted by a bout of hair-splitting on the part of Bryan", though it was acknowledged that Bryan's original claim was "undoubtedly correct" and the "polemic" was likely a misunderstanding. The 18 May issue of Nature contained two letters starting the controversy: the first was an anonymous response under the title "Fictitious Problems in Mathematics" from an author referring to themself only as An Old Average College Don, while the second was a response from Bryan under the same title. The old college don charged Bryan to point to a page number where such problems are used, while Bryan responded by saying that the problems are ubiquitous and finding the places where the correct definition is used is easier than pointing out all the places where it is wrong. In the 25 May issue of Nature, Alfred Barnard Basset and Edward Routh joined the debate. Routh explained that when "bodies are said to be perfectly rough, it is usually meant that they are so rough that the amount of friction necessary to prevent sliding in the given circumstances can certainly be called into play" and states that the statements are abbreviations meant to "make the question concise". In a similar tone, Basset wrote that the wording is used to designate "an ideal state of matter". The 1 June issue of Nature contained a response from Charles Baron Clarke and another rebuttal from Bryan.
Charles Baron Clarke insinuates that he is the "Old Average College Don" that wrote the first anonymous letter, and again emphasises his original complaint. The final two letters of the controversy were published by Routh and Bryan on the eighth and twenty-second of June, respectively. === Second and third editions === The second and third editions received several reviews, including another one from George H. Bryan as well as Philip Jourdain, George David Birkhoff, and Thomas MacFarland Cherry. Jourdain published two similar reviews of the second edition in different journals, both in 1917. The more detailed of the two, published in The Mathematical Gazette, summarises the book's topics before making several criticisms of specific parts of the book, including the "neglect of work published from 1904 to 1908" on research over Hamilton's principle and the principle of least action. After listing several other problems, Jourdain ends the review by stating that "all these criticisms do not touch the very great value of the book which has been and will be the chief path by which students in English speaking countries have been and will be introduced to modern work on the general and special problems of dynamics." Bryan also reviewed the second edition of the book in 1918 in which he criticises the book for not including the dynamics of aeroplanes, a lapse Bryan believes was acceptable for the first but not for the second edition of the book. After discussing more about aeroplanes and the development of their dynamics, Bryan closes the review by stating that the book "will be found of much use by such students of a future generation as are able to find time to extend their study of particle and rigid dynamics outside the requirements of aerial navigation" and that it would serve as "a valuable source of information for those who are in search of new material of a theoretical character which they can take over and apply to any particular class of investigation." 
George David Birkhoff wrote a review in 1920 stating that the book is "invaluable as a condensed and suggestive presentation of the formal side of analytical dynamics". Birkhoff also includes several criticisms of the book, including stating it was incomplete in some respects, pointing to the methods used in chapter sixteen on trigonometric series. The third edition, published in 1927, was reviewed by Thomas MacFarland Cherry, among others. Cherry's 1928 review stated that the book "has long been recognized as the standard advanced textbook in this subject". Concerning the newly rewritten chapter fifteen, The General Theory of Orbits, he wrote that for the most part "the account given is illustrative and introductory in nature, and from this point of view it is excellent and is a great improvement on the previous edition", but that overall "the chapter hardly lives up to its title." On chapter sixteen, also newly rewritten, he commented further that in treating the formal solutions for Hamiltonian systems using trigonometric series, the third edition replaced the method used in previous editions with a new one published by Whittaker in 1916, which Cherry states "must be regarded as suggestive rather than conclusive", noting that not all applicable proofs are included. He finishes by saying that the "optimistic view" the book takes toward the convergence of trigonometric series can be criticised, closing his review by saying "though the question is a difficult one, all the evidence suggests that the series are generally divergent and only exceptionally convergent." Another reviewer expressed regret that the work of George David Birkhoff was not included in the third edition. === Fourth edition === The final edition of the book, published in 1937, has received several reviews, including a 1990 review in German by Rüdiger Thiele.
Another reviewer of the final edition noted that the discussion of the three-body problem is brief and advanced such that it "will be difficult reading for one not already acquainted with the subject" and that the references to then-recent American articles were incomplete, pointing to specific examples relating to the stability of the equilateral triangle positions for three finite masses. The same reviewer then argued that "this does not detract from the merit of the text, which this reviewer regards as the best in its field in the English language." Another reviewer in 1938 claims that the attainment of a fourth edition "shows that it has become the standard work on the topics with which it deals." According to Victor Lenzen in 1952, the book was "still the best exposition of the subject on the highest possible level". In the second edition of his Classical Mechanics, published in 1980, Herbert Goldstein wrote that this was a comprehensive, albeit outdated, treatment of analytical mechanics with discussions of topics and side notes rarely found elsewhere, such as the examination of which central forces are soluble in terms of elliptic functions. However, he criticised the book for having no diagrams, which harmed the sections on topics such as the Euler angles; for a tendency to make things more complicated than necessary; for a refusal to use vector notation; and for "pedantic" problems of the kind found on the Cambridge Tripos examination. Despite the book's problems and its need to be updated, he went on to write: "It remains, however, a practically unique source for the discussion of many specialized topics." === Influence === The book quickly became a classic textbook in its subject and is said to have "remarkable longevity", having remained in print almost continuously since its initial release over a hundred years ago.
While it is a historic textbook on the subject, presenting what was the state-of-the-art at the time of publication, it was noted in a 2014 "biography" of the book's development that it is not "used merely as a historical document", highlighting that only three of 114 books and papers that cited the textbook between 2000 and 2012 were historical in nature. In that time, a 2006 engineering textbook, Principles of Engineering Mechanics, stated that the book is "highly recommended to advanced readers" and that it remained "one of the best mathematical treatments of analytical dynamics". In a 2015 article on modern dynamics, Miguel Ángel Fernández Sanjuán wrote: "When we think about textbooks used for the teaching of mechanics in the last century, we may think on the book A Treatise on the Analytical Dynamics of Particles and Rigid Bodies" as well as Principles of Mechanics by John L. Synge and Byron A. Griffith, and Classical Mechanics by Herbert Goldstein. During the 1910s, Albert Einstein was working on his general theory of relativity when he contacted Constantin Carathéodory asking for clarifications on the Hamilton–Jacobi equation and canonical transformations. He wanted to see a satisfactory derivation of the former and the origins of the latter. Carathéodory explained some fundamental details of the canonical transformations and referred Einstein to E. T. Whittaker's Analytical Dynamics. Einstein was trying to solve the problem of "closed time-lines" or the geodesics corresponding to the closed trajectory of light and free particles in a static universe, which he introduced in 1917. Paul Dirac, a pioneer of quantum mechanics, is said to be "indebted" to the book, as it contained the only material he could find on Poisson brackets, which he needed to finish his work on quantum mechanics in the 1920s. In September 1925, Dirac received proofs of a seminal paper by Werner Heisenberg on the new physics.
Soon he realised that the key idea in Heisenberg's paper was the anti-commutativity of dynamical variables and remembered that the analogous mathematical construction in classical mechanics was Poisson brackets. In a 1980 review of other works, Ian Sneddon stated that the "theoretical work of the century and more after the death of Lagrange was crystallized by E. T. Whittaker in a treatise Whittaker (1904) which has not been superseded as the definitive account of classical mechanics". In another 1980 review of other works, Shlomo Sternberg states that the books reviewed "should be on the shelf of every serious student of mechanics. One would like to be able to report that such a collection would be complete. Unfortunately, this is not so. There exist topics in the classical repertoire, such as Kowalewskaya's top which are not covered by any of these books. So hold on to your copy of Whittaker (1904)". == Publication history == The treatise has remained in print for more than a hundred years, with four editions, a 1989 reprint with a new foreword by William Hunter McCrea, and translations in German and Russian. === Original editions === The original four editions of the textbook were published in Great Britain by the Cambridge University Press in 1904, 1917, 1927, and 1937. Whittaker, E. T. (1904). A treatise on the analytical dynamics of particles and rigid bodies: with an introduction to the problem of three bodies (1st ed.). Cambridge: Cambridge University Press. OCLC 1110228082. Whittaker, E. T. (1917). A treatise on the analytical dynamics of particles and rigid bodies; with an introduction to the problem of three bodies (2nd ed.). Cambridge: Cambridge University Press. OCLC 352133. Whittaker, E. T. (1927). A treatise on the analytical dynamics of particles and rigid bodies: with an introduction to the problem of three bodies (3rd ed.). Cambridge: Cambridge University Press. OCLC 1020880124. Whittaker, E. T. (1937).
A treatise on the analytical dynamics of particles and rigid bodies: with an introduction to the problem of three bodies (4th ed.). Cambridge: Cambridge University Press. OCLC 959757497. === Reprints and international editions === In addition to the four editions and the reprints which have kept the book in circulation in English for the past hundred years, the book has a German edition, printed in 1924 and based on the second edition, as well as a Russian edition printed in 1999. A reprint of the fourth edition in English, with a new foreword by William Hunter McCrea, was published in 1989. Whittaker, E. T.; Mittelsten, F.; Mittelsten, K. (1924). Analytische Dynamik der Punkte und Starren Körper: Mit Einer Einführung in das Dreikörperproblem und mit Zahlreichen Übungsaufgaben. Grundlehren der mathematischen Wissenschaften (in German). Berlin Heidelberg: Springer-Verlag. ISBN 978-3-662-24567-5. Whittaker, E. T. (1937). A treatise on the analytical dynamics of particles and rigid bodies: with an introduction to the problem of three bodies (in Spanish) (4th ed.). Cambridge: Cambridge University Press. OCLC 1123785221. Whittaker, E. T. (1988). A treatise on the analytical dynamics of particles and rigid bodies: with an introduction to the problem of three bodies (4th ed.). Cambridge: Cambridge University Press. ISBN 0-521-35883-3. OCLC 264423700. Whittaker, E. T. (1988). A treatise on the analytical dynamics of particles and rigid bodies: with an introduction to the problem of three bodies (4th ed.). Cambridge: Cambridge University Press. doi:10.1017/CBO9780511608797. ISBN 978-0-511-60879-7. OCLC 967696618. (online) Whittaker, E. T. (1999). A treatise on the analytical dynamics of particles and rigid bodies: with an introduction to the problem of three bodies. McCrea, W. H. (foreword) (4th ed.). Cambridge: Cambridge University Press. ISBN 978-1-316-04314-1. OCLC 1100677089.
Уиттекер, Э. (2004). Аналитическая динамика (in Russian). Russia: Editorial URSS. ISBN 5-354-00849-2. == See also == Bibliography of E. T. Whittaker Classical Mechanics, a textbook on similar topics by Herbert Goldstein List of textbooks on classical mechanics and quantum mechanics == References == == Further reading == == External links == Full text of A treatise on the analytical dynamics of particles and rigid bodies (3rd edition) at the Internet Archive Whittaker, E. T.; McCrea, Sir William (1988). A Treatise on the Analytical Dynamics of Particles and Rigid Bodies. Cambridge University Press. doi:10.1017/CBO9780511608797. ISBN 9780521358835. Retrieved 9 November 2020.
Wikipedia/A_Treatise_on_the_Analytical_Dynamics_of_Particles_and_Rigid_Bodies
Structure and Interpretation of Classical Mechanics (SICM) is a classical mechanics textbook written by Gerald Jay Sussman and Jack Wisdom with Meinhard E. Mayer. The first edition was published by MIT Press in 2001, and a second edition was released in 2015. The book is used at the Massachusetts Institute of Technology to teach a class in advanced classical mechanics, starting with Lagrange's equations and proceeding through canonical perturbation theory. SICM explains some physical phenomena by showing computer programs for simulating them. These programs are written in the Scheme programming language, as were the programs in Sussman's earlier computer science textbook, Structure and Interpretation of Computer Programs. Sussman wrote: Classical mechanics is deceptively simple. It is surprisingly easy to get the right answer with fallacious reasoning or without the real understanding. To address this problem Jack Wisdom and I, with help from Hardy Mayer, have written [Structure and Interpretation of Classical Mechanics] and are teaching a class at MIT that uses computational techniques to communicate a deeper understanding of Classical mechanics. We use computational algorithms to express the methods used to analyze dynamical phenomena. Expressing the methods in a computer language forces them to be unambiguous and computationally effective. Formulating a method as a computer-executable program and debugging that program is a powerful exercise in the learning process. Also, once formalized procedurally, a mathematical idea becomes a tool that can be used directly to compute results. The entire text is freely available online from the publisher's website. == Editions == Sussman, Gerald Jay; Wisdom, Jack; Mayer, Meinhard E. (2001). Structure and Interpretation of Classical Mechanics. Cambridge, Massachusetts: MIT Press. ISBN 0262194554. OCLC 45223598. Sussman, Gerald Jay; Wisdom, Jack (6 February 2015). 
Structure and interpretation of classical mechanics (Second ed.). Cambridge, Massachusetts: MIT Press. ISBN 9780262028967. OCLC 905916340. == References == == External links == Publisher page with open access link on publisher site; direct link to HTML from there. Full text in HTML (first edition) on co-author Gerald Sussman's site OCW MIT OpenCourseWare class materials for course 6.946, Fall 2008.
Wikipedia/Structure_and_Interpretation_of_Classical_Mechanics
The classical rocket equation, or ideal rocket equation, is a mathematical equation that describes the motion of vehicles that follow the basic principle of a rocket: a device that can apply acceleration to itself using thrust by expelling part of its mass with high velocity and can thereby move due to the conservation of momentum. It is credited to Konstantin Tsiolkovsky, who independently derived it and published it in 1903, although it had been independently derived and published by William Moore in 1810, and later published in a separate book in 1813. Robert Goddard also developed it independently in 1912, and Hermann Oberth derived it independently about 1920. The maximum change of velocity of the vehicle, Δ v {\displaystyle \Delta v} (with no external forces acting) is: Δ v = v e ln ⁡ m 0 m f = I sp g 0 ln ⁡ m 0 m f , {\displaystyle \Delta v=v_{\text{e}}\ln {\frac {m_{0}}{m_{f}}}=I_{\text{sp}}g_{0}\ln {\frac {m_{0}}{m_{f}}},} where: v e {\displaystyle v_{\text{e}}} is the effective exhaust velocity; I sp {\displaystyle I_{\text{sp}}} is the specific impulse in dimension of time; g 0 {\displaystyle g_{0}} is standard gravity; ln {\displaystyle \ln } is the natural logarithm function; m 0 {\displaystyle m_{0}} is the initial total mass, including propellant, a.k.a. wet mass; m f {\displaystyle m_{f}} is the final total mass without propellant, a.k.a. dry mass. Given the effective exhaust velocity determined by the rocket motor's design, the desired delta-v (e.g., orbital speed or escape velocity), and a given dry mass m f {\displaystyle m_{f}} , the equation can be solved for the required wet mass m 0 {\displaystyle m_{0}} : m 0 = m f e Δ v / v e . {\displaystyle m_{0}=m_{f}e^{\Delta v/v_{\text{e}}}.} The required propellant mass is then m 0 − m f = m f ( e Δ v / v e − 1 ) {\displaystyle m_{0}-m_{f}=m_{f}(e^{\Delta v/v_{\text{e}}}-1)} The necessary wet mass grows exponentially with the desired delta-v.
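These relations are easy to evaluate numerically. The following is a minimal Python sketch; the specific impulse, dry mass, and target delta-v are made-up illustrative values, not figures from the article:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(v_e, m0, mf):
    """Tsiolkovsky rocket equation: delta-v from exhaust velocity and mass ratio."""
    return v_e * math.log(m0 / mf)

def wet_mass(v_e, mf, dv):
    """Inverted form: initial (wet) mass required for a given delta-v and dry mass."""
    return mf * math.exp(dv / v_e)

# Illustrative numbers: specific impulse 350 s, 10 t dry mass, 8 km/s target delta-v
v_e = 350 * G0                          # effective exhaust velocity, m/s
m0 = wet_mass(v_e, 10_000.0, 8_000.0)   # required wet mass, kg
propellant = m0 - 10_000.0              # required propellant mass, kg
```

Since the mass ratio is e raised to Δv/v_e, doubling the target delta-v squares the required mass ratio, which is the exponential growth noted above.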
== History == The equation is named after Russian scientist Konstantin Tsiolkovsky, who independently derived it and published it in his 1903 work. The equation had been derived earlier by the British mathematician William Moore in 1810, and later published in a separate book in 1813. American Robert Goddard independently developed the equation in 1912 when he began his research to improve rocket engines for possible space flight. German engineer Hermann Oberth independently derived the equation about 1920 as he studied the feasibility of space travel. While the derivation of the rocket equation is a straightforward calculus exercise, Tsiolkovsky is honored as being the first to apply it to the question of whether rockets could achieve speeds necessary for space travel. == Experiment of the boat == In order to understand the principle of rocket propulsion, Konstantin Tsiolkovsky proposed the famous experiment of "the boat". A person is in a boat away from the shore without oars. They want to reach the shore. They notice that the boat is loaded with a certain quantity of stones and have the idea of quickly and repeatedly throwing the stones in succession in the opposite direction. Effectively, the momentum of the stones thrown in one direction corresponds to an equal momentum gained by the boat in the other direction (ignoring friction and drag). == Derivation == === Most popular derivation === Consider the following system: In the following derivation, "the rocket" is taken to mean "the rocket and all of its unexpended propellant".
Newton's second law of motion relates external forces ( F → i {\displaystyle {\vec {F}}_{i}} ) to the change in linear momentum of the whole system (including rocket and exhaust) as follows: ∑ i F → i = lim Δ t → 0 P → Δ t − P → 0 Δ t {\displaystyle \sum _{i}{\vec {F}}_{i}=\lim _{\Delta t\to 0}{\frac {{\vec {P}}_{\Delta t}-{\vec {P}}_{0}}{\Delta t}}} where P → 0 {\displaystyle {\vec {P}}_{0}} is the momentum of the rocket at time t = 0 {\displaystyle t=0} : P → 0 = m V → {\displaystyle {\vec {P}}_{0}=m{\vec {V}}} and P → Δ t {\displaystyle {\vec {P}}_{\Delta t}} is the momentum of the rocket and exhausted mass at time t = Δ t {\displaystyle t=\Delta t} : P → Δ t = ( m − Δ m ) ( V → + Δ V → ) + Δ m V → e {\displaystyle {\vec {P}}_{\Delta t}=\left(m-\Delta m\right)\left({\vec {V}}+\Delta {\vec {V}}\right)+\Delta m{\vec {V}}_{\text{e}}} and where, with respect to the observer: V → {\displaystyle {\vec {V}}} is the velocity of the rocket at time t = 0 {\displaystyle t=0} V → + Δ V → {\displaystyle {\vec {V}}+\Delta {\vec {V}}} is the velocity of the rocket at time t = Δ t {\displaystyle t=\Delta t} V → e {\displaystyle {\vec {V}}_{\text{e}}} is the velocity of the mass added to the exhaust (and lost by the rocket) during time Δ t {\displaystyle \Delta t} m {\displaystyle m} is the mass of the rocket at time t = 0 {\displaystyle t=0} ( m − Δ m ) {\displaystyle \left(m-\Delta m\right)} is the mass of the rocket at time t = Δ t {\displaystyle t=\Delta t} The velocity of the exhaust V → e {\displaystyle {\vec {V}}_{\text{e}}} in the observer frame is related to the velocity of the exhaust in the rocket frame v e {\displaystyle v_{\text{e}}} by: v → e = V → e − V → {\displaystyle {\vec {v}}_{\text{e}}={\vec {V}}_{\text{e}}-{\vec {V}}} thus, V → e = V → + v → e {\displaystyle {\vec {V}}_{\text{e}}={\vec {V}}+{\vec {v}}_{\text{e}}} Solving this yields: P → Δ t − P → 0 = m Δ V → + v → e Δ m − Δ m Δ V → {\displaystyle {\vec {P}}_{\Delta t}-{\vec {P}}_{0}=m\Delta {\vec 
{V}}+{\vec {v}}_{\text{e}}\Delta m-\Delta m\Delta {\vec {V}}} If V → {\displaystyle {\vec {V}}} and v → e {\displaystyle {\vec {v}}_{\text{e}}} are opposite, F → i {\displaystyle {\vec {F}}_{\text{i}}} have the same direction as V → {\displaystyle {\vec {V}}} , Δ m Δ V → {\displaystyle \Delta m\Delta {\vec {V}}} are negligible (since d m d v → → 0 {\displaystyle dm\,d{\vec {v}}\to 0} ), and using d m = − Δ m {\displaystyle dm=-\Delta m} (since ejecting a positive Δ m {\displaystyle \Delta m} results in a decrease in rocket mass in time), ∑ i F i = m d V d t + v e d m d t {\displaystyle \sum _{i}F_{i}=m{\frac {dV}{dt}}+v_{\text{e}}{\frac {dm}{dt}}} If there are no external forces then ∑ i F i = 0 {\textstyle \sum _{i}F_{i}=0} (conservation of linear momentum) and − m d V d t = v e d m d t {\displaystyle -m{\frac {dV}{dt}}=v_{\text{e}}{\frac {dm}{dt}}} Assuming that v e {\displaystyle v_{\text{e}}} is constant (known as Tsiolkovsky's hypothesis), so it is not subject to integration, then the above equation may be integrated as follows: − ∫ V V + Δ V d V = v e ∫ m 0 m f d m m {\displaystyle -\int _{V}^{V+\Delta V}\,dV={v_{e}}\int _{m_{0}}^{m_{f}}{\frac {dm}{m}}} This then yields Δ V = v e ln ⁡ m 0 m f {\displaystyle \Delta V=v_{\text{e}}\ln {\frac {m_{0}}{m_{f}}}} or equivalently m f = m 0 e − Δ V / v e {\displaystyle m_{f}=m_{0}e^{-\Delta V\ /v_{\text{e}}}} or m 0 = m f e Δ V / v e {\displaystyle m_{0}=m_{f}e^{\Delta V/v_{\text{e}}}} or m 0 − m f = m f ( e Δ V / v e − 1 ) {\displaystyle m_{0}-m_{f}=m_{f}\left(e^{\Delta V/v_{\text{e}}}-1\right)} where m 0 {\displaystyle m_{0}} is the initial total mass including propellant, m f {\displaystyle m_{f}} the final mass, and v e {\displaystyle v_{\text{e}}} the velocity of the rocket exhaust with respect to the rocket (the specific impulse, or, if measured in time, that multiplied by gravity-on-Earth acceleration). 
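The closed form can be checked against a direct numerical integration of the momentum-conservation step m dV = v_e dm. A hedged sketch, with arbitrary illustrative masses and step count:

```python
import math

v_e = 3000.0            # exhaust velocity relative to the rocket, m/s (assumed constant)
m0, mf = 500.0, 100.0   # initial and final rocket mass, kg (illustrative)

# Integrate dV = v_e * dm / m while stepping the mass down from m0 to mf
n = 100_000
dm = (m0 - mf) / n
v, m = 0.0, m0
for _ in range(n):
    v += v_e * dm / (m - 0.5 * dm)  # midpoint rule for the 1/m integrand
    m -= dm

closed_form = v_e * math.log(m0 / mf)  # Tsiolkovsky's result
```

The loop's `v` agrees with `closed_form` to within the discretization error, as expected: with constant v_e the equation is exactly integrable.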
If v e {\displaystyle v_{\text{e}}} is not constant, the rocket equation does not take the simple forms above. Much research in rocket dynamics has been based on Tsiolkovsky's hypothesis of constant v e {\displaystyle v_{\text{e}}} . The value m 0 − m f {\displaystyle m_{0}-m_{f}} is the total working mass of propellant expended. Δ V {\displaystyle \Delta V} (delta-v) is the integration over time of the magnitude of the acceleration produced by using the rocket engine (what would be the actual acceleration if external forces were absent). In free space, for the case of acceleration in the direction of the velocity, this is the increase of the speed. In the case of an acceleration in the opposite direction (deceleration) it is the decrease of the speed. Of course gravity and drag also accelerate the vehicle, and they can add or subtract to the change in velocity experienced by the vehicle. Hence delta-v may not always be the actual change in speed or velocity of the vehicle. === Other derivations === ==== Impulse-based ==== The equation can also be derived from the basic integral of acceleration in the form of force (thrust) over mass.
By representing the delta-v equation as the following: Δ v = ∫ t 0 t f | T | m 0 − t Δ m d t {\displaystyle \Delta v=\int _{t_{0}}^{t_{f}}{\frac {|T|}{{m_{0}}-{t}\Delta {m}}}~dt} where T is thrust, m 0 {\displaystyle m_{0}} is the initial (wet) mass and Δ m {\displaystyle \Delta m} is the initial mass minus the final (dry) mass, and realising that the integral of a resultant force over time is total impulse, assuming thrust is the only force involved, ∫ t 0 t f F d t = J {\displaystyle \int _{t_{0}}^{t_{f}}F~dt=J} The integral is found to be: J ln ⁡ ( m 0 ) − ln ⁡ ( m f ) Δ m {\displaystyle J~{\frac {\ln({m_{0}})-\ln({m_{f}})}{\Delta m}}} Realising that impulse over the change in mass is equivalent to force over propellant mass flow rate (p), which is itself equivalent to exhaust velocity, J Δ m = F p = V exh {\displaystyle {\frac {J}{\Delta m}}={\frac {F}{p}}=V_{\text{exh}}} the integral can be equated to Δ v = V exh ln ⁡ ( m 0 m f ) {\displaystyle \Delta v=V_{\text{exh}}~\ln \left({\frac {m_{0}}{m_{f}}}\right)} ==== Acceleration-based ==== Imagine a rocket at rest in space with no forces exerted on it (Newton's first law of motion). From the moment its engine is started (clock set to 0) the rocket expels gas mass at a constant mass flow rate R (kg/s) and at exhaust velocity relative to the rocket ve (m/s). This creates a constant force F propelling the rocket that is equal to R × ve. The rocket is subject to a constant force, but its total mass is decreasing steadily because it is expelling gas. According to Newton's second law of motion, its acceleration at any time t is its propelling force F divided by its current mass m: a = d v d t = − F m ( t ) = − R v e m ( t ) {\displaystyle ~a={\frac {dv}{dt}}=-{\frac {F}{m(t)}}=-{\frac {Rv_{\text{e}}}{m(t)}}} Now, the mass of fuel the rocket initially has on board is equal to m0 – mf. For the constant mass flow rate R it will therefore take a time T = (m0 – mf)/R to burn all this fuel. 
Integrating both sides of the equation with respect to time from 0 to T (and noting that R = dm/dt allows a substitution on the right) obtains: Δ v = v f − v 0 = − v e [ ln ⁡ m f − ln ⁡ m 0 ] = v e ln ⁡ ( m 0 m f ) . {\displaystyle ~\Delta v=v_{f}-v_{0}=-v_{\text{e}}\left[\ln m_{f}-\ln m_{0}\right]=~v_{\text{e}}\ln \left({\frac {m_{0}}{m_{f}}}\right).} ==== Limit of finite mass "pellet" expulsion ==== The rocket equation can also be derived as the limiting case of the speed change for a rocket that expels its fuel in the form of N {\displaystyle N} pellets consecutively, as N → ∞ {\displaystyle N\to \infty } , with an effective exhaust speed v eff {\displaystyle v_{\text{eff}}} such that the mechanical energy gained per unit fuel mass is given by 1 2 v eff 2 {\textstyle {\tfrac {1}{2}}v_{\text{eff}}^{2}} . In the rocket's center-of-mass frame, if a pellet of mass m p {\displaystyle m_{p}} is ejected at speed u {\displaystyle u} and the remaining mass of the rocket is m {\displaystyle m} , the amount of energy converted to increase the rocket's and pellet's kinetic energy is 1 2 m p v eff 2 = 1 2 m p u 2 + 1 2 m ( Δ v ) 2 . {\displaystyle {\tfrac {1}{2}}m_{p}v_{\text{eff}}^{2}={\tfrac {1}{2}}m_{p}u^{2}+{\tfrac {1}{2}}m(\Delta v)^{2}.} Using momentum conservation in the rocket's frame just prior to ejection, u = Δ v m m p {\textstyle u=\Delta v{\tfrac {m}{m_{p}}}} , from which we find Δ v = v eff m p m ( m + m p ) . {\displaystyle \Delta v=v_{\text{eff}}{\frac {m_{p}}{\sqrt {m(m+m_{p})}}}.} Let ϕ {\displaystyle \phi } be the initial fuel mass fraction on board and m 0 {\displaystyle m_{0}} the initial fueled-up mass of the rocket. Divide the total mass of fuel ϕ m 0 {\displaystyle \phi m_{0}} into N {\displaystyle N} discrete pellets each of mass m p = ϕ m 0 / N {\displaystyle m_{p}=\phi m_{0}/N} . The remaining mass of the rocket after ejecting j {\displaystyle j} pellets is then m = m 0 ( 1 − j ϕ / N ) {\displaystyle m=m_{0}(1-j\phi /N)} . 
The overall speed change after ejecting j {\displaystyle j} pellets is the sum Δ v = v eff ∑ j = 1 j = N ϕ / N ( 1 − j ϕ / N ) ( 1 − j ϕ / N + ϕ / N ) {\displaystyle \Delta v=v_{\text{eff}}\sum _{j=1}^{j=N}{\frac {\phi /N}{\sqrt {(1-j\phi /N)(1-j\phi /N+\phi /N)}}}} Notice that for large N {\displaystyle N} the last term in the denominator ϕ / N ≪ 1 {\displaystyle \phi /N\ll 1} and can be neglected to give Δ v ≈ v eff ∑ j = 1 j = N ϕ / N 1 − j ϕ / N = v eff ∑ j = 1 j = N Δ x 1 − x j {\displaystyle \Delta v\approx v_{\text{eff}}\sum _{j=1}^{j=N}{\frac {\phi /N}{1-j\phi /N}}=v_{\text{eff}}\sum _{j=1}^{j=N}{\frac {\Delta x}{1-x_{j}}}} where Δ x = ϕ N {\textstyle \Delta x={\frac {\phi }{N}}} and x j = j ϕ N {\textstyle x_{j}={\frac {j\phi }{N}}} . As N → ∞ {\displaystyle N\rightarrow \infty } this Riemann sum becomes the definite integral lim N → ∞ Δ v = v eff ∫ 0 ϕ d x 1 − x = v eff ln ⁡ 1 1 − ϕ = v eff ln ⁡ m 0 m f , {\displaystyle \lim _{N\to \infty }\Delta v=v_{\text{eff}}\int _{0}^{\phi }{\frac {dx}{1-x}}=v_{\text{eff}}\ln {\frac {1}{1-\phi }}=v_{\text{eff}}\ln {\frac {m_{0}}{m_{f}}},} since the final remaining mass of the rocket is m f = m 0 ( 1 − ϕ ) {\displaystyle m_{f}=m_{0}(1-\phi )} . 
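The convergence of the pellet sum to the logarithmic limit can be observed numerically. A minimal sketch; the fuel fraction and pellet counts are arbitrary illustrative choices:

```python
import math

def pellet_delta_v(N, phi):
    """Speed change (in units of v_eff) from ejecting the fuel as N equal pellets."""
    total = 0.0
    for j in range(1, N + 1):
        after = 1.0 - j * phi / N    # mass fraction remaining after pellet j
        before = after + phi / N     # mass fraction just before pellet j
        total += (phi / N) / math.sqrt(after * before)
    return total

phi = 0.8                              # initial fuel mass fraction
limit = math.log(1.0 / (1.0 - phi))    # Tsiolkovsky limit, ln(m0/mf)
# pellet_delta_v(N, phi) approaches `limit` from above as N grows: at equal
# energy per unit fuel mass, a few large pellets beat continuous exhaust slightly.
```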
=== Special relativity === If special relativity is taken into account, the following equation can be derived for a relativistic rocket, with Δ v {\displaystyle \Delta v} again standing for the rocket's final velocity (after expelling all its reaction mass and being reduced to a rest mass of m 1 {\displaystyle m_{1}} ) in the inertial frame of reference where the rocket started at rest (with the rest mass including fuel being m 0 {\displaystyle m_{0}} initially), and c {\displaystyle c} standing for the speed of light in vacuum: m 0 m 1 = [ 1 + Δ v c 1 − Δ v c ] c 2 v e {\displaystyle {\frac {m_{0}}{m_{1}}}=\left[{\frac {1+{\frac {\Delta v}{c}}}{1-{\frac {\Delta v}{c}}}}\right]^{\frac {c}{2v_{\text{e}}}}} Writing m 0 m 1 {\textstyle {\frac {m_{0}}{m_{1}}}} as R {\displaystyle R} allows this equation to be rearranged as Δ v c = R 2 v e c − 1 R 2 v e c + 1 {\displaystyle {\frac {\Delta v}{c}}={\frac {R^{\frac {2v_{\text{e}}}{c}}-1}{R^{\frac {2v_{\text{e}}}{c}}+1}}} Then, using the identity R 2 v e c = exp ⁡ [ 2 v e c ln ⁡ R ] {\textstyle R^{\frac {2v_{\text{e}}}{c}}=\exp \left[{\frac {2v_{\text{e}}}{c}}\ln R\right]} (here "exp" denotes the exponential function; see also Natural logarithm as well as the "power" identity at logarithmic identities) and the identity tanh ⁡ x = e 2 x − 1 e 2 x + 1 {\textstyle \tanh x={\frac {e^{2x}-1}{e^{2x}+1}}} (see Hyperbolic function), this is equivalent to Δ v = c tanh ⁡ ( v e c ln ⁡ m 0 m 1 ) {\displaystyle \Delta v=c\tanh \left({\frac {v_{\text{e}}}{c}}\ln {\frac {m_{0}}{m_{1}}}\right)} == Terms of the equation == === Delta-v === Delta-v (literally "change in velocity"), symbolised as Δv and pronounced delta-vee, as used in spacecraft flight dynamics, is a measure of the impulse that is needed to perform a maneuver such as launching from, or landing on a planet or moon, or an in-space orbital maneuver. It is a scalar that has the units of speed. 
As used in this context, it is not the same as the physical change in velocity of the vehicle. Delta-v is produced by reaction engines, such as rocket engines, is proportional to the thrust per unit mass and burn time, and is used to determine the mass of propellant required for the given manoeuvre through the rocket equation. For multiple manoeuvres, delta-v sums linearly. For interplanetary missions delta-v is often plotted on a porkchop plot which displays the required mission delta-v as a function of launch date. === Mass fraction === In aerospace engineering, the propellant mass fraction is the portion of a vehicle's mass which does not reach the destination, usually used as a measure of the vehicle's performance. In other words, the propellant mass fraction is the ratio between the propellant mass and the initial mass of the vehicle. In a spacecraft, the destination is usually an orbit, while for aircraft it is their landing location. A higher mass fraction represents less weight in a design. Another related measure is the payload fraction, which is the fraction of initial weight that is payload. === Effective exhaust velocity === The effective exhaust velocity is often specified as a specific impulse and they are related to each other by: v e = g 0 I sp , {\displaystyle v_{\text{e}}=g_{0}I_{\text{sp}},} where I sp {\displaystyle I_{\text{sp}}} is the specific impulse in seconds, v e {\displaystyle v_{\text{e}}} is the specific impulse measured in m/s, which is the same as the effective exhaust velocity measured in m/s (or ft/s if g is in ft/s2), g 0 {\displaystyle g_{0}} is the standard gravity, 9.80665 m/s2 (in Imperial units 32.174 ft/s2). == Applicability == The rocket equation captures the essentials of rocket flight physics in a single short equation. It also holds true for rocket-like reaction vehicles whenever the effective exhaust velocity is constant, and can be summed or integrated when the effective exhaust velocity varies. 
The rocket equation only accounts for the reaction force from the rocket engine; it does not include other forces that may act on a rocket, such as aerodynamic or gravitational forces. As such, when using it to calculate the propellant requirement for launch from (or powered descent to) a planet with an atmosphere, the effects of these forces must be included in the delta-v requirement (see Examples below). In what has been called "the tyranny of the rocket equation", there is a limit to the amount of payload that the rocket can carry, as higher amounts of propellant increase the overall mass, which in turn increases fuel consumption. The equation does not apply to non-rocket systems such as aerobraking, gun launches, space elevators, launch loops, tether propulsion or light sails. The rocket equation can be applied to orbital maneuvers in order to determine how much propellant is needed to change to a particular new orbit, or to find the new orbit as the result of a particular propellant burn. When applied to orbital maneuvers, one assumes an impulsive maneuver, in which the propellant is discharged and delta-v applied instantaneously. This assumption is relatively accurate for short-duration burns such as for mid-course corrections and orbital insertion maneuvers. As the burn duration increases, the result is less accurate due to the effect of gravity on the vehicle over the duration of the maneuver. For low-thrust, long-duration propulsion, such as electric propulsion, more complicated analysis based on the propagation of the spacecraft's state vector and the integration of thrust is used to predict orbital motion. == Examples == Assume an exhaust velocity of 4,500 meters per second (15,000 ft/s) and a Δ v {\displaystyle \Delta v} of 9,700 meters per second (32,000 ft/s) (Earth to LEO, including Δ v {\displaystyle \Delta v} to overcome gravity and aerodynamic drag).
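The propellant fractions worked out in these examples follow from 1 − e^(−Δv/v_e). A minimal Python sketch reproducing the single-stage and per-stage figures (variable names are illustrative):

```python
import math

def propellant_fraction(dv, v_e):
    """Fraction of initial mass that must be propellant to deliver delta-v dv."""
    return 1.0 - math.exp(-dv / v_e)

v_e = 4500.0   # exhaust velocity, m/s
dv = 9700.0    # Earth to LEO, including gravity and drag losses, m/s

ssto = propellant_fraction(dv, v_e)        # single stage to orbit: ~0.884
stage1 = propellant_fraction(5000.0, v_e)  # first-stage delta-v of 5,000 m/s: ~0.671
stage2 = propellant_fraction(4700.0, v_e)  # second-stage delta-v of 4,700 m/s: ~0.648
```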
Single-stage-to-orbit rocket: 1 − e − 9.7 / 4.5 {\displaystyle 1-e^{-9.7/4.5}} = 0.884, therefore 88.4% of the initial total mass has to be propellant. The remaining 11.6% is for the engines, the tank, and the payload. Two-stage-to-orbit: suppose that the first stage should provide a Δ v {\displaystyle \Delta v} of 5,000 meters per second (16,000 ft/s); 1 − e − 5.0 / 4.5 {\displaystyle 1-e^{-5.0/4.5}} = 0.671, therefore 67.1% of the initial total mass has to be propellant to the first stage. The remaining mass is 32.9%. After disposing of the first stage, a mass remains equal to this 32.9%, minus the mass of the tank and engines of the first stage. Assume that this is 8% of the initial total mass, then 24.9% remains. The second stage should provide a Δ v {\displaystyle \Delta v} of 4,700 meters per second (15,000 ft/s); 1 − e − 4.7 / 4.5 {\displaystyle 1-e^{-4.7/4.5}} = 0.648, therefore 64.8% of the remaining mass has to be propellant, which is 16.2% of the original total mass, and 8.7% remains for the tank and engines of the second stage, the payload, and in the case of a space shuttle, also the orbiter. Thus together 16.7% of the original launch mass is available for all engines, the tanks, and payload. == Stages == In the case of sequentially thrusting rocket stages, the equation applies for each stage, where for each stage the initial mass in the equation is the total mass of the rocket after discarding the previous stage, and the final mass in the equation is the total mass of the rocket just before discarding the stage concerned. For each stage the specific impulse may be different. For example, if 80% of the mass of a rocket is the fuel of the first stage, and 10% is the dry mass of the first stage, and 10% is the remaining rocket, then Δ v = v e ln ⁡ 100 100 − 80 = v e ln ⁡ 5 = 1.61 v e . 
{\displaystyle {\begin{aligned}\Delta v\ &=v_{\text{e}}\ln {100 \over 100-80}\\&=v_{\text{e}}\ln 5\\&=1.61v_{\text{e}}.\\\end{aligned}}} With three similar, subsequently smaller stages with the same v e {\displaystyle v_{\text{e}}} for each stage, gives: Δ v = 3 v e ln ⁡ 5 = 4.83 v e {\displaystyle \Delta v\ =3v_{\text{e}}\ln 5\ =4.83v_{\text{e}}} and the payload is 10% × 10% × 10% = 0.1% of the initial mass. A comparable SSTO rocket, also with a 0.1% payload, could have a mass of 11.1% for fuel tanks and engines, and 88.8% for fuel. This would give Δ v = v e ln ⁡ ( 100 / 11.2 ) = 2.19 v e . {\displaystyle \Delta v\ =v_{\text{e}}\ln(100/11.2)\ =2.19v_{\text{e}}.} If the motor of a new stage is ignited before the previous stage has been discarded and the simultaneously working motors have a different specific impulse (as is often the case with solid rocket boosters and a liquid-fuel stage), the situation is more complicated. == See also == Delta-v budget Jeep problem Mass ratio Oberth effect - applying delta-v in a gravity well increases the final velocity Relativistic rocket Reversibility of orbits Robert H. Goddard - added terms for gravity and drag in vertical flight Spacecraft propulsion Stigler’s law of eponymy == References == == External links == How to derive the rocket equation Relativity Calculator – Learn Tsiolkovsky's rocket equations Tsiolkovsky's rocket equations plot and calculator in WolframAlpha
Wikipedia/Rocket_equation
In physics, a force is an influence that can cause an object to change its velocity unless counterbalanced by other forces. In mechanics, force makes ideas like 'pushing' or 'pulling' mathematically precise. Because the magnitude and direction of a force are both important, force is a vector quantity. The SI unit of force is the newton (N), and force is often represented by the symbol F. Force plays an important role in classical mechanics. The concept of force is central to all three of Newton's laws of motion. Types of forces often encountered in classical mechanics include elastic, frictional, contact or "normal" forces, and gravitational. The rotational version of force is torque, which produces changes in the rotational speed of an object. In an extended body, each part applies forces on the adjacent parts; the distribution of such forces through the body is the internal mechanical stress. In the case of multiple forces, if the net force on an extended body is zero the body is in equilibrium. In modern physics, which includes relativity and quantum mechanics, the laws governing motion are revised to rely on fundamental interactions as the ultimate origin of force. However, the understanding of force provided by classical mechanics is useful for practical purposes. == Development of the concept == Philosophers in antiquity used the concept of force in the study of stationary and moving objects and simple machines, but thinkers such as Aristotle and Archimedes retained fundamental errors in understanding force. In part, this was due to an incomplete understanding of the sometimes non-obvious force of friction and a consequently inadequate view of the nature of natural motion. A fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about motion and force were eventually corrected by Galileo Galilei and Sir Isaac Newton. 
With his mathematical insight, Newton formulated laws of motion that were not improved for over two hundred years. By the early 20th century, Einstein developed a theory of relativity that correctly predicted the action of forces on objects with increasing momenta near the speed of light and also provided insight into the forces produced by gravitation and inertia. With modern insights into quantum mechanics and technology that can accelerate particles close to the speed of light, particle physics has devised a Standard Model to describe forces between particles smaller than atoms. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known: in order of decreasing strength, they are strong, electromagnetic, weak, and gravitational. High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction. == Pre-Newtonian concepts == Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines. The mechanical advantage given by a simple machine allowed for less force to be used in exchange for that force acting over a greater distance for the same amount of work. Analysis of the characteristics of forces ultimately culminated in the work of Archimedes who was especially famous for formulating a treatment of buoyant forces inherent in fluids. Aristotle provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the terrestrial sphere contained four elements that come to rest at different "natural places" therein.
Aristotle believed that motionless objects on Earth, those composed mostly of the elements earth and water, were in their natural place when on the ground, and that they stay that way if left alone. He distinguished between the innate tendency of objects to find their "natural place" (e.g., for heavy bodies to fall), which led to "natural motion", and unnatural or forced motion, which required continued application of a force. This theory, based on the everyday experience of how objects move, such as the constant application of a force needed to keep a cart moving, had conceptual trouble accounting for the behavior of projectiles, such as the flight of arrows. An archer causes the arrow to move at the start of the flight, and it then sails through the air even though no discernible efficient cause acts upon it. Aristotle was aware of this problem and proposed that the air displaced through the projectile's path carries the projectile to its target. This explanation requires a continuous medium such as air to sustain the motion. Though Aristotelian physics was criticized as early as the 6th century, its shortcomings would not be corrected until the 17th century work of Galileo Galilei, who was influenced by the late medieval idea that objects in forced motion carried an innate force of impetus. Galileo constructed an experiment in which stones and cannonballs were both rolled down an incline to disprove the Aristotelian theory of motion. He showed that the bodies were accelerated by gravity to an extent that was independent of their mass and argued that objects retain their velocity unless acted on by a force, for example friction. Galileo's idea that force is needed to change motion rather than to sustain it, further improved upon by Isaac Beeckman, René Descartes, and Pierre Gassendi, became a key principle of Newtonian physics. 
In the early 17th century, before Newton's Principia, the term "force" (Latin: vis) was applied to many physical and non-physical phenomena, e.g., for an acceleration of a point. The product of a point mass and the square of its velocity was named vis viva (live force) by Leibniz. The modern concept of force corresponds to Newton's vis motrix (accelerating force). == Newtonian mechanics == Sir Isaac Newton described the motion of all objects using the concepts of inertia and force. In 1687, Newton published his magnum opus, Philosophiæ Naturalis Principia Mathematica. In this work Newton set out three laws of motion that have dominated the way forces are described in physics to this day. The precise ways in which Newton's laws are expressed have evolved in step with new mathematical approaches. === First law === Newton's first law of motion states that the natural behavior of an object at rest is to continue being at rest, and the natural behavior of an object moving at constant speed in a straight line is to continue moving at that constant speed along that straight line. The latter follows from the former because of the principle that the laws of physics are the same for all inertial observers, i.e., all observers who do not feel themselves to be in motion. An observer moving in tandem with an object will see it as being at rest. So, its natural behavior will be to remain at rest with respect to that observer, which means that an observer who sees it moving at constant speed in a straight line will see it continuing to do so.: 1–7  === Second law === According to the first law, motion at constant speed in a straight line does not need a cause. It is change in motion that requires a cause, and Newton's second law gives the quantitative relationship between force and change of motion. Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. 
If the mass of the object is constant, this law implies that the acceleration of an object is directly proportional to the net force acting on the object, is in the direction of the net force, and is inversely proportional to the mass of the object.: 204–207  A modern statement of Newton's second law is a vector equation: F = d p d t , {\displaystyle \mathbf {F} ={\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}},} where p {\displaystyle \mathbf {p} } is the momentum of the system, and F {\displaystyle \mathbf {F} } is the net (vector sum) force.: 399  If a body is in equilibrium, there is zero net force by definition (balanced forces may be present nevertheless). In contrast, the second law states that if there is an unbalanced force acting on an object it will result in the object's momentum changing over time. In common engineering applications the mass in a system remains constant, allowing a simple algebraic form for the second law. By the definition of momentum, F = d p d t = d ( m v ) d t , {\displaystyle \mathbf {F} ={\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}}={\frac {\mathrm {d} \left(m\mathbf {v} \right)}{\mathrm {d} t}},} where m is the mass and v {\displaystyle \mathbf {v} } is the velocity.: 9-1,9-2  If Newton's second law is applied to a system of constant mass, m may be moved outside the derivative operator. The equation then becomes F = m d v d t . {\displaystyle \mathbf {F} =m{\frac {\mathrm {d} \mathbf {v} }{\mathrm {d} t}}.} By substituting the definition of acceleration, the algebraic version of Newton's second law is derived: F = m a . {\displaystyle \mathbf {F} =m\mathbf {a} .} === Third law === Whenever one body exerts a force on another, the latter simultaneously exerts an equal and opposite force on the first. In vector form, if F 1 , 2 {\displaystyle \mathbf {F} _{1,2}} is the force of body 1 on body 2 and F 2 , 1 {\displaystyle \mathbf {F} _{2,1}} that of body 2 on body 1, then F 1 , 2 = − F 2 , 1 .
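The constant-mass form of the second law can be checked numerically. A minimal sketch (all values are illustrative, not from the text):

```python
# Newton's second law for constant mass: F = m * a.
m = 2.0    # mass in kilograms (illustrative)
a = 3.0    # acceleration in meters per second squared (illustrative)
F = m * a  # net force in newtons

# Equivalently, F = dp/dt: over a short interval dt the momentum
# p = m * v changes by F * dt.
dt = 0.5       # seconds
dp = F * dt    # change in momentum, kg*m/s
```

Here the momentum form and the F = ma form agree precisely because the mass is constant, so m can be moved outside the derivative.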
{\displaystyle \mathbf {F} _{1,2}=-\mathbf {F} _{2,1}.} This law is sometimes referred to as the action-reaction law, with F 1 , 2 {\displaystyle \mathbf {F} _{1,2}} called the action and − F 2 , 1 {\displaystyle -\mathbf {F} _{2,1}} the reaction. Newton's third law is a result of applying symmetry to situations where forces can be attributed to the presence of different objects. The third law means that all forces are interactions between different bodies, and thus that there is no such thing as a unidirectional force or a force that acts on only one body. In a system composed of object 1 and object 2, the net force on the system due to their mutual interactions is zero: F 1 , 2 + F 2 , 1 = 0. {\displaystyle \mathbf {F} _{1,2}+\mathbf {F} _{2,1}=0.} More generally, in a closed system of particles, all internal forces are balanced. The particles may accelerate with respect to each other but the center of mass of the system will not accelerate. If an external force acts on the system, it will make the center of mass accelerate in proportion to the magnitude of the external force divided by the mass of the system.: 19-1  Combining Newton's second and third laws, it is possible to show that the linear momentum of any closed system is conserved. In a system of two particles, if p 1 {\displaystyle \mathbf {p} _{1}} is the momentum of object 1 and p 2 {\displaystyle \mathbf {p} _{2}} the momentum of object 2, then d p 1 d t + d p 2 d t = F 1 , 2 + F 2 , 1 = 0. {\displaystyle {\frac {\mathrm {d} \mathbf {p} _{1}}{\mathrm {d} t}}+{\frac {\mathrm {d} \mathbf {p} _{2}}{\mathrm {d} t}}=\mathbf {F} _{1,2}+\mathbf {F} _{2,1}=0.} Using similar arguments, this can be generalized to a system with an arbitrary number of particles.
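The two-particle conservation argument can be illustrated with a small simulation: equal and opposite internal forces change the individual velocities but leave the total momentum unchanged. A sketch with illustrative values:

```python
# Two isolated bodies exert equal and opposite forces on each other
# (Newton's third law); the total momentum m1*v1 + m2*v2 is conserved.
m1, m2 = 1.0, 3.0    # masses, kg (illustrative)
v1, v2 = 4.0, 0.0    # initial one-dimensional velocities, m/s
p_initial = m1 * v1 + m2 * v2

F21 = 2.0            # force of body 2 on body 1, N (illustrative)
F12 = -F21           # force of body 1 on body 2 (third law)

dt = 0.01            # time step for simple Euler integration, s
for _ in range(100):
    v1 += (F21 / m1) * dt
    v2 += (F12 / m2) * dt

p_final = m1 * v1 + m2 * v2   # equals p_initial up to rounding
```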
In general, as long as all forces are due to the interaction of objects with mass, it is possible to define a system such that net momentum is never lost nor gained.: ch.12  === Defining "force" === Some textbooks use Newton's second law as a definition of force. However, for the equation F = m a {\displaystyle \mathbf {F} =m\mathbf {a} } for a constant mass m {\displaystyle m} to then have any predictive content, it must be combined with further information.: 12-1  Moreover, inferring that a force is present because a body is accelerating is only valid in an inertial frame of reference.: 59  The question of which aspects of Newton's laws to take as definitions and which to regard as holding physical content has been answered in various ways,: vii  which ultimately do not affect how the theory is used in practice. Notable physicists, philosophers and mathematicians who have sought a more explicit definition of the concept of force include Ernst Mach and Walter Noll. == Combining forces == Forces act in a particular direction and have sizes dependent upon how strong the push or pull is. Because of these characteristics, forces are classified as "vector quantities". This means that forces follow a different set of mathematical rules than physical quantities that do not have direction (denoted scalar quantities). For example, when determining what happens when two forces act on the same object, it is necessary to know both the magnitude and the direction of both forces to calculate the result. If both of these pieces of information are not known for each force, the situation is ambiguous.: 197  Historically, forces were first quantitatively investigated in conditions of static equilibrium where several forces canceled each other out. Such experiments demonstrate the crucial properties that forces are additive vector quantities: they have magnitude and direction. 
When two forces act on a point particle, the resulting force, the resultant (also called the net force), can be determined by following the parallelogram rule of vector addition: the addition of two vectors represented by sides of a parallelogram gives an equivalent resultant vector that is equal in magnitude and direction to the diagonal of the parallelogram. The magnitude of the resultant varies from the difference of the magnitudes of the two forces to their sum, depending on the angle between their lines of action.: ch.12  Free-body diagrams can be used as a convenient way to keep track of forces acting on a system. Ideally, these diagrams are drawn with the angles and relative magnitudes of the force vectors preserved so that graphical vector addition can be done to determine the net force. As well as being added, forces can also be resolved into independent components at right angles to each other. A horizontal force pointing northeast can therefore be split into two forces, one pointing north, and one pointing east. Summing these component forces using vector addition yields the original force. Resolving force vectors into components of a set of basis vectors is often a more mathematically clean way to describe forces than using magnitudes and directions. This is because, for orthogonal components, the components of the vector sum are uniquely determined by the scalar addition of the components of the individual vectors. Orthogonal components are independent of each other because forces acting at ninety degrees to each other have no effect on the magnitude or direction of the other. Choosing a set of orthogonal basis vectors is often done by considering what set of basis vectors will make the mathematics most convenient. Choosing a basis vector that is in the same direction as one of the forces is desirable, since that force would then have only one non-zero component.
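The component decomposition described above is straightforward to verify numerically; a sketch using an illustrative 10 N northeast force, treating east as x and north as y:

```python
import math

# Resolve a 10 N force pointing northeast (45 degrees from east) into
# east (x) and north (y) components, then recombine them.
F = 10.0
theta = math.radians(45.0)
Fx = F * math.cos(theta)        # east component
Fy = F * math.sin(theta)        # north component
magnitude = math.hypot(Fx, Fy)  # recombining recovers the original 10 N

# Adding a second force of 5 N due north, component by component:
Rx = Fx + 0.0
Ry = Fy + 5.0
resultant = math.hypot(Rx, Ry)  # lies between |10 - 5| and 10 + 5
```

The resultant's magnitude falls between the difference and the sum of the two magnitudes, as the parallelogram rule requires.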
Orthogonal force vectors can be three-dimensional with the third component being at right angles to the other two.: ch.12  === Equilibrium === When all the forces that act upon an object are balanced, then the object is said to be in a state of equilibrium.: 566  Hence, equilibrium occurs when the resultant force acting on a point particle is zero (that is, the vector sum of all forces is zero). When dealing with an extended body, it is also necessary that the net torque be zero. A body is in static equilibrium with respect to a frame of reference if it is at rest and not accelerating, whereas a body in dynamic equilibrium is moving at a constant speed in a straight line, i.e., moving but not accelerating. What one observer sees as static equilibrium, another can see as dynamic equilibrium and vice versa.: 566  ==== Static ==== Static equilibrium was understood well before the invention of classical mechanics. Objects that are not accelerating have zero net force acting on them. The simplest case of static equilibrium occurs when two forces are equal in magnitude but opposite in direction. For example, an object on a level surface is pulled (attracted) downward toward the center of the Earth by the force of gravity. At the same time, a force is applied by the surface that resists the downward force with equal upward force (called a normal force). The situation produces zero net force and hence no acceleration. Pushing against an object that rests on a frictional surface can result in a situation where the object does not move because the applied force is opposed by static friction, generated between the object and the surface. For a situation with no movement, the static friction force exactly balances the applied force, resulting in no acceleration. The static friction increases or decreases in response to the applied force up to an upper limit determined by the characteristics of the contact between the surface and the object.
A static equilibrium between two forces is the most usual way of measuring forces, using simple devices such as weighing scales and spring balances. For example, an object suspended on a vertical spring scale experiences the force of gravity acting on the object balanced by the "spring reaction force", which equals the object's weight. Using such tools, some quantitative force laws were discovered: that the force of gravity is proportional to volume for objects of constant density (widely exploited for millennia to define standard weights); Archimedes' principle for buoyancy; Archimedes' analysis of the lever; Boyle's law for gas pressure; and Hooke's law for springs. These were all formulated and experimentally verified before Isaac Newton expounded his three laws of motion.: ch.12  ==== Dynamic ==== Dynamic equilibrium was first described by Galileo, who noticed that certain assumptions of Aristotelian physics were contradicted by observations and logic. Galileo realized that simple velocity addition demands that the concept of an "absolute rest frame" did not exist. Galileo concluded that motion at a constant velocity was completely equivalent to rest. This was contrary to Aristotle's notion of a "natural state" of rest that objects with mass naturally approached. Simple experiments showed that Galileo's understanding of the equivalence of constant velocity and rest was correct. For example, if a mariner dropped a cannonball from the crow's nest of a ship moving at a constant velocity, Aristotelian physics would have the cannonball fall straight down while the ship moved beneath it. Thus, in an Aristotelian universe, the falling cannonball would land behind the foot of the mast of a moving ship. When this experiment is actually conducted, the cannonball always falls at the foot of the mast, as if the cannonball knows to travel with the ship despite being separated from it.
Since there is no forward horizontal force being applied on the cannonball as it falls, the only conclusion left is that the cannonball continues to move with the same velocity as the boat as it falls. Thus, no force is required to keep the cannonball moving at the constant forward velocity. Moreover, any object traveling at a constant velocity must be subject to zero net force (resultant force). This is the definition of dynamic equilibrium: when all the forces on an object balance but it still moves at a constant velocity. A simple case of dynamic equilibrium occurs in constant velocity motion across a surface with kinetic friction. In such a situation, a force is applied in the direction of motion while the kinetic friction force exactly opposes the applied force. This results in zero net force, but since the object started with a non-zero velocity, it continues to move with a non-zero velocity. Aristotle misinterpreted this motion as being caused by the applied force. When kinetic friction is taken into consideration it is clear that there is no net force causing constant velocity motion.: ch.12  == Examples of forces in classical mechanics == Some forces are consequences of the fundamental ones. In such situations, idealized models can be used to gain physical insight. For example, each solid object is considered a rigid body. === Gravitational force or Gravity === What we now call gravity was not identified as a universal force until the work of Isaac Newton. Before Newton, the tendency for objects to fall towards the Earth was not understood to be related to the motions of celestial objects. Galileo was instrumental in describing the characteristics of falling objects by determining that the acceleration of every object in free-fall was constant and independent of the mass of the object. 
Today, this acceleration due to gravity towards the surface of the Earth is usually designated as g {\displaystyle \mathbf {g} } and has a magnitude of about 9.81 meters per second squared (this measurement is taken from sea level and may vary depending on location), and points toward the center of the Earth. This observation means that the force of gravity on an object at the Earth's surface is directly proportional to the object's mass. Thus an object that has a mass of m {\displaystyle m} will experience a force: F = m g . {\displaystyle \mathbf {F} =m\mathbf {g} .} For an object in free-fall, this force is unopposed and the net force on the object is its weight. For objects not in free-fall, the force of gravity is opposed by the reaction forces applied by their supports. For example, a person standing on the ground experiences zero net force, since a normal force (a reaction force) is exerted by the ground upward on the person that counterbalances his weight that is directed downward.: ch.12  Newton's contribution to gravitational theory was to unify the motions of heavenly bodies, which Aristotle had assumed were in a natural state of constant motion, with falling motion observed on the Earth. He proposed a law of gravity that could account for the celestial motions that had been described earlier using Kepler's laws of planetary motion. Newton came to realize that the effects of gravity might be observed in different ways at larger distances. In particular, Newton determined that the acceleration of the Moon around the Earth could be ascribed to the same force of gravity if the acceleration due to gravity decreased as an inverse square law. Further, Newton realized that the acceleration of a body due to gravity is proportional to the mass of the other attracting body. 
Combining these ideas gives a formula that relates the mass ( m ⊕ {\displaystyle m_{\oplus }} ) and the radius ( R ⊕ {\displaystyle R_{\oplus }} ) of the Earth to the gravitational acceleration: g = − G m ⊕ R ⊕ 2 r ^ , {\displaystyle \mathbf {g} =-{\frac {Gm_{\oplus }}{{R_{\oplus }}^{2}}}{\hat {\mathbf {r} }},} where r ^ {\displaystyle {\hat {\mathbf {r} }}} is the unit vector directed outward from the center of the Earth. In this equation, a dimensional constant G {\displaystyle G} is used to describe the relative strength of gravity. This constant has come to be known as the Newtonian constant of gravitation, though its value was unknown in Newton's lifetime. Not until 1798 was Henry Cavendish able to make the first measurement of G {\displaystyle G} using a torsion balance; this was widely reported in the press as a measurement of the mass of the Earth since knowing G {\displaystyle G} could allow one to solve for the Earth's mass given the above equation. Newton realized that since all celestial bodies followed the same laws of motion, his law of gravity had to be universal. Succinctly stated, Newton's law of gravitation states that the force on a spherical object of mass m 1 {\displaystyle m_{1}} due to the gravitational pull of mass m 2 {\displaystyle m_{2}} is F = − G m 1 m 2 r 2 r ^ , {\displaystyle \mathbf {F} =-{\frac {Gm_{1}m_{2}}{r^{2}}}{\hat {\mathbf {r} }},} where r {\displaystyle r} is the distance between the two objects' centers of mass and r ^ {\displaystyle {\hat {\mathbf {r} }}} is the unit vector pointed in the direction away from the center of the first object toward the center of the second object. This formula was powerful enough to stand as the basis for all subsequent descriptions of motion within the Solar System until the 20th century.
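Substituting approximate modern values into g = G m⊕ / R⊕² recovers the familiar surface acceleration; a sketch (the constants are rounded approximations, not from the text):

```python
# Newton's law of gravitation evaluated at the Earth's surface.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2 (approximate)
m_earth = 5.972e24   # mass of the Earth, kg (approximate)
R_earth = 6.371e6    # mean radius of the Earth, m (approximate)

g = G * m_earth / R_earth**2   # surface acceleration, about 9.8 m/s^2

# Weight of an object of mass m at the surface, F = m * g:
m = 70.0             # kg (illustrative)
weight = m * g       # N
```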
During that time, sophisticated methods of perturbation analysis were invented to calculate the deviations of orbits due to the influence of multiple bodies on a planet, moon, comet, or asteroid. The formalism was exact enough to allow mathematicians to predict the existence of the planet Neptune before it was observed. === Electromagnetic === The electrostatic force was first described in 1784 by Coulomb as a force that existed intrinsically between two charges.: 519  The properties of the electrostatic force were that it varied as an inverse square law directed in the radial direction, was both attractive and repulsive (there was intrinsic polarity), was independent of the mass of the charged objects, and followed the superposition principle. Coulomb's law unifies all these observations into one succinct statement. Subsequent mathematicians and physicists found the construct of the electric field to be useful for determining the electrostatic force on an electric charge at any point in space. The electric field was based on using a hypothetical "test charge" anywhere in space and then using Coulomb's Law to determine the electrostatic force.: 4-6–4-8  Thus the electric field anywhere in space is defined as E = F q , {\displaystyle \mathbf {E} ={\mathbf {F} \over {q}},} where q {\displaystyle q} is the magnitude of the hypothetical test charge. Similarly, the idea of the magnetic field was introduced to express how magnets can influence one another at a distance. The Lorentz force law gives the force upon a body with charge q {\displaystyle q} due to electric and magnetic fields: F = q ( E + v × B ) , {\displaystyle \mathbf {F} =q\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right),} where F {\displaystyle \mathbf {F} } is the electromagnetic force, E {\displaystyle \mathbf {E} } is the electric field at the body's location, B {\displaystyle \mathbf {B} } is the magnetic field, and v {\displaystyle \mathbf {v} } is the velocity of the particle. 
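The Lorentz force law can be evaluated directly from its definition; a sketch for a proton in crossed fields (the field and velocity values are illustrative):

```python
# Lorentz force F = q (E + v x B), in SI units.
def cross(a, b):
    """Cross product of two 3-vectors represented as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

q = 1.602e-19            # proton charge, C (approximate)
E = (0.0, 0.0, 1.0e3)    # electric field, V/m (illustrative)
v = (1.0e5, 0.0, 0.0)    # particle velocity, m/s (illustrative)
B = (0.0, 1.0e-2, 0.0)   # magnetic field, T (illustrative)

vxB = cross(v, B)        # magnetic contribution, here along +z
F = tuple(q * (E[i] + vxB[i]) for i in range(3))
```

With these values v × B points along +z with magnitude |v||B|, so the electric and magnetic contributions simply add.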
The magnetic contribution to the Lorentz force is the cross product of the velocity vector with the magnetic field.: 482  The origin of electric and magnetic fields would not be fully explained until 1864 when James Clerk Maxwell unified a number of earlier theories into a set of 20 scalar equations, which were later reformulated into 4 vector equations by Oliver Heaviside and Josiah Willard Gibbs. These "Maxwell's equations" fully described the sources of the fields as being stationary and moving charges, and the interactions of the fields themselves. This led Maxwell to discover that electric and magnetic fields could be "self-generating" through a wave that traveled at a speed that he calculated to be the speed of light. This insight united the nascent fields of electromagnetic theory with optics and led directly to a complete description of the electromagnetic spectrum. === Normal === When objects are in contact, the force directly between them is called the normal force, the component of the total force in the system exerted normal to the interface between the objects.: 264  The normal force is closely related to Newton's third law. The normal force, for example, is responsible for the structural integrity of tables and floors as well as being the force that responds whenever an external force pushes on a solid object. An example of the normal force in action is the impact force on an object crashing into an immobile surface.: ch.12  === Friction === Friction is a force that opposes relative motion of two bodies. At the macroscopic scale, the frictional force is directly related to the normal force at the point of contact. 
There are two broad classifications of frictional forces: static friction and kinetic friction.: 267  The static friction force ( F s f {\displaystyle \mathbf {F} _{\mathrm {sf} }} ) will exactly oppose forces applied to an object parallel to a surface up to the limit specified by the coefficient of static friction ( μ s f {\displaystyle \mu _{\mathrm {sf} }} ) multiplied by the normal force ( F N {\displaystyle \mathbf {F} _{\text{N}}} ). In other words, the magnitude of the static friction force satisfies the inequality: 0 ≤ F s f ≤ μ s f F N . {\displaystyle 0\leq \mathbf {F} _{\mathrm {sf} }\leq \mu _{\mathrm {sf} }\mathbf {F} _{\mathrm {N} }.} The kinetic friction force ( F k f {\displaystyle F_{\mathrm {kf} }} ) is typically independent of both the forces applied and the movement of the object. Thus, the magnitude of the force equals: F k f = μ k f F N , {\displaystyle \mathbf {F} _{\mathrm {kf} }=\mu _{\mathrm {kf} }\mathbf {F} _{\mathrm {N} },} where μ k f {\displaystyle \mu _{\mathrm {kf} }} is the coefficient of kinetic friction. The coefficient of kinetic friction is normally less than the coefficient of static friction.: 267–271  === Tension === Tension forces can be modeled using ideal strings that are massless, frictionless, unbreakable, and do not stretch. They can be combined with ideal pulleys, which allow ideal strings to switch physical direction. Ideal strings transmit tension forces instantaneously in action–reaction pairs so that if two objects are connected by an ideal string, any force directed along the string by the first object is accompanied by a force directed along the string in the opposite direction by the second object. By connecting the same string multiple times to the same object through the use of a configuration that uses movable pulleys, the tension force on a load can be multiplied. For every string that acts on a load, another factor of the tension force in the string acts on the load. 
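The static/kinetic friction model above can be written as a small function; a sketch with illustrative coefficients:

```python
# Friction model: static friction matches the applied force up to
# mu_s * N; once sliding starts, kinetic friction mu_k * N applies.
def friction_force(applied, mu_s, mu_k, normal):
    """Magnitude of the friction force opposing an applied force."""
    if applied <= mu_s * normal:   # below the static limit: no sliding
        return applied             # friction exactly balances the push
    return mu_k * normal           # sliding: kinetic friction

N = 100.0               # normal force, newtons (illustrative)
mu_s, mu_k = 0.5, 0.2   # illustrative coefficients, with mu_k < mu_s

f_hold  = friction_force(30.0, mu_s, mu_k, N)  # object stays put
f_slide = friction_force(60.0, mu_s, mu_k, N)  # push exceeds mu_s * N
```

Note the drop once sliding begins: the kinetic friction force is smaller than the maximum static friction force, reflecting μₖ < μₛ.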
Such machines allow a mechanical advantage for a corresponding increase in the length of displaced string needed to move the load. These tandem effects result ultimately in the conservation of mechanical energy since the work done on the load is the same no matter how complicated the machine.: ch.12  === Spring === A simple elastic force acts to return a spring to its natural length. An ideal spring is taken to be massless, frictionless, unbreakable, and infinitely stretchable. Such springs exert forces that push when contracted, or pull when extended, in proportion to the displacement of the spring from its equilibrium position. This linear relationship was described by Robert Hooke in 1676, for whom Hooke's law is named. If Δ x {\displaystyle \Delta x} is the displacement, the force exerted by an ideal spring equals: F = − k Δ x , {\displaystyle \mathbf {F} =-k\Delta \mathbf {x} ,} where k {\displaystyle k} is the spring constant (or force constant), which is particular to the spring. The minus sign accounts for the tendency of the force to act in opposition to the applied load.: ch.12  === Centripetal === For an object in uniform circular motion, the net force acting on the object equals: F = − m v 2 r r ^ , {\displaystyle \mathbf {F} =-{\frac {mv^{2}}{r}}{\hat {\mathbf {r} }},} where m {\displaystyle m} is the mass of the object, v {\displaystyle v} is the velocity of the object and r {\displaystyle r} is the distance to the center of the circular path and r ^ {\displaystyle {\hat {\mathbf {r} }}} is the unit vector pointing in the radial direction outwards from the center. This means that the net force felt by the object is always directed toward the center of the curving path. Such forces act perpendicular to the velocity vector associated with the motion of an object, and therefore do not change the speed of the object (magnitude of the velocity), but only the direction of the velocity vector. 
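Hooke's law and the centripetal force formula are both one-line expressions that can be evaluated directly; a sketch with illustrative values:

```python
# Hooke's law: restoring force proportional to displacement.
k = 50.0             # spring constant, N/m (illustrative)
dx = 0.1             # extension from equilibrium, m (illustrative)
F_spring = -k * dx   # minus sign: the force opposes the displacement

# Centripetal force for uniform circular motion, magnitude m v^2 / r,
# directed toward the center of the circular path.
m = 2.0              # kg (illustrative)
v = 3.0              # speed, m/s
r = 1.5              # radius of the path, m
F_centripetal = m * v**2 / r
```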
More generally, the net force that accelerates an object can be resolved into a component that is perpendicular to the path, and one that is tangential to the path. This yields both the tangential force, which accelerates the object by either slowing it down or speeding it up, and the radial (centripetal) force, which changes its direction.: ch.12  === Continuum mechanics === Newton's laws and Newtonian mechanics in general were first developed to describe how forces affect idealized point particles rather than three-dimensional objects. In real life, matter has extended structure and forces that act on one part of an object might affect other parts of the object. For situations where the lattice holding together the atoms in an object is able to flow, contract, expand, or otherwise change shape, the theories of continuum mechanics describe the way forces affect the material. For example, in extended fluids, differences in pressure result in forces being directed along the pressure gradients as follows: F V = − ∇ P , {\displaystyle {\frac {\mathbf {F} }{V}}=-\mathbf {\nabla } P,} where V {\displaystyle V} is the volume of the object in the fluid and P {\displaystyle P} is the scalar function that describes the pressure at all locations in space. Pressure gradients and differentials result in the buoyant force for fluids suspended in gravitational fields, winds in atmospheric science, and the lift associated with aerodynamics and flight.: ch.12  A specific instance of such a force that is associated with dynamic pressure is fluid resistance: a force that resists the motion of an object through a fluid due to viscosity.
For so-called "Stokes' drag" the force is approximately proportional to the velocity, but opposite in direction: F d = − b v , {\displaystyle \mathbf {F} _{\mathrm {d} }=-b\mathbf {v} ,} where: b {\displaystyle b} is a constant that depends on the properties of the fluid and the dimensions of the object (usually the cross-sectional area), and v {\displaystyle \mathbf {v} } is the velocity of the object.: ch.12  More formally, forces in continuum mechanics are fully described by a stress tensor with terms that are roughly defined as σ = F A , {\displaystyle \sigma ={\frac {F}{A}},} where A {\displaystyle A} is the relevant cross-sectional area for the volume for which the stress tensor is being calculated. This formalism includes pressure terms associated with forces that act normal to the cross-sectional area (the matrix diagonals of the tensor) as well as shear terms associated with forces that act parallel to the cross-sectional area (the off-diagonal elements). The stress tensor accounts for forces that cause all strains (deformations) including also tensile stresses and compressions.: 133–134 : 38-1–38-11  === Fictitious === There are forces that are frame dependent, meaning that they appear due to the adoption of non-Newtonian (that is, non-inertial) reference frames. Such forces include the centrifugal force and the Coriolis force. These forces are considered fictitious because they do not exist in frames of reference that are not accelerating.: ch.12  Because these forces are not genuine they are also referred to as "pseudo forces".: 12-11  In general relativity, gravity becomes a fictitious force that arises in situations where spacetime deviates from a flat geometry. == Concepts derived from force == === Rotation and torque === Forces that cause extended objects to rotate are associated with torques. 
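Stokes drag balancing gravity produces a terminal velocity v_t = m g / b, the speed at which the net force vanishes; a sketch integrating the fall with a simple Euler scheme (parameters are illustrative):

```python
# An object falling through a viscous fluid under gravity and Stokes
# drag F_d = -b v approaches the terminal velocity m * g / b.
m = 0.01    # mass, kg (illustrative)
g = 9.81    # gravitational acceleration, m/s^2
b = 0.05    # drag coefficient, kg/s (illustrative)

v_terminal = m * g / b   # speed at which drag balances weight

v, dt = 0.0, 0.001       # start from rest; small Euler time step
for _ in range(20000):   # integrate 20 s, many time constants m/b
    a = g - (b / m) * v  # net downward acceleration
    v += a * dt
```

After many time constants m/b the integrated velocity is indistinguishable from the terminal value, illustrating dynamic equilibrium between weight and drag.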
Mathematically, the torque of a force F {\displaystyle \mathbf {F} } is defined relative to an arbitrary reference point as the cross product: τ = r × F , {\displaystyle {\boldsymbol {\tau }}=\mathbf {r} \times \mathbf {F} ,} where r {\displaystyle \mathbf {r} } is the position vector of the force application point relative to the reference point.: 497  Torque is the rotation equivalent of force in the same way that angle is the rotational equivalent for position, angular velocity for velocity, and angular momentum for momentum. As a consequence of Newton's first law of motion, there exists rotational inertia that ensures that all bodies maintain their angular momentum unless acted upon by an unbalanced torque. Likewise, Newton's second law of motion can be used to derive an analogous equation for the instantaneous angular acceleration of the rigid body: τ = I α , {\displaystyle {\boldsymbol {\tau }}=I{\boldsymbol {\alpha }},} where I {\displaystyle I} is the moment of inertia of the body and α {\displaystyle {\boldsymbol {\alpha }}} is the angular acceleration of the body.: 502  This provides a definition for the moment of inertia, which is the rotational equivalent for mass. In more advanced treatments of mechanics, where the rotation over a time interval is described, the moment of inertia must be substituted by the moment of inertia tensor that, when properly analyzed, fully determines the characteristics of rotations including precession and nutation.: 96–113  Equivalently, the differential form of Newton's second law provides an alternative definition of torque: τ = d L d t , {\displaystyle {\boldsymbol {\tau }}={\frac {\mathrm {d} \mathbf {L} }{\mathrm {d} t}},} where L {\displaystyle \mathbf {L} } is the angular momentum of the particle.
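The torque definition τ = r × F and the rotational second law τ = I α can be evaluated directly; a sketch with illustrative values:

```python
# Torque about a reference point as a cross product, tau = r x F.
def cross(a, b):
    """Cross product of two 3-vectors represented as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

r = (0.5, 0.0, 0.0)   # lever arm from the reference point, m
F = (0.0, 8.0, 0.0)   # applied force, N, perpendicular to the arm
tau = cross(r, F)     # torque, N*m; here along +z, magnitude |r||F|

# Rotational analogue of F = m a: tau = I * alpha.
I = 2.0               # moment of inertia, kg*m^2 (illustrative)
alpha = tau[2] / I    # angular acceleration about z, rad/s^2
```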
Newton's third law of motion requires that all objects exerting torques themselves experience equal and opposite torques, and therefore also directly implies the conservation of angular momentum for closed systems that experience rotations and revolutions through the action of internal torques. === Yank === The yank is defined as the rate of change of force: 131  Y = d F d t {\displaystyle \mathbf {Y} ={\frac {\mathrm {d} \mathbf {F} }{\mathrm {d} t}}} The term is used in biomechanical analysis, athletic assessment, and robotic control. The second ("tug"), third ("snatch"), fourth ("shake"), and higher derivatives are rarely used. === Kinematic integrals === Forces can be used to define a number of physical concepts by integrating with respect to kinematic variables. For example, integrating with respect to time gives the definition of impulse: J = ∫ t 1 t 2 F d t , {\displaystyle \mathbf {J} =\int _{t_{1}}^{t_{2}}{\mathbf {F} \,\mathrm {d} t},} which by Newton's second law must be equivalent to the change in momentum (yielding the impulse-momentum theorem).
Similarly, integrating with respect to position gives a definition for the work done by a force:: 13-3  W = ∫ x 1 x 2 F ⋅ d x , {\displaystyle W=\int _{\mathbf {x} _{1}}^{\mathbf {x} _{2}}{\mathbf {F} \cdot {\mathrm {d} \mathbf {x} }},} which is equivalent to changes in kinetic energy (yielding the work–energy theorem).: 13-3  Power P is the rate of change dW/dt of the work W, as the trajectory is extended by a position change d x {\displaystyle d\mathbf {x} } in a time interval dt:: 13-2  d W = d W d x ⋅ d x = F ⋅ d x , {\displaystyle \mathrm {d} W={\frac {\mathrm {d} W}{\mathrm {d} \mathbf {x} }}\cdot \mathrm {d} \mathbf {x} =\mathbf {F} \cdot \mathrm {d} \mathbf {x} ,} so P = d W d t = d W d x ⋅ d x d t = F ⋅ v , {\displaystyle P={\frac {\mathrm {d} W}{\mathrm {d} t}}={\frac {\mathrm {d} W}{\mathrm {d} \mathbf {x} }}\cdot {\frac {\mathrm {d} \mathbf {x} }{\mathrm {d} t}}=\mathbf {F} \cdot \mathbf {v} ,} with v = d x / d t {\displaystyle \mathbf {v} =\mathrm {d} \mathbf {x} /\mathrm {d} t} the velocity. === Potential energy === Instead of a force, often the mathematically related concept of a potential energy field is used. For instance, the gravitational force acting upon an object can be seen as the action of the gravitational field that is present at the object's location. Restating mathematically the definition of energy (via the definition of work), a potential scalar field U ( r ) {\displaystyle U(\mathbf {r} )} is defined as that field whose gradient is equal and opposite to the force produced at every point: F = − ∇ U . {\displaystyle \mathbf {F} =-\mathbf {\nabla } U.} Forces can be classified as conservative or nonconservative. Conservative forces are equivalent to the gradient of a potential while nonconservative forces are not.: ch.12  === Conservation === A conservative force that acts on a closed system has an associated mechanical work that allows energy to convert only between kinetic or potential forms.
This means that for a closed system, the net mechanical energy is conserved whenever a conservative force acts on the system. The force, therefore, is related directly to the difference in potential energy between two different locations in space, and can be considered to be an artifact of the potential field in the same way that the direction and amount of a flow of water can be considered to be an artifact of the contour map of the elevation of an area.: ch.12  Conservative forces include gravity, the electromagnetic force, and the spring force. Each of these forces has models that depend on position, often given as a radial vector r {\displaystyle \mathbf {r} } emanating from spherically symmetric potentials. Examples of this follow: For gravity: F g = − G m 1 m 2 r 2 r ^ , {\displaystyle \mathbf {F} _{\text{g}}=-{\frac {Gm_{1}m_{2}}{r^{2}}}{\hat {\mathbf {r} }},} where G {\displaystyle G} is the gravitational constant, and m n {\displaystyle m_{n}} is the mass of object n. For electrostatic forces: F e = q 1 q 2 4 π ε 0 r 2 r ^ , {\displaystyle \mathbf {F} _{\text{e}}={\frac {q_{1}q_{2}}{4\pi \varepsilon _{0}r^{2}}}{\hat {\mathbf {r} }},} where ε 0 {\displaystyle \varepsilon _{0}} is the electric permittivity of free space, and q n {\displaystyle q_{n}} is the electric charge of object n. For spring forces: F s = − k r r ^ , {\displaystyle \mathbf {F} _{\text{s}}=-kr{\hat {\mathbf {r} }},} where k {\displaystyle k} is the spring constant.: ch.12  For certain physical scenarios, it is impossible to model forces as being due to a simple gradient of potentials. This is often due to a macroscopic statistical average of microstates. For example, static friction is caused by the gradients of numerous electrostatic potentials between the atoms, but manifests as a force model that is independent of any macroscale position vector. Nonconservative forces other than friction include other contact forces, tension, compression, and drag.
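The three spherically symmetric force models above can be evaluated directly; a sketch with illustrative inputs and approximate CODATA constants (a negative sign means attraction along the radial unit vector):

```python
import math

# Radial components of the three conservative-force models.
# All numerical inputs are illustrative; constants are approximate.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
eps0 = 8.854e-12     # vacuum permittivity, F/m

def gravity(m1, m2, r):
    return -G * m1 * m2 / r**2                    # always attractive

def coulomb(q1, q2, r):
    return q1 * q2 / (4 * math.pi * eps0 * r**2)  # repulsive for like charges

def spring(k, r):
    return -k * r                                 # Hooke's law restoring force

print(gravity(5.97e24, 7.35e22, 3.84e8))  # Earth-Moon pull, ~ -2e20 N
print(coulomb(1e-6, 1e-6, 0.01))          # two 1 uC charges at 1 cm: ~ 90 N
print(spring(200.0, 0.05))                # -10.0 N
```

Friction, by contrast, cannot be written as the gradient of any single macroscopic potential, which is the point the surrounding text makes.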
For any sufficiently detailed description, all these forces are the results of conservative ones, since each of these macroscopic forces is the net result of the gradients of microscopic potentials.: ch.12  The connection between macroscopic nonconservative forces and microscopic conservative forces is described by detailed treatment with statistical mechanics. In macroscopic closed systems, nonconservative forces act to change the internal energies of the system, and are often associated with the transfer of heat. According to the second law of thermodynamics, nonconservative forces necessarily result in energy transformations within closed systems from ordered to more random conditions as entropy increases.: ch.12  == Units == The SI unit of force is the newton (symbol N), which is the force required to accelerate a one-kilogram mass at a rate of one meter per second squared, or kg·m·s−2. The corresponding CGS unit is the dyne, the force required to accelerate a one-gram mass by one centimeter per second squared, or g·cm·s−2. A newton is thus equal to 100,000 dynes. The gravitational foot-pound-second English unit of force is the pound-force (lbf), defined as the force exerted by gravity on a pound-mass in the standard gravitational field of 9.80665 m·s−2. The pound-force provides an alternative unit of mass: one slug is the mass that will accelerate by one foot per second squared when acted on by one pound-force. An alternative unit of force in a different foot–pound–second system, the absolute fps system, is the poundal, defined as the force required to accelerate a one-pound mass at a rate of one foot per second squared. The pound-force has a metric counterpart, less commonly used than the newton: the kilogram-force (kgf) (sometimes kilopond) is the force exerted by standard gravity on one kilogram of mass.
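The unit relationships in this section reduce to a few exact defining constants; a quick numeric sketch:

```python
# Force-unit conversions from their defining constants.
# Standard gravity and the pound-mass are exact by definition.
g0 = 9.80665         # standard gravity, m/s^2
lb = 0.45359237      # pound-mass in kg (exact)
ft = 0.3048          # foot in m (exact)

newton_in_dynes = 1.0 * 1000 * 100   # kg -> g, m -> cm
lbf_in_newtons = lb * g0             # pound-force
kgf_in_newtons = 1.0 * g0            # kilogram-force
poundal_in_newtons = lb * ft         # 1 lb accelerated at 1 ft/s^2

print(newton_in_dynes)    # -> 100000.0
print(lbf_in_newtons)     # ~ 4.448 N
print(kgf_in_newtons)     # -> 9.80665 N
print(poundal_in_newtons) # ~ 0.1383 N
```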
The kilogram-force leads to an alternate, but rarely used, unit of mass: the metric slug (sometimes mug or hyl) is that mass that accelerates at 1 m·s−2 when subjected to a force of 1 kgf. The kilogram-force is not a part of the modern SI system and is generally deprecated, though it is still sometimes used for expressing aircraft weight, jet thrust, bicycle spoke tension, torque wrench settings and engine output torque. See also Ton-force. == Revisions of the force concept == At the beginning of the 20th century, new physical ideas emerged to explain experimental results in astronomical and submicroscopic realms. As discussed below, relativity alters the definition of momentum and quantum mechanics reuses the concept of "force" in microscopic contexts where Newton's laws do not apply directly. === Special theory of relativity === In the special theory of relativity, mass and energy are equivalent (as can be seen by calculating the work required to accelerate an object). When an object's velocity increases, so does its energy and hence its mass equivalent (inertia). It thus requires more force to accelerate it the same amount than it did at a lower velocity. Newton's second law, F = d p d t , {\displaystyle \mathbf {F} ={\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}},} remains valid because it is a mathematical definition.: 855–876  But for momentum to be conserved at relativistic relative velocity, v {\displaystyle v} , momentum must be redefined as: p = m 0 v 1 − v 2 / c 2 , {\displaystyle \mathbf {p} ={\frac {m_{0}\mathbf {v} }{\sqrt {1-v^{2}/c^{2}}}},} where m 0 {\displaystyle m_{0}} is the rest mass and c {\displaystyle c} the speed of light.
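A short sketch comparing the redefined momentum p = γm₀v with the classical m₀v; the rest mass and sampled speeds are illustrative:

```python
import math

# Relativistic vs. classical momentum. The ratio is exactly the
# Lorentz factor gamma, which diverges as v approaches c.
c = 299_792_458.0                 # speed of light, m/s

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

m0 = 1.0                          # rest mass, kg (illustrative)
for frac in (0.1, 0.5, 0.99):
    v = frac * c
    ratio = (gamma(v) * m0 * v) / (m0 * v)   # p_relativistic / p_classical
    print(f"v = {frac:4.2f}c  gamma = {gamma(v):6.3f}  ratio = {ratio:6.3f}")
# gamma grows without bound as v -> c, which is why v can never reach c.
```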
The expression relating force and acceleration for a particle with constant non-zero rest mass m {\displaystyle m} moving in the x {\displaystyle x} direction at velocity v {\displaystyle v} is:: 216  F = ( γ 3 m a x , γ m a y , γ m a z ) , {\displaystyle \mathbf {F} =\left(\gamma ^{3}ma_{x},\gamma ma_{y},\gamma ma_{z}\right),} where γ = 1 1 − v 2 / c 2 {\displaystyle \gamma ={\frac {1}{\sqrt {1-v^{2}/c^{2}}}}} is the Lorentz factor. The Lorentz factor increases steeply as the relative velocity approaches the speed of light. Consequently, greater and greater force must be applied to produce the same acceleration at extreme velocity. The relative velocity cannot reach c {\displaystyle c} .: 26 : §15–8  If v {\displaystyle v} is very small compared to c {\displaystyle c} , then γ {\displaystyle \gamma } is very close to 1 and F = m a {\displaystyle \mathbf {F} =m\mathbf {a} } is a close approximation. Even for use in relativity, one can restore the form of F μ = m A μ {\displaystyle F^{\mu }=mA^{\mu }} through the use of four-vectors. This relation is correct in relativity when F μ {\displaystyle F^{\mu }} is the four-force, m {\displaystyle m} is the invariant mass, and A μ {\displaystyle A^{\mu }} is the four-acceleration. The general theory of relativity incorporates a more radical departure from the Newtonian way of thinking about force, specifically gravitational force. This reimagining of the nature of gravity is described more fully below. === Quantum mechanics === Quantum mechanics is a theory of physics originally developed in order to understand microscopic phenomena: behavior at the scale of molecules, atoms or subatomic particles. Generally and loosely speaking, the smaller a system is, the more an adequate mathematical model will require understanding quantum effects. The conceptual underpinning of quantum physics is different from that of classical physics.
Instead of thinking about quantities like position, momentum, and energy as properties that an object has, one considers what result might appear when a measurement of a chosen type is performed. Quantum mechanics allows the physicist to calculate the probability that a chosen measurement will elicit a particular result. The expectation value for a measurement is the average of the possible results it might yield, weighted by their probabilities of occurrence. In quantum mechanics, interactions are typically described in terms of energy rather than force. The Ehrenfest theorem provides a connection between quantum expectation values and the classical concept of force, a connection that is necessarily inexact, as quantum physics is fundamentally different from classical. In quantum physics, the Born rule is used to calculate the expectation values of a position measurement or a momentum measurement. These expectation values will generally change over time; that is, depending on the time at which (for example) a position measurement is performed, the probabilities for its different possible outcomes will vary. The Ehrenfest theorem says, roughly speaking, that the equations describing how these expectation values change over time have a form reminiscent of Newton's second law, with a force defined as the negative derivative of the potential energy. However, the more pronounced quantum effects are in a given situation, the more difficult it is to derive meaningful conclusions from this resemblance. Quantum mechanics also introduces two new constraints that interact with forces at the submicroscopic scale and which are especially important for atoms. Despite the strong attraction of the nucleus, the uncertainty principle limits the minimum extent of an electron probability distribution and the Pauli exclusion principle prevents electrons from sharing the same probability distribution. This gives rise to an emergent pressure known as degeneracy pressure. 
The dynamic equilibrium between the degeneracy pressure and the attractive electromagnetic force give atoms, molecules, liquids, and solids stability. === Quantum field theory === In modern particle physics, forces and the acceleration of particles are explained as a mathematical by-product of exchange of momentum-carrying gauge bosons. With the development of quantum field theory and general relativity, it was realized that force is a redundant concept arising from conservation of momentum (4-momentum in relativity and momentum of virtual particles in quantum electrodynamics). The conservation of momentum can be directly derived from the homogeneity or symmetry of space and so is usually considered more fundamental than the concept of a force. Thus the currently known fundamental forces are considered more accurately to be "fundamental interactions".: 199–128  While sophisticated mathematical descriptions are needed to predict, in full detail, the result of such interactions, there is a conceptually simple way to describe them through the use of Feynman diagrams. In a Feynman diagram, each matter particle is represented as a straight line (see world line) traveling through time, which normally increases up or to the right in the diagram. Matter and anti-matter particles are identical except for their direction of propagation through the Feynman diagram. World lines of particles intersect at interaction vertices, and the Feynman diagram represents any force arising from an interaction as occurring at the vertex with an associated instantaneous change in the direction of the particle world lines. Gauge bosons are emitted away from the vertex as wavy lines and, in the case of virtual particle exchange, are absorbed at an adjacent vertex. The utility of Feynman diagrams is that other types of physical phenomena that are part of the general picture of fundamental interactions but are conceptually separate from forces can also be described using the same rules. 
For example, a Feynman diagram can describe in succinct detail how a neutron decays into an electron, proton, and antineutrino, an interaction mediated by the same gauge boson that is responsible for the weak nuclear force. == Fundamental interactions == All of the known forces of the universe are classified into four fundamental interactions. The strong and the weak forces act only at very short distances, and are responsible for the interactions between subatomic particles, including nucleons and compound nuclei. The electromagnetic force acts between electric charges, and the gravitational force acts between masses. All other forces in nature derive from these four fundamental interactions operating within quantum mechanics, including the constraints introduced by the Schrödinger equation and the Pauli exclusion principle. For example, friction is a manifestation of the electromagnetic force acting between atoms of two surfaces. The forces in springs, modeled by Hooke's law, are also the result of electromagnetic forces. Centrifugal forces are acceleration forces that arise simply from the acceleration of rotating frames of reference.: 12-11 : 359  The fundamental theories for forces developed from the unification of different ideas. For example, Newton's universal theory of gravitation showed that the force responsible for objects falling near the surface of the Earth is also the force responsible for the falling of celestial bodies about the Earth (the Moon) and around the Sun (the planets). Michael Faraday and James Clerk Maxwell demonstrated that electric and magnetic forces were unified through a theory of electromagnetism. In the 20th century, the development of quantum mechanics led to a modern understanding that the first three fundamental forces (all except gravity) are manifestations of matter (fermions) interacting by exchanging virtual particles called gauge bosons. 
This Standard Model of particle physics assumes a similarity between the forces and led scientists to predict the unification of the weak and electromagnetic forces in electroweak theory, which was subsequently confirmed by observation. === Gravitational === Newton's law of gravitation is an example of action at a distance: one body, like the Sun, exerts an influence upon any other body, like the Earth, no matter how far apart they are. Moreover, this action at a distance is instantaneous. According to Newton's theory, the one body shifting position changes the gravitational pulls felt by all other bodies, all at the same instant of time. Albert Einstein recognized that this was inconsistent with special relativity and its prediction that influences cannot travel faster than the speed of light. So, he sought a new theory of gravitation that would be relativistically consistent. Mercury's orbit did not match that predicted by Newton's law of gravitation. Some astrophysicists predicted the existence of an undiscovered planet (Vulcan) that could explain the discrepancies. When Einstein formulated his theory of general relativity (GR) he focused on Mercury's problematic orbit and found that his theory added a correction, which could account for the discrepancy. This was the first time that Newton's theory of gravity had been shown to be inexact. Since then, general relativity has been acknowledged as the theory that best explains gravity. In GR, gravitation is not viewed as a force, but rather, objects moving freely in gravitational fields travel under their own inertia in straight lines through curved spacetime – defined as the shortest spacetime path between two spacetime events. From the perspective of the object, all motion occurs as if there were no gravitation whatsoever. It is only when observing the motion in a global sense that the curvature of spacetime can be observed and the force is inferred from the object's curved path. 
Thus, the straight line path in spacetime is seen as a curved line in space, and it is called the ballistic trajectory of the object. For example, a basketball thrown from the ground moves in a parabola, as it is in a uniform gravitational field. Its spacetime trajectory is almost a straight line, slightly curved (with the radius of curvature of the order of few light-years). The time derivative of the changing momentum of the object is what we label as "gravitational force". === Electromagnetic === Maxwell's equations and the set of techniques built around them adequately describe a wide range of physics involving force in electricity and magnetism. This classical theory already includes relativity effects. Understanding quantized electromagnetic interactions between elementary particles requires quantum electrodynamics (QED). In QED, photons are fundamental exchange particles, describing all interactions relating to electromagnetism including the electromagnetic force. === Strong nuclear === There are two "nuclear forces", which today are usually described as interactions that take place in quantum theories of particle physics. The strong nuclear force is the force responsible for the structural integrity of atomic nuclei, and gains its name from its ability to overpower the electromagnetic repulsion between protons.: 940  The strong force is today understood to represent the interactions between quarks and gluons as detailed by the theory of quantum chromodynamics (QCD). The strong force is the fundamental force mediated by gluons, acting upon quarks, antiquarks, and the gluons themselves. The strong force only acts directly upon elementary particles. A residual is observed between hadrons (notably, the nucleons in atomic nuclei), known as the nuclear force. Here the strong force acts indirectly, transmitted as gluons that form part of the virtual pi and rho mesons, the classical transmitters of the nuclear force. 
The failure of many searches for free quarks has shown that the elementary particles affected are not directly observable. This phenomenon is called color confinement.: 232  === Weak nuclear === Unique among the fundamental interactions, the weak nuclear force creates no bound states. The weak force is due to the exchange of the heavy W and Z bosons. Since the weak force is mediated by two types of bosons, it can be divided into two types of interaction or "vertices" — charged current, involving the electrically charged W+ and W− bosons, and neutral current, involving electrically neutral Z0 bosons. The most familiar effect of weak interaction is beta decay (of neutrons in atomic nuclei) and the associated radioactivity.: 951  This is a type of charged-current interaction. The word "weak" derives from the fact that the field strength is some 10^13 times less than that of the strong force. Still, it is stronger than gravity over short distances. A consistent electroweak theory has also been developed, which shows that electromagnetic forces and the weak force are indistinguishable at temperatures in excess of approximately 10^15 K. Such temperatures occurred in the plasma collisions in the early moments of the Big Bang.: 201  == See also == Contact force – Force between two objects that are in physical contact Force control – Force control is given by the machine Force gauge – Instrument for measuring force Orders of magnitude (force) – Comparison of a wide range of physical forces Parallel force system – Situation in mechanical engineering Rigid body – Physical object which does not deform when forces or moments are exerted on it Specific force – Concept in physics == References == == External links == "Classical Mechanics, Week 2: Newton's Laws". MIT OpenCourseWare. Retrieved 2023-08-09. "Fundamentals of Physics I, Lecture 3: Newton's Laws of Motion". Open Yale Courses. Retrieved 2023-08-09.
Wikipedia/Forces
A high-frequency approximation (or "high energy approximation") for scattering or other wave propagation problems, in physics or engineering, is an approximation whose accuracy increases with the size of features on the scatterer or medium relative to the wavelength of the scattered particles. Classical mechanics and geometric optics are the most common and extreme high-frequency approximations, in which the wave or field properties of, respectively, quantum mechanics and electromagnetism are neglected entirely. Less extreme approximations include the WKB approximation, physical optics, the geometric theory of diffraction, the uniform theory of diffraction, and the physical theory of diffraction. When these are used to approximate quantum mechanics, they are called semiclassical approximations. == See also == Electromagnetic modeling == References ==
Wikipedia/High_frequency_approximation
In physics, energy density is the quotient between the amount of energy stored in a given system or contained in a given region of space and the volume of the system or region considered. Often only the useful or extractable energy is measured. It is sometimes confused with stored energy per unit mass, which is called specific energy or gravimetric energy density. There are different types of energy stored, corresponding to a particular type of reaction. In order of the typical magnitude of the energy stored, examples of reactions are: nuclear, chemical (including electrochemical), electrical, pressure, material deformation or in electromagnetic fields. Nuclear reactions take place in stars and nuclear power plants, both of which derive energy from the binding energy of nuclei. Chemical reactions are used by organisms to derive energy from food and by automobiles from the combustion of gasoline. Liquid hydrocarbons (fuels such as gasoline, diesel and kerosene) are today the densest way known to economically store and transport chemical energy at a large scale (1 kg of diesel fuel burns with the oxygen contained in ≈ 15 kg of air). Burning local biomass fuels supplies household energy needs (cooking fires, oil lamps, etc.) worldwide. Electrochemical reactions are used by devices such as laptop computers and mobile phones to release energy from batteries. Energy per unit volume has the same physical units as pressure, and in many situations is synonymous. For example, the energy density of a magnetic field may be expressed as u = B 2 / 2 μ 0 {\displaystyle u=B^{2}/2\mu _{0}} and behaves like a physical pressure. The energy required to compress a gas to a certain volume may be determined by multiplying the difference between the gas pressure and the external pressure by the change in volume. A pressure gradient describes the potential to perform work on the surroundings by converting internal energy to work until equilibrium is reached.
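Two of the quantitative statements above can be checked numerically: the energy density of a magnetic field in vacuum, u = B²/(2μ₀) (a standard result), has the units of pressure, and compressing a gas against an external pressure stores roughly Δp·ΔV. All numbers below are illustrative:

```python
import math

# Magnetic energy density acts like a pressure: J/m^3 is the same unit as Pa.
mu0 = 4 * math.pi * 1e-7          # vacuum permeability, H/m
B = 1.0                           # tesla, a strong laboratory magnet
u = B**2 / (2 * mu0)              # J/m^3, numerically a pressure in Pa
print(u)                          # ~ 3.98e5 J/m^3, about 4 atmospheres

# Energy stored by compressing a gas against the ambient pressure.
p_gas, p_ext, dV = 5.0e5, 1.0e5, 0.01   # Pa, Pa, m^3 (illustrative)
print((p_gas - p_ext) * dV)       # -> 4000.0 J
```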
In cosmological and other contexts in general relativity, the energy densities considered relate to the elements of the stress–energy tensor and therefore do include the rest mass energy as well as energy densities associated with pressure. == Chemical energy == When discussing the chemical energy contained, there are different types which can be quantified depending on the intended purpose. One is the theoretical total amount of thermodynamic work that can be derived from a system, at a given temperature and pressure imposed by the surroundings, called exergy. Another is the theoretical amount of electrical energy that can be derived from reactants that are at room temperature and atmospheric pressure. This is given by the change in standard Gibbs free energy. But as a source of heat or for use in a heat engine, the relevant quantity is the change in standard enthalpy or the heat of combustion. There are two kinds of heat of combustion: The higher value (HHV), or gross heat of combustion, includes all the heat released as the products cool to room temperature and whatever water vapor is present condenses. The lower value (LHV), or net heat of combustion, does not include the heat which could be released by condensing water vapor, and may not include the heat released on cooling all the way down to room temperature. A convenient table of HHV and LHV of some fuels can be found in the references. === In energy storage and fuels === For energy storage, the energy density relates the stored energy to the volume of the storage equipment, e.g. the fuel tank. The higher the energy density of the fuel, the more energy may be stored or transported for the same amount of volume. The energy of a fuel per unit mass is called its specific energy. The adjacent figure shows the gravimetric and volumetric energy density of some fuels and storage technologies (modified from the Gasoline article). Some values may not be precise because of isomers or other irregularities. 
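The HHV/LHV distinction can be made concrete with methane; the heating value, the latent heat of water, and the 2.25 kg of water formed per kg of fuel are approximate figures assumed for illustration:

```python
# LHV = HHV minus the latent heat of the water vapor formed in combustion.
h_vap = 2.441            # latent heat of water at 25 C, MJ/kg (approx.)
hhv_methane = 55.5       # gross heat of combustion of CH4, MJ/kg (approx.)
water_per_fuel = 2.25    # CH4 + 2 O2 -> CO2 + 2 H2O: 36/16 kg water per kg fuel

lhv_est = hhv_methane - water_per_fuel * h_vap
print(round(lhv_est, 1)) # -> 50.0, close to the tabulated LHV of methane
```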
The heating values of fuels describe their specific energies more comprehensively. The density values for chemical fuels do not include the weight of the oxygen required for combustion. The atomic weights of carbon and oxygen are similar, while hydrogen is much lighter. Figures are presented in this way for those fuels where in practice air would only be drawn in locally to the burner. This explains the apparently lower energy density of materials that contain their own oxidizer (such as gunpowder and TNT), where the mass of the oxidizer in effect adds weight, and absorbs some of the energy of combustion to dissociate and liberate oxygen to continue the reaction. This also explains some apparent anomalies, such as the energy density of a sandwich appearing to be higher than that of a stick of dynamite. Given the high energy density of gasoline, the exploration of alternative media to store the energy for powering a car, such as hydrogen or batteries, is strongly limited by the energy density of the alternative medium. The same mass of lithium-ion storage, for example, would result in a car with only 2% of the range of its gasoline counterpart. If sacrificing the range is undesirable, much more storage volume is necessary. Alternative options for energy storage, such as supercapacitors, are discussed as ways to increase energy density and decrease charging time. No single energy storage method boasts the best in specific power, specific energy, and energy density. Peukert's law describes how the amount of useful energy that can be obtained (for a lead-acid cell) depends on how quickly it is pulled out. === Efficiency === In general an engine will generate less kinetic energy due to inefficiencies and thermodynamic considerations—hence the specific fuel consumption of an engine will always be greater than its rate of production of the kinetic energy of motion.
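Peukert's law, mentioned above, is commonly written t = H·(C/(I·H))^k; a hedged sketch with illustrative lead-acid values:

```python
# Peukert's law for a lead-acid cell: higher discharge currents yield
# disproportionately less usable energy. H, C, k below are illustrative.
H, C, k = 20.0, 100.0, 1.2      # rated hours, rated amp-hours, Peukert constant

def discharge_time(current_amps):
    return H * (C / (current_amps * H)) ** k   # hours until discharge

print(discharge_time(5.0))            # at the rated 5 A: 20.0 h
print(round(discharge_time(10.0), 1)) # doubling the current: ~8.7 h, not 10 h
```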
Energy density differs from energy conversion efficiency (net output per input) or embodied energy (the energy output costs to provide, as harvesting, refining, distributing, and dealing with pollution all use energy). Large scale, intensive energy use impacts and is impacted by climate, waste storage, and environmental consequences. == Nuclear energy == The greatest energy source by far is matter itself, according to the mass–energy equivalence. This energy is described by E = mc2, where c is the speed of light. In terms of density, m = ρV, where ρ is the volumetric mass density, V is the volume occupied by the mass. This energy can be released by the processes of nuclear fission (~ 0.1%), nuclear fusion (~ 1%), or the annihilation of some or all of the matter in the volume V by matter–antimatter collisions (100%). The most effective ways of accessing this energy, aside from antimatter, are fusion and fission. Fusion is the process by which the sun produces energy which will be available for billions of years (in the form of sunlight and heat). However as of 2024, sustained fusion power production continues to be elusive. Power from fission in nuclear power plants (using uranium and thorium) will be available for at least many decades or even centuries because of the plentiful supply of the elements on earth, though the full potential of this source can only be realized through breeder reactors, which are, apart from the BN-600 reactor, not yet used commercially. === Fission reactors === Nuclear fuels typically have volumetric energy densities at least tens of thousands of times higher than chemical fuels. A 1 inch tall uranium fuel pellet is equivalent to about 1 ton of coal, 120 gallons of crude oil, or 17,000 cubic feet of natural gas. In light-water reactors, 1 kg of natural uranium – following a corresponding enrichment and used for power generation– is equivalent to the energy content of nearly 10,000 kg of mineral oil or 14,000 kg of coal. 
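The mass-energy figures quoted above follow directly from E = mc²; a rough sketch, where the 0.1% and 1% release fractions are the ones quoted in the text and the oil comparison assumes ≈ 42 MJ/kg:

```python
# Mass-energy arithmetic for one kilogram of matter.
c = 2.998e8                      # speed of light, m/s
E_total = 1.0 * c**2             # full annihilation of 1 kg, J
E_fission = 0.001 * E_total      # ~0.1% released in fission
E_fusion = 0.01 * E_total        # ~1% released in fusion
print(E_total)                   # ~ 9.0e16 J
print(E_fission / 42e6)          # ~ 2e6 kg of oil-equivalent per kg fissioned
```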
Comparatively, coal, gas, and petroleum are the current primary energy sources in the U.S. but have a much lower energy density. The density of thermal energy contained in the core of a light-water reactor (pressurized water reactor (PWR) or boiling water reactor (BWR)) of typically 1 GW (1000 MW electrical corresponding to ≈ 3000 MW thermal) is in the range of 10 to 100 MW of thermal energy per cubic meter of cooling water depending on the location considered in the system (the core itself (≈ 30 m3), the reactor pressure vessel (≈ 50 m3), or the whole primary circuit (≈ 300 m3)). This represents a considerable density of energy that requires a continuous water flow at high velocity at all times in order to remove heat from the core, even after an emergency shutdown of the reactor. The incapacity to cool the cores of three BWRs at Fukushima after the 2011 tsunami and the resulting loss of external electrical power and cold source caused the meltdown of the three cores in only a few hours, even though the three reactors were correctly shut down just after the Tōhoku earthquake. This extremely high power density distinguishes nuclear power plants (NPP's) from any thermal power plants (burning coal, fuel or gas) or any chemical plants and explains the large redundancy required to permanently control the neutron reactivity and to remove the residual heat from the core of NPP's. === Antimatter–matter annihilation === Because antimatter–matter interactions result in complete conversion of the rest mass to radiant energy, the energy density of this reaction depends on the density of the matter and antimatter used. A neutron star would approximate the most dense system capable of matter-antimatter annihilation. A black hole, although denser than a neutron star, does not have an equivalent anti-particle form, but would offer the same 100% conversion rate of mass to energy in the form of Hawking radiation. 
Even in the case of relatively small black holes (smaller than astronomical objects) the power output would be tremendous. == Electric and magnetic fields == Electric and magnetic fields can store energy, and the energy density relates to the strength of the fields within a given volume. This (volumetric) energy density is given by u = ε 2 E 2 + 1 2 μ B 2 {\displaystyle u={\frac {\varepsilon }{2}}\mathbf {E} ^{2}+{\frac {1}{2\mu }}\mathbf {B} ^{2}} where E is the electric field, B is the magnetic field, and ε and µ are the permittivity and permeability of the surroundings respectively. The SI unit is the joule per cubic metre. In ideal (linear and nondispersive) substances, the energy density is u = 1 2 ( E ⋅ D + H ⋅ B ) {\displaystyle u={\frac {1}{2}}(\mathbf {E} \cdot \mathbf {D} +\mathbf {H} \cdot \mathbf {B} )} where D is the electric displacement field and H is the magnetizing field. In the absence of magnetic fields, by exploiting Fröhlich's relationships it is also possible to extend these equations to anisotropic and nonlinear dielectrics, as well as to calculate the correlated Helmholtz free energy and entropy densities. In the context of magnetohydrodynamics, the physics of conductive fluids, the magnetic energy density behaves like an additional pressure that adds to the gas pressure of a plasma. === Pulsed sources === When a pulsed laser impacts a surface, the radiant exposure, i.e. the energy deposited per unit of surface, may also be called energy density or fluence. == Table of material energy densities == The following unit conversions may be helpful when considering the data in the tables: 3.6 MJ = 1 kW⋅h ≈ 1.34 hp⋅h. Since 1 J = 10−6 MJ and 1 m3 = 103 L, divide joule/m3 by 109 to get MJ/L = GJ/m3. Divide MJ/L by 3.6 to get kW⋅h/L. === Chemical reactions (oxidation) === Unless otherwise stated, the values in the following table are lower heating values for perfect combustion, not counting oxidizer mass or volume.
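The unit conversions just listed can be verified numerically; the sample energy density (gasoline-like, about 34 GJ/m³) is an illustrative assumption:

```python
# Converting a volumetric energy density through the units used in the tables.
J_per_m3 = 34.2e9             # a fuel's energy density in J/m^3 (illustrative)
MJ_per_L = J_per_m3 / 1e9     # divide by 10^9: MJ/L, numerically equal to GJ/m^3
kWh_per_L = MJ_per_L / 3.6    # 3.6 MJ = 1 kW*h
hph_per_L = kWh_per_L * 1.34  # 1 kW*h ~ 1.34 hp*h
print(MJ_per_L)               # -> 34.2 MJ/L
print(round(kWh_per_L, 2))    # -> 9.5 kW*h/L
```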
When used to produce electricity in a fuel cell or to do work, it is the Gibbs free energy of reaction (ΔG) that sets the theoretical upper limit. If the produced H2O is vapor, this is generally greater than the lower heat of combustion, whereas if the produced H2O is liquid, it is generally less than the higher heat of combustion. But in the most relevant case of hydrogen, ΔG is 113 MJ/kg if water vapor is produced, and 118 MJ/kg if liquid water is produced, both being less than the lower heat of combustion (120 MJ/kg). === Electrochemical reactions (batteries) === ==== Common battery formats ==== === Nuclear reactions === === In material deformation === The mechanical energy storage capacity, or resilience, of a Hookean material when it is deformed to the point of failure can be computed as the tensile strength times the maximum elongation, divided by two. The maximum elongation of a Hookean material can be computed by dividing its ultimate tensile strength by its stiffness. The following table lists these values computed using the Young's modulus as the measure of stiffness: === Other release mechanisms === == See also == == References == == Further reading == The Inflationary Universe: The Quest for a New Theory of Cosmic Origins by Alan H. Guth (1998) ISBN 0-201-32840-2 Cosmological Inflation and Large-Scale Structure by Andrew R. Liddle, David H. Lyth (2000) ISBN 0-521-57598-2 Richard Becker, "Electromagnetic Fields and Interactions", Dover Publications Inc., 1964 "Aircraft Fuels". Energy, Technology and the Environment, ed. Attilio Bisio. Vol. 1. New York: John Wiley and Sons, Inc., 1995. 257–259 "Fuels of the Future for Cars and Trucks" – Dr. James J. Eberhardt – Energy Efficiency and Renewable Energy, U.S. Department of Energy – 2002 Diesel Engine Emissions Reduction (DEER) Workshop, San Diego, California, August 25–29, 2002 "Heat values of various fuels – World Nuclear Association". www.world-nuclear.org. Retrieved 4 November 2018.
Wikipedia/Energy_density
Energy flux is the rate of transfer of energy through a surface. The quantity is defined in two different ways, depending on the context: Total rate of energy transfer (not per unit area); SI units: W = J⋅s⁻¹. Specific rate of energy transfer (total normalized per unit area); SI units: W⋅m⁻² = J⋅m⁻²⋅s⁻¹: This is a vector quantity, its components being determined in terms of the normal (perpendicular) direction to the surface of measurement. This is sometimes called energy flux density, to distinguish it from the first definition. Radiative flux, heat flux, and sound energy flux density (also sound intensity) are specific cases of this meaning. == See also == Energy flow (ecology) Flux Irradiance Poynting vector Stress–energy tensor Energy current == References ==
Wikipedia/Energy_flux
In physics, the Hamilton–Jacobi equation, named after William Rowan Hamilton and Carl Gustav Jacob Jacobi, is an alternative formulation of classical mechanics, equivalent to other formulations such as Newton's laws of motion, Lagrangian mechanics and Hamiltonian mechanics. The Hamilton–Jacobi equation is a formulation of mechanics in which the motion of a particle can be represented as a wave. In this sense, it fulfilled a long-held goal of theoretical physics (dating at least to Johann Bernoulli in the eighteenth century) of finding an analogy between the propagation of light and the motion of a particle. The wave equation followed by mechanical systems is similar to, but not identical with, the Schrödinger equation, as described below; for this reason, the Hamilton–Jacobi equation is considered the "closest approach" of classical mechanics to quantum mechanics. The qualitative form of this connection is called Hamilton's optico-mechanical analogy. In mathematics, the Hamilton–Jacobi equation is a necessary condition describing extremal geometry in generalizations of problems from the calculus of variations. It can be understood as a special case of the Hamilton–Jacobi–Bellman equation from dynamic programming. == Overview == The Hamilton–Jacobi equation is a first-order, non-linear partial differential equation for a system of particles at coordinates ⁠ q {\displaystyle \mathbf {q} } ⁠. The function H {\displaystyle H} is the system's Hamiltonian giving the system's energy. 
The solution of this equation is the action, ⁠ S {\displaystyle S} ⁠, called Hamilton's principal function.: 291  The solution can be related to the system Lagrangian L {\displaystyle \ {\mathcal {L}}\ } by an indefinite integral of the form used in the principle of least action:: 431  S = ∫ L d t + some constant {\displaystyle \ S=\int {\mathcal {L}}\ \mathrm {d} t+~{\mathsf {some\ constant}}~} Geometrical surfaces of constant action are perpendicular to system trajectories, creating a wavefront-like view of the system dynamics. This property of the Hamilton–Jacobi equation connects classical mechanics to quantum mechanics.: 175  == Mathematical formulation == === Notation === Boldface variables such as q {\displaystyle \mathbf {q} } represent a list of N {\displaystyle N} generalized coordinates, q = ( q 1 , q 2 , … , q N − 1 , q N ) {\displaystyle \mathbf {q} =(q_{1},q_{2},\ldots ,q_{N-1},q_{N})} A dot over a variable or list signifies the time derivative (see Newton's notation). For example, q ˙ = d q d t . {\displaystyle {\dot {\mathbf {q} }}={\frac {d\mathbf {q} }{dt}}.} The dot product notation between two lists of the same number of coordinates is a shorthand for the sum of the products of corresponding components, such as p ⋅ q = ∑ k = 1 N p k q k . {\displaystyle \mathbf {p} \cdot \mathbf {q} =\sum _{k=1}^{N}p_{k}q_{k}.} === The action functional (a.k.a. Hamilton's principal function) === ==== Definition ==== Let the Hessian matrix H L ( q , q ˙ , t ) = { ∂ 2 L / ∂ q ˙ i ∂ q ˙ j } i j {\textstyle H_{\mathcal {L}}(\mathbf {q} ,\mathbf {\dot {q}} ,t)=\left\{\partial ^{2}{\mathcal {L}}/\partial {\dot {q}}^{i}\partial {\dot {q}}^{j}\right\}_{ij}} be invertible.
The relation d d t ∂ L ∂ q ˙ i = ∑ j = 1 n ( ∂ 2 L ∂ q ˙ i ∂ q ˙ j q ¨ j + ∂ 2 L ∂ q ˙ i ∂ q j q ˙ j ) + ∂ 2 L ∂ q ˙ i ∂ t , i = 1 , … , n , {\displaystyle {\frac {d}{dt}}{\frac {\partial {\mathcal {L}}}{\partial {\dot {q}}^{i}}}=\sum _{j=1}^{n}\left({\frac {\partial ^{2}{\mathcal {L}}}{\partial {\dot {q}}^{i}\partial {\dot {q}}^{j}}}{\ddot {q}}^{j}+{\frac {\partial ^{2}{\mathcal {L}}}{\partial {\dot {q}}^{i}\partial {q}^{j}}}{\dot {q}}^{j}\right)+{\frac {\partial ^{2}{\mathcal {L}}}{\partial {\dot {q}}^{i}\partial t}},\qquad i=1,\ldots ,n,} shows that the Euler–Lagrange equations form a n × n {\displaystyle n\times n} system of second-order ordinary differential equations. Inverting the matrix H L {\displaystyle H_{\mathcal {L}}} transforms this system into q ¨ i = F i ( q , q ˙ , t ) , i = 1 , … , n . {\displaystyle {\ddot {q}}^{i}=F_{i}(\mathbf {q} ,\mathbf {\dot {q}} ,t),\ i=1,\ldots ,n.} Let a time instant t 0 {\displaystyle t_{0}} and a point q 0 ∈ M {\displaystyle \mathbf {q} _{0}\in M} in the configuration space be fixed. The existence and uniqueness theorems guarantee that, for every v 0 , {\displaystyle \mathbf {v} _{0},} the initial value problem with the conditions γ | τ = t 0 = q 0 {\displaystyle \gamma |_{\tau =t_{0}}=\mathbf {q} _{0}} and γ ˙ | τ = t 0 = v 0 {\displaystyle {\dot {\gamma }}|_{\tau =t_{0}}=\mathbf {v} _{0}} has a locally unique solution γ = γ ( τ ; t 0 , q 0 , v 0 ) . {\displaystyle \gamma =\gamma (\tau ;t_{0},\mathbf {q} _{0},\mathbf {v} _{0}).} Additionally, let there be a sufficiently small time interval ( t 0 , t 1 ) {\displaystyle (t_{0},t_{1})} such that extremals with different initial velocities v 0 {\displaystyle \mathbf {v} _{0}} would not intersect in M × ( t 0 , t 1 ) . 
{\displaystyle M\times (t_{0},t_{1}).} The latter means that, for any q ∈ M {\displaystyle \mathbf {q} \in M} and any t ∈ ( t 0 , t 1 ) , {\displaystyle t\in (t_{0},t_{1}),} there can be at most one extremal γ = γ ( τ ; t , t 0 , q , q 0 ) {\displaystyle \gamma =\gamma (\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0})} for which γ | τ = t 0 = q 0 {\displaystyle \gamma |_{\tau =t_{0}}=\mathbf {q} _{0}} and γ | τ = t = q . {\displaystyle \gamma |_{\tau =t}=\mathbf {q} .} Substituting γ = γ ( τ ; t , t 0 , q , q 0 ) {\displaystyle \gamma =\gamma (\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0})} into the action functional results in the Hamilton's principal function (HPF) where γ = γ ( τ ; t , t 0 , q , q 0 ) , {\displaystyle \gamma =\gamma (\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0}),} γ | τ = t 0 = q 0 , {\displaystyle \gamma |_{\tau =t_{0}}=\mathbf {q} _{0},} γ | τ = t = q . {\displaystyle \gamma |_{\tau =t}=\mathbf {q} .} === Formula for the momenta === The momenta are defined as the quantities p i ( q , q ˙ , t ) = ∂ L / ∂ q ˙ i . {\textstyle p_{i}(\mathbf {q} ,\mathbf {\dot {q}} ,t)=\partial {\mathcal {L}}/\partial {\dot {q}}^{i}.} This section shows that the dependency of p i {\displaystyle p_{i}} on q ˙ {\displaystyle \mathbf {\dot {q}} } disappears, once the HPF is known. Indeed, let a time instant t 0 {\displaystyle t_{0}} and a point q 0 {\displaystyle \mathbf {q} _{0}} in the configuration space be fixed. For every time instant t {\displaystyle t} and a point q , {\displaystyle \mathbf {q} ,} let γ = γ ( τ ; t , t 0 , q , q 0 ) {\displaystyle \gamma =\gamma (\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0})} be the (unique) extremal from the definition of the Hamilton's principal function ⁠ S {\displaystyle S} ⁠. Call v = def γ ˙ ( τ ; t , t 0 , q , q 0 ) | τ = t {\displaystyle \mathbf {v} \,{\stackrel {\text{def}}{=}}\,{\dot {\gamma }}(\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0})|_{\tau =t}} the velocity at ⁠ τ = t {\displaystyle \tau =t} ⁠. 
Then ∂ S ∂ q = p ( q , v , t ) . {\displaystyle {\frac {\partial S}{\partial \mathbf {q} }}=\mathbf {p} (\mathbf {q} ,\mathbf {v} ,t).} === Formula === Given the Hamiltonian H ( q , p , t ) {\displaystyle H(\mathbf {q} ,\mathbf {p} ,t)} of a mechanical system, the Hamilton–Jacobi equation is a first-order, non-linear partial differential equation for Hamilton's principal function S {\displaystyle S} , ∂ S ∂ t + H ( q , ∂ S ∂ q , t ) = 0. {\displaystyle {\frac {\partial S}{\partial t}}+H{\left(\mathbf {q} ,{\frac {\partial S}{\partial \mathbf {q} }},t\right)}=0.} Alternatively, as described below, the Hamilton–Jacobi equation may be derived from Hamiltonian mechanics by treating S {\displaystyle S} as the generating function for a canonical transformation of the classical Hamiltonian H = H ( q 1 , q 2 , … , q N ; p 1 , p 2 , … , p N ; t ) . {\displaystyle H=H(q_{1},q_{2},\ldots ,q_{N};p_{1},p_{2},\ldots ,p_{N};t).} The conjugate momenta correspond to the first derivatives of S {\displaystyle S} with respect to the generalized coordinates p k = ∂ S ∂ q k . {\displaystyle p_{k}={\frac {\partial S}{\partial q_{k}}}.} As a solution to the Hamilton–Jacobi equation, the principal function contains N + 1 {\displaystyle N+1} undetermined constants, the first N {\displaystyle N} of them denoted as α 1 , α 2 , … , α N {\displaystyle \alpha _{1},\,\alpha _{2},\dots ,\alpha _{N}} , and the last one coming from the integration of ∂ S ∂ t {\displaystyle {\frac {\partial S}{\partial t}}} . The relationship between p {\displaystyle \mathbf {p} } and q {\displaystyle \mathbf {q} } then describes the orbit in phase space in terms of these constants of motion. Furthermore, the quantities β k = ∂ S ∂ α k , k = 1 , 2 , … , N {\displaystyle \beta _{k}={\frac {\partial S}{\partial \alpha _{k}}},\quad k=1,2,\ldots ,N} are also constants of motion, and these equations can be inverted to find q {\displaystyle \mathbf {q} } as a function of all the α {\displaystyle \alpha } and β {\displaystyle \beta } constants and time.
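These relations can be checked concretely on the simplest case. For a 1-D free particle with H = p²/(2m), the standard textbook complete integral is S(q, t) = αq − α²t/(2m); a small numeric sketch (finite differences, arbitrary example values for m and α) verifies that S solves the HJE and that β = ∂S/∂α is a constant of motion:

```python
# Numerical check of the Hamilton-Jacobi relations for a 1-D free particle,
# H = p^2 / (2m), using the complete integral S = alpha*q - alpha**2 t / (2m).
m, alpha = 2.0, 3.0   # arbitrary example values
h = 1e-6              # finite-difference step

def S(q, t, a=alpha):
    return a*q - a**2 * t / (2*m)

def hje_residual(q, t):
    """dS/dt + H(q, dS/dq); should vanish if S solves the HJE."""
    dS_dt = (S(q, t + h) - S(q, t - h)) / (2*h)
    p = (S(q + h, t) - S(q - h, t)) / (2*h)   # p = dS/dq
    return dS_dt + p**2 / (2*m)

def beta(q, t):
    """beta = dS/dalpha; constant along the motion q(t) = q0 + (alpha/m) t."""
    return (S(q, t, alpha + h) - S(q, t, alpha - h)) / (2*h)

print(hje_residual(1.7, 0.4))                          # ~0: S solves the HJE
print(beta(0.5, 0.0), beta(0.5 + alpha/m * 2.0, 2.0))  # equal along the orbit
```

Inverting β = q − αt/m for q reproduces uniform motion q(t) = β + (α/m)t, exactly as the inversion step in the text describes.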
== Comparison with other formulations of mechanics == The Hamilton–Jacobi equation is a single, first-order partial differential equation for the function of the N {\displaystyle N} generalized coordinates q 1 , q 2 , … , q N {\displaystyle q_{1},\,q_{2},\dots ,q_{N}} and the time t {\displaystyle t} . The generalized momenta do not appear, except as derivatives of S {\displaystyle S} , the classical action. For comparison, in the equivalent Euler–Lagrange equations of motion of Lagrangian mechanics, the conjugate momenta also do not appear; however, those equations are a system of N {\displaystyle N} , generally second-order equations for the time evolution of the generalized coordinates. Similarly, Hamilton's equations of motion are another system of 2N first-order equations for the time evolution of the generalized coordinates and their conjugate momenta p 1 , p 2 , … , p N {\displaystyle p_{1},\,p_{2},\dots ,p_{N}} . Since the HJE is an equivalent expression of an integral minimization problem such as Hamilton's principle, the HJE can be useful in other problems of the calculus of variations and, more generally, in other branches of mathematics and physics, such as dynamical systems, symplectic geometry and quantum chaos. For example, the Hamilton–Jacobi equations can be used to determine the geodesics on a Riemannian manifold, an important variational problem in Riemannian geometry. 
However, as a computational tool, the partial differential equations are notoriously complicated to solve except when it is possible to separate the independent variables, in which case the HJE becomes computationally useful.: 444  == Derivation using a canonical transformation == Any canonical transformation involving a type-2 generating function G 2 ( q , P , t ) {\displaystyle G_{2}(\mathbf {q} ,\mathbf {P} ,t)} leads to the relations p = ∂ G 2 ∂ q , Q = ∂ G 2 ∂ P , K ( Q , P , t ) = H ( q , p , t ) + ∂ G 2 ∂ t {\displaystyle {\begin{aligned}&\mathbf {p} ={\frac {\partial G_{2}}{\partial \mathbf {q} }},\quad \mathbf {Q} ={\frac {\partial G_{2}}{\partial \mathbf {P} }},\quad \\&K(\mathbf {Q} ,\mathbf {P} ,t)=H(\mathbf {q} ,\mathbf {p} ,t)+{\frac {\partial G_{2}}{\partial t}}\end{aligned}}} and Hamilton's equations in terms of the new variables P , Q {\displaystyle \mathbf {P} ,\,\mathbf {Q} } and new Hamiltonian K {\displaystyle K} have the same form: P ˙ = − ∂ K ∂ Q , Q ˙ = + ∂ K ∂ P . {\displaystyle {\dot {\mathbf {P} }}=-{\partial K \over \partial \mathbf {Q} },\quad {\dot {\mathbf {Q} }}=+{\partial K \over \partial \mathbf {P} }.} To derive the HJE, a generating function G 2 ( q , P , t ) {\displaystyle G_{2}(\mathbf {q} ,\mathbf {P} ,t)} is chosen in such a way that it makes the new Hamiltonian K = 0 {\displaystyle K=0} . Hence, all its derivatives are also zero, and the transformed Hamilton's equations become trivial P ˙ = Q ˙ = 0 {\displaystyle {\dot {\mathbf {P} }}={\dot {\mathbf {Q} }}=0} so the new generalized coordinates and momenta are constants of motion. As they are constants, in this context the new generalized momenta P {\displaystyle \mathbf {P} } are usually denoted α 1 , α 2 , … , α N {\displaystyle \alpha _{1},\,\alpha _{2},\dots ,\alpha _{N}} , i.e.
P m = α m {\displaystyle P_{m}=\alpha _{m}} and the new generalized coordinates Q {\displaystyle \mathbf {Q} } are typically denoted as β 1 , β 2 , … , β N {\displaystyle \beta _{1},\,\beta _{2},\dots ,\beta _{N}} , so Q m = β m {\displaystyle Q_{m}=\beta _{m}} . Setting the generating function equal to Hamilton's principal function, plus an arbitrary constant A {\displaystyle A} : G 2 ( q , α , t ) = S ( q , t ) + A , {\displaystyle G_{2}(\mathbf {q} ,{\boldsymbol {\alpha }},t)=S(\mathbf {q} ,t)+A,} the HJE automatically arises p = ∂ G 2 ∂ q = ∂ S ∂ q → H ( q , p , t ) + ∂ G 2 ∂ t = 0 → H ( q , ∂ S ∂ q , t ) + ∂ S ∂ t = 0. {\displaystyle {\begin{aligned}&\mathbf {p} ={\frac {\partial G_{2}}{\partial \mathbf {q} }}={\frac {\partial S}{\partial \mathbf {q} }}\\[1ex]\rightarrow {}&H(\mathbf {q} ,\mathbf {p} ,t)+{\partial G_{2} \over \partial t}=0\\[1ex]\rightarrow {}&H{\left(\mathbf {q} ,{\frac {\partial S}{\partial \mathbf {q} }},t\right)}+{\partial S \over \partial t}=0.\end{aligned}}} When solved for S ( q , α , t ) {\displaystyle S(\mathbf {q} ,{\boldsymbol {\alpha }},t)} , these also give us the useful equations Q = β = ∂ S ∂ α , {\displaystyle \mathbf {Q} ={\boldsymbol {\beta }}={\partial S \over \partial {\boldsymbol {\alpha }}},} or written in components for clarity Q m = β m = ∂ S ( q , α , t ) ∂ α m . {\displaystyle Q_{m}=\beta _{m}={\frac {\partial S(\mathbf {q} ,{\boldsymbol {\alpha }},t)}{\partial \alpha _{m}}}.} Ideally, these N equations can be inverted to find the original generalized coordinates q {\displaystyle \mathbf {q} } as a function of the constants α , β , {\displaystyle {\boldsymbol {\alpha }},\,{\boldsymbol {\beta }},} and t {\displaystyle t} , thus solving the original problem. == Separation of variables == When the problem allows additive separation of variables, the HJE leads directly to constants of motion. For example, the time t can be separated if the Hamiltonian does not depend on time explicitly. 
In that case, the time derivative ∂ S ∂ t {\displaystyle {\frac {\partial S}{\partial t}}} in the HJE must be a constant, usually denoted ( − E {\displaystyle -E} ), giving the separated solution S = W ( q 1 , q 2 , … , q N ) − E t {\displaystyle S=W(q_{1},q_{2},\ldots ,q_{N})-Et} where the time-independent function W ( q ) {\displaystyle W(\mathbf {q} )} is sometimes called the abbreviated action or Hamilton's characteristic function : 434  and sometimes: 607  written S 0 {\displaystyle S_{0}} (see action principle names). The reduced Hamilton–Jacobi equation can then be written H ( q , ∂ S ∂ q ) = E . {\displaystyle H{\left(\mathbf {q} ,{\frac {\partial S}{\partial \mathbf {q} }}\right)}=E.} To illustrate separability for other variables, a certain generalized coordinate q k {\displaystyle q_{k}} and its derivative ∂ S ∂ q k {\displaystyle {\frac {\partial S}{\partial q_{k}}}} are assumed to appear together as a single function ψ ( q k , ∂ S ∂ q k ) {\displaystyle \psi {\left(q_{k},{\frac {\partial S}{\partial q_{k}}}\right)}} in the Hamiltonian H = H ( q 1 , q 2 , … , q k − 1 , q k + 1 , … , q N ; p 1 , p 2 , … , p k − 1 , p k + 1 , … , p N ; ψ ; t ) . {\displaystyle H=H(q_{1},q_{2},\ldots ,q_{k-1},q_{k+1},\ldots ,q_{N};p_{1},p_{2},\ldots ,p_{k-1},p_{k+1},\ldots ,p_{N};\psi ;t).} In that case, the function S can be partitioned into two functions, one that depends only on qk and another that depends only on the remaining generalized coordinates S = S k ( q k ) + S rem ( q 1 , … , q k − 1 , q k + 1 , … , q N , t ) . {\displaystyle S=S_{k}(q_{k})+S_{\text{rem}}(q_{1},\ldots ,q_{k-1},q_{k+1},\ldots ,q_{N},t).} Substitution of these formulae into the Hamilton–Jacobi equation shows that the function ψ must be a constant (denoted here as Γ k {\displaystyle \Gamma _{k}} ), yielding a first-order ordinary differential equation for S k ( q k ) , {\displaystyle S_{k}(q_{k}),} ψ ( q k , d S k d q k ) = Γ k . 
{\displaystyle \psi {\left(q_{k},{\frac {dS_{k}}{dq_{k}}}\right)}=\Gamma _{k}.} In fortunate cases, the function S {\displaystyle S} can be separated completely into N {\displaystyle N} functions S m ( q m ) , {\displaystyle S_{m}(q_{m}),} S = S 1 ( q 1 ) + S 2 ( q 2 ) + ⋯ + S N ( q N ) − E t . {\displaystyle S=S_{1}(q_{1})+S_{2}(q_{2})+\cdots +S_{N}(q_{N})-Et.} In such a case, the problem devolves to N {\displaystyle N} ordinary differential equations. The separability of S depends both on the Hamiltonian and on the choice of generalized coordinates. For orthogonal coordinates and Hamiltonians that have no time dependence and are quadratic in the generalized momenta, S {\displaystyle S} will be completely separable if the potential energy is additively separable in each coordinate, where the potential energy term for each coordinate is multiplied by the coordinate-dependent factor in the corresponding momentum term of the Hamiltonian (the Staeckel conditions). For illustration, several examples in orthogonal coordinates are worked in the next sections. === Examples in various coordinate systems === ==== Spherical coordinates ==== In spherical coordinates the Hamiltonian of a free particle moving in a conservative potential U can be written H = 1 2 m [ p r 2 + p θ 2 r 2 + p ϕ 2 r 2 sin 2 ⁡ θ ] + U ( r , θ , ϕ ) . {\displaystyle H={\frac {1}{2m}}\left[p_{r}^{2}+{\frac {p_{\theta }^{2}}{r^{2}}}+{\frac {p_{\phi }^{2}}{r^{2}\sin ^{2}\theta }}\right]+U(r,\theta ,\phi ).} The Hamilton–Jacobi equation is completely separable in these coordinates provided that there exist functions U r ( r ) , U θ ( θ ) , U ϕ ( ϕ ) {\displaystyle U_{r}(r),U_{\theta }(\theta ),U_{\phi }(\phi )} such that U {\displaystyle U} can be written in the analogous form U ( r , θ , ϕ ) = U r ( r ) + U θ ( θ ) r 2 + U ϕ ( ϕ ) r 2 sin 2 ⁡ θ . 
{\displaystyle U(r,\theta ,\phi )=U_{r}(r)+{\frac {U_{\theta }(\theta )}{r^{2}}}+{\frac {U_{\phi }(\phi )}{r^{2}\sin ^{2}\theta }}.} Substitution of the completely separated solution S = S r ( r ) + S θ ( θ ) + S ϕ ( ϕ ) − E t {\displaystyle S=S_{r}(r)+S_{\theta }(\theta )+S_{\phi }(\phi )-Et} into the HJE yields 1 2 m ( d S r d r ) 2 + U r ( r ) + 1 2 m r 2 [ ( d S θ d θ ) 2 + 2 m U θ ( θ ) ] + 1 2 m r 2 sin 2 ⁡ θ [ ( d S ϕ d ϕ ) 2 + 2 m U ϕ ( ϕ ) ] = E . {\displaystyle {\frac {1}{2m}}\left({\frac {dS_{r}}{dr}}\right)^{2}+U_{r}(r)+{\frac {1}{2mr^{2}}}\left[\left({\frac {dS_{\theta }}{d\theta }}\right)^{2}+2mU_{\theta }(\theta )\right]+{\frac {1}{2mr^{2}\sin ^{2}\theta }}\left[\left({\frac {dS_{\phi }}{d\phi }}\right)^{2}+2mU_{\phi }(\phi )\right]=E.} This equation may be solved by successive integrations of ordinary differential equations, beginning with the equation for ϕ {\displaystyle \phi } ( d S ϕ d ϕ ) 2 + 2 m U ϕ ( ϕ ) = Γ ϕ {\displaystyle \left({\frac {dS_{\phi }}{d\phi }}\right)^{2}+2mU_{\phi }(\phi )=\Gamma _{\phi }} where Γ ϕ {\displaystyle \Gamma _{\phi }} is a constant of the motion that eliminates the ϕ {\displaystyle \phi } dependence from the Hamilton–Jacobi equation 1 2 m ( d S r d r ) 2 + U r ( r ) + 1 2 m r 2 [ 1 sin 2 ⁡ θ ( d S θ d θ ) 2 + 2 m sin 2 ⁡ θ U θ ( θ ) + Γ ϕ ] = E . 
{\displaystyle {\frac {1}{2m}}\left({\frac {dS_{r}}{dr}}\right)^{2}+U_{r}(r)+{\frac {1}{2mr^{2}}}\left[{\frac {1}{\sin ^{2}\theta }}\left({\frac {dS_{\theta }}{d\theta }}\right)^{2}+{\frac {2m}{\sin ^{2}\theta }}U_{\theta }(\theta )+\Gamma _{\phi }\right]=E.} The next ordinary differential equation involves the θ {\displaystyle \theta } generalized coordinate 1 sin 2 ⁡ θ ( d S θ d θ ) 2 + 2 m sin 2 ⁡ θ U θ ( θ ) + Γ ϕ = Γ θ {\displaystyle {\frac {1}{\sin ^{2}\theta }}\left({\frac {dS_{\theta }}{d\theta }}\right)^{2}+{\frac {2m}{\sin ^{2}\theta }}U_{\theta }(\theta )+\Gamma _{\phi }=\Gamma _{\theta }} where Γ θ {\displaystyle \Gamma _{\theta }} is again a constant of the motion that eliminates the θ {\displaystyle \theta } dependence and reduces the HJE to the final ordinary differential equation 1 2 m ( d S r d r ) 2 + U r ( r ) + Γ θ 2 m r 2 = E {\displaystyle {\frac {1}{2m}}\left({\frac {dS_{r}}{dr}}\right)^{2}+U_{r}(r)+{\frac {\Gamma _{\theta }}{2mr^{2}}}=E} whose integration completes the solution for S {\displaystyle S} . ==== Elliptic cylindrical coordinates ==== The Hamiltonian in elliptic cylindrical coordinates can be written H = p μ 2 + p ν 2 2 m a 2 ( sinh 2 ⁡ μ + sin 2 ⁡ ν ) + p z 2 2 m + U ( μ , ν , z ) {\displaystyle H={\frac {p_{\mu }^{2}+p_{\nu }^{2}}{2ma^{2}\left(\sinh ^{2}\mu +\sin ^{2}\nu \right)}}+{\frac {p_{z}^{2}}{2m}}+U(\mu ,\nu ,z)} where the foci of the ellipses are located at ± a {\displaystyle \pm a} on the x {\displaystyle x} -axis. The Hamilton–Jacobi equation is completely separable in these coordinates provided that U {\displaystyle U} has an analogous form U ( μ , ν , z ) = U μ ( μ ) + U ν ( ν ) sinh 2 ⁡ μ + sin 2 ⁡ ν + U z ( z ) {\displaystyle U(\mu ,\nu ,z)={\frac {U_{\mu }(\mu )+U_{\nu }(\nu )}{\sinh ^{2}\mu +\sin ^{2}\nu }}+U_{z}(z)} where U μ ( μ ) {\displaystyle U_{\mu }(\mu )} , U ν ( ν ) {\displaystyle U_{\nu }(\nu )} and U z ( z ) {\displaystyle U_{z}(z)} are arbitrary functions. 
Substitution of the completely separated solution S = S μ ( μ ) + S ν ( ν ) + S z ( z ) − E t {\displaystyle S=S_{\mu }(\mu )+S_{\nu }(\nu )+S_{z}(z)-Et} into the HJE yields 1 2 m ( d S z d z ) 2 + 1 2 m a 2 ( sinh 2 ⁡ μ + sin 2 ⁡ ν ) [ ( d S μ d μ ) 2 + ( d S ν d ν ) 2 ] + U z ( z ) + 1 sinh 2 ⁡ μ + sin 2 ⁡ ν [ U μ ( μ ) + U ν ( ν ) ] = E . {\displaystyle {\begin{aligned}{\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+{\frac {1}{2ma^{2}\left(\sinh ^{2}\mu +\sin ^{2}\nu \right)}}\left[\left({\frac {dS_{\mu }}{d\mu }}\right)^{2}+\left({\frac {dS_{\nu }}{d\nu }}\right)^{2}\right]&\\{}+U_{z}(z)+{\frac {1}{\sinh ^{2}\mu +\sin ^{2}\nu }}\left[U_{\mu }(\mu )+U_{\nu }(\nu )\right]&=E.\end{aligned}}} Separating the first ordinary differential equation 1 2 m ( d S z d z ) 2 + U z ( z ) = Γ z {\displaystyle {\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+U_{z}(z)=\Gamma _{z}} yields the reduced Hamilton–Jacobi equation (after re-arrangement and multiplication of both sides by the denominator) ( d S μ d μ ) 2 + ( d S ν d ν ) 2 + 2 m a 2 U μ ( μ ) + 2 m a 2 U ν ( ν ) = 2 m a 2 ( sinh 2 ⁡ μ + sin 2 ⁡ ν ) ( E − Γ z ) {\displaystyle \left({\frac {dS_{\mu }}{d\mu }}\right)^{2}+\left({\frac {dS_{\nu }}{d\nu }}\right)^{2}+2ma^{2}U_{\mu }(\mu )+2ma^{2}U_{\nu }(\nu )=2ma^{2}\left(\sinh ^{2}\mu +\sin ^{2}\nu \right)\left(E-\Gamma _{z}\right)} which itself may be separated into two independent ordinary differential equations ( d S μ d μ ) 2 + 2 m a 2 U μ ( μ ) + 2 m a 2 ( Γ z − E ) sinh 2 ⁡ μ = Γ μ ( d S ν d ν ) 2 + 2 m a 2 U ν ( ν ) + 2 m a 2 ( Γ z − E ) sin 2 ⁡ ν = Γ ν {\displaystyle {\begin{alignedat}{4}\left({\frac {dS_{\mu }}{d\mu }}\right)^{2}&\,+\,&2ma^{2}U_{\mu }(\mu )&\,+\,&2ma^{2}\left(\Gamma _{z}-E\right)\sinh ^{2}\mu &=\,&\Gamma _{\mu }\\\left({\frac {dS_{\nu }}{d\nu }}\right)^{2}&\,+\,&2ma^{2}U_{\nu }(\nu )&\,+\,&2ma^{2}\left(\Gamma _{z}-E\right)\sin ^{2}\nu &=\,&\Gamma _{\nu }\end{alignedat}}} that, when solved, provide a complete solution for S {\displaystyle S} 
. ==== Parabolic cylindrical coordinates ==== The Hamiltonian in parabolic cylindrical coordinates can be written H = p σ 2 + p τ 2 2 m ( σ 2 + τ 2 ) + p z 2 2 m + U ( σ , τ , z ) . {\displaystyle H={\frac {p_{\sigma }^{2}+p_{\tau }^{2}}{2m\left(\sigma ^{2}+\tau ^{2}\right)}}+{\frac {p_{z}^{2}}{2m}}+U(\sigma ,\tau ,z).} The Hamilton–Jacobi equation is completely separable in these coordinates provided that U {\displaystyle U} has an analogous form U ( σ , τ , z ) = U σ ( σ ) + U τ ( τ ) σ 2 + τ 2 + U z ( z ) {\displaystyle U(\sigma ,\tau ,z)={\frac {U_{\sigma }(\sigma )+U_{\tau }(\tau )}{\sigma ^{2}+\tau ^{2}}}+U_{z}(z)} where U σ ( σ ) {\displaystyle U_{\sigma }(\sigma )} , U τ ( τ ) {\displaystyle U_{\tau }(\tau )} , and U z ( z ) {\displaystyle U_{z}(z)} are arbitrary functions. Substitution of the completely separated solution S = S σ ( σ ) + S τ ( τ ) + S z ( z ) − E t + constant {\displaystyle S=S_{\sigma }(\sigma )+S_{\tau }(\tau )+S_{z}(z)-Et+{\text{constant}}} into the HJE yields 1 2 m ( d S z d z ) 2 + 1 2 m ( σ 2 + τ 2 ) [ ( d S σ d σ ) 2 + ( d S τ d τ ) 2 ] + U z ( z ) + 1 σ 2 + τ 2 [ U σ ( σ ) + U τ ( τ ) ] = E . 
{\displaystyle {\begin{aligned}{\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+{\frac {1}{2m\left(\sigma ^{2}+\tau ^{2}\right)}}\left[\left({\frac {dS_{\sigma }}{d\sigma }}\right)^{2}+\left({\frac {dS_{\tau }}{d\tau }}\right)^{2}\right]&\\{}+U_{z}(z)+{\frac {1}{\sigma ^{2}+\tau ^{2}}}\left[U_{\sigma }(\sigma )+U_{\tau }(\tau )\right]&=E.\end{aligned}}} Separating the first ordinary differential equation 1 2 m ( d S z d z ) 2 + U z ( z ) = Γ z {\displaystyle {\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+U_{z}(z)=\Gamma _{z}} yields the reduced Hamilton–Jacobi equation (after re-arrangement and multiplication of both sides by the denominator) ( d S σ d σ ) 2 + ( d S τ d τ ) 2 + 2 m [ U σ ( σ ) + U τ ( τ ) ] = 2 m ( σ 2 + τ 2 ) ( E − Γ z ) {\displaystyle \left({\frac {dS_{\sigma }}{d\sigma }}\right)^{2}+\left({\frac {dS_{\tau }}{d\tau }}\right)^{2}+2m\left[U_{\sigma }(\sigma )+U_{\tau }(\tau )\right]=2m\left(\sigma ^{2}+\tau ^{2}\right)\left(E-\Gamma _{z}\right)} which itself may be separated into two independent ordinary differential equations ( d S σ d σ ) 2 + 2 m U σ ( σ ) + 2 m σ 2 ( Γ z − E ) = Γ σ ( d S τ d τ ) 2 + 2 m U τ ( τ ) + 2 m τ 2 ( Γ z − E ) = Γ τ {\displaystyle {\begin{alignedat}{4}\left({\frac {dS_{\sigma }}{d\sigma }}\right)^{2}&+\,&2mU_{\sigma }(\sigma )&+\,&2m\sigma ^{2}\left(\Gamma _{z}-E\right)&=\,&\Gamma _{\sigma }\\\left({\frac {dS_{\tau }}{d\tau }}\right)^{2}&+\,&2mU_{\tau }(\tau )&+\,&2m\tau ^{2}\left(\Gamma _{z}-E\right)&=\,&\Gamma _{\tau }\end{alignedat}}} that, when solved, provide a complete solution for S {\displaystyle S} . == Waves and particles == === Optical wave fronts and trajectories === The HJE establishes a duality between trajectories and wavefronts. For example, in geometrical optics, light can be considered either as “rays” or waves. The wave front can be defined as the surface C t {\textstyle {\mathcal {C}}_{t}} that the light emitted at time t = 0 {\textstyle t=0} has reached at time t {\textstyle t} . 
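The ray side of this duality can be made concrete with Fermat's principle: a light ray between two points minimizes the travel time (1/c)∫ n ds. A minimal numeric sketch (arbitrary geometry and refractive indices, chosen for the example) finds the crossing point of a ray through a flat interface and recovers Snell's law n₁ sin θ₁ = n₂ sin θ₂:

```python
# Fermat's principle sketch: a ray from (0, 1) in medium n1 to (1, -1) in
# medium n2 crosses the interface y = 0 at the x minimizing the travel time.
# The minimizer should satisfy Snell's law n1*sin(theta1) = n2*sin(theta2).
from math import hypot

n1, n2 = 1.0, 1.5  # example refractive indices

def travel_time(x):
    # factor 1/c omitted: it only rescales T and does not move the minimum
    return n1 * hypot(x, 1.0) + n2 * hypot(1.0 - x, 1.0)

# simple ternary search for the minimizing crossing point on [0, 1]
lo, hi = 0.0, 1.0
for _ in range(200):
    m1, m2 = lo + (hi - lo)/3, hi - (hi - lo)/3
    if travel_time(m1) < travel_time(m2):
        hi = m2
    else:
        lo = m1
x = (lo + hi) / 2

sin1 = x / hypot(x, 1.0)                  # sin of incidence angle
sin2 = (1.0 - x) / hypot(1.0 - x, 1.0)    # sin of refraction angle
print(n1 * sin1, n2 * sin2)               # equal up to numerical tolerance
```

Solving the Hamilton–Jacobi equation for the same functional would instead produce the wave fronts (surfaces of constant travel time), which is exactly the dual description.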
Light rays and wave fronts are dual: if one is known, the other can be deduced. More precisely, geometrical optics is a variational problem where the “action” is the travel time T {\textstyle T} along a path, T = 1 c ∫ A B n d s {\displaystyle T={\frac {1}{c}}\int _{A}^{B}n\,ds} where n {\textstyle n} is the medium's index of refraction and d s {\textstyle ds} is an infinitesimal arc length. From the above formulation, one can compute the ray paths using the Euler–Lagrange formulation; alternatively, one can compute the wave fronts by solving the Hamilton–Jacobi equation. Knowing one leads to knowing the other. The above duality is very general and applies to all systems that derive from a variational principle: either compute the trajectories using Euler–Lagrange equations or the wave fronts by using Hamilton–Jacobi equation. The wave front at time t {\textstyle t} , for a system initially at q 0 {\textstyle \mathbf {q} _{0}} at time t 0 {\textstyle t_{0}} , is defined as the collection of points q {\textstyle \mathbf {q} } such that S ( q , t ) = const {\textstyle S(\mathbf {q} ,t)={\text{const}}} . If S ( q , t ) {\textstyle S(\mathbf {q} ,t)} is known, the momentum is immediately deduced. p = ∂ S ∂ q . {\displaystyle \mathbf {p} ={\frac {\partial S}{\partial \mathbf {q} }}.} Once p {\textstyle \mathbf {p} } is known, tangents to the trajectories q ˙ {\textstyle {\dot {\mathbf {q} }}} are computed by solving the equation ∂ L ∂ q ˙ = p {\displaystyle {\frac {\partial {\mathcal {L}}}{\partial {\dot {\mathbf {q} }}}}={\boldsymbol {p}}} for q ˙ {\textstyle {\dot {\mathbf {q} }}} , where L {\textstyle {\mathcal {L}}} is the Lagrangian. The trajectories are then recovered from the knowledge of q ˙ {\textstyle {\dot {\mathbf {q} }}} . === Relationship to the Schrödinger equation === The isosurfaces of the function S ( q , t ) {\displaystyle S(\mathbf {q} ,t)} can be determined at any time t. 
The motion of an S {\displaystyle S} -isosurface as a function of time is defined by the motions of the particles beginning at the points q {\displaystyle \mathbf {q} } on the isosurface. The motion of such an isosurface can be thought of as a wave moving through q {\displaystyle \mathbf {q} } -space, although it does not obey the wave equation exactly. To show this, let S represent the phase of a wave ψ = ψ 0 e i S / ℏ {\displaystyle \psi =\psi _{0}e^{iS/\hbar }} where ℏ {\displaystyle \hbar } is a constant (the Planck constant) introduced to make the exponential argument dimensionless; changes in the amplitude of the wave can be represented by having S {\displaystyle S} be a complex number. The Hamilton–Jacobi equation is then rewritten as ℏ 2 2 m ∇ 2 ψ − U ψ = ℏ i ∂ ψ ∂ t {\displaystyle {\frac {\hbar ^{2}}{2m}}\nabla ^{2}\psi -U\psi ={\frac {\hbar }{i}}{\frac {\partial \psi }{\partial t}}} which is the Schrödinger equation. Conversely, starting with the Schrödinger equation and our ansatz for ψ {\displaystyle \psi } , it can be deduced that 1 2 m ( ∇ S ) 2 + U + ∂ S ∂ t = i ℏ 2 m ∇ 2 ψ 0 ψ 0 . {\displaystyle {\frac {1}{2m}}\left(\nabla S\right)^{2}+U+{\frac {\partial S}{\partial t}}={\frac {i\hbar }{2m}}{\frac {\nabla ^{2}\psi _{0}}{\psi _{0}}}.} The classical limit ( ℏ → 0 {\displaystyle \hbar \rightarrow 0} ) of the Schrödinger equation above becomes identical to the following variant of the Hamilton–Jacobi equation, 1 2 m ( ∇ S ) 2 + U + ∂ S ∂ t = 0. 
{\displaystyle {\frac {1}{2m}}\left(\nabla S\right)^{2}+U+{\frac {\partial S}{\partial t}}=0.} == Applications == === HJE in a gravitational field === Using the energy–momentum relation in the form g α β P α P β − ( m c ) 2 = 0 {\displaystyle g^{\alpha \beta }P_{\alpha }P_{\beta }-(mc)^{2}=0} for a particle of rest mass m {\displaystyle m} travelling in curved space, where g α β {\displaystyle g^{\alpha \beta }} are the contravariant coordinates of the metric tensor (i.e., the inverse metric) solved from the Einstein field equations, and c {\displaystyle c} is the speed of light. Setting the four-momentum P α {\displaystyle P_{\alpha }} equal to the four-gradient of the action S {\displaystyle S} , P α = − ∂ S ∂ x α {\displaystyle P_{\alpha }=-{\frac {\partial S}{\partial x^{\alpha }}}} gives the Hamilton–Jacobi equation in the geometry determined by the metric g {\displaystyle g} : g α β ∂ S ∂ x α ∂ S ∂ x β − ( m c ) 2 = 0 , {\displaystyle g^{\alpha \beta }{\frac {\partial S}{\partial x^{\alpha }}}{\frac {\partial S}{\partial x^{\beta }}}-(mc)^{2}=0,} in other words, in a gravitational field. 
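As a quick sanity check of the gravitational-field form, one can evaluate it in flat spacetime, where the inverse metric is diag(1, −1, −1, −1). With the illustrative ansatz S = −Et + p·x for a free particle (so P_α = −∂S/∂x^α gives P₀ = E/c and P_i = −p_i), the HJE reduces to the mass-shell relation E² = (pc)² + (mc²)². The numbers below are arbitrary:

```python
import numpy as np

# Flat-space check of g^{ab} P_a P_b = (mc)^2 (illustrative values, c = 1).
c, m = 1.0, 2.0
p = np.array([0.3, -0.4, 1.2])            # arbitrary spatial momentum
E = np.sqrt(np.dot(p, p) * c**2 + (m * c**2)**2)   # relativistic energy

g_inv = np.diag([1.0, -1.0, -1.0, -1.0])  # inverse Minkowski metric
P = np.array([E / c, -p[0], -p[1], -p[2]])  # P_a = -dS/dx^a for S = -Et + p.x

lhs = P @ g_inv @ P                       # g^{ab} P_a P_b
print(lhs - (m * c)**2)                   # ~0: the mass-shell HJE holds
```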
=== HJE in electromagnetic fields === For a particle of rest mass m {\displaystyle m} and electric charge e {\displaystyle e} moving in an electromagnetic field with four-potential A i = ( ϕ , A ) {\displaystyle A_{i}=(\phi ,\mathrm {A} )} in vacuum, the Hamilton–Jacobi equation in the geometry determined by the metric tensor g i k = g i k {\displaystyle g^{ik}=g_{ik}} has the form g i k ( ∂ S ∂ x i + e c A i ) ( ∂ S ∂ x k + e c A k ) = m 2 c 2 {\displaystyle g^{ik}\left({\frac {\partial S}{\partial x^{i}}}+{\frac {e}{c}}A_{i}\right)\left({\frac {\partial S}{\partial x^{k}}}+{\frac {e}{c}}A_{k}\right)=m^{2}c^{2}} and can be solved for Hamilton's principal function S {\displaystyle S} to obtain the particle trajectory and momentum: x = − e c γ ∫ A x d ξ , y = − e c γ ∫ A y d ξ , z = − e 2 2 c 2 γ 2 ∫ ( A 2 − A 2 ¯ ) d ξ , ξ = c t − e 2 2 γ 2 c 2 ∫ ( A 2 − A 2 ¯ ) d ξ , p x = − e c A x , p y = − e c A y , p z = e 2 2 γ c ( A 2 − A 2 ¯ ) , E = c γ + e 2 2 γ c ( A 2 − A 2 ¯ ) , {\displaystyle {\begin{aligned}x&=-{\frac {e}{c\gamma }}\int A_{x}\,d\xi ,&y&=-{\frac {e}{c\gamma }}\int A_{y}\,d\xi ,\\[1ex]z&=-{\frac {e^{2}}{2c^{2}\gamma ^{2}}}\int \left(\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}}\right)\,d\xi ,&\xi &=ct-{\frac {e^{2}}{2\gamma ^{2}c^{2}}}\int \left(\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}}\right)\,d\xi ,\\[1ex]p_{x}&=-{\frac {e}{c}}A_{x},&p_{y}&=-{\frac {e}{c}}A_{y},\\[1ex]p_{z}&={\frac {e^{2}}{2\gamma c}}\left(\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}}\right),&{\mathcal {E}}&=c\gamma +{\frac {e^{2}}{2\gamma c}}\left(\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}}\right),\end{aligned}}} where ξ = c t − z {\displaystyle \xi =ct-z} and γ 2 = m 2 c 2 + e 2 c 2 A ¯ 2 {\displaystyle \gamma ^{2}=m^{2}c^{2}+{\frac {e^{2}}{c^{2}}}{\overline {A}}^{2}} with A ¯ {\displaystyle {\overline {\mathbf {A} }}} the cycle average of the vector potential.
==== A circularly polarized wave ==== In the case of circular polarization, E x = E 0 sin ω ξ 1 , E y = E 0 cos ω ξ 1 , A x = c E 0 ω cos ω ξ 1 , A y = − c E 0 ω sin ω ξ 1 . {\displaystyle {\begin{aligned}E_{x}&=E_{0}\sin \omega \xi _{1},&E_{y}&=E_{0}\cos \omega \xi _{1},\\[1ex]A_{x}&={\frac {cE_{0}}{\omega }}\cos \omega \xi _{1},&A_{y}&=-{\frac {cE_{0}}{\omega }}\sin \omega \xi _{1}.\end{aligned}}} Hence x = − e c E 0 γ ω 2 sin ω ξ 1 , y = − e c E 0 γ ω 2 cos ω ξ 1 , p x = − e E 0 ω cos ω ξ 1 , p y = e E 0 ω sin ω ξ 1 , {\displaystyle {\begin{aligned}x&=-{\frac {ecE_{0}}{\gamma \omega ^{2}}}\sin \omega \xi _{1},&y&=-{\frac {ecE_{0}}{\gamma \omega ^{2}}}\cos \omega \xi _{1},\\[1ex]p_{x}&=-{\frac {eE_{0}}{\omega }}\cos \omega \xi _{1},&p_{y}&={\frac {eE_{0}}{\omega }}\sin \omega \xi _{1},\end{aligned}}} where ξ 1 = ξ / c {\displaystyle \xi _{1}=\xi /c} , implying that the particle moves along a circular trajectory of constant radius e c E 0 / γ ω 2 {\displaystyle ecE_{0}/\gamma \omega ^{2}} with a momentum of constant magnitude e E 0 / ω {\displaystyle eE_{0}/\omega } directed along the magnetic field vector.
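These closed-form expressions are easy to sanity-check numerically. Sampling them over one period, with arbitrary illustrative parameter values in units where c = 1, confirms a circular orbit of the stated radius ecE₀/γω² and a transverse momentum of constant magnitude eE₀/ω (for circular polarization A² = (cE₀/ω)² is constant, so the cycle average in γ is exact):

```python
import numpy as np

# Illustrative parameter values (units with c = 1).
e, c, E0, omega, m = 1.0, 1.0, 2.0, 3.0, 1.5
# gamma^2 = m^2 c^2 + (e^2/c^2) * mean(A^2); A^2 is constant here.
gamma = np.sqrt(m**2 * c**2 + (e**2 / c**2) * (c * E0 / omega)**2)

xi1 = np.linspace(0.0, 2 * np.pi / omega, 200)   # one optical period
x = -(e * c * E0 / (gamma * omega**2)) * np.sin(omega * xi1)
y = -(e * c * E0 / (gamma * omega**2)) * np.cos(omega * xi1)
px = -(e * E0 / omega) * np.cos(omega * xi1)
py = (e * E0 / omega) * np.sin(omega * xi1)

R = e * c * E0 / (gamma * omega**2)              # stated orbit radius
print(np.max(np.abs(np.hypot(x, y) - R)))        # ~0: circular orbit
print(np.max(np.abs(np.hypot(px, py) - e * E0 / omega)))  # ~0: |p| constant
```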
==== A monochromatic linearly polarized plane wave ==== For a plane, monochromatic, linearly polarized wave with the field E {\displaystyle E} directed along the y {\displaystyle y} axis, E y = E 0 cos ω ξ 1 , A y = − c E 0 ω sin ω ξ 1 , {\displaystyle {\begin{aligned}E_{y}&=E_{0}\cos \omega \xi _{1},&A_{y}&=-{\frac {cE_{0}}{\omega }}\sin \omega \xi _{1},\end{aligned}}} hence x = const , y = y 0 cos ω ξ 1 , y 0 = − e c E 0 γ ω 2 , z = C z y 0 sin 2 ω ξ 1 , C z = e E 0 8 γ ω , γ 2 = m 2 c 2 + e 2 E 0 2 2 ω 2 , {\displaystyle {\begin{aligned}x&={\text{const}},\\[1ex]y&=y_{0}\cos \omega \xi _{1},&y_{0}&=-{\frac {ecE_{0}}{\gamma \omega ^{2}}},\\[1ex]z&=C_{z}y_{0}\sin 2\omega \xi _{1},&C_{z}&={\frac {eE_{0}}{8\gamma \omega }},\\[1ex]\gamma ^{2}&=m^{2}c^{2}+{\frac {e^{2}E_{0}^{2}}{2\omega ^{2}}},\end{aligned}}} p x = 0 , p y = p y , 0 sin ω ξ 1 , p y , 0 = e E 0 ω , p z = − 2 C z p y , 0 cos 2 ω ξ 1 {\displaystyle {\begin{aligned}p_{x}&=0,\\[1ex]p_{y}&=p_{y,0}\sin \omega \xi _{1},&p_{y,0}&={\frac {eE_{0}}{\omega }},\\[1ex]p_{z}&=-2C_{z}p_{y,0}\cos 2\omega \xi _{1}\end{aligned}}} implying that the particle moves along a figure-8 trajectory whose long axis is oriented along the electric field vector E {\displaystyle E} .
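Eliminating the phase θ = ωξ₁ from the parametric solution gives the implicit curve z² = 4C_z²y²(1 − y²/y₀²), a 1:2 Lissajous figure (the figure-8). The check below samples the solution with arbitrary illustrative values (units where m = c = 1):

```python
import numpy as np

# Illustrative parameter values (units with m = c = 1).
e, c, E0, omega = 1.0, 1.0, 2.0, 3.0
gamma = np.sqrt(1.0 + e**2 * E0**2 / (2 * omega**2))
y0 = -e * c * E0 / (gamma * omega**2)
Cz = e * E0 / (8 * gamma * omega)

theta = np.linspace(0.0, 2 * np.pi, 400)   # theta = omega * xi_1
y = y0 * np.cos(theta)
z = Cz * y0 * np.sin(2 * theta)

# Eliminating theta: z = 2 Cz y sin(theta) with cos(theta) = y/y0, so
# z^2 = 4 Cz^2 y^2 (1 - y^2/y0^2) -- a figure-8 with long axis along y,
# i.e. along the E-field direction.
resid = z**2 - 4 * Cz**2 * y**2 * (1 - (y / y0)**2)
print(np.max(np.abs(resid)))   # ~0
```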
==== An electromagnetic wave with a solenoidal magnetic field ==== For an electromagnetic wave with an axial (solenoidal) magnetic field: E = E ϕ = ω ρ 0 c B 0 cos ω ξ 1 , {\displaystyle E=E_{\phi }={\frac {\omega \rho _{0}}{c}}B_{0}\cos \omega \xi _{1},} A ϕ = − ρ 0 B 0 sin ω ξ 1 = − L s π ρ 0 N s I 0 sin ω ξ 1 , {\displaystyle A_{\phi }=-\rho _{0}B_{0}\sin \omega \xi _{1}=-{\frac {L_{s}}{\pi \rho _{0}N_{s}}}I_{0}\sin \omega \xi _{1},} hence x = constant , y = y 0 cos ω ξ 1 , y 0 = − e ρ 0 B 0 γ ω , z = C z y 0 sin 2 ω ξ 1 , C z = e ρ 0 B 0 8 c γ , γ 2 = m 2 c 2 + e 2 ρ 0 2 B 0 2 2 c 2 , {\displaystyle {\begin{aligned}x&={\text{constant}},\\y&=y_{0}\cos \omega \xi _{1},&y_{0}&=-{\frac {e\rho _{0}B_{0}}{\gamma \omega }},\\z&=C_{z}y_{0}\sin 2\omega \xi _{1},&C_{z}&={\frac {e\rho _{0}B_{0}}{8c\gamma }},\\\gamma ^{2}&=m^{2}c^{2}+{\frac {e^{2}\rho _{0}^{2}B_{0}^{2}}{2c^{2}}},\end{aligned}}} p x = 0 , p y = p y , 0 sin ω ξ 1 , p y , 0 = e ρ 0 B 0 c , p z = − 2 C z p y , 0 cos 2 ω ξ 1 , {\displaystyle {\begin{aligned}p_{x}&=0,\\p_{y}&=p_{y,0}\sin \omega \xi _{1},&p_{y,0}&={\frac {e\rho _{0}B_{0}}{c}},\\p_{z}&=-2C_{z}p_{y,0}\cos 2\omega \xi _{1},\end{aligned}}} where B 0 {\displaystyle B_{0}} is the magnetic field magnitude in a solenoid with the effective radius ρ 0 {\displaystyle \rho _{0}} , inductance L s {\displaystyle L_{s}} , number of windings N s {\displaystyle N_{s}} , and electric current magnitude I 0 {\displaystyle I_{0}} through the solenoid windings. The particle moves along a figure-8 trajectory in a y z {\displaystyle yz} plane perpendicular to the solenoid axis, at an arbitrary azimuth angle φ {\displaystyle \varphi } owing to the axial symmetry of the solenoidal magnetic field. == See also == == References == == Further reading == Arnold, V.I. (1989). Mathematical Methods of Classical Mechanics (2 ed.). New York: Springer. ISBN 0-387-96890-3. Hamilton, W. (1833).
"On a General Method of Expressing the Paths of Light, and of the Planets, by the Coefficients of a Characteristic Function" (PDF). Dublin University Review: 795–826. Hamilton, W. (1834). "On the Application to Dynamics of a General Mathematical Method previously Applied to Optics" (PDF). British Association Report: 513–518. Fetter, A. & Walecka, J. (2003). Theoretical Mechanics of Particles and Continua. Dover Books. ISBN 978-0-486-43261-8. Landau, L. D.; Lifshitz, E. M. (1975). Mechanics. Amsterdam: Elsevier. Sakurai, J. J. (1985). Modern Quantum Mechanics. Benjamin/Cummings Publishing. ISBN 978-0-8053-7501-5. Jacobi, C. G. J. (1884), Vorlesungen über Dynamik, C. G. J. Jacobi's Gesammelte Werke (in German), Berlin: G. Reimer, OL 14009561M Nakane, Michiyo; Fraser, Craig G. (2002). "The Early History of Hamilton-Jacobi Dynamics". Centaurus. 44 (3–4): 161–227. doi:10.1111/j.1600-0498.2002.tb00613.x. PMID 17357243.
Wikipedia/Hamilton–Jacobi_theory
Centrifugal force is a fictitious force in Newtonian mechanics (also called an "inertial" or "pseudo" force) that appears to act on all objects when viewed in a rotating frame of reference. It appears to be directed radially away from the axis of rotation of the frame. The magnitude of the centrifugal force F on an object of mass m at the perpendicular distance ρ from the axis of a rotating frame of reference with angular velocity ω is F = m ω 2 ρ {\textstyle F=m\omega ^{2}\rho } . This fictitious force is often applied to rotating devices, such as centrifuges, centrifugal pumps, centrifugal governors, and centrifugal clutches, and in centrifugal railways, planetary orbits and banked curves, when they are analyzed in a non–inertial reference frame such as a rotating coordinate system. The term has sometimes also been used for the reactive centrifugal force, a real frame-independent Newtonian force that exists as a reaction to a centripetal force in some scenarios. == History == From 1659, the Neo-Latin term vi centrifuga ("centrifugal force") is attested in Christiaan Huygens' notes and letters. Note that in Latin centrum means "center" and ‑fugus (from fugiō) means "fleeing, avoiding". Thus, centrifugus means "fleeing from the center" in a literal translation. In 1673, in Horologium Oscillatorium, Huygens writes (as translated by Richard J. Blackwell): There is another kind of oscillation in addition to the one we have examined up to this point; namely, a motion in which a suspended weight is moved around through the circumference of a circle. From this we were led to the construction of another clock at about the same time we invented the first one. [...] I originally intended to publish here a lengthy description of these clocks, along with matters pertaining to circular motion and centrifugal force, as it might be called, a subject about which I have more to say than I am able to do at present.
But, in order that those interested in these things can sooner enjoy these new and not useless speculations, and in order that their publication not be prevented by some accident, I have decided, contrary to my plan, to add this fifth part [...]. The same year, Isaac Newton received Huygens' work via Henry Oldenburg and replied "I pray you return [Mr. Huygens] my humble thanks [...] I am glad we can expect another discourse of the vis centrifuga, which speculation may prove of good use in natural philosophy and astronomy, as well as mechanics". In 1687, in Principia, Newton further develops vis centrifuga ("centrifugal force"). Around this time, the concept was also further developed by Newton, Gottfried Wilhelm Leibniz, and Robert Hooke. In the late 18th century, the modern conception of the centrifugal force evolved as a "fictitious force" arising in a rotating reference frame. Centrifugal force has also played a role in debates in classical mechanics about detection of absolute motion. Newton suggested two arguments to answer the question of whether absolute rotation can be detected: the rotating bucket argument, and the rotating spheres argument. According to Newton, in each scenario the centrifugal force would be observed in the object's local frame (the frame where the object is stationary) only if the frame were rotating with respect to absolute space. Around 1883, Mach's principle was proposed where, instead of absolute rotation, the motion of the distant stars relative to the local inertial frame gives rise through some (hypothetical) physical law to the centrifugal force and other inertia effects. Today's view is based upon the idea of an inertial frame of reference, which privileges observers for which the laws of physics take on their simplest form, and in particular, frames that do not use centrifugal forces in their equations of motion in order to describe motions correctly.
Around 1914, the analogy between centrifugal force (sometimes used to create artificial gravity) and gravitational forces led to the equivalence principle of general relativity. == Introduction == Centrifugal force is an outward force apparent in a rotating reference frame. It does not exist when a system is described relative to an inertial frame of reference. All measurements of position and velocity must be made relative to some frame of reference. For example, an analysis of the motion of an object in an airliner in flight could be made relative to the airliner, to the surface of the Earth, or even to the Sun. A reference frame that is at rest (or one that moves with no rotation and at constant velocity) relative to the "fixed stars" is generally taken to be an inertial frame. Any system can be analyzed in an inertial frame (and so with no centrifugal force). However, it is often more convenient to describe a rotating system by using a rotating frame—the calculations are simpler, and descriptions more intuitive. When this choice is made, fictitious forces, including the centrifugal force, arise. In a reference frame rotating about an axis through its origin, all objects, regardless of their state of motion, appear to be under the influence of a radially (from the axis of rotation) outward force that is proportional to their mass, to the distance from the axis of rotation of the frame, and to the square of the angular velocity of the frame. This is the centrifugal force. As humans usually experience centrifugal force from within the rotating reference frame, e.g. on a merry-go-round or in a vehicle, it is much better known than the centripetal force. Motion relative to a rotating frame results in another fictitious force: the Coriolis force. If the rate of rotation of the frame changes, a third fictitious force (the Euler force) is required.
These fictitious forces are necessary for the formulation of correct equations of motion in a rotating reference frame and allow Newton's laws to be used in their normal form in such a frame (with one exception: the fictitious forces do not obey Newton's third law: they have no equal and opposite counterparts). Newton's third law requires the counterparts to exist within the same frame of reference, hence centrifugal and centripetal force, which do not, are not action and reaction (as is sometimes erroneously contended). == Examples == === Vehicle driving round a curve === A common experience that gives rise to the idea of a centrifugal force is encountered by passengers riding in a vehicle, such as a car, that is changing direction. If a car is traveling at a constant speed along a straight road, then a passenger inside is not accelerating and, according to Newton's second law of motion, the net force acting on them is therefore zero (all forces acting on them cancel each other out). If the car enters a curve that bends to the left, the passenger experiences an apparent force that seems to be pulling them towards the right. This is the fictitious centrifugal force. It is needed within the passengers' local frame of reference to explain their sudden tendency to start accelerating to the right relative to the car—a tendency which they must resist by applying a rightward force to the car (for instance, a frictional force against the seat) in order to remain in a fixed position inside. Since they push the seat toward the right, Newton's third law says that the seat pushes them towards the left. The centrifugal force must be included in the passenger's reference frame (in which the passenger remains at rest): it counteracts the leftward force applied to the passenger by the seat, and explains why this otherwise unbalanced force does not cause them to accelerate. 
However, it would be apparent to a stationary observer watching from an overpass above that the frictional force exerted on the passenger by the seat is not being balanced; it constitutes a net force to the left, causing the passenger to accelerate toward the inside of the curve, as they must in order to keep moving with the car rather than proceeding in a straight line as they otherwise would. Thus the "centrifugal force" they feel is the result of a "centrifugal tendency" caused by inertia. Similar effects are encountered in aeroplanes and roller coasters where the magnitude of the apparent force is often reported in "G's". === Stone on a string === If a stone is whirled round on a string, in a horizontal plane, the only real force acting on the stone in the horizontal plane is applied by the string (gravity acts vertically). There is a net force on the stone in the horizontal plane which acts toward the center. In an inertial frame of reference, were it not for this net force acting on the stone, the stone would travel in a straight line, according to Newton's first law of motion. In order to keep the stone moving in a circular path, a centripetal force, in this case provided by the string, must be continuously applied to the stone. As soon as it is removed (for example if the string breaks) the stone moves in a straight line, as viewed from above. In this inertial frame, the concept of centrifugal force is not required as all motion can be properly described using only real forces and Newton's laws of motion. In a frame of reference rotating with the stone around the same axis as the stone, the stone is stationary. However, the force applied by the string is still acting on the stone. If one were to apply Newton's laws in their usual (inertial frame) form, one would conclude that the stone should accelerate in the direction of the net applied force—towards the axis of rotation—which it does not do. 
The centrifugal force and other fictitious forces must be included along with the real forces in order to apply Newton's laws of motion in the rotating frame. === Earth === The Earth constitutes a rotating reference frame because it rotates once every 23 hours and 56 minutes around its axis. Because the rotation is slow, the fictitious forces it produces are often small, and in everyday situations can generally be neglected. Even in calculations requiring high precision, the centrifugal force is generally not explicitly included, but rather lumped in with the gravitational force: the strength and direction of the local "gravity" at any point on the Earth's surface is actually a combination of gravitational and centrifugal forces. However, the fictitious forces can be of arbitrary size. For example, in an Earth-bound reference system (where the earth is represented as stationary), the fictitious force (the net of Coriolis and centrifugal forces) is enormous and is responsible for the Sun orbiting around the Earth. This is due to the large mass and velocity of the Sun (relative to the Earth). ==== Weight of an object at the poles and on the equator ==== If an object is weighed with a simple spring balance at one of the Earth's poles, there are two forces acting on the object: the Earth's gravity, which acts in a downward direction, and the equal and opposite restoring force in the spring, acting upward. Since the object is stationary and not accelerating, there is no net force acting on the object and the force from the spring is equal in magnitude to the force of gravity on the object. In this case, the balance shows the value of the force of gravity on the object. When the same object is weighed on the equator, the same two real forces act upon the object. However, the object is moving in a circular path as the Earth rotates and therefore experiencing a centripetal acceleration. 
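The scale of this centripetal correction is easy to estimate: relative to gravity it is ω²R/g. A rough sketch with standard values, treating the Earth as spherical and using the sidereal rotation period:

```python
import math

# Fractional weight reduction at the equator from the centrifugal
# (centripetal) term alone, spherical-Earth approximation.
T = 86164.1                 # sidereal day in seconds (23 h 56 min 4 s)
omega = 2 * math.pi / T     # Earth's angular velocity, rad/s
R = 6.378e6                 # equatorial radius, m
g = 9.81                    # surface gravity, m/s^2

reduction = omega**2 * R / g
print(f"{reduction:.2%}")   # ~0.35%; the full observed difference of
                            # ~0.53% also includes Earth's oblateness
```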
When considered in an inertial frame (that is to say, one that is not rotating with the Earth), the non-zero acceleration means that the force of gravity will not balance with the force from the spring. In order to have a net centripetal force, the magnitude of the restoring force of the spring must be less than the magnitude of the force of gravity. This reduced restoring force in the spring is reflected on the scale as less weight — about 0.3% less at the equator than at the poles. In the Earth reference frame (in which the object being weighed is at rest), the object does not appear to be accelerating; however, the two real forces, gravity and the force from the spring, are unequal in magnitude and do not balance. The centrifugal force must be included to make the sum of the forces be zero to match the apparent lack of acceleration. Note: In fact, the observed weight difference is more — about 0.53%. Earth's gravity is a bit stronger at the poles than at the equator, because the Earth is not a perfect sphere, so an object at the poles is slightly closer to the center of the Earth than one at the equator; this effect combines with the centrifugal force to produce the observed weight difference. == Formulation == For the following formalism, the rotating frame of reference is regarded as a special case of a non-inertial reference frame that is rotating relative to an inertial reference frame denoted the stationary frame. === Time derivatives in a rotating frame === In a rotating frame of reference, the time derivatives of any vector function P of time—such as the velocity and acceleration vectors of an object—will differ from its time derivatives in the stationary frame. If P1, P2, P3 are the components of P with respect to unit vectors i, j, k directed along the axes of the rotating frame (i.e. P = P1 i + P2 j + P3 k), then the first time derivative [dP/dt] of P with respect to the rotating frame is, by definition, dP1/dt i + dP2/dt j + dP3/dt k.
If the absolute angular velocity of the rotating frame is ω then the derivative ⁠dP/dt⁠ of P with respect to the stationary frame is related to [⁠dP/dt⁠] by the equation: d P d t = [ d P d t ] + ω × P , {\displaystyle {\frac {\mathrm {d} {\boldsymbol {P}}}{\mathrm {d} t}}=\left[{\frac {\mathrm {d} {\boldsymbol {P}}}{\mathrm {d} t}}\right]+{\boldsymbol {\omega }}\times {\boldsymbol {P}}\ ,} where × denotes the vector cross product. In other words, the rate of change of P in the stationary frame is the sum of its apparent rate of change in the rotating frame and a rate of rotation ω × P attributable to the motion of the rotating frame. The vector ω has magnitude ω equal to the rate of rotation and is directed along the axis of rotation according to the right-hand rule. === Acceleration === Newton's law of motion for a particle of mass m written in vector form is: F = m a , {\displaystyle {\boldsymbol {F}}=m{\boldsymbol {a}}\ ,} where F is the vector sum of the physical forces applied to the particle and a is the absolute acceleration (that is, acceleration in an inertial frame) of the particle, given by: a = d 2 r d t 2 , {\displaystyle {\boldsymbol {a}}={\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}\ ,} where r is the position vector of the particle (not to be confused with radius, as used above.) 
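The transformation rule above can be checked numerically. A minimal sketch, assuming a vector whose components are fixed in the rotating frame (so [dP/dt] = 0) and a frame rotating at a constant rate about the z-axis; the stationary-frame derivative should then equal ω × P:

```python
import numpy as np

# Finite-difference check of dP/dt = [dP/dt] + omega x P for a vector
# with constant rotating-frame components ([dP/dt] = 0).
w = 0.7                                    # rotation rate about z, rad/s
omega = np.array([0.0, 0.0, w])
P_body = np.array([1.0, 2.0, 3.0])         # fixed rotating-frame components

def Rz(a):
    # Rotation matrix about the z-axis by angle a.
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def P(t):
    # The same vector expressed in the stationary frame at time t.
    return Rz(w * t) @ P_body

t, h = 1.3, 1e-6
dPdt = (P(t + h) - P(t - h)) / (2 * h)     # numerical stationary-frame rate
print(np.max(np.abs(dPdt - np.cross(omega, P(t)))))  # ~0
```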
By applying the transformation above from the stationary to the rotating frame three times (twice to d r d t {\textstyle {\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}} and once to d d t [ d r d t ] {\textstyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]} ), the absolute acceleration of the particle can be written as: a = d 2 r d t 2 = d d t d r d t = d d t ( [ d r d t ] + ω × r ) = [ d 2 r d t 2 ] + ω × [ d r d t ] + d ω d t × r + ω × d r d t = [ d 2 r d t 2 ] + ω × [ d r d t ] + d ω d t × r + ω × ( [ d r d t ] + ω × r ) = [ d 2 r d t 2 ] + d ω d t × r + 2 ω × [ d r d t ] + ω × ( ω × r ) . {\displaystyle {\begin{aligned}{\boldsymbol {a}}&={\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}={\frac {\mathrm {d} }{\mathrm {d} t}}\left(\left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]+{\boldsymbol {\omega }}\times {\boldsymbol {r}}\ \right)\\&=\left[{\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}\right]+{\boldsymbol {\omega }}\times \left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]+{\frac {\mathrm {d} {\boldsymbol {\omega }}}{\mathrm {d} t}}\times {\boldsymbol {r}}+{\boldsymbol {\omega }}\times {\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\\&=\left[{\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}\right]+{\boldsymbol {\omega }}\times \left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]+{\frac {\mathrm {d} {\boldsymbol {\omega }}}{\mathrm {d} t}}\times {\boldsymbol {r}}+{\boldsymbol {\omega }}\times \left(\left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]+{\boldsymbol {\omega }}\times {\boldsymbol {r}}\ \right)\\&=\left[{\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}\right]+{\frac {\mathrm {d} {\boldsymbol {\omega }}}{\mathrm {d} t}}\times {\boldsymbol {r}}+2{\boldsymbol {\omega 
}}\times \left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]+{\boldsymbol {\omega }}\times ({\boldsymbol {\omega }}\times {\boldsymbol {r}})\ .\end{aligned}}} === Force === The apparent acceleration in the rotating frame is [ d 2 r d t 2 ] {\displaystyle \left[{\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}\right]} . An observer unaware of the rotation would expect this to be zero in the absence of outside forces. However, Newton's laws of motion apply only in the inertial frame and describe dynamics in terms of the absolute acceleration d 2 r d t 2 {\displaystyle {\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}} . Therefore, the observer perceives the extra terms as contributions due to fictitious forces. These terms in the apparent acceleration are independent of mass; so it appears that each of these fictitious forces, like gravity, pulls on an object in proportion to its mass. When these forces are added, the equation of motion has the form: F + ( − m d ω d t × r ) ⏟ Euler + ( − 2 m ω × [ d r d t ] ) ⏟ Coriolis + ( − m ω × ( ω × r ) ) ⏟ centrifugal = m [ d 2 r d t 2 ] . {\displaystyle {\boldsymbol {F}}+\underbrace {\left(-m{\frac {\mathrm {d} {\boldsymbol {\omega }}}{\mathrm {d} t}}\times {\boldsymbol {r}}\right)} _{\text{Euler}}+\underbrace {\left(-2m{\boldsymbol {\omega }}\times \left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]\right)} _{\text{Coriolis}}+\underbrace {\left(-m{\boldsymbol {\omega }}\times ({\boldsymbol {\omega }}\times {\boldsymbol {r}})\right)} _{\text{centrifugal}}=m\left[{\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}\right]\ .} From the perspective of the rotating frame, the additional force terms are experienced just like the real external forces and contribute to the apparent acceleration. 
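A minimal numeric sketch of this decomposition, with arbitrary illustrative values: the three fictitious-force terms are evaluated directly from the equation of motion, and the centrifugal term is confirmed to point radially outward from the rotation axis with magnitude mω²r⊥.

```python
import numpy as np

# Evaluate the Euler, Coriolis, and centrifugal terms (illustrative values).
m = 2.0
omega = np.array([0.0, 0.0, 3.0])          # frame rotation about z
domega_dt = np.array([0.0, 0.0, 0.5])      # frame spin-up (Euler term)
r = np.array([1.0, 2.0, 5.0])              # particle position
v_rot = np.array([0.2, -0.1, 0.4])         # velocity seen in rotating frame

F_euler = -m * np.cross(domega_dt, r)
F_coriolis = -2 * m * np.cross(omega, v_rot)
F_centrifugal = -m * np.cross(omega, np.cross(omega, r))

# Component of r perpendicular to omega (omega is along z here).
r_perp = np.array([r[0], r[1], 0.0])
w = np.linalg.norm(omega)
print(F_centrifugal)                                      # radially outward
print(np.max(np.abs(F_centrifugal - m * w**2 * r_perp)))  # ~0
```

Note that the centrifugal term depends only on the position, not on v_rot, matching the statement that it is independent of the particle's motion in the rotating frame.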
The additional terms on the force side of the equation can be recognized as, reading from left to right, the Euler force − m d ω / d t × r {\displaystyle -m\mathrm {d} {\boldsymbol {\omega }}/\mathrm {d} t\times {\boldsymbol {r}}} , the Coriolis force − 2 m ω × [ d r / d t ] {\displaystyle -2m{\boldsymbol {\omega }}\times \left[\mathrm {d} {\boldsymbol {r}}/\mathrm {d} t\right]} , and the centrifugal force − m ω × ( ω × r ) {\displaystyle -m{\boldsymbol {\omega }}\times ({\boldsymbol {\omega }}\times {\boldsymbol {r}})} , respectively. Unlike the other two fictitious forces, the centrifugal force always points radially outward from the axis of rotation of the rotating frame, with magnitude m ω 2 r ⊥ {\displaystyle m\omega ^{2}r_{\perp }} , where r ⊥ {\displaystyle r_{\perp }} is the component of the position vector perpendicular to ω {\displaystyle {\boldsymbol {\omega }}} , and unlike the Coriolis force in particular, it is independent of the motion of the particle in the rotating frame. As expected, for a non-rotating inertial frame of reference ( ω = 0 ) {\displaystyle ({\boldsymbol {\omega }}=0)} the centrifugal force and all other fictitious forces disappear. Similarly, as the centrifugal force is proportional to the distance from the object to the axis of rotation of the frame, the centrifugal force vanishes for objects that lie upon the axis. === Potential === The centrifugal force per unit mass can also be derived as the gradient of a centrifugal potential. For example, the centrifugal potential at the perpendicular distance ρ from the axis of a rotating frame of reference with angular velocity ω is 0.5 ω 2 ρ 2 {\textstyle 0.5\omega ^{2}\rho ^{2}} (see also: Geopotential#Centrifugal potential.) == Absolute rotation == Two scenarios were suggested by Newton to answer the question of whether the absolute rotation of a local frame can be detected; that is, if an observer can decide whether an observed object is rotating or if the observer is rotating.
The shape of the surface of water rotating in a bucket. The shape of the surface becomes concave to balance the centrifugal force against the other forces upon the liquid. The tension in a string joining two spheres rotating about their center of mass. The tension in the string will be proportional to the centrifugal force on each sphere as it rotates around the common center of mass. In these scenarios, the effects attributed to centrifugal force are only observed in the local frame (the frame in which the object is stationary) if the object is undergoing absolute rotation relative to an inertial frame. By contrast, in an inertial frame, the observed effects arise as a consequence of the inertia and the known forces without the need to introduce a centrifugal force. Based on this argument, the privileged frame, wherein the laws of physics take on the simplest form, is a stationary frame in which no fictitious forces need to be invoked. Within this view of physics, any other phenomenon that is usually attributed to centrifugal force can be used to identify absolute rotation. For example, the oblateness of a sphere of freely flowing material is often explained in terms of centrifugal force. The oblate spheroid shape reflects, following Clairaut's theorem, the balance between containment by gravitational attraction and dispersal by centrifugal force. That the Earth is itself an oblate spheroid, bulging at the equator where the radial distance and hence the centrifugal force is larger, is taken as one of the evidences for its absolute rotation. == Applications == The operations of numerous common rotating mechanical systems are most easily conceptualized in terms of centrifugal force. For example: A centrifugal governor regulates the speed of an engine by using spinning masses that move radially, adjusting the throttle, as the engine changes speed. In the reference frame of the spinning masses, centrifugal force causes the radial movement. 
A centrifugal clutch is used in small engine-powered devices such as chain saws, go-karts and model helicopters. It allows the engine to start and idle without driving the device but automatically and smoothly engages the drive as the engine speed rises. Inertial drum brake ascenders used in rock climbing and the inertia reels used in many automobile seat belts operate on the same principle. Centrifugal forces can be used to generate artificial gravity, as in proposed designs for rotating space stations. The Mars Gravity Biosatellite would have studied the effects of Mars-level gravity on mice with gravity simulated in this way. Spin casting and centrifugal casting are production methods that use centrifugal force to disperse liquid metal or plastic throughout the negative space of a mold. Centrifuges are used in science and industry to separate substances. In the reference frame spinning with the centrifuge, the centrifugal force induces a hydrostatic pressure gradient in fluid-filled tubes oriented perpendicular to the axis of rotation, giving rise to large buoyant forces which push low-density particles inward. Elements or particles denser than the fluid move outward under the influence of the centrifugal force. This is effectively Archimedes' principle as generated by centrifugal force as opposed to being generated by gravity. Some amusement rides make use of centrifugal forces. For instance, a Gravitron's spin forces riders against a wall and allows riders to be elevated above the machine's floor in defiance of Earth's gravity. Nevertheless, all of these systems can also be described without requiring the concept of centrifugal force, in terms of motions and forces in a stationary frame, at the cost of taking somewhat more care in the consideration of forces and motions within the system. 
== Other uses of the term == While the majority of the scientific literature uses the term centrifugal force to refer to the particular fictitious force that arises in rotating frames, there are a few limited instances in the literature of the term applied to other distinct physical concepts. === In Lagrangian mechanics === One of these instances occurs in Lagrangian mechanics. Lagrangian mechanics formulates mechanics in terms of generalized coordinates {q_k}, which can be as simple as the usual polar coordinates ( r , θ ) {\displaystyle (r,\ \theta )} or a much more extensive list of variables. Within this formulation the motion is described in terms of generalized forces, using in place of Newton's laws the Euler–Lagrange equations. Among the generalized forces, those involving the square of the time derivatives {(dq_k/dt)^2} are sometimes called centrifugal forces. In the case of motion in a central potential the Lagrangian centrifugal force has the same form as the fictitious centrifugal force derived in a co-rotating frame. However, the Lagrangian use of "centrifugal force" in other, more general cases has only a limited connection to the Newtonian definition. === As a reactive force === In another instance the term refers to the reaction force to a centripetal force, or reactive centrifugal force. A body undergoing curved motion, such as circular motion, is accelerating toward a center at any particular point in time. This centripetal acceleration is provided by a centripetal force, which is exerted on the body in curved motion by some other body. In accordance with Newton's third law of motion, the body in curved motion exerts an equal and opposite force on the other body. This reactive force is exerted by the body in curved motion on the other body that provides the centripetal force and its direction is from that other body toward the body in curved motion.
This reaction force is sometimes described as a centrifugal inertial reaction, that is, a force that is centrifugally directed, which is a reactive force equal and opposite to the centripetal force that is curving the path of the mass. The concept of the reactive centrifugal force is sometimes used in mechanics and engineering. It is sometimes referred to as just centrifugal force rather than as reactive centrifugal force although this usage is deprecated in elementary mechanics. == See also == Balancing of rotating masses Centrifugal mechanism of acceleration Equivalence principle Folk physics Lagrangian point Lamm equation == Notes == == References == == External links == Media related to Centrifugal force at Wikimedia Commons
Wikipedia/Centrifugal_force_(fictitious)
Classical Mechanics is a textbook written by Thomas Walter Bannerman Kibble and Frank Berkshire of the Imperial College Mathematics Department. The book covers the fundamental principles and techniques of classical mechanics, a long-standing subject which is at the base of all of physics. == Publication history == The first edition was published in 1966, and until the fourth edition (published in 1996), Thomas Kibble was the sole author. The book has been translated into several languages, including French, Greek, German, Turkish, Spanish and Portuguese. == Reception == The original edition was reviewed in Current Science. The fourth edition was reviewed by C. Isenberg in 1997 in the European Journal of Physics, and the fifth edition was reviewed in Contemporary Physics. == See also == Newtonian mechanics Classical Mechanics (Goldstein book) List of textbooks on classical and quantum mechanics == References == == External links == Official website doi:10.1142/p310
Wikipedia/Classical_Mechanics_(Kibble_and_Berkshire)
In mathematics and physics, vector is a term that refers to quantities that cannot be expressed by a single number (a scalar), or to elements of some vector spaces. Historically, vectors were introduced in geometry and physics (typically in mechanics) for quantities that have both a magnitude and a direction, such as displacements, forces and velocity. Such quantities are represented by geometric vectors in the same way as distances, masses and time are represented by real numbers. The term vector is also used, in some contexts, for tuples, which are finite sequences (of numbers or other objects) of a fixed length. Both geometric vectors and tuples can be added and scaled, and these vector operations led to the concept of a vector space, which is a set equipped with a vector addition and a scalar multiplication that satisfy some axioms generalizing the main properties of operations on the above sorts of vectors. A vector space formed by geometric vectors is called a Euclidean vector space, and a vector space formed by tuples is called a coordinate vector space. Many vector spaces are considered in mathematics, such as extension fields, polynomial rings, algebras and function spaces. The term vector is generally not used for elements of these vector spaces, and is generally reserved for geometric vectors, tuples, and elements of unspecified vector spaces (for example, when discussing general properties of vector spaces). == Vectors in Euclidean geometry == == Vector quantities == == Vector spaces == == Vectors in algebra == Every algebra over a field is a vector space, but elements of an algebra are generally not called vectors. However, in some cases, they are called vectors, mainly due to historical reasons. Vector quaternion, a quaternion with a zero real part Multivector or p-vector, an element of the exterior algebra of a vector space. Spinors, also called spin vectors, have been introduced for extending the notion of rotation vector. 
In fact, rotation vectors represent well rotations locally, but not globally, because a closed loop in the space of rotation vectors may induce a curve in the space of rotations that is not a loop. Also, the manifold of rotation vectors is orientable, while the manifold of rotations is not. Spinors are elements of a vector subspace of some Clifford algebra. Witt vector, an infinite sequence of elements of a commutative ring, which belongs to an algebra over this ring, and has been introduced for handling carry propagation in the operations on p-adic numbers. == Data represented by vectors == The set R n {\displaystyle \mathbb {R} ^{n}} of tuples of n real numbers has a natural structure of vector space defined by component-wise addition and scalar multiplication. It is common to call these tuples vectors, even in contexts where vector-space operations do not apply. More generally, when some data can be represented naturally by vectors, they are often called vectors even when addition and scalar multiplication of vectors are not valid operations on these data. Here are some examples. Rotation vector, a Euclidean vector whose direction is that of the axis of a rotation and magnitude is the angle of the rotation. Burgers vector, a vector that represents the magnitude and direction of the lattice distortion of dislocation in a crystal lattice Interval vector, in musical set theory, an array that expresses the intervallic content of a pitch-class set Probability vector, in statistics, a vector with non-negative entries that sum to one. Random vector or multivariate random variable, in statistics, a set of real-valued random variables that may be correlated. However, a random vector may also refer to a random variable that takes its values in a vector space. Logical vector, a vector of 0s and 1s (Booleans). 
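The component-wise addition and scalar multiplication on tuples described above can be sketched in a few lines of Python; the function names `add` and `scale` are illustrative, not a standard library API:

```python
# Component-wise vector operations on tuples in R^n, as described above.
def add(u, v):
    # vector addition: add corresponding components
    return tuple(a + b for a, b in zip(u, v))

def scale(c, u):
    # scalar multiplication: multiply every component by c
    return tuple(c * a for a in u)

u = (1.0, 2.0, 3.0)
v = (4.0, -1.0, 0.5)
print(add(u, v))      # (5.0, 1.0, 3.5)
print(scale(2.0, u))  # (2.0, 4.0, 6.0)
```

These two operations are exactly the structure that the vector-space axioms generalize.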
== Vectors in calculus == Calculus serves as a foundational mathematical tool in the realm of vectors, offering a framework for the analysis and manipulation of vector quantities in diverse scientific disciplines, notably physics and engineering. Vector-valued functions, where the output is a vector, are scrutinized using calculus to derive essential insights into motion within three-dimensional space. Vector calculus extends traditional calculus principles to vector fields, introducing operations like gradient, divergence, and curl, which find applications in physics and engineering contexts. Line integrals, crucial for calculating work along a path within force fields, and surface integrals, employed to determine quantities like flux, illustrate the practical utility of calculus in vector analysis. Volume integrals, essential for computations involving scalar or vector fields over three-dimensional regions, contribute to understanding mass distribution, charge density, and fluid flow rates. 
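As a concrete illustration of one of the operations mentioned above, the divergence of the plane vector field F(x, y) = (x, y) is exactly 2 everywhere, and a central finite-difference approximation recovers that value; the field, evaluation point, and step size are all illustrative choices:

```python
# Numerical sketch of the divergence of a plane vector field.
# For F(x, y) = (x, y), div F = dFx/dx + dFy/dy = 1 + 1 = 2 everywhere.
def F(x, y):
    return (x, y)

def divergence(field, x, y, h=1e-5):
    # central differences approximate the two partial derivatives
    fx_plus, _ = field(x + h, y)
    fx_minus, _ = field(x - h, y)
    _, fy_plus = field(x, y + h)
    _, fy_minus = field(x, y - h)
    return (fx_plus - fx_minus) / (2 * h) + (fy_plus - fy_minus) / (2 * h)

print(divergence(F, 1.3, -0.7))  # approximately 2.0
```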
== See also == Vector (disambiguation) === Vector spaces with more structure === Graded vector space, a type of vector space that includes the extra structure of gradation Normed vector space, a vector space on which a norm is defined Hilbert space Ordered vector space, a vector space equipped with a partial order Super vector space, name for a Z2-graded vector space Symplectic vector space, a vector space V equipped with a non-degenerate, skew-symmetric, bilinear form Topological vector space, a blend of topological structure with the algebraic concept of a vector space === Vector fields === A vector field is a vector-valued function that, generally, has a domain of the same dimension (as a manifold) as its codomain, Conservative vector field, a vector field that is the gradient of a scalar potential field Hamiltonian vector field, a vector field defined for any energy function or Hamiltonian Killing vector field, a vector field on a Riemannian manifold associated with a symmetry Solenoidal vector field, a vector field with zero divergence Vector potential, a vector field whose curl is a given vector field Vector flow, a set of closely related concepts of the flow determined by a vector field === See also === Ricci calculus Vector Analysis, a textbook on vector calculus by Wilson, first published in 1901, which did much to standardize the notation and vocabulary of three-dimensional linear algebra and vector calculus Vector bundle, a topological construction that makes precise the idea of a family of vector spaces parameterized by another space Vector calculus, a branch of mathematics concerned with differentiation and integration of vector fields Vector differential, or del, a vector differential operator represented by the nabla symbol ∇ {\displaystyle \nabla } Vector Laplacian, the vector Laplace operator, denoted by ∇ 2 {\displaystyle \nabla ^{2}} , is a differential operator defined over a vector field Vector notation, common notation used when working with 
vectors Vector operator, a type of differential operator used in vector calculus Vector product, or cross product, an operation on two vectors in a three-dimensional Euclidean space, producing a third three-dimensional Euclidean vector perpendicular to the original two Vector projection, also known as vector resolute or vector component, a linear mapping producing a vector parallel to a second vector Vector-valued function, a function that has a vector space as a codomain Vectorization (mathematics), a linear transformation that converts a matrix into a column vector Vector autoregression, an econometric model used to capture the evolution and the interdependencies between multiple time series Vector boson, a boson with the spin quantum number equal to 1 Vector measure, a function defined on a family of sets and taking vector values satisfying certain properties Vector meson, a meson with total spin 1 and odd parity Vector quantization, a quantization technique used in signal processing Vector soliton, a solitary wave with multiple components coupled together that maintains its shape during propagation Vector synthesis, a type of audio synthesis Phase vector == Notes == == References == Vectors - The Feynman Lectures on Physics Heinbockel, J. H. (2001). Introduction to Tensor Calculus and Continuum Mechanics. Trafford Publishing. ISBN 1-55369-133-4. Itô, Kiyosi (1993). Encyclopedic Dictionary of Mathematics (2nd ed.). MIT Press. ISBN 978-0-262-59020-4. Ivanov, A.B. (2001) [1994], "Vector", Encyclopedia of Mathematics, EMS Press Pedoe, Daniel (1988). Geometry: A comprehensive course. Dover. ISBN 0-486-65812-0.
Wikipedia/Vector_(physics)
Newton's laws of motion are three physical laws that describe the relationship between the motion of an object and the forces acting on it. These laws, which provide the basis for Newtonian mechanics, can be paraphrased as follows: A body remains at rest, or in motion at a constant speed in a straight line, unless it is acted upon by a force. At any instant of time, the net force on a body is equal to the body's acceleration multiplied by its mass or, equivalently, the rate at which the body's momentum is changing with time. If two bodies exert forces on each other, these forces have the same magnitude but opposite directions. The three laws of motion were first stated by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), originally published in 1687. Newton used them to investigate and explain the motion of many physical objects and systems. In the time since Newton, new insights, especially around the concept of energy, built the field of classical mechanics on his foundations. Limitations to Newton's laws have also been discovered; new theories are necessary when objects move at very high speeds (special relativity), are very massive (general relativity), or are very small (quantum mechanics). == Prerequisites == Newton's laws are often stated in terms of point or particle masses, that is, bodies whose volume is negligible. This is a reasonable approximation for real bodies when the motion of internal parts can be neglected, and when the separation between bodies is much larger than the size of each. For instance, the Earth and the Sun can both be approximated as pointlike when considering the orbit of the former around the latter, but the Earth is not pointlike when considering activities on its surface. The mathematical description of motion, or kinematics, is based on the idea of specifying positions using numerical coordinates. 
Movement is represented by these numbers changing over time: a body's trajectory is represented by a function that assigns to each value of a time variable the values of all the position coordinates. The simplest case is one-dimensional, that is, when a body is constrained to move only along a straight line. Its position can then be given by a single number, indicating where it is relative to some chosen reference point. For example, a body might be free to slide along a track that runs left to right, and so its location can be specified by its distance from a convenient zero point, or origin, with negative numbers indicating positions to the left and positive numbers indicating positions to the right. If the body's location as a function of time is s ( t ) {\displaystyle s(t)} , then its average velocity over the time interval from t 0 {\displaystyle t_{0}} to t 1 {\displaystyle t_{1}} is Δ s Δ t = s ( t 1 ) − s ( t 0 ) t 1 − t 0 . {\displaystyle {\frac {\Delta s}{\Delta t}}={\frac {s(t_{1})-s(t_{0})}{t_{1}-t_{0}}}.} Here, the Greek letter Δ {\displaystyle \Delta } (delta) is used, per tradition, to mean "change in". A positive average velocity means that the position coordinate s {\displaystyle s} increases over the interval in question, a negative average velocity indicates a net decrease over that interval, and an average velocity of zero means that the body ends the time interval in the same place as it began. Calculus gives the means to define an instantaneous velocity, a measure of a body's speed and direction of movement at a single moment of time, rather than over an interval. One notation for the instantaneous velocity is to replace Δ {\displaystyle \Delta } with the symbol d {\displaystyle d} , for example, v = d s d t . {\displaystyle v={\frac {ds}{dt}}.} This denotes that the instantaneous velocity is the derivative of the position with respect to time. 
It can roughly be thought of as the ratio between an infinitesimally small change in position d s {\displaystyle ds} to the infinitesimally small time interval d t {\displaystyle dt} over which it occurs. More carefully, the velocity and all other derivatives can be defined using the concept of a limit. A function f ( t ) {\displaystyle f(t)} has a limit of L {\displaystyle L} at a given input value t 0 {\displaystyle t_{0}} if the difference between f {\displaystyle f} and L {\displaystyle L} can be made arbitrarily small by choosing an input sufficiently close to t 0 {\displaystyle t_{0}} . One writes, lim t → t 0 f ( t ) = L . {\displaystyle \lim _{t\to t_{0}}f(t)=L.} Instantaneous velocity can be defined as the limit of the average velocity as the time interval shrinks to zero: d s d t = lim Δ t → 0 s ( t + Δ t ) − s ( t ) Δ t . {\displaystyle {\frac {ds}{dt}}=\lim _{\Delta t\to 0}{\frac {s(t+\Delta t)-s(t)}{\Delta t}}.} Acceleration is to velocity as velocity is to position: it is the derivative of the velocity with respect to time. Acceleration can likewise be defined as a limit: a = d v d t = lim Δ t → 0 v ( t + Δ t ) − v ( t ) Δ t . {\displaystyle a={\frac {dv}{dt}}=\lim _{\Delta t\to 0}{\frac {v(t+\Delta t)-v(t)}{\Delta t}}.} Consequently, the acceleration is the second derivative of position, often written d 2 s d t 2 {\displaystyle {\frac {d^{2}s}{dt^{2}}}} . Position, when thought of as a displacement from an origin point, is a vector: a quantity with both magnitude and direction.: 1  Velocity and acceleration are vector quantities as well. The mathematical tools of vector algebra provide the means to describe motion in two, three or more dimensions. Vectors are often denoted with an arrow, as in s → {\displaystyle {\vec {s}}} , or in bold typeface, such as s {\displaystyle {\bf {s}}} . 
Often, vectors are represented visually as arrows, with the direction of the vector being the direction of the arrow, and the magnitude of the vector indicated by the length of the arrow. Numerically, a vector can be represented as a list; for example, a body's velocity vector might be v = ( 3 m / s , 4 m / s ) {\displaystyle \mathbf {v} =(\mathrm {3~m/s} ,\mathrm {4~m/s} )} , indicating that it is moving at 3 metres per second along the horizontal axis and 4 metres per second along the vertical axis. The same motion described in a different coordinate system will be represented by different numbers, and vector algebra can be used to translate between these alternatives.: 4  The study of mechanics is complicated by the fact that household words like energy are used with a technical meaning. Moreover, words which are synonymous in everyday speech are not so in physics: force is not the same as power or pressure, for example, and mass has a different meaning than weight.: 150  The physics concept of force makes quantitative the everyday idea of a push or a pull. Forces in Newtonian mechanics are often due to strings and ropes, friction, muscle effort, gravity, and so forth. Like displacement, velocity, and acceleration, force is a vector quantity. == Laws == === First law === Translated from Latin, Newton's first law reads, Every object perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed thereon. Newton's first law expresses the principle of inertia: the natural behavior of a body is to move in a straight line at constant speed. A body's motion preserves the status quo, but external forces can perturb this. The modern understanding of Newton's first law is that no inertial observer is privileged over any other. The concept of an inertial observer makes quantitative the everyday idea of feeling no effects of motion. 
For example, a person standing on the ground watching a train go past is an inertial observer. If the observer on the ground sees the train moving smoothly in a straight line at a constant speed, then a passenger sitting on the train will also be an inertial observer: the train passenger feels no motion. The principle expressed by Newton's first law is that there is no way to say which inertial observer is "really" moving and which is "really" standing still. One observer's state of rest is another observer's state of uniform motion in a straight line, and no experiment can deem either point of view to be correct or incorrect. There is no absolute standard of rest.: 62–63 : 7–9  Newton himself believed that absolute space and time existed, but that the only measures of space or time accessible to experiment are relative. === Second law === The change of motion of an object is proportional to the force impressed; and is made in the direction of the straight line in which the force is impressed.: 114  By "motion", Newton meant the quantity now called momentum, which depends upon the amount of matter contained in a body, the speed at which that body is moving, and the direction in which it is moving. In modern notation, the momentum of a body is the product of its mass and its velocity: p = m v , {\displaystyle \mathbf {p} =m\mathbf {v} \,,} where all three quantities can change over time. In common cases the mass m {\displaystyle m} does not change with time and the derivative acts only upon the velocity. Then force equals the product of the mass and the time derivative of the velocity, which is the acceleration: F = m d v d t = m a . {\displaystyle \mathbf {F} =m{\frac {d\mathbf {v} }{dt}}=m\mathbf {a} \,.} As the acceleration is the second derivative of position with respect to time, this can also be written F = m d 2 s d t 2 . 
{\displaystyle \mathbf {F} =m{\frac {d^{2}\mathbf {s} }{dt^{2}}}.} Newton's second law, in modern form, states that the time derivative of the momentum is the force:: 4.1  F = d p d t . {\displaystyle \mathbf {F} ={\frac {d\mathbf {p} }{dt}}\,.} When applied to systems of variable mass, the equation above is valid only for a fixed set of particles. Applying the derivative as in F = m d v d t + v d m d t ( i n c o r r e c t ) {\displaystyle \mathbf {F} =m{\frac {\mathrm {d} \mathbf {v} }{\mathrm {d} t}}+\mathbf {v} {\frac {\mathrm {d} m}{\mathrm {d} t}}\ \ \mathrm {(incorrect)} } can lead to incorrect results. For example, the momentum of a water jet system must include the momentum of the ejected water: F e x t = d p d t − v e j e c t d m d t . {\displaystyle \mathbf {F} _{\mathrm {ext} }={\mathrm {d} \mathbf {p} \over \mathrm {d} t}-\mathbf {v} _{\mathrm {eject} }{\frac {\mathrm {d} m}{\mathrm {d} t}}.} The forces acting on a body add as vectors, and so the total force on a body depends upon both the magnitudes and the directions of the individual forces.: 58  When the net force on a body is equal to zero, then by Newton's second law, the body does not accelerate, and it is said to be in mechanical equilibrium. A state of mechanical equilibrium is stable if, when the position of the body is changed slightly, the body remains near that equilibrium. Otherwise, the equilibrium is unstable.: 121 : 174  A common visual representation of forces acting in concert is the free body diagram, which schematically portrays a body of interest and the forces applied to it by outside influences. For example, a free body diagram of a block sitting upon an inclined plane can illustrate the combination of gravitational force, "normal" force, friction, and string tension. Newton's second law is sometimes presented as a definition of force, i.e., a force is that which exists when an inertial observer sees a body accelerating.
This is sometimes regarded as a potential tautology: acceleration implies force, and force implies acceleration. However, Newton's second law does not merely define force by the acceleration: forces exist separately from the acceleration they produce in any particular system. The same force that is identified as producing an acceleration of one object can then be applied to any other object, and the resulting accelerations (produced by that same force) will always be inversely proportional to the mass of the object. What Newton's second law states is that all the effect of a force on a system can be reduced to two pieces of information, the magnitude of the force and its direction; the law then goes on to specify what the effect is. Beyond that, an equation detailing the force might also be specified, like Newton's law of universal gravitation. By inserting such an expression for F {\displaystyle \mathbf {F} } into Newton's second law, an equation with predictive power can be written. Newton's second law has also been regarded as setting out a research program for physics, establishing that important goals of the subject are to identify the forces present in nature and to catalogue the constituents of matter.: 134 : 12-2  However, forces can often be measured directly with no acceleration being involved, such as through weighing scales. By postulating a physical quantity that can be measured independently of acceleration, Newton made an objective physical statement with the second law alone, whose predictions can be verified even if no force law is given. === Third law === To every action, there is always opposed an equal reaction; or, the mutual actions of two bodies upon each other are always equal, and directed to contrary parts.: 116  In other words, if one body exerts a force on a second body, the second body is also exerting a force on the first body, of equal magnitude in the opposite direction.
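The equal-and-opposite force pairs of the third law imply that the total momentum of two isolated interacting bodies stays constant, and this can be checked numerically. The sketch below couples two masses with a linear spring, chosen purely for illustration; all masses, spring parameters, and initial conditions are illustrative assumptions:

```python
# Two isolated bodies interacting through an internal force pair (a spring).
# By Newton's third law the forces are equal and opposite, so the total
# momentum p1 + p2 should stay constant as the system evolves.
m1, m2 = 2.0, 3.0      # kg (illustrative)
x1, x2 = 0.0, 1.5      # m
v1, v2 = 1.0, -0.5     # m/s
k, rest = 4.0, 1.0     # spring constant (N/m) and rest length (m)

p_initial = m1 * v1 + m2 * v2
dt = 1e-4
for _ in range(100_000):             # integrate 10 s with semi-implicit Euler
    f = k * ((x2 - x1) - rest)       # force on body 1; body 2 feels -f
    v1 += (f / m1) * dt
    v2 += (-f / m2) * dt
    x1 += v1 * dt
    x2 += v2 * dt

p_final = m1 * v1 + m2 * v2
print(p_initial, p_final)  # the two agree to numerical precision
```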
Overly brief paraphrases of the third law, like "action equals reaction" might have caused confusion among generations of students: the "action" and "reaction" apply to different bodies. For example, consider a book at rest on a table. The Earth's gravity pulls down upon the book. The "reaction" to that "action" is not the support force from the table holding up the book, but the gravitational pull of the book acting on the Earth. Newton's third law relates to a more fundamental principle, the conservation of momentum. The latter remains true even in cases where Newton's statement does not, for instance when force fields as well as material bodies carry momentum, and when momentum is defined properly, in quantum mechanics as well. In Newtonian mechanics, if two bodies have momenta p 1 {\displaystyle \mathbf {p} _{1}} and p 2 {\displaystyle \mathbf {p} _{2}} respectively, then the total momentum of the pair is p = p 1 + p 2 {\displaystyle \mathbf {p} =\mathbf {p} _{1}+\mathbf {p} _{2}} , and the rate of change of p {\displaystyle \mathbf {p} } is d p d t = d p 1 d t + d p 2 d t . {\displaystyle {\frac {d\mathbf {p} }{dt}}={\frac {d\mathbf {p} _{1}}{dt}}+{\frac {d\mathbf {p} _{2}}{dt}}.} By Newton's second law, the first term is the total force upon the first body, and the second term is the total force upon the second body. If the two bodies are isolated from outside influences, the only force upon the first body can be that from the second, and vice versa. By Newton's third law, these forces have equal magnitude but opposite direction, so they cancel when added, and p {\displaystyle \mathbf {p} } is constant. Alternatively, if p {\displaystyle \mathbf {p} } is known to be constant, it follows that the forces have equal magnitude and opposite direction. === Candidates for additional laws === Various sources have proposed elevating other ideas used in classical mechanics to the status of Newton's laws. 
For example, in Newtonian mechanics, the total mass of a body made by bringing together two smaller bodies is the sum of their individual masses. Frank Wilczek has suggested calling attention to this assumption by designating it "Newton's Zeroth Law". Another candidate for a "zeroth law" is the fact that at any instant, a body reacts to the forces applied to it at that instant. Likewise, the idea that forces add like vectors (or in other words obey the superposition principle), and the idea that forces change the energy of a body, have both been described as a "fourth law". Moreover, some texts organize the basic ideas of Newtonian mechanics into different postulates, other than the three laws as commonly phrased, with the goal of being more clear about what is empirically observed and what is true by definition.: 9  === Examples === The study of the behavior of massive bodies using Newton's laws is known as Newtonian mechanics. Some example problems in Newtonian mechanics are particularly noteworthy for conceptual or historical reasons. ==== Uniformly accelerated motion ==== If a body falls from rest near the surface of the Earth, then in the absence of air resistance, it will accelerate at a constant rate. This is known as free fall. The speed attained during free fall is proportional to the elapsed time, and the distance traveled is proportional to the square of the elapsed time. Importantly, the acceleration is the same for all bodies, independently of their mass. This follows from combining Newton's second law of motion with his law of universal gravitation. 
The latter states that the magnitude of the gravitational force from the Earth upon the body is F = G M m r 2 , {\displaystyle F={\frac {GMm}{r^{2}}},} where m {\displaystyle m} is the mass of the falling body, M {\displaystyle M} is the mass of the Earth, G {\displaystyle G} is Newton's constant, and r {\displaystyle r} is the distance from the center of the Earth to the body's location, which is very nearly the radius of the Earth. Setting this equal to m a {\displaystyle ma} , the body's mass m {\displaystyle m} cancels from both sides of the equation, leaving an acceleration that depends upon G {\displaystyle G} , M {\displaystyle M} , and r {\displaystyle r} , and r {\displaystyle r} can be taken to be constant. This particular value of acceleration is typically denoted g {\displaystyle g} : g = G M r 2 ≈ 9.8 m / s 2 . {\displaystyle g={\frac {GM}{r^{2}}}\approx \mathrm {9.8~m/s^{2}} .} If the body is not released from rest but instead launched upwards and/or horizontally with nonzero velocity, then free fall becomes projectile motion. When air resistance can be neglected, projectiles follow parabola-shaped trajectories, because gravity affects the body's vertical motion and not its horizontal. At the peak of the projectile's trajectory, its vertical velocity is zero, but its acceleration is g {\displaystyle g} downwards, as it is at all times. Setting the wrong vector equal to zero is a common confusion among physics students. ==== Uniform circular motion ==== When a body is in uniform circular motion, the force on it changes the direction of its motion but not its speed. For a body moving in a circle of radius r {\displaystyle r} at a constant speed v {\displaystyle v} , its acceleration has a magnitude a = v 2 r {\displaystyle a={\frac {v^{2}}{r}}} and is directed toward the center of the circle. 
The force required to sustain this acceleration, called the centripetal force, is therefore also directed toward the center of the circle and has magnitude m v 2 / r {\displaystyle mv^{2}/r} . Many orbits, such as that of the Moon around the Earth, can be approximated by uniform circular motion. In such cases, the centripetal force is gravity, and by Newton's law of universal gravitation has magnitude G M m / r 2 {\displaystyle GMm/r^{2}} , where M {\displaystyle M} is the mass of the larger body being orbited. Therefore, the mass of a body can be calculated from observations of another body orbiting around it.: 130  Newton's cannonball is a thought experiment that interpolates between projectile motion and uniform circular motion. A cannonball that is lobbed weakly off the edge of a tall cliff will hit the ground in the same amount of time as if it were dropped from rest, because the force of gravity only affects the cannonball's momentum in the downward direction, and its effect is not diminished by horizontal movement. If the cannonball is launched with a greater initial horizontal velocity, then it will travel farther before it hits the ground, but it will still hit the ground in the same amount of time. However, if the cannonball is launched with an even larger initial velocity, then the curvature of the Earth becomes significant: the ground itself will curve away from the falling cannonball. A very fast cannonball will fall away from the inertial straight-line trajectory at the same rate that the Earth curves away beneath it; in other words, it will be in orbit (imagining that it is not slowed by air resistance or obstacles). ==== Harmonic motion ==== Consider a body of mass m {\displaystyle m} able to move along the x {\displaystyle x} axis, and suppose an equilibrium point exists at the position x = 0 {\displaystyle x=0} . 
That is, at x = 0 {\displaystyle x=0} , the net force upon the body is the zero vector, and by Newton's second law, the body will not accelerate. If the force upon the body is proportional to the displacement from the equilibrium point, and directed to the equilibrium point, then the body will perform simple harmonic motion. Writing the force as F = − k x {\displaystyle F=-kx} , Newton's second law becomes m d 2 x d t 2 = − k x . {\displaystyle m{\frac {d^{2}x}{dt^{2}}}=-kx\,.} This differential equation has the solution x ( t ) = A cos ⁡ ω t + B sin ⁡ ω t {\displaystyle x(t)=A\cos \omega t+B\sin \omega t\,} where the frequency ω {\displaystyle \omega } is equal to k / m {\displaystyle {\sqrt {k/m}}} , and the constants A {\displaystyle A} and B {\displaystyle B} can be calculated knowing, for example, the position and velocity the body has at a given time, like t = 0 {\displaystyle t=0} . One reason that the harmonic oscillator is a conceptually important example is that it is a good approximation for many systems near a stable mechanical equilibrium. For example, a pendulum has a stable equilibrium in the vertical position: if motionless there, it will remain there, and if pushed slightly, it will swing back and forth. Neglecting air resistance and friction in the pivot, the force upon the pendulum is gravity, and Newton's second law becomes d 2 θ d t 2 = − g L sin ⁡ θ , {\displaystyle {\frac {d^{2}\theta }{dt^{2}}}=-{\frac {g}{L}}\sin \theta ,} where L {\displaystyle L} is the length of the pendulum and θ {\displaystyle \theta } is its angle from the vertical. When the angle θ {\displaystyle \theta } is small, the sine of θ {\displaystyle \theta } is nearly equal to θ {\displaystyle \theta } (see small-angle approximation), and so this expression simplifies to the equation for a simple harmonic oscillator with frequency ω = g / L {\displaystyle \omega ={\sqrt {g/L}}} .
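As a concrete check, the solution x(t) = A cos ωt + B sin ωt can be verified against m x″ = −kx by finite differences; the sketch below uses illustrative values of m, k, and the initial conditions (these numbers are assumptions for the example, not values from the text):

```python
import math

# Illustrative values (assumptions): m = 2.0 kg, k = 8.0 N/m.
m, k = 2.0, 8.0
omega = math.sqrt(k / m)          # natural frequency omega = sqrt(k/m) = 2.0 rad/s

# Initial conditions x(0) = x0 and x'(0) = v0 fix the constants:
# x(0) = A and x'(0) = B*omega, so A = x0 and B = v0/omega.
x0, v0 = 0.5, 1.0
A, B = x0, v0 / omega

def x(t):
    return A * math.cos(omega * t) + B * math.sin(omega * t)

# Check m x'' ≈ -k x at a sample time via a centered second difference.
t, h = 0.7, 1e-4
x_dd = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
assert abs(m * x_dd + k * x(t)) < 1e-3
```

The same finite-difference check passes at any sample time t, which is what it means for x(t) to solve the differential equation.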
A harmonic oscillator can be damped, often by friction or viscous drag, in which case energy bleeds out of the oscillator and the amplitude of the oscillations decreases over time. Also, a harmonic oscillator can be driven by an applied force, which can lead to the phenomenon of resonance. ==== Objects with variable mass ==== Newtonian physics treats matter as being neither created nor destroyed, though it may be rearranged. It can be the case that an object of interest gains or loses mass because matter is added to or removed from it. In such a situation, Newton's laws can be applied to the individual pieces of matter, keeping track of which pieces belong to the object of interest over time. For instance, if a rocket of mass M ( t ) {\displaystyle M(t)} , moving at velocity v ( t ) {\displaystyle \mathbf {v} (t)} , ejects matter at a velocity u {\displaystyle \mathbf {u} } relative to the rocket, then F = M d v d t − u d M d t {\displaystyle \mathbf {F} =M{\frac {d\mathbf {v} }{dt}}-\mathbf {u} {\frac {dM}{dt}}\,} where F {\displaystyle \mathbf {F} } is the net external force (e.g., a planet's gravitational pull).: 139  ==== Fan and sail ==== The fan and sail example is a situation studied in discussions of Newton's third law. In the situation, a fan is attached to a cart or a sailboat and blows on its sail. From the third law, one would reason that the force of the air pushing in one direction would cancel out the force exerted by the fan on the sail, leaving the entire apparatus stationary. However, because the system is not entirely enclosed, there are conditions in which the vessel will move; for example, if the sail is built in a manner that redirects the majority of the airflow back towards the fan, the net force will result in the vessel moving forward. == Work and energy == The concept of energy was developed after Newton's time, but it has become an inseparable part of what is considered "Newtonian" physics.
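Returning to the variable-mass rocket above: with no external force (F = 0), the equation integrates to the classical result Δv = v_e ln(M0/M1), where v_e is the exhaust speed and M0, M1 the initial and final masses. A simple Euler loop reproduces this; all numbers below are made up for illustration:

```python
import math

# Toy numbers (assumptions): exhaust speed 2000 m/s, initial mass 1000 kg,
# final mass 400 kg, constant mass-flow rate of 1 kg/s.
v_e = 2000.0          # exhaust speed; u = -v_e (ejecta move backward relative to the rocket)
M0, M1 = 1000.0, 400.0
mdot = -1.0           # dM/dt in kg/s (mass decreases)

# Integrate M dv/dt = u dM/dt (the F = 0 case) with small Euler steps.
M, v, dt = M0, 0.0, 1e-3
while M > M1:
    v += (-v_e) * mdot / M * dt   # dv = u (dM/M)
    M += mdot * dt

tsiolkovsky = v_e * math.log(M0 / M1)
assert abs(v - tsiolkovsky) < 1.0
```

The numerical Δv agrees with v_e ln(M0/M1) ≈ 1832.6 m/s to well under 1 m/s at this step size.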
Energy can broadly be classified into kinetic, due to a body's motion, and potential, due to a body's position relative to others. Thermal energy, the energy carried by heat flow, is a type of kinetic energy not associated with the macroscopic motion of objects but instead with the movements of the atoms and molecules of which they are made. According to the work-energy theorem, when a force acts upon a body while that body moves along the line of the force, the force does work upon the body, and the amount of work done is equal to the change in the body's kinetic energy. In many cases of interest, the net work done by a force when a body moves in a closed loop — starting at a point, moving along some trajectory, and returning to the initial point — is zero. If this is the case, then the force can be written in terms of the gradient of a function called a scalar potential:: 303  F = − ∇ U . {\displaystyle \mathbf {F} =-\mathbf {\nabla } U\,.} This is true for many forces including that of gravity, but not for friction; indeed, almost any problem in a mechanics textbook that does not involve friction can be expressed in this way.: 19  The fact that the force can be written in this way can be understood from the conservation of energy. Without friction to dissipate a body's energy into heat, the body's energy will trade between potential and (non-thermal) kinetic forms while the total amount remains constant. Any gain of kinetic energy, which occurs when the net force on the body accelerates it to a higher speed, must be accompanied by a loss of potential energy. So, the net force upon the body is determined by the manner in which the potential energy decreases. == Rigid-body motion and rotation == A rigid body is an object whose size is too large to neglect and which maintains the same shape over time. 
In Newtonian mechanics, the motion of a rigid body is often understood by separating it into movement of the body's center of mass and movement around the center of mass. === Center of mass === Significant aspects of the motion of an extended body can be understood by imagining the mass of that body concentrated to a single point, known as the center of mass. The location of a body's center of mass depends upon how that body's material is distributed. For a collection of pointlike objects with masses m 1 , … , m N {\displaystyle m_{1},\ldots ,m_{N}} at positions r 1 , … , r N {\displaystyle \mathbf {r} _{1},\ldots ,\mathbf {r} _{N}} , the center of mass is located at R = ∑ i = 1 N m i r i M , {\displaystyle \mathbf {R} =\sum _{i=1}^{N}{\frac {m_{i}\mathbf {r} _{i}}{M}},} where M {\displaystyle M} is the total mass of the collection. In the absence of a net external force, the center of mass moves at a constant speed in a straight line. This applies, for example, to a collision between two bodies. If the total external force is not zero, then the center of mass changes velocity as though it were a point body of mass M {\displaystyle M} . This follows from the fact that the internal forces within the collection, the forces that the objects exert upon each other, occur in balanced pairs by Newton's third law. In a system of two bodies with one much more massive than the other, the center of mass will approximately coincide with the location of the more massive body.: 22–24  === Rotational analogues of Newton's laws === When Newton's laws are applied to rotating extended bodies, they lead to new quantities that are analogous to those invoked in the original laws. The analogue of mass is the moment of inertia, the counterpart of momentum is angular momentum, and the counterpart of force is torque. Angular momentum is calculated with respect to a reference point. 
If the displacement vector from a reference point to a body is r {\displaystyle \mathbf {r} } and the body has momentum p {\displaystyle \mathbf {p} } , then the body's angular momentum with respect to that point is, using the vector cross product, L = r × p . {\displaystyle \mathbf {L} =\mathbf {r} \times \mathbf {p} .} Taking the time derivative of the angular momentum gives d L d t = ( d r d t ) × p + r × d p d t = v × m v + r × F . {\displaystyle {\frac {d\mathbf {L} }{dt}}=\left({\frac {d\mathbf {r} }{dt}}\right)\times \mathbf {p} +\mathbf {r} \times {\frac {d\mathbf {p} }{dt}}=\mathbf {v} \times m\mathbf {v} +\mathbf {r} \times \mathbf {F} .} The first term vanishes because v {\displaystyle \mathbf {v} } and m v {\displaystyle m\mathbf {v} } point in the same direction. The remaining term is the torque, τ = r × F . {\displaystyle \mathbf {\tau } =\mathbf {r} \times \mathbf {F} .} When the torque is zero, the angular momentum is constant, just as when the force is zero, the momentum is constant.: 14–15  The torque can vanish even when the force is non-zero, if the body is located at the reference point ( r = 0 {\displaystyle \mathbf {r} =0} ) or if the force F {\displaystyle \mathbf {F} } and the displacement vector r {\displaystyle \mathbf {r} } are directed along the same line. The angular momentum of a collection of point masses, and thus of an extended body, is found by adding the contributions from each of the points. This provides a means to characterize a body's rotation about an axis, by adding up the angular momenta of its individual pieces. The result depends on the chosen axis, the shape of the body, and the rate of rotation.: 28  === Multi-body gravitational system === Newton's law of universal gravitation states that any body attracts any other body along the straight line connecting them. The size of the attracting force is proportional to the product of their masses, and inversely proportional to the square of the distance between them. 
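The inverse-square attraction can also be explored numerically by repeatedly updating velocities from the force and positions from the velocities. As a sketch (in toy units with GM = 1, an assumption chosen purely for illustration), this reproduces a closed circular orbit:

```python
import math

# Toy units (assumption): GM = 1 and orbit radius r = 1, so a circular
# orbit has speed v = sqrt(GM/r) = 1 and period T = 2*pi.
GM = 1.0
x, y = 1.0, 0.0
vx, vy = 0.0, 1.0

dt = 1e-4
steps = int(2 * math.pi / dt)     # integrate for one orbital period
for _ in range(steps):
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM * x / r3, -GM * y / r3   # inverse-square acceleration toward the origin
    vx += ax * dt; vy += ay * dt          # update velocity from the force...
    x += vx * dt;  y += vy * dt           # ...then position from the new velocity

# After one period the body should be back near its starting point.
assert math.hypot(x - 1.0, y - 0.0) < 0.01
```

Updating the velocity first and then the position with the new velocity (semi-implicit Euler) keeps the orbit from spiraling in or out the way naive Euler stepping would.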
Finding the shape of the orbits that an inverse-square force law will produce is known as the Kepler problem. The Kepler problem can be solved in multiple ways, including by demonstrating that the Laplace–Runge–Lenz vector is constant, or by applying a duality transformation to a 2-dimensional harmonic oscillator. However it is solved, the result is that orbits will be conic sections, that is, ellipses (including circles), parabolas, or hyperbolas. The eccentricity of the orbit, and thus the type of conic section, is determined by the energy and the angular momentum of the orbiting body. Planets do not have sufficient energy to escape the Sun, and so their orbits are ellipses, to a good approximation; because the planets pull on one another, actual orbits are not exactly conic sections. If a third mass is added, the Kepler problem becomes the three-body problem, which in general has no exact solution in closed form. That is, there is no way to start from the differential equations implied by Newton's laws and, after a finite sequence of standard mathematical operations, obtain equations that express the three bodies' motions over time. Numerical methods can be applied to obtain useful, albeit approximate, results for the three-body problem. The positions and velocities of the bodies can be stored in variables within a computer's memory; Newton's laws are used to calculate how the velocities will change over a short interval of time, and knowing the velocities, the changes of position over that time interval can be computed. This process is looped to calculate, approximately, the bodies' trajectories. Generally speaking, the shorter the time interval, the more accurate the approximation. == Chaos and unpredictability == === Nonlinear dynamics === Newton's laws of motion allow the possibility of chaos. 
That is, qualitatively speaking, physical systems obeying Newton's laws can exhibit sensitive dependence upon their initial conditions: a slight change of the position or velocity of one part of a system can lead to the whole system behaving in a radically different way within a short time. Noteworthy examples include the three-body problem, the double pendulum, dynamical billiards, and the Fermi–Pasta–Ulam–Tsingou problem. Newton's laws can be applied to fluids by considering a fluid as composed of infinitesimal pieces, each exerting forces upon neighboring pieces. The Euler momentum equation is an expression of Newton's second law adapted to fluid dynamics. A fluid is described by a velocity field, i.e., a function v ( x , t ) {\displaystyle \mathbf {v} (\mathbf {x} ,t)} that assigns a velocity vector to each point in space and time. A small object being carried along by the fluid flow can change velocity for two reasons: first, because the velocity field at its position is changing over time, and second, because it moves to a new location where the velocity field has a different value. Consequently, when Newton's second law is applied to an infinitesimal portion of fluid, the acceleration a {\displaystyle \mathbf {a} } has two terms, a combination known as a total or material derivative. The mass of an infinitesimal portion depends upon the fluid density, and there is a net force upon it if the fluid pressure varies from one side of it to another. Accordingly, a = F / m {\displaystyle \mathbf {a} =\mathbf {F} /m} becomes ∂ v ∂ t + ( v ⋅ ∇ ) v = − 1 ρ ∇ P + f , {\displaystyle {\frac {\partial \mathbf {v} }{\partial t}}+(\mathbf {v} \cdot \mathbf {\nabla } )\mathbf {v} =-{\frac {1}{\rho }}\mathbf {\nabla } P+\mathbf {f} ,} where ρ {\displaystyle \rho } is the density, P {\displaystyle P} is the pressure, and f {\displaystyle \mathbf {f} } stands for an external influence like a gravitational pull.
Incorporating the effect of viscosity turns the Euler equation into a Navier–Stokes equation: ∂ v ∂ t + ( v ⋅ ∇ ) v = − 1 ρ ∇ P + ν ∇ 2 v + f , {\displaystyle {\frac {\partial \mathbf {v} }{\partial t}}+(\mathbf {v} \cdot \mathbf {\nabla } )\mathbf {v} =-{\frac {1}{\rho }}\mathbf {\nabla } P+\nu \nabla ^{2}\mathbf {v} +\mathbf {f} ,} where ν {\displaystyle \nu } is the kinematic viscosity. === Singularities === It is mathematically possible for a collection of point masses, moving in accord with Newton's laws, to launch some of themselves away so forcefully that they fly off to infinity in a finite time. This unphysical behavior, known as a "noncollision singularity", depends upon the masses being pointlike and able to approach one another arbitrarily closely, as well as the lack of a relativistic speed limit in Newtonian physics. It is not yet known whether or not the Euler and Navier–Stokes equations exhibit the analogous behavior of initially smooth solutions "blowing up" in finite time. The question of existence and smoothness of Navier–Stokes solutions is one of the Millennium Prize Problems. == Relation to other formulations of classical physics == Classical mechanics can be mathematically formulated in multiple different ways, other than the "Newtonian" description (which itself, of course, incorporates contributions from others both before and after Newton). The physical content of these different formulations is the same as the Newtonian, but they provide different insights and facilitate different types of calculations.
For example, Lagrangian mechanics helps make apparent the connection between symmetries and conservation laws, and it is useful when calculating the motion of constrained bodies, like a mass restricted to move along a curving track or on the surface of a sphere.: 48  Hamiltonian mechanics is convenient for statistical physics,: 57  leads to further insight about symmetry,: 251  and can be developed into sophisticated techniques for perturbation theory.: 284  Due to the breadth of these topics, the discussion here will be confined to concise treatments of how they reformulate Newton's laws of motion. === Lagrangian === Lagrangian mechanics differs from the Newtonian formulation by considering entire trajectories at once rather than predicting a body's motion at a single instant.: 109  It is traditional in Lagrangian mechanics to denote position with q {\displaystyle q} and velocity with q ˙ {\displaystyle {\dot {q}}} . The simplest example is a massive point particle, the Lagrangian for which can be written as the difference between its kinetic and potential energies: L ( q , q ˙ ) = T − V , {\displaystyle L(q,{\dot {q}})=T-V,} where the kinetic energy is T = 1 2 m q ˙ 2 {\displaystyle T={\frac {1}{2}}m{\dot {q}}^{2}} and the potential energy is some function of the position, V ( q ) {\displaystyle V(q)} . The physical path that the particle will take between an initial point q i {\displaystyle q_{i}} and a final point q f {\displaystyle q_{f}} is the path for which the integral of the Lagrangian is "stationary". That is, the physical path has the property that small perturbations of it will, to a first approximation, not change the integral of the Lagrangian. Calculus of variations provides the mathematical tools for finding this path.: 485  Applying the calculus of variations to the task of finding the path yields the Euler–Lagrange equation for the particle, d d t ( ∂ L ∂ q ˙ ) = ∂ L ∂ q . 
{\displaystyle {\frac {d}{dt}}\left({\frac {\partial L}{\partial {\dot {q}}}}\right)={\frac {\partial L}{\partial q}}.} Evaluating the partial derivatives of the Lagrangian gives d d t ( m q ˙ ) = − d V d q , {\displaystyle {\frac {d}{dt}}(m{\dot {q}})=-{\frac {dV}{dq}},} which is a restatement of Newton's second law. The left-hand side is the time derivative of the momentum, and the right-hand side is the force, represented in terms of the potential energy.: 737  Landau and Lifshitz argue that the Lagrangian formulation makes the conceptual content of classical mechanics more clear than starting with Newton's laws. Lagrangian mechanics provides a convenient framework in which to prove Noether's theorem, which relates symmetries and conservation laws. The conservation of momentum can be derived by applying Noether's theorem to a Lagrangian for a multi-particle system, and so, Newton's third law is a theorem rather than an assumption.: 124  === Hamiltonian === In Hamiltonian mechanics, the dynamics of a system are represented by a function called the Hamiltonian, which in many cases of interest is equal to the total energy of the system.: 742  The Hamiltonian is a function of the positions and the momenta of all the bodies making up the system, and it may also depend explicitly upon time. The time derivatives of the position and momentum variables are given by partial derivatives of the Hamiltonian, via Hamilton's equations.: 203  The simplest example is a point mass m {\displaystyle m} constrained to move in a straight line, under the effect of a potential. Writing q {\displaystyle q} for the position coordinate and p {\displaystyle p} for the body's momentum, the Hamiltonian is H ( p , q ) = p 2 2 m + V ( q ) . {\displaystyle {\mathcal {H}}(p,q)={\frac {p^{2}}{2m}}+V(q).} In this example, Hamilton's equations are d q d t = ∂ H ∂ p {\displaystyle {\frac {dq}{dt}}={\frac {\partial {\mathcal {H}}}{\partial p}}} and d p d t = − ∂ H ∂ q . 
{\displaystyle {\frac {dp}{dt}}=-{\frac {\partial {\mathcal {H}}}{\partial q}}.} Evaluating these partial derivatives, the former equation becomes d q d t = p m , {\displaystyle {\frac {dq}{dt}}={\frac {p}{m}},} which reproduces the familiar statement that a body's momentum is the product of its mass and velocity. The time derivative of the momentum is d p d t = − d V d q , {\displaystyle {\frac {dp}{dt}}=-{\frac {dV}{dq}},} which, upon identifying the negative derivative of the potential with the force, is just Newton's second law once again.: 742  As in the Lagrangian formulation, in Hamiltonian mechanics the conservation of momentum can be derived using Noether's theorem, making Newton's third law an idea that is deduced rather than assumed.: 251  Among the proposals to reform the standard introductory-physics curriculum is one that teaches the concept of energy before that of force, essentially "introductory Hamiltonian mechanics". === Hamilton–Jacobi === The Hamilton–Jacobi equation provides yet another formulation of classical mechanics, one which makes it mathematically analogous to wave optics.: 284  This formulation also uses Hamiltonian functions, but in a different way than the formulation described above. The paths taken by bodies or collections of bodies are deduced from a function S ( q 1 , q 2 , … , t ) {\displaystyle S(\mathbf {q} _{1},\mathbf {q} _{2},\ldots ,t)} of positions q i {\displaystyle \mathbf {q} _{i}} and time t {\displaystyle t} . The Hamiltonian is incorporated into the Hamilton–Jacobi equation, a differential equation for S {\displaystyle S} . Bodies move over time in such a way that their trajectories are perpendicular to the surfaces of constant S {\displaystyle S} , analogously to how a light ray propagates in the direction perpendicular to its wavefront. 
This is simplest to express for the case of a single point mass, in which S {\displaystyle S} is a function S ( q , t ) {\displaystyle S(\mathbf {q} ,t)} , and the point mass moves in the direction along which S {\displaystyle S} changes most steeply. In other words, the momentum of the point mass is the gradient of S {\displaystyle S} : v = 1 m ∇ S . {\displaystyle \mathbf {v} ={\frac {1}{m}}\mathbf {\nabla } S.} The Hamilton–Jacobi equation for a point mass is − ∂ S ∂ t = H ( q , ∇ S , t ) . {\displaystyle -{\frac {\partial S}{\partial t}}=H\left(\mathbf {q} ,\mathbf {\nabla } S,t\right).} The relation to Newton's laws can be seen by considering a point mass moving in a time-independent potential V ( q ) {\displaystyle V(\mathbf {q} )} , in which case the Hamilton–Jacobi equation becomes − ∂ S ∂ t = 1 2 m ( ∇ S ) 2 + V ( q ) . {\displaystyle -{\frac {\partial S}{\partial t}}={\frac {1}{2m}}\left(\mathbf {\nabla } S\right)^{2}+V(\mathbf {q} ).} Taking the gradient of both sides, this becomes − ∇ ∂ S ∂ t = 1 2 m ∇ ( ∇ S ) 2 + ∇ V . {\displaystyle -\mathbf {\nabla } {\frac {\partial S}{\partial t}}={\frac {1}{2m}}\mathbf {\nabla } \left(\mathbf {\nabla } S\right)^{2}+\mathbf {\nabla } V.} Interchanging the order of the partial derivatives on the left-hand side, and using the power and chain rules on the first term on the right-hand side, − ∂ ∂ t ∇ S = 1 m ( ∇ S ⋅ ∇ ) ∇ S + ∇ V . {\displaystyle -{\frac {\partial }{\partial t}}\mathbf {\nabla } S={\frac {1}{m}}\left(\mathbf {\nabla } S\cdot \mathbf {\nabla } \right)\mathbf {\nabla } S+\mathbf {\nabla } V.} Gathering together the terms that depend upon the gradient of S {\displaystyle S} , [ ∂ ∂ t + 1 m ( ∇ S ⋅ ∇ ) ] ∇ S = − ∇ V . {\displaystyle \left[{\frac {\partial }{\partial t}}+{\frac {1}{m}}\left(\mathbf {\nabla } S\cdot \mathbf {\nabla } \right)\right]\mathbf {\nabla } S=-\mathbf {\nabla } V.} This is another re-expression of Newton's second law. 
The expression in brackets is a total or material derivative as mentioned above, in which the first term indicates how the function being differentiated changes over time at a fixed location, and the second term captures how a moving particle will see different values of that function as it travels from place to place: [ ∂ ∂ t + 1 m ( ∇ S ⋅ ∇ ) ] = [ ∂ ∂ t + v ⋅ ∇ ] = d d t . {\displaystyle \left[{\frac {\partial }{\partial t}}+{\frac {1}{m}}\left(\mathbf {\nabla } S\cdot \mathbf {\nabla } \right)\right]=\left[{\frac {\partial }{\partial t}}+\mathbf {v} \cdot \mathbf {\nabla } \right]={\frac {d}{dt}}.} == Relation to other physical theories == === Thermodynamics and statistical physics === In statistical physics, the kinetic theory of gases applies Newton's laws of motion to large numbers (typically on the order of the Avogadro number) of particles. Kinetic theory can explain, for example, the pressure that a gas exerts upon the container holding it as the aggregate of many impacts of atoms, each imparting a tiny amount of momentum.: 62  The Langevin equation is a special case of Newton's second law, adapted for the case of describing a small object bombarded stochastically by even smaller ones.: 235  It can be written m a = − γ v + ξ {\displaystyle m\mathbf {a} =-\gamma \mathbf {v} +\mathbf {\xi } \,} where γ {\displaystyle \gamma } is a drag coefficient and ξ {\displaystyle \mathbf {\xi } } is a force that varies randomly from instant to instant, representing the net effect of collisions with the surrounding particles. This is used to model Brownian motion. === Electromagnetism === Newton's three laws can be applied to phenomena involving electricity and magnetism, though subtleties and caveats exist. 
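The Langevin equation described above can be simulated directly: a drag term and a randomly varying force are applied at each time step. The following is a minimal sketch with a seeded random force; all parameter values are assumptions chosen for illustration:

```python
import random

random.seed(1)

# Illustrative parameters (assumptions): m = 1, drag coefficient gamma = 0.5,
# random-force amplitude 0.2, in arbitrary units.
m, gamma, amp = 1.0, 0.5, 0.2
dt, steps = 1e-2, 10000

x, v = 0.0, 0.0
for _ in range(steps):
    xi = amp * random.gauss(0.0, 1.0)      # random force, drawn anew at each instant
    a = (-gamma * v + xi) / m              # m a = -gamma v + xi
    v += a * dt
    x += v * dt

# The drag keeps the velocity bounded while the position wanders irregularly,
# as in Brownian motion.
assert abs(v) < 1.0
```

A faithful physical simulation would scale the random kick with the square root of the time step; this sketch only illustrates the structure of the equation.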
Coulomb's law for the electric force between two stationary, electrically charged bodies has much the same mathematical form as Newton's law of universal gravitation: the force is proportional to the product of the charges, inversely proportional to the square of the distance between them, and directed along the straight line between them. The Coulomb force that a charge q 1 {\displaystyle q_{1}} exerts upon a charge q 2 {\displaystyle q_{2}} is equal in magnitude to the force that q 2 {\displaystyle q_{2}} exerts upon q 1 {\displaystyle q_{1}} , and it points in the exact opposite direction. Coulomb's law is thus consistent with Newton's third law. Electromagnetism treats forces as produced by fields acting upon charges. The Lorentz force law provides an expression for the force upon a charged body that can be plugged into Newton's second law in order to calculate its acceleration.: 85  According to the Lorentz force law, a charged body in an electric field experiences a force in the direction of that field, a force proportional to its charge q {\displaystyle q} and to the strength of the electric field. In addition, a moving charged body in a magnetic field experiences a force that is also proportional to its charge, in a direction perpendicular to both the field and the body's direction of motion. Using the vector cross product, F = q E + q v × B . 
{\displaystyle \mathbf {F} =q\mathbf {E} +q\mathbf {v} \times \mathbf {B} .} If the electric field vanishes ( E = 0 {\displaystyle \mathbf {E} =0} ), then the force will be perpendicular to the charge's motion, just as in the case of uniform circular motion studied above, and the charge will circle (or more generally move in a helix) around the magnetic field lines at the cyclotron frequency ω = q B / m {\displaystyle \omega =qB/m} .: 222  Mass spectrometry works by applying electric and/or magnetic fields to moving charges and measuring the resulting acceleration, which by the Lorentz force law yields the mass-to-charge ratio. Collections of charged bodies do not always obey Newton's third law: there can be a change of one body's momentum without a compensatory change in the momentum of another. The discrepancy is accounted for by momentum carried by the electromagnetic field itself. The momentum per unit volume of the electromagnetic field is proportional to the Poynting vector.: 184  There is a subtle conceptual conflict between electromagnetism and Newton's first law: Maxwell's theory of electromagnetism predicts that electromagnetic waves will travel through empty space at a constant, definite speed. Thus, some inertial observers seemingly have a privileged status over the others, namely those who measure the speed of light and find it to be the value predicted by the Maxwell equations. In other words, light provides an absolute standard for speed, yet the principle of inertia holds that there should be no such standard. This tension is resolved in the theory of special relativity, which revises the notions of space and time in such a way that all inertial observers will agree upon the speed of light in vacuum.
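The circular motion at the cyclotron frequency can be checked numerically by integrating F = qv × B for a charge in a uniform magnetic field. The sketch below uses illustrative values of q, m, and B (assumptions, not from the text): after one period 2πm/(qB) the particle should return to its starting point with its speed unchanged.

```python
import math

# Illustrative values (assumptions): q = 1 C, m = 1 kg, B = 2 T along the z-axis.
q, m, B = 1.0, 1.0, 2.0
omega = q * B / m                 # cyclotron frequency, here 2 rad/s
period = 2 * math.pi / omega

# Start moving along +x; with B along +z, the charge circles in the xy-plane.
vx, vy = 1.0, 0.0
x, y = 0.0, 0.0

dt = 1e-5
for _ in range(int(period / dt)):
    # Lorentz force with E = 0: for v = (vx, vy, 0) and B = (0, 0, B),
    # q v x B = q (vy*B, -vx*B, 0).
    ax, ay = q * vy * B / m, -q * vx * B / m
    vx += ax * dt; vy += ay * dt
    x += vx * dt;  y += vy * dt

speed = math.hypot(vx, vy)
assert abs(speed - 1.0) < 1e-3    # magnetic force does no work: speed preserved
assert math.hypot(x, y) < 0.01    # one full circle brings the charge back
```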
=== Special relativity === In special relativity, the rule that Wilczek called "Newton's Zeroth Law" breaks down: the mass of a composite object is not merely the sum of the masses of the individual pieces.: 33  Newton's first law, inertial motion, remains true. A form of Newton's second law, that force is the rate of change of momentum, also holds, as does the conservation of momentum. However, the definition of momentum is modified. Among the consequences of this is the fact that the more quickly a body moves, the harder it is to accelerate, and so, no matter how much force is applied, a body cannot be accelerated to the speed of light. Depending on the problem at hand, momentum in special relativity can be represented as a three-dimensional vector, p = m γ v {\displaystyle \mathbf {p} =m\gamma \mathbf {v} } , where m {\displaystyle m} is the body's rest mass and γ {\displaystyle \gamma } is the Lorentz factor, which depends upon the body's speed. Alternatively, momentum and force can be represented as four-vectors.: 107  Newton's third law must be modified in special relativity. The third law refers to the forces between two bodies at the same moment in time, and a key feature of special relativity is that simultaneity is relative. Events that happen at the same time relative to one observer can happen at different times relative to another. So, in a given observer's frame of reference, action and reaction may not be exactly opposite, and the total momentum of interacting bodies may not be conserved. The conservation of momentum is restored by including the momentum stored in the field that describes the bodies' interaction. Newtonian mechanics is a good approximation to special relativity when the speeds involved are small compared to that of light.: 131  === General relativity === General relativity is a theory of gravity that advances beyond that of Newton. 
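The modified momentum of special relativity, p = mγv, can be compared with the Newtonian expression numerically; the check below uses the exact value γ = 1.25 at v = 0.6c:

```python
import math

c = 299792458.0                    # speed of light in vacuum, m/s

def lorentz_factor(v):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def p_relativistic(m, v):
    return m * lorentz_factor(v) * v     # p = m * gamma * v

m = 1.0
# At everyday speeds, relativistic momentum is indistinguishable from m*v...
assert abs(p_relativistic(m, 30.0) - m * 30.0) / (m * 30.0) < 1e-12
# ...but at 0.6c, gamma = 1/sqrt(1 - 0.36) = 1.25, so p is 25% larger than m*v.
assert abs(lorentz_factor(0.6 * c) - 1.25) < 1e-12
```

As v approaches c the factor γ diverges, which is the quantitative form of the statement that no finite force can accelerate a body to the speed of light.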
In general relativity, the gravitational force of Newtonian mechanics is reimagined as curvature of spacetime. A curved path like an orbit, attributed to a gravitational force in Newtonian mechanics, is not the result of a force deflecting a body from an ideal straight-line path, but rather the body's attempt to fall freely through a background that is itself curved by the presence of other masses. A remark by John Archibald Wheeler that has become proverbial among physicists summarizes the theory: "Spacetime tells matter how to move; matter tells spacetime how to curve." Wheeler himself thought of this reciprocal relationship as a modern, generalized form of Newton's third law. The relation between matter distribution and spacetime curvature is given by the Einstein field equations, which require tensor calculus to express.: 43  The Newtonian theory of gravity is a good approximation to the predictions of general relativity when gravitational effects are weak and objects are moving slowly compared to the speed of light.: 327  === Quantum mechanics === Quantum mechanics is a theory of physics originally developed in order to understand microscopic phenomena: behavior at the scale of molecules, atoms or subatomic particles. Generally and loosely speaking, the smaller a system is, the more an adequate mathematical model will require understanding quantum effects. The conceptual underpinning of quantum physics is very different from that of classical physics. Instead of thinking about quantities like position, momentum, and energy as properties that an object has, one considers what result might appear when a measurement of a chosen type is performed. Quantum mechanics allows the physicist to calculate the probability that a chosen measurement will elicit a particular result. The expectation value for a measurement is the average of the possible results it might yield, weighted by their probabilities of occurrence. 
The Ehrenfest theorem provides a connection between quantum expectation values and Newton's second law, a connection that is necessarily inexact, as quantum physics is fundamentally different from classical. In quantum physics, position and momentum are represented by mathematical entities known as Hermitian operators, and the Born rule is used to calculate the expectation values of a position measurement or a momentum measurement. These expectation values will generally change over time; that is, depending on the time at which (for example) a position measurement is performed, the probabilities for its different possible outcomes will vary. The Ehrenfest theorem says, roughly speaking, that the equations describing how these expectation values change over time have a form reminiscent of Newton's second law. However, the more pronounced quantum effects are in a given situation, the more difficult it is to derive meaningful conclusions from this resemblance. == History == The concepts invoked in Newton's laws of motion — mass, velocity, momentum, force — have predecessors in earlier work, and the content of Newtonian physics was further developed after Newton's time. Newton combined knowledge of celestial motions with the study of events on Earth and showed that one theory of mechanics could encompass both. As noted by scholar I. Bernard Cohen, Newton's work was more than a mere synthesis of previous results, as he selected certain ideas and further transformed them, casting each in a new form that was useful to him, while at the same time disproving certain basic or fundamental principles of scientists such as Galileo Galilei, Johannes Kepler, René Descartes, and Nicolaus Copernicus.
He approached natural philosophy with mathematics in a completely novel way, in that instead of a preconceived natural philosophy, his style was to begin with a mathematical construct, and build on from there, comparing it to the real world to show that his system accurately accounted for it. === Antiquity and medieval background === ==== Aristotle and "violent" motion ==== The subject of physics is often traced back to Aristotle, but the history of the concepts involved is obscured by multiple factors. An exact correspondence between Aristotelian and modern concepts is not simple to establish: Aristotle did not clearly distinguish what we would call speed and force, used the same term for density and viscosity, and conceived of motion as always through a medium, rather than through space. In addition, some concepts often termed "Aristotelian" might better be attributed to his followers and commentators upon him. These commentators found that Aristotelian physics had difficulty explaining projectile motion. Aristotle divided motion into two types: "natural" and "violent". The "natural" motion of terrestrial solid matter was to fall downwards, whereas a "violent" motion could push a body sideways. Moreover, in Aristotelian physics, a "violent" motion requires an immediate cause; separated from the cause of its "violent" motion, a body would revert to its "natural" behavior. Yet, a javelin continues moving after it leaves the thrower's hand. Aristotle concluded that the air around the javelin must be imparted with the ability to move the javelin forward. ==== Philoponus and impetus ==== John Philoponus, a Byzantine Greek thinker active during the sixth century, found this absurd: the same medium, air, was somehow responsible both for sustaining motion and for impeding it. If Aristotle's idea were true, Philoponus said, armies would launch weapons by blowing upon them with bellows. 
Philoponus argued that setting a body into motion imparted a quality, impetus, that would be contained within the body itself. As long as its impetus was sustained, the body would continue to move.: 47  In the following centuries, versions of impetus theory were advanced by individuals including Nur ad-Din al-Bitruji, Avicenna, Abu'l-Barakāt al-Baghdādī, John Buridan, and Albert of Saxony. In retrospect, the idea of impetus can be seen as a forerunner of the modern concept of momentum. The intuition that objects move according to some kind of impetus persists in many students of introductory physics. === Inertia and the first law === The French philosopher René Descartes introduced the concept of inertia by way of his "laws of nature" in The World (Traité du monde et de la lumière), written 1629–33. However, The World presented a heliocentric worldview, and in 1633 this view had given rise to a great conflict between Galileo Galilei and the Roman Catholic Inquisition. Descartes knew about this controversy and did not wish to get involved. The World was not published until 1664, fourteen years after his death. The modern concept of inertia is credited to Galileo. Based on his experiments, Galileo concluded that the "natural" behavior of a moving body was to keep moving, until something else interfered with it. In Two New Sciences (1638) Galileo wrote: Imagine any particle projected along a horizontal plane without friction; then we know, from what has been more fully explained in the preceding pages, that this particle will move along this same plane with a motion which is uniform and perpetual, provided the plane has no limits. Galileo recognized that in projectile motion, the Earth's gravity affects vertical but not horizontal motion. However, Galileo's idea of inertia was not exactly the one that would be codified into Newton's first law. Galileo thought that a body moving a long distance inertially would follow the curve of the Earth.
This idea was corrected by Isaac Beeckman, Descartes, and Pierre Gassendi, who recognized that inertial motion should be motion in a straight line. Descartes published his laws of nature (laws of motion) with this correction in Principles of Philosophy (Principia Philosophiae) in 1644, with the heliocentric part toned down. First Law of Nature: Each thing when left to itself continues in the same state; so any moving body goes on moving until something stops it.Second Law of Nature: Each moving thing if left to itself moves in a straight line; so any body moving in a circle always tends to move away from the centre of the circle. According to American philosopher Richard J. Blackwell, Dutch scientist Christiaan Huygens had worked out his own, concise version of the law in 1656. It was not published until 1703, eight years after his death, in the opening paragraph of De Motu Corporum ex Percussione. Hypothesis I: Any body already in motion will continue to move perpetually with the same speed and in a straight line unless it is impeded. According to Huygens, this law was already known by Galileo and Descartes among others. === Force and the second law === Christiaan Huygens, in his Horologium Oscillatorium (1673), put forth the hypothesis that "By the action of gravity, whatever its sources, it happens that bodies are moved by a motion composed both of a uniform motion in one direction or another and of a motion downward due to gravity." Newton's second law generalized this hypothesis from gravity to all forces. One important characteristic of Newtonian physics is that forces can act at a distance without requiring physical contact. For example, the Sun and the Earth pull on each other gravitationally, despite being separated by millions of kilometres. This contrasts with the idea, championed by Descartes among others, that the Sun's gravity held planets in orbit by swirling them in a vortex of transparent matter, aether. 
Newton considered aetherial explanations of force but ultimately rejected them. The study of magnetism by William Gilbert and others created a precedent for thinking of immaterial forces, and unable to find a quantitatively satisfactory explanation of his law of gravity in terms of an aetherial model, Newton eventually declared, "I feign no hypotheses": whether or not a model like Descartes's vortices could be found to underlie the Principia's theories of motion and gravity, the first grounds for judging them must be the successful predictions they made. And indeed, since Newton's time every attempt at such a model has failed. === Momentum conservation and the third law === Johannes Kepler suggested that gravitational attractions were reciprocal — that, for example, the Moon pulls on the Earth while the Earth pulls on the Moon — but he did not argue that such pairs are equal and opposite. In his Principles of Philosophy (1644), Descartes introduced the idea that during a collision between bodies, a "quantity of motion" remains unchanged. Descartes defined this quantity somewhat imprecisely by adding up the products of the speed and "size" of each body, where "size" for him incorporated both volume and surface area. Moreover, Descartes thought of the universe as a plenum, that is, filled with matter, so all motion required a body to displace a medium as it moved. During the 1650s, Huygens studied collisions between hard spheres and deduced a principle that is now identified as the conservation of momentum. Christopher Wren would later deduce the same rules for elastic collisions that Huygens had, and John Wallis would apply momentum conservation to study inelastic collisions. Newton cited the work of Huygens, Wren, and Wallis to support the validity of his third law. Newton arrived at his set of three laws incrementally. 
In a 1684 manuscript written to Huygens, he listed four laws: the principle of inertia, the change of motion by force, a statement about relative motion that would today be called Galilean invariance, and the rule that interactions between bodies do not change the motion of their center of mass. In a later manuscript, Newton added a law of action and reaction, while saying that this law and the law regarding the center of mass implied one another. Newton probably settled on the presentation in the Principia, with three primary laws and then other statements reduced to corollaries, during 1685. === After the Principia === Newton expressed his second law by saying that the force on a body is proportional to its change of motion, or momentum. By the time he wrote the Principia, he had already developed calculus (which he called "the science of fluxions"), but in the Principia he made no explicit use of it, perhaps because he believed geometrical arguments in the tradition of Euclid to be more rigorous.: 15  Consequently, the Principia does not express acceleration as the second derivative of position, and so it does not give the second law as F = m a {\displaystyle F=ma} . This form of the second law was written (for the special case of constant force) at least as early as 1716, by Jakob Hermann; Leonhard Euler would employ it as a basic premise in the 1740s. Euler pioneered the study of rigid bodies and established the basic theory of fluid dynamics. Pierre-Simon Laplace's five-volume Traité de mécanique céleste (1798–1825) forsook geometry and developed mechanics purely through algebraic expressions, while resolving questions that the Principia had left open, like a full theory of the tides. The concept of energy became a key part of Newtonian mechanics in the post-Newton period. 
Huygens' solution of the collision of hard spheres showed that in that case, not only is momentum conserved, but kinetic energy is as well (or, rather, a quantity that in retrospect we can identify as one-half the total kinetic energy). The question of what is conserved during all other processes, like inelastic collisions and motion slowed by friction, was not resolved until the 19th century. Debates on this topic overlapped with philosophical disputes between the metaphysical views of Newton and Leibniz, and variants of the term "force" were sometimes used to denote what we would call types of energy. For example, in 1742, Émilie du Châtelet wrote, "Dead force consists of a simple tendency to motion: such is that of a spring ready to relax; living force is that which a body has when it is in actual motion." In modern terminology, "dead force" and "living force" correspond to potential energy and kinetic energy respectively. Conservation of energy was not established as a universal principle until it was understood that the energy of mechanical work can be dissipated into heat. With the concept of energy given a solid grounding, Newton's laws could then be derived within formulations of classical mechanics that put energy first, as in the Lagrangian and Hamiltonian formulations described above. Modern presentations of Newton's laws use the mathematics of vectors, a topic that was not developed until the late 19th and early 20th centuries. Vector algebra, pioneered by Josiah Willard Gibbs and Oliver Heaviside, stemmed from and largely supplanted the earlier system of quaternions invented by William Rowan Hamilton. 
== See also == Euler's laws of motion History of classical mechanics List of eponymous laws List of equations in classical mechanics List of scientific laws named after people List of textbooks on classical mechanics and quantum mechanics Norton's dome == Notes == == References == == Further reading == Newton’s Laws of Dynamics - The Feynman Lectures on Physics Chakrabarty, Deepto; Dourmashkin, Peter; Tomasik, Michelle; Frebel, Anna; Vuletic, Vladan (2016). "Classical Mechanics". MIT OpenCourseWare. Retrieved 17 January 2022.
Wikipedia/Newtonian_mechanics
In mathematics, a group action of a group G {\displaystyle G} on a set S {\displaystyle S} is a group homomorphism from G {\displaystyle G} to some group (under function composition) of functions from S {\displaystyle S} to itself. It is said that G {\displaystyle G} acts on S {\displaystyle S} . Many sets of transformations form a group under function composition; for example, the rotations around a point in the plane. It is often useful to consider the group as an abstract group, and to say that one has a group action of the abstract group that consists of performing the transformations of the group of transformations. The reason for distinguishing the group from the transformations is that, generally, a group of transformations of a structure acts also on various related structures; for example, the above rotation group also acts on triangles by transforming triangles into triangles. If a group acts on a structure, it will usually also act on objects built from that structure. For example, the group of Euclidean isometries acts on Euclidean space and also on the figures drawn in it; in particular, it acts on the set of all triangles. Similarly, the group of symmetries of a polyhedron acts on the vertices, the edges, and the faces of the polyhedron. A group action on a vector space is called a representation of the group. In the case of a finite-dimensional vector space, it allows one to identify many groups with subgroups of the general linear group GL ⁡ ( n , K ) {\displaystyle \operatorname {GL} (n,K)} , the group of the invertible matrices of dimension n {\displaystyle n} over a field K {\displaystyle K} . The symmetric group S n {\displaystyle S_{n}} acts on any set with n {\displaystyle n} elements by permuting the elements of the set. Although the group of all permutations of a set depends formally on the set, the concept of group action allows one to consider a single group for studying the permutations of all sets with the same cardinality. 
== Definition == === Left group action === If G {\displaystyle G} is a group with identity element e {\displaystyle e} , and X {\displaystyle X} is a set, then a (left) group action α {\displaystyle \alpha } of G {\displaystyle G} on X is a function α : G × X → X {\displaystyle \alpha :G\times X\to X} that satisfies the following two axioms: identity, α(e, x) = x, and compatibility, α(g, α(h, x)) = α(gh, x), for all g and h in G and all x in X {\displaystyle X} . The group G {\displaystyle G} is then said to act on X {\displaystyle X} (from the left). A set X {\displaystyle X} together with an action of G {\displaystyle G} is called a (left) G {\displaystyle G} -set. It can be notationally convenient to curry the action α {\displaystyle \alpha } , so that, instead, one has a collection of transformations αg : X → X, with one transformation αg for each group element g ∈ G. The identity and compatibility relations then read α e ( x ) = x {\displaystyle \alpha _{e}(x)=x} and α g ( α h ( x ) ) = ( α g ∘ α h ) ( x ) = α g h ( x ) {\displaystyle \alpha _{g}(\alpha _{h}(x))=(\alpha _{g}\circ \alpha _{h})(x)=\alpha _{gh}(x)} The second axiom states that the function composition is compatible with the group multiplication; they form a commutative diagram. This axiom can be shortened even further, and written as α g ∘ α h = α g h {\displaystyle \alpha _{g}\circ \alpha _{h}=\alpha _{gh}} . With the above understanding, it is very common to avoid writing α {\displaystyle \alpha } entirely, and to replace it with either a dot, or with nothing at all. Thus, α(g, x) can be shortened to g⋅x or gx, especially when the action is clear from context. The axioms are then e ⋅ x = x {\displaystyle e{\cdot }x=x} g ⋅ ( h ⋅ x ) = ( g h ) ⋅ x {\displaystyle g{\cdot }(h{\cdot }x)=(gh){\cdot }x} From these two axioms, it follows that for any fixed g in G {\displaystyle G} , the function from X to itself which maps x to g⋅x is a bijection, with inverse bijection the corresponding map for g−1.
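The two axioms can be verified exhaustively for a small finite example. The sketch below (an illustration not taken from the text, using the cyclic group Z/5Z acting on {0, ..., 4} by rotation) checks both:

```python
# A left action of the cyclic group Z/5Z on X = {0,...,4} by rotation:
# alpha(g, x) = (g + x) mod 5, with the group operation being addition mod 5.
n = 5
G = range(n)          # group elements; identity is 0
X = range(n)

def alpha(g, x):
    return (g + x) % n

# Identity axiom: alpha(e, x) = x, with e = 0.
assert all(alpha(0, x) == x for x in X)
# Compatibility axiom: alpha(g, alpha(h, x)) = alpha(gh, x), where gh = (g+h) mod n.
assert all(alpha(g, alpha(h, x)) == alpha((g + h) % n, x)
           for g in G for h in G for x in X)
print("both action axioms hold")
```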
Therefore, one may equivalently define a group action of G on X as a group homomorphism from G into the symmetric group Sym(X) of all bijections from X to itself. === Right group action === Likewise, a right group action of G {\displaystyle G} on X {\displaystyle X} is a function α : X × G → X , {\displaystyle \alpha :X\times G\to X,} that satisfies the analogous axioms: identity, α(x, e) = x, and compatibility, α(α(x, g), h) = α(x, gh), for all g and h in G and all x in X (with α(x, g) often shortened to xg or x⋅g when the action being considered is clear from context). The difference between left and right actions is in the order in which a product gh acts on x. For a left action, h acts first, followed by g second. For a right action, g acts first, followed by h second. Because of the formula (gh)−1 = h−1g−1, a left action can be constructed from a right action by composing with the inverse operation of the group. Also, a right action of a group G on X can be considered as a left action of its opposite group Gop on X. Thus, for establishing general properties of group actions, it suffices to consider only left actions. However, there are cases where this is not possible. For example, the multiplication of a group induces both a left action and a right action on the group itself—multiplication on the left and on the right, respectively. == Notable properties of actions == Let G be a group acting on a set X. The action is called faithful or effective if g⋅x = x for all x ∈ X implies that g = eG. Equivalently, the homomorphism from G to the group of bijections of X corresponding to the action is injective. The action is called free (or semiregular or fixed-point free) if the statement that g⋅x = x for some x ∈ X already implies that g = eG. In other words, no non-trivial element of G fixes a point of X. This is a much stronger property than faithfulness. For example, the action of any group on itself by left multiplication is free.
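The distinction between faithful and free can be made concrete with a small check. The sketch below (an illustrative example, not from the text) shows that the natural action of S3 on {0, 1, 2} is faithful but not free, since a transposition fixes one point:

```python
from itertools import permutations

# S3 acting naturally on X = {0,1,2}; a permutation p is stored as a tuple
# with p[x] the image of x, so the action is act(p, x) = p[x].
X = range(3)
G = list(permutations(X))
identity = (0, 1, 2)

def act(p, x):
    return p[x]

# Faithful: only the identity fixes every point of X.
faithful = all(p == identity or any(act(p, x) != x for x in X) for p in G)
# Free would require that no non-identity element fixes ANY point.
free = all(p == identity or all(act(p, x) != x for x in X) for p in G)
print(faithful, free)  # True False  (the transposition (1,0,2) fixes the point 2)
```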
This observation implies Cayley's theorem that any group can be embedded in a symmetric group (which is infinite when the group is). A finite group may act faithfully on a set of size much smaller than its cardinality (however such an action cannot be free). For instance the abelian 2-group (Z / 2Z)^n (of cardinality 2^n) acts faithfully on a set of size 2n. This is not always the case, for example the cyclic group Z / 2^nZ cannot act faithfully on a set of size less than 2^n. In general the smallest set on which a faithful action can be defined can vary greatly for groups of the same size. For example, three groups of size 120 are the symmetric group S5, the icosahedral group A5 × Z / 2Z and the cyclic group Z / 120Z. The smallest sets on which faithful actions can be defined for these groups are of size 5, 7, and 16 respectively. === Transitivity properties === The action of G on X is called transitive if for any two points x, y ∈ X there exists a g ∈ G so that g ⋅ x = y. The action is simply transitive (or sharply transitive, or regular) if it is both transitive and free. This means that given x, y ∈ X there is exactly one g ∈ G such that g ⋅ x = y. If X is acted upon simply transitively by a group G then it is called a principal homogeneous space for G or a G-torsor. For an integer n ≥ 1, the action is n-transitive if X has at least n elements, and for any pair of n-tuples (x1, ..., xn), (y1, ..., yn) ∈ Xn with pairwise distinct entries (that is xi ≠ xj, yi ≠ yj when i ≠ j) there exists a g ∈ G such that g⋅xi = yi for i = 1, ..., n. In other words, the action on the subset of Xn of tuples without repeated entries is transitive. For n = 2, 3 this is often called double, respectively triple, transitivity. The class of 2-transitive groups (that is, subgroups of a finite symmetric group whose action is 2-transitive) and more generally multiply transitive groups is well-studied in finite group theory.
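Simple transitivity can be confirmed directly for a small group acting on itself. The following sketch (an illustrative choice, using Z/6Z acting on itself by addition) checks that every pair of points is connected by exactly one group element:

```python
# Z/6Z acting on itself by addition is simply transitive (regular):
# for every pair (x, y) there is exactly one g with g·x = y, namely g = y - x.
n = 6
G = range(n)
X = range(n)

for x in X:
    for y in X:
        movers = [g for g in G if (g + x) % n == y]
        assert len(movers) == 1   # at least 1: transitive; at most 1: free
print("the action is simply transitive")
```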
An action is sharply n-transitive when the action on tuples without repeated entries in Xn is sharply transitive. ==== Examples ==== The action of the symmetric group of X is transitive, in fact n-transitive for any n up to the cardinality of X. If X has cardinality n, the action of the alternating group is (n − 2)-transitive but not (n − 1)-transitive. The action of the general linear group of a vector space V on the set V ∖ {0} of non-zero vectors is transitive, but not 2-transitive (similarly for the action of the special linear group if the dimension of V is at least 2). The action of the orthogonal group of a Euclidean space is not transitive on nonzero vectors but it is on the unit sphere. === Primitive actions === The action of G on X is called primitive if there is no partition of X preserved by all elements of G apart from the trivial partitions (the partition in a single piece and its dual, the partition into singletons). === Topological properties === Assume that X is a topological space and the action of G is by homeomorphisms. The action is wandering if every x ∈ X has a neighbourhood U such that there are only finitely many g ∈ G with g⋅U ∩ U ≠ ∅. More generally, a point x ∈ X is called a point of discontinuity for the action of G if there is an open subset U ∋ x such that there are only finitely many g ∈ G with g⋅U ∩ U ≠ ∅. The domain of discontinuity of the action is the set of all points of discontinuity. Equivalently it is the largest G-stable open subset Ω ⊂ X such that the action of G on Ω is wandering. In a dynamical context this is also called a wandering set. The action is properly discontinuous if for every compact subset K ⊂ X there are only finitely many g ∈ G such that g⋅K ∩ K ≠ ∅. This is strictly stronger than wandering; for instance the action of Z on R2 ∖ {(0, 0)} given by n⋅(x, y) = (2^n x, 2^−n y) is wandering and free but not properly discontinuous.
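The failure of proper discontinuity in the example just given can be checked numerically. In the sketch below (the choice of compact set K is ours, for illustration), K is the segment {(1 − t, t) : 0 ≤ t ≤ 1}, a compact subset of the punctured plane; for every n ≥ 1 one can solve for a point of K whose image under n lands back on K, so infinitely many group elements satisfy g⋅K ∩ K ≠ ∅:

```python
# The action n·(x, y) = (2**n * x, 2**-n * y) of Z on R^2 minus the origin is
# wandering and free, but not properly discontinuous: for the compact segment
# K = {(1-t, t) : 0 <= t <= 1}, every n >= 1 maps some point of K back into K.
def on_segment(p, tol=1e-6):
    x, y = p
    return -tol <= x <= 1 + tol and -tol <= y <= 1 + tol and abs(x + y - 1) < tol

for n in range(1, 21):
    t = (2.0**n - 1) / (2.0**n - 2.0**-n)    # chosen so the image lies on K again
    p = (1 - t, t)                            # a point of K
    image = (2.0**n * p[0], 2.0**-n * p[1])   # the element n acting on p
    assert on_segment(p) and on_segment(image)
print("g·K ∩ K ≠ ∅ for infinitely many g: not properly discontinuous")
```

Algebraically, the image has coordinate sum 2^n − t(2^n − 2^−n) = 1, which is why it lies on the segment.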
The action by deck transformations of the fundamental group of a locally simply connected space on a universal cover is wandering and free. Such actions can be characterized by the following property: every x ∈ X has a neighbourhood U such that g⋅U ∩ U = ∅ for every g ∈ G ∖ {eG}. Actions with this property are sometimes called freely discontinuous, and the largest subset on which the action is freely discontinuous is then called the free regular set. An action of a group G on a locally compact space X is called cocompact if there exists a compact subset A ⊂ X such that X = G ⋅ A. For a properly discontinuous action, cocompactness is equivalent to compactness of the quotient space X / G. === Actions of topological groups === Now assume G is a topological group and X a topological space on which it acts by homeomorphisms. The action is said to be continuous if the map G × X → X is continuous for the product topology. The action is said to be proper if the map G × X → X × X defined by (g, x) ↦ (x, g⋅x) is proper. This means that given compact sets K, K′ the set of g ∈ G such that g⋅K ∩ K′ ≠ ∅ is compact. In particular, this is equivalent to proper discontinuity if G is a discrete group. It is said to be locally free if there exists a neighbourhood U of eG such that g⋅x ≠ x for all x ∈ X and g ∈ U ∖ {eG}. The action is said to be strongly continuous if the orbital map g ↦ g⋅x is continuous for every x ∈ X. Contrary to what the name suggests, this is a weaker property than continuity of the action. If G is a Lie group and X a differentiable manifold, then the subspace of smooth points for the action is the set of points x ∈ X such that the map g ↦ g⋅x is smooth. There is a well-developed theory of Lie group actions, i.e. actions which are smooth on the whole space. === Linear actions === If G acts by linear transformations on a module over a commutative ring, the action is said to be irreducible if there are no proper nonzero G-invariant submodules.
It is said to be semisimple if it decomposes as a direct sum of irreducible actions. == Orbits and stabilizers == Consider a group G acting on a set X. The orbit of an element x in X is the set of elements in X to which x can be moved by the elements of G. The orbit of x is denoted by G⋅x: G ⋅ x = { g ⋅ x : g ∈ G } . {\displaystyle G{\cdot }x=\{g{\cdot }x:g\in G\}.} The defining properties of a group guarantee that the set of orbits of (points x in) X under the action of G form a partition of X. The associated equivalence relation is defined by saying x ~ y if and only if there exists a g in G with g⋅x = y. The orbits are then the equivalence classes under this relation; two elements x and y are equivalent if and only if their orbits are the same, that is, G⋅x = G⋅y. The group action is transitive if and only if it has exactly one orbit, that is, if there exists x in X with G⋅x = X. This is the case if and only if G⋅x = X for all x in X (given that X is non-empty). The set of all orbits of X under the action of G is written as X / G (or, less frequently, as G \ X), and is called the quotient of the action. In geometric situations it may be called the orbit space, while in algebraic situations it may be called the space of coinvariants, and written X_G, by contrast with the invariants (fixed points), denoted X^G: the coinvariants are a quotient while the invariants are a subset. The coinvariant terminology and notation are used particularly in group cohomology and group homology, which use the same superscript/subscript convention. === Invariant subsets === If Y is a subset of X, then G⋅Y denotes the set {g⋅y : g ∈ G and y ∈ Y}. The subset Y is said to be invariant under G if G⋅Y = Y (which is equivalent to G⋅Y ⊆ Y). In that case, G also operates on Y by restricting the action to Y. The subset Y is called fixed under G if g⋅y = y for all g in G and all y in Y. Every subset that is fixed under G is also invariant under G, but not conversely.
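Orbits can be computed mechanically by closing a point under generators. The sketch below (an illustrative example of our own, using a single permutation generating a subgroup of S6) computes the orbits and confirms they partition the set:

```python
# Computing orbits of a group action: close each point under the generators.
# For a finite set, repeatedly applying the generators reaches the whole orbit
# (inverses are not needed, since each generator has finite order).
def orbit(x, generators, act):
    seen = {x}
    frontier = [x]
    while frontier:
        y = frontier.pop()
        for g in generators:
            z = act(g, y)
            if z not in seen:
                seen.add(z)
                frontier.append(z)
    return frozenset(seen)

g = (1, 2, 0, 4, 3, 5)            # permutation as a tuple: g[i] is the image of i
act = lambda p, x: p[x]
orbits = {orbit(x, [g], act) for x in range(6)}
print(sorted(sorted(o) for o in orbits))  # [[0, 1, 2], [3, 4], [5]]
```

The three orbits are disjoint and their union is all of {0, ..., 5}, illustrating that orbits form a partition.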
Every orbit is an invariant subset of X on which G acts transitively. Conversely, any invariant subset of X is a union of orbits. The action of G on X is transitive if and only if all elements are equivalent, meaning that there is only one orbit. A G-invariant element of X is x ∈ X such that g⋅x = x for all g ∈ G. The set of all such x is denoted X^G and called the G-invariants of X. When X is a G-module, X^G is the zeroth cohomology group of G with coefficients in X, and the higher cohomology groups are the derived functors of the functor of G-invariants. === Fixed points and stabilizer subgroups === Given g in G and x in X with g⋅x = x, it is said that "x is a fixed point of g" or that "g fixes x". For every x in X, the stabilizer subgroup of G with respect to x (also called the isotropy group or little group) is the set of all elements in G that fix x: G x = { g ∈ G : g ⋅ x = x } . {\displaystyle G_{x}=\{g\in G:g{\cdot }x=x\}.} This is a subgroup of G, though typically not a normal one. The action of G on X is free if and only if all stabilizers are trivial. The kernel N of the homomorphism with the symmetric group, G → Sym(X), is given by the intersection of the stabilizers Gx for all x in X. If N is trivial, the action is said to be faithful (or effective). Let x and y be two elements in X, and let g be a group element such that y = g⋅x. Then the two stabilizer groups Gx and Gy are related by Gy = gGxg−1. Proof: by definition, h ∈ Gy if and only if h⋅(g⋅x) = g⋅x. Applying g−1 to both sides of this equality yields (g−1hg)⋅x = x; that is, g−1hg ∈ Gx. An opposite inclusion follows similarly by taking h ∈ Gx and x = g−1⋅y. The above says that the stabilizers of elements in the same orbit are conjugate to each other. Thus, to each orbit, we can associate a conjugacy class of a subgroup of G (that is, the set of all conjugates of the subgroup). Let (H) denote the conjugacy class of H.
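The conjugacy relation Gy = gGxg−1 can be verified exhaustively on a small group. The sketch below (an illustrative check of our own, using the natural action of S3 on {0, 1, 2}) confirms it for every g and x:

```python
from itertools import permutations

# Stabilizers of points in the same orbit are conjugate: G_{g·x} = g G_x g^{-1}.
# Checked exhaustively for the natural action of S3 on {0, 1, 2}, where a
# permutation p is a tuple with p[i] the image of i.
G = list(permutations(range(3)))

def compose(p, q):                 # (p ∘ q)[i] = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def stabilizer(x):
    return {p for p in G if p[x] == x}

for g in G:
    for x in range(3):
        y = g[x]                   # y = g·x, a point in the same orbit as x
        conjugated = {compose(compose(g, h), inverse(g)) for h in stabilizer(x)}
        assert stabilizer(y) == conjugated
print("G_{g·x} = g G_x g^{-1} holds for all g and x")
```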
Then the orbit O has type (H) if the stabilizer Gx of some/any x in O belongs to (H). A maximal orbit type is often called a principal orbit type. === Orbit-stabilizer theorem === Orbits and stabilizers are closely related. For a fixed x in X, consider the map f : G → X given by g ↦ g⋅x. By definition the image f(G) of this map is the orbit G⋅x. The condition for two elements to have the same image is f ( g ) = f ( h ) ⟺ g ⋅ x = h ⋅ x ⟺ g − 1 h ⋅ x = x ⟺ g − 1 h ∈ G x ⟺ h ∈ g G x . {\displaystyle f(g)=f(h)\iff g{\cdot }x=h{\cdot }x\iff g^{-1}h{\cdot }x=x\iff g^{-1}h\in G_{x}\iff h\in gG_{x}.} In other words, f(g) = f(h) if and only if g and h lie in the same coset for the stabilizer subgroup Gx. Thus, the fiber f−1({y}) of f over any y in G⋅x is contained in such a coset, and every such coset also occurs as a fiber. Therefore f induces a bijection between the set G / Gx of cosets for the stabilizer subgroup and the orbit G⋅x, which sends gGx ↦ g⋅x. This result is known as the orbit-stabilizer theorem. If G is finite then the orbit-stabilizer theorem, together with Lagrange's theorem, gives | G ⋅ x | = [ G : G x ] = | G | / | G x | , {\displaystyle |G\cdot x|=[G\,:\,G_{x}]=|G|/|G_{x}|,} in other words the length of the orbit of x times the order of its stabilizer is the order of the group. In particular that implies that the orbit length is a divisor of the group order. Example: Let G be a group of prime order p acting on a set X with k elements. Since each orbit has either 1 or p elements, there are at least k mod p orbits of length 1 which are G-invariant elements. More specifically, k and the number of G-invariant elements are congruent modulo p. This result is especially useful since it can be employed for counting arguments (typically in situations where X is finite as well). Example: We can use the orbit-stabilizer theorem to count the automorphisms of a graph. Consider the cubical graph as pictured, and let G denote its automorphism group. 
Then G acts on the set of vertices {1, 2, ..., 8}, and this action is transitive as can be seen by composing rotations about the center of the cube. Thus, by the orbit-stabilizer theorem, |G| = |G ⋅ 1| |G1| = 8 |G1|. Applying the theorem now to the stabilizer G1, we can obtain |G1| = |(G1) ⋅ 2| |(G1)2|. Any element of G that fixes 1 must send 2 to either 2, 4, or 5. As an example of such automorphisms consider the rotation around the diagonal axis through 1 and 7 by 2π/3, which permutes 2, 4, 5 and 3, 6, 8, and fixes 1 and 7. Thus, |(G1) ⋅ 2| = 3. Applying the theorem a third time gives |(G1)2| = |((G1)2) ⋅ 3| |((G1)2)3|. Any element of G that fixes 1 and 2 must send 3 to either 3 or 6. Reflecting the cube at the plane through 1, 2, 7 and 8 is such an automorphism sending 3 to 6, thus |((G1)2) ⋅ 3| = 2. One also sees that ((G1)2)3 consists only of the identity automorphism, as any element of G fixing 1, 2 and 3 must also fix all other vertices, since they are determined by their adjacency to 1, 2 and 3. Combining the preceding calculations, we can now obtain |G| = 8 ⋅ 3 ⋅ 2 ⋅ 1 = 48. === Burnside's lemma === A result closely related to the orbit-stabilizer theorem is Burnside's lemma: | X / G | = 1 | G | ∑ g ∈ G | X g | , {\displaystyle |X/G|={\frac {1}{|G|}}\sum _{g\in G}|X^{g}|,} where Xg is the set of points fixed by g. This result is mainly of use when G and X are finite, when it can be interpreted as follows: the number of orbits is equal to the average number of points fixed per group element. Fixing a group G, the set of formal differences of finite G-sets forms a ring called the Burnside ring of G, where addition corresponds to disjoint union, and multiplication to Cartesian product. == Examples == The trivial action of any group G on any set X is defined by g⋅x = x for all g in G and all x in X; that is, every group element induces the identity permutation on X. In every group G, left multiplication is an action of G on G: g⋅x = gx for all g, x in G. 
This action is free and transitive (regular), and forms the basis of a rapid proof of Cayley's theorem – that every group is isomorphic to a subgroup of the symmetric group of permutations of the set G. In every group G with subgroup H, left multiplication is an action of G on the set of cosets G / H: g⋅aH = gaH for all g, a in G. In particular if H contains no nontrivial normal subgroups of G this induces an isomorphism from G to a subgroup of the permutation group of degree [G : H]. In every group G, conjugation is an action of G on G: g⋅x = gxg−1. An exponential notation is commonly used for the right-action variant: xg = g−1xg; it satisfies (xg)h = xgh. In every group G with subgroup H, conjugation is an action of G on conjugates of H: g⋅K = gKg−1 for all g in G and K conjugates of H. An action of Z on a set X uniquely determines and is determined by an automorphism of X, given by the action of 1. Similarly, an action of Z / 2Z on X is equivalent to the data of an involution of X. The symmetric group Sn and its subgroups act on the set {1, ..., n} by permuting its elements. The symmetry group of a polyhedron acts on the set of vertices of that polyhedron. It also acts on the set of faces or the set of edges of the polyhedron. The symmetry group of any geometrical object acts on the set of points of that object. For a coordinate space V over a field F with group of units F*, the mapping F* × V → V given by a × (x1, x2, ..., xn) ↦ (ax1, ax2, ..., axn) is a group action called scalar multiplication. The automorphism group of a vector space (or graph, or group, or ring ...) acts on the vector space (or set of vertices of the graph, or group, or ring ...). The general linear group GL(n, K) and its subgroups, particularly its Lie subgroups (including the special linear group SL(n, K), orthogonal group O(n, K), special orthogonal group SO(n, K), and symplectic group Sp(n, K)) are Lie groups that act on the vector space Kn.
The group operations are given by multiplying the matrices from the groups with the vectors from Kn. The general linear group GL(n, Z) acts on Zn by natural matrix action. The orbits of its action are classified by the greatest common divisor of coordinates of the vector in Zn. The affine group acts transitively on the points of an affine space, and the subgroup V of the affine group (that is, a vector space) has transitive and free (that is, regular) action on these points; indeed this can be used to give a definition of an affine space. The projective linear group PGL(n + 1, K) and its subgroups, particularly its Lie subgroups, are Lie groups that act on the projective space Pn(K). This is a quotient of the action of the general linear group on projective space. Particularly notable is PGL(2, K), the symmetries of the projective line, which is sharply 3-transitive, preserving the cross ratio; the Möbius group PGL(2, C) is of particular interest. The isometries of the plane act on the set of 2D images and patterns, such as wallpaper patterns. The definition can be made more precise by specifying what is meant by image or pattern, for example, a function of position with values in a set of colors. Isometries are in fact one example of an affine group action. The sets acted on by a group G comprise the category of G-sets in which the objects are G-sets and the morphisms are G-set homomorphisms: functions f : X → Y such that g⋅(f(x)) = f(g⋅x) for every g in G. The Galois group of a field extension L / K acts on the field L but has only a trivial action on elements of the subfield K. Subgroups of Gal(L / K) correspond to subfields of L that contain K, that is, intermediate field extensions between L and K.
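The classification of GL(n, Z)-orbits on Zn by the gcd of coordinates, mentioned above, can be checked with a small sketch (the matrix U and the vector v below are arbitrary illustrative choices): an invertible integer matrix preserves the gcd of the coordinates.

```python
from math import gcd

# Sketch: the natural action of GL(2, Z) on Z^2 preserves the gcd of the
# coordinates, so the gcd is an invariant of each orbit.
def apply(mat, vec):
    (a, b), (c, d) = mat
    x, y = vec
    return (a * x + b * y, c * x + d * y)

U = ((2, 1), (1, 1))     # determinant 1, so U lies in GL(2, Z)
v = (4, 6)               # gcd 2
w = apply(U, v)          # (14, 10)
print(gcd(*v), gcd(*w))  # 2 2
```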
The additive group of the real numbers (R, +) acts on the phase space of "well-behaved" systems in classical mechanics (and in more general dynamical systems) by time translation: if t is in R and x is in the phase space, then x describes a state of the system, and t + x is defined to be the state of the system t seconds later if t is positive or −t seconds ago if t is negative. The additive group of the real numbers (R, +) acts on the set of real functions of a real variable in various ways, with (t⋅f)(x) equal to, for example, f(x + t), f(x) + t, f(xet), f(x)et, f(x + t)et, or f(xet) + t, but not f(xet + t). Given a group action of G on X, we can define an induced action of G on the power set of X, by setting g⋅U = {g⋅u : u ∈ U} for every subset U of X and every g in G. This is useful, for instance, in studying the action of the large Mathieu group on a 24-set and in studying symmetry in certain models of finite geometries. The quaternions with norm 1 (the versors), as a multiplicative group, act on R3: for any such quaternion z = cos α/2 + v sin α/2, the mapping f(x) = zxz* is a counterclockwise rotation through an angle α about an axis given by a unit vector v; −z gives the same rotation; see quaternions and spatial rotation. This is not a faithful action because the quaternion −1 leaves all points where they were, as does the quaternion 1. Given left G-sets X, Y, there is a left G-set YX whose elements are G-equivariant maps α : X × G → Y, and with left G-action given by g⋅α = α ∘ (idX × –g) (where "–g" indicates right multiplication by g). This G-set has the property that its fixed points correspond to equivariant maps X → Y; more generally, it is an exponential object in the category of G-sets. == Group actions and groupoids == The notion of group action can be encoded by the action groupoid G′ = G ⋉ X associated to the group action. The stabilizers of the action are the vertex groups of the groupoid and the orbits of the action are its components.
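The versor action on R3 described above can be illustrated with a minimal sketch (not a library implementation): quaternions are plain (w, x, y, z) tuples and qmul is a hypothetical helper implementing the Hamilton product.

```python
import math

# Sketch: unit quaternions acting on R^3 by x -> z x z*,
# with quaternions written as (w, x, y, z) tuples.
def qmul(a, b):
    # Hamilton product of two quaternions.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def conj(a):
    w, x, y, z = a
    return (w, -x, -y, -z)

def rotate(z, v):
    # Embed v in the pure quaternions, then apply z v z*.
    w, x, y, zz = qmul(qmul(z, (0.0, *v)), conj(z))
    return (x, y, zz)

# Rotation by alpha = pi/2 about the z-axis: z = cos(a/2) + k sin(a/2).
a = math.pi / 2
z = (math.cos(a / 2), 0.0, 0.0, math.sin(a / 2))
print(rotate(z, (1.0, 0.0, 0.0)))  # approximately (0.0, 1.0, 0.0)

# The quaternion -1 acts trivially, so the action is not faithful.
print(rotate((-1.0, 0.0, 0.0, 0.0), (1.0, 2.0, 3.0)))
```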
== Morphisms and isomorphisms between G-sets == If X and Y are two G-sets, a morphism from X to Y is a function f : X → Y such that f(g⋅x) = g⋅f(x) for all g in G and all x in X. Morphisms of G-sets are also called equivariant maps or G-maps. The composition of two morphisms is again a morphism. If a morphism f is bijective, then its inverse is also a morphism. In this case f is called an isomorphism, and the two G-sets X and Y are called isomorphic; for all practical purposes, isomorphic G-sets are indistinguishable. Some example isomorphisms: Every regular G action is isomorphic to the action of G on G given by left multiplication. Every free G action is isomorphic to G × S, where S is some set and G acts on G × S by left multiplication on the first coordinate. (S can be taken to be the set of orbits X / G.) Every transitive G action is isomorphic to left multiplication by G on the set of left cosets of some subgroup H of G. (H can be taken to be the stabilizer group of any element of the original G-set.) With this notion of morphism, the collection of all G-sets forms a category; this category is a Grothendieck topos (in fact, assuming a classical metalogic, this topos will even be Boolean). == Variants and generalizations == We can also consider actions of monoids on sets, by using the same two axioms as above. This does not define bijective maps and equivalence relations however. See semigroup action. Instead of actions on sets, we can define actions of groups and monoids on objects of an arbitrary category: start with an object X of some category, and then define an action on X as a monoid homomorphism into the monoid of endomorphisms of X. If X has an underlying set, then all definitions and facts stated above can be carried over. For example, if we take the category of vector spaces, we obtain group representations in this fashion. We can view a group G as a category with a single object in which every morphism is invertible. 
A (left) group action is then nothing but a (covariant) functor from G to the category of sets, and a group representation is a functor from G to the category of vector spaces. A morphism between G-sets is then a natural transformation between the group action functors. In analogy, an action of a groupoid is a functor from the groupoid to the category of sets or to some other category. In addition to continuous actions of topological groups on topological spaces, one also often considers smooth actions of Lie groups on smooth manifolds, regular actions of algebraic groups on algebraic varieties, and actions of group schemes on schemes. All of these are examples of group objects acting on objects of their respective category. == See also == Gain graph Group with operators Measurable group action Monoid action Young–Deruyts development == External links == "Action of a group on a manifold", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Group Action". MathWorld.
In science, work is the energy transferred to or from an object via the application of force along a displacement. In its simplest form, for a constant force aligned with the direction of motion, the work equals the product of the force strength and the distance traveled. A force is said to do positive work if it has a component in the direction of the displacement of the point of application. A force does negative work if it has a component opposite to the direction of the displacement at the point of application of the force. For example, when a ball is held above the ground and then dropped, the work done by the gravitational force on the ball as it falls is positive, and is equal to the weight of the ball (a force) multiplied by the distance to the ground (a displacement). If the ball is thrown upwards, the work done by the gravitational force is negative, and is equal in magnitude to the weight multiplied by the displacement in the upwards direction. Both force and displacement are vectors. The work done is given by the dot product of the two vectors, where the result is a scalar. When the force F is constant and the angle θ between the force and the displacement s is also constant, then the work done is given by: W = F s cos θ {\displaystyle W=Fs\cos {\theta }} If the force is variable, then work is given by the line integral: W = ∫ F ⋅ d s {\displaystyle W=\int \mathbf {F} \cdot d\mathbf {s} } where d s {\displaystyle d\mathbf {s} } is the infinitesimal displacement vector. Work is a scalar quantity, so it has only magnitude and no direction. Work transfers energy from one place to another, or one form to another. The SI unit of work is the joule (J), the same unit as for energy. == History == The ancient Greek understanding of physics was limited to the statics of simple machines (the balance of forces), and did not include dynamics or the concept of work.
During the Renaissance the dynamics of the Mechanical Powers, as the simple machines were called, began to be studied from the standpoint of how far they could lift a load, in addition to the force they could apply, leading eventually to the new concept of mechanical work. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche (On Mechanics), in which he showed the underlying mathematical similarity of the machines as force amplifiers. He was the first to explain that simple machines do not create energy, only transform it. === Early concepts of work === Although the term work was not formally used until 1826, similar concepts existed before then. Early names for the same concept included moment of activity, quantity of action, latent live force, dynamic effect, efficiency, and even force. In 1637, the French philosopher René Descartes wrote: Lifting 100 lb one foot twice over is the same as lifting 200 lb one foot, or 100 lb two feet. In 1686, the German philosopher Gottfried Leibniz wrote: The same force ["work" in modern terms] is necessary to raise body A of 1 pound (libra) to a height of 4 yards (ulnae), as is necessary to raise body B of 4 pounds to a height of 1 yard. In 1759, John Smeaton described a quantity that he called "power" "to signify the exertion of strength, gravitation, impulse, or pressure, as to produce motion." Smeaton continues that this quantity can be calculated if "the weight raised is multiplied by the height to which it can be raised in a given time," making this definition remarkably similar to Coriolis's. === Etymology and modern usage === The term work (or mechanical work), and the use of the work-energy principle in mechanics, was introduced in the late 1820s independently by French mathematician Gaspard-Gustave Coriolis and French Professor of Applied Mechanics Jean-Victor Poncelet.
Both scientists were pursuing a view of mechanics suitable for studying the dynamics and power of machines, for example steam engines lifting buckets of water out of flooded ore mines. According to Rene Dugas, French engineer and historian, it is to Solomon of Caux "that we owe the term work in the sense that it is used in mechanics now". The concept of virtual work, and the use of variational methods in mechanics, preceded the introduction of "mechanical work" but was originally called "virtual moment". It was re-named once the terminology of Poncelet and Coriolis was adopted. == Units == The SI unit of work is the joule (J), named after English physicist James Prescott Joule (1818-1889). According to the International Bureau of Weights and Measures it is defined as "the work done when the point of application of 1 MKS unit of force [newton] moves a distance of 1 metre in the direction of the force." The dimensionally equivalent newton-metre (N⋅m) is sometimes used as the measuring unit for work, but this can be confused with the measurement unit of torque. Usage of N⋅m is discouraged by the SI authority, since it can lead to confusion as to whether the quantity expressed in newton-metres is a torque measurement, or a measurement of work. Another unit for work is the foot-pound, which comes from the English system of measurement. As the unit name suggests, it is the product of pounds for the unit of force and feet for the unit of displacement. One joule is approximately equal to 0.7376 ft-lbs. Non-SI units of work include the newton-metre, erg, the foot-pound, the foot-poundal, the kilowatt hour, the litre-atmosphere, and the horsepower-hour. Due to work having the same physical dimension as heat, occasionally measurement units typically reserved for heat or energy content, such as therm, BTU and calorie, are used as a measuring unit. 
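The joule/foot-pound relation quoted above can be checked with a minimal numeric sketch; the conversion factor 1.35582 J per ft·lbf is a rounded standard value.

```python
# Sketch: relating the joule to the foot-pound.
# 1 ft.lbf = 0.3048 m x 4.44822 N, approximately 1.35582 J (rounded).
J_PER_FT_LBF = 1.35582

def joules_to_ft_lbf(work_j):
    # Convert a quantity of work from joules to foot-pounds.
    return work_j / J_PER_FT_LBF

print(round(joules_to_ft_lbf(1.0), 4))  # 0.7376, as stated in the text
```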
== Work and energy == The work W done by a constant force of magnitude F on a point that moves a displacement s in a straight line in the direction of the force is the product W = F ⋅ s {\displaystyle W=\mathbf {F} \cdot \mathbf {s} } For example, if a force of 10 newtons (F = 10 N) acts on a point that travels 2 metres (s = 2 m), then W = Fs = (10 N) (2 m) = 20 J. This is approximately the work done lifting a 1 kg object from ground level to over a person's head against the force of gravity. The work is doubled either by lifting twice the weight the same distance or by lifting the same weight twice the distance. Work is closely related to energy. Energy shares the same unit of measurement with work (joules) because the energy from the object doing work is transferred to the other objects it interacts with when work is being done. The work–energy principle states that an increase in the kinetic energy of a rigid body is caused by an equal amount of positive work done on the body by the resultant force acting on that body. Conversely, a decrease in kinetic energy is caused by an equal amount of negative work done by the resultant force. Thus, if the net work is positive, then the particle's kinetic energy increases by the amount of the work. If the net work done is negative, then the particle's kinetic energy decreases by the amount of work. From Newton's second law, it can be shown that work on a free (no fields), rigid (no internal degrees of freedom) body, is equal to the change in kinetic energy Ek corresponding to the linear velocity and angular velocity of that body, W = Δ E k . {\displaystyle W=\Delta E_{\text{k}}.} The work of forces generated by a potential function is known as potential energy and the forces are said to be conservative. Therefore, work on an object that is merely displaced in a conservative force field, without change in velocity or rotation, is equal to minus the change of potential energy Ep of the object, W = − Δ E p .
{\displaystyle W=-\Delta E_{\text{p}}.} These formulas show that work is the energy associated with the action of a force, so work subsequently possesses the physical dimensions, and units, of energy. The work/energy principles discussed here are identical to electric work/energy principles. == Constraint forces == Constraint forces determine the object's displacement in the system, limiting it within a range. For example, in the case of a slope plus gravity, the object is stuck to the slope and, when attached to a taut string, it cannot move in an outwards direction to make the string any 'tauter'. It eliminates all displacements in that direction, that is, the velocity in the direction of the constraint is limited to 0, so that the constraint forces do not perform work on the system. For a mechanical system, constraint forces eliminate movement in directions that characterize the constraint. Thus the virtual work done by the forces of constraint is zero, a result which is only true if friction forces are excluded. Fixed, frictionless constraint forces do not perform work on the system, as the angle between the motion and the constraint forces is always 90°. Examples of workless constraints are: rigid interconnections between particles, sliding motion on a frictionless surface, and rolling contact without slipping. For example, in a pulley system like the Atwood machine, the internal forces on the rope and at the supporting pulley do no work on the system. Therefore, work need only be computed for the gravitational forces acting on the bodies. Another example is the centripetal force exerted inwards by a string on a ball in uniform circular motion: it constrains the ball to circular motion, restricting its movement away from the centre of the circle. This force does zero work because it is perpendicular to the velocity of the ball.
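The zero-work property of forces perpendicular to the velocity can be checked numerically. A minimal sketch using a magnetic-type force F = q v × B (the charge, velocity, and field values are illustrative, not taken from the text):

```python
# Sketch: a force perpendicular to the velocity delivers zero power,
# F . v = 0, illustrated with a magnetic-type force F = q (v x B).
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

q = 1.6e-19                      # charge, C (illustrative)
v = (2.0e5, 1.0e5, -3.0e5)       # velocity, m/s
B = (0.125, -0.25, 0.5)          # magnetic field, T
power = q * dot(cross(v, B), v)  # P = F . v with F = q (v x B)
print(power)  # 0.0
```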
The magnetic force on a charged particle is F = qv × B, where q is the charge, v is the velocity of the particle, and B is the magnetic field. The result of a cross product is always perpendicular to both of the original vectors, so F ⊥ v. The dot product of two perpendicular vectors is always zero, so the instantaneous power P = F ⋅ v = 0, and the magnetic force does not do work. It can change the direction of motion but never change the speed. == Mathematical calculation == For moving objects, the quantity of work/time (power) is integrated along the trajectory of the point of application of the force. Thus, at any instant, the rate of the work done by a force (measured in joules/second, or watts) is the scalar product of the force (a vector), and the velocity vector of the point of application. This scalar product of force and velocity is known as instantaneous power. Just as velocities may be integrated over time to obtain a total distance, by the fundamental theorem of calculus, the total work along a path is similarly the time-integral of instantaneous power applied along the trajectory of the point of application. Work is the result of a force on a point that follows a curve X, with a velocity v, at each instant. The small amount of work δW that occurs over an instant of time dt is calculated as δ W = F ⋅ d s = F ⋅ v d t {\displaystyle \delta W=\mathbf {F} \cdot d\mathbf {s} =\mathbf {F} \cdot \mathbf {v} dt} where the F ⋅ v is the power over the instant dt. The sum of these small amounts of work over the trajectory of the point yields the work, W = ∫ t 1 t 2 F ⋅ v d t = ∫ t 1 t 2 F ⋅ d s d t d t = ∫ C F ⋅ d s , {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot \mathbf {v} \,dt=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot {\tfrac {d\mathbf {s} }{dt}}\,dt=\int _{C}\mathbf {F} \cdot d\mathbf {s} ,} where C is the trajectory from x(t1) to x(t2). This integral is computed along the trajectory of the particle, and is therefore said to be path dependent.
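The time-integral form W = ∫ F ⋅ v dt can be evaluated numerically. A minimal sketch for a point traversing the unit circle once under a constant-magnitude tangential force (all values illustrative), where the integral reduces to force times arc length:

```python
import math

# Sketch: W = integral of F . v dt, computed with a midpoint rule for a
# point moving once around the unit circle at unit angular speed, driven
# by a tangential force of constant magnitude Fmag.
Fmag = 2.0
N = 100000
dt = 2 * math.pi / N
W = 0.0
for i in range(N):
    t = (i + 0.5) * dt
    v = (-math.sin(t), math.cos(t))          # velocity on the unit circle
    F = (Fmag * v[0], Fmag * v[1])           # tangential force
    W += (F[0] * v[0] + F[1] * v[1]) * dt    # F . v dt
print(W, Fmag * 2 * math.pi)  # both approximately 12.566 (force x arc length)
```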
If the force is always directed along this line, and the magnitude of the force is F, then this integral simplifies to W = ∫ C F d s {\displaystyle W=\int _{C}F\,ds} where s is displacement along the line. If F is constant, in addition to being directed along the line, then the integral simplifies further to W = ∫ C F d s = F ∫ C d s = F s {\displaystyle W=\int _{C}F\,ds=F\int _{C}ds=Fs} where s is the displacement of the point along the line. This calculation can be generalized for a constant force that is not directed along the line followed by the particle. In this case the dot product F ⋅ ds = F cos θ ds, where θ is the angle between the force vector and the direction of movement, that is W = ∫ C F ⋅ d s = F s cos θ . {\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {s} =Fs\cos \theta .} When a force component is perpendicular to the displacement of the object (such as when a body moves in a circular path under a central force), no work is done, since the cosine of 90° is zero. Thus, no work can be performed by gravity on a planet with a circular orbit (this is ideal, as all orbits are slightly elliptical). Also, no work is done on a body moving circularly at a constant speed while constrained by mechanical force, such as moving at constant speed in a frictionless ideal centrifuge. === Work done by a variable force === Calculating the work as "force times straight path segment" would only apply in the most simple of circumstances, as noted above. If force is changing, or if the body is moving along a curved path, possibly rotating and not necessarily rigid, then only the path of the application point of the force is relevant for the work done, and only the component of the force parallel to the application point velocity is doing work (positive work when in the same direction, and negative when in the opposite direction of the velocity).
This component of force can be described by the scalar quantity called scalar tangential component (F cos(θ), where θ is the angle between the force and the velocity). The most general definition of work can then be formulated as follows: the work of a force is the line integral of its scalar tangential component along the path of its application point; thus, the work done for a variable force can be expressed as a definite integral of force over displacement. If the displacement as a variable of time is given by ∆x(t), then work done by the variable force from t1 to t2 is: W = ∫ t 1 t 2 F ( t ) ⋅ v ( t ) d t = ∫ t 1 t 2 P ( t ) d t . {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} (t)\cdot \mathbf {v} (t)dt=\int _{t_{1}}^{t_{2}}P(t)dt.} Thus, the work done for a variable force can be expressed as a definite integral of power over time. === Torque and rotation === A force couple results from equal and opposite forces, acting on two different points of a rigid body. The sum (resultant) of these forces may cancel, but their effect on the body is the couple or torque T. The work of the torque is calculated as δ W = T ⋅ ω d t , {\displaystyle \delta W=\mathbf {T} \cdot {\boldsymbol {\omega }}\,dt,} where the T ⋅ ω is the power over the instant dt. The sum of these small amounts of work over the trajectory of the rigid body yields the work, W = ∫ t 1 t 2 T ⋅ ω d t . {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {T} \cdot {\boldsymbol {\omega }}\,dt.} This integral is computed along the trajectory of the rigid body with an angular velocity ω that varies with time, and is therefore said to be path dependent. If the angular velocity vector maintains a constant direction, then it takes the form, ω = ϕ ˙ S , {\displaystyle {\boldsymbol {\omega }}={\dot {\phi }}\mathbf {S} ,} where ϕ {\displaystyle \phi } is the angle of rotation about the constant unit vector S.
In this case, the work of the torque becomes, W = ∫ t 1 t 2 T ⋅ ω d t = ∫ t 1 t 2 T ⋅ S d ϕ d t d t = ∫ C T ⋅ S d ϕ , {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {T} \cdot {\boldsymbol {\omega }}\,dt=\int _{t_{1}}^{t_{2}}\mathbf {T} \cdot \mathbf {S} {\frac {d\phi }{dt}}dt=\int _{C}\mathbf {T} \cdot \mathbf {S} \,d\phi ,} where C is the trajectory from ϕ ( t 1 ) {\displaystyle \phi (t_{1})} to ϕ ( t 2 ) {\displaystyle \phi (t_{2})} . This integral depends on the rotational trajectory ϕ ( t ) {\displaystyle \phi (t)} , and is therefore path-dependent. If the torque τ {\displaystyle \tau } is aligned with the angular velocity vector so that, T = τ S , {\displaystyle \mathbf {T} =\tau \mathbf {S} ,} and both the torque and angular velocity are constant, then the work takes the form, W = ∫ t 1 t 2 τ ϕ ˙ d t = τ ( ϕ 2 − ϕ 1 ) . {\displaystyle W=\int _{t_{1}}^{t_{2}}\tau {\dot {\phi }}\,dt=\tau (\phi _{2}-\phi _{1}).} This result can be understood more simply by considering the torque as arising from a force of constant magnitude F, being applied perpendicularly to a lever arm at a distance r {\displaystyle r} , as shown in the figure. This force will act through the distance along the circular arc l = s = r ϕ {\displaystyle l=s=r\phi } , so the work done is W = F s = F r ϕ . {\displaystyle W=Fs=Fr\phi .} Introduce the torque τ = Fr, to obtain W = F r ϕ = τ ϕ , {\displaystyle W=Fr\phi =\tau \phi ,} as presented above. Notice that only the component of torque in the direction of the angular velocity vector contributes to the work. == Work and potential energy == The scalar product of a force F and the velocity v of its point of application defines the power input to a system at an instant of time. Integration of this power over the trajectory of the point of application, C = x(t), defines the work input to the system by the force. 
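The constant-torque case above reduces to W = τ(φ2 − φ1), which agrees with force times arc length. A minimal numeric sketch (the force, lever arm, and angle values are illustrative):

```python
import math

# Sketch: work of a constant torque about a fixed axis, W = tau * (phi2 - phi1),
# with tau = F * r for a force of magnitude F applied perpendicular to a
# lever arm of length r.
F = 20.0                         # force magnitude, N
r = 0.5                          # lever arm, m
tau = F * r                      # torque magnitude, 10 N.m
phi1, phi2 = 0.0, 2 * math.pi    # one full revolution, rad
W = tau * (phi2 - phi1)
# Same result as force times arc length, s = r * phi:
print(W, F * r * (phi2 - phi1))  # both approximately 62.83 J
```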
=== Path dependence === Therefore, the work done by a force F on an object that travels along a curve C is given by the line integral: W = ∫ C F ⋅ d x = ∫ t 1 t 2 F ⋅ v d t , {\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {x} =\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot \mathbf {v} dt,} where dx(t) defines the trajectory C and v is the velocity along this trajectory. In general this integral requires the path along which the velocity is defined, so the evaluation of work is said to be path dependent. The time derivative of the integral for work yields the instantaneous power, d W d t = P ( t ) = F ⋅ v . {\displaystyle {\frac {dW}{dt}}=P(t)=\mathbf {F} \cdot \mathbf {v} .} === Path independence === If the work for an applied force is independent of the path, then the work done by the force, by the gradient theorem, defines a potential function which is evaluated at the start and end of the trajectory of the point of application. This means that there is a potential function U(x), that can be evaluated at the two points x(t1) and x(t2) to obtain the work over any trajectory between these two points. It is tradition to define this function with a negative sign so that positive work is a reduction in the potential, that is W = ∫ C F ⋅ d x = ∫ x ( t 1 ) x ( t 2 ) F ⋅ d x = U ( x ( t 1 ) ) − U ( x ( t 2 ) ) . {\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {x} =\int _{\mathbf {x} (t_{1})}^{\mathbf {x} (t_{2})}\mathbf {F} \cdot d\mathbf {x} =U(\mathbf {x} (t_{1}))-U(\mathbf {x} (t_{2})).} The function U(x) is called the potential energy associated with the applied force. The force derived from such a potential function is said to be conservative. Examples of forces that have potential energies are gravity and spring forces.
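Path independence for a conservative force can be checked directly. A minimal sketch comparing two polygonal paths between the same endpoints under a constant weight F = (0, −mg) (the mass and coordinates are illustrative): both paths yield the same work, −mg Δy.

```python
# Sketch: for the constant force F = (0, -m g), the work between two points
# does not depend on the path taken; compare a straight path with an
# L-shaped path between the same endpoints.
def path_work(F, points):
    # Sum F . dx over the straight segments joining consecutive points.
    W = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        W += F[0] * (x1 - x0) + F[1] * (y1 - y0)
    return W

F = (0.0, -9.8)                                     # weight of a 1 kg mass, N
straight = [(0.0, 0.0), (3.0, 4.0)]
l_shaped = [(0.0, 0.0), (3.0, 0.0), (3.0, 4.0)]
print(path_work(F, straight), path_work(F, l_shaped))  # both -39.2
```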
In this case, the gradient of work yields ∇ W = − ∇ U = − ( ∂ U ∂ x , ∂ U ∂ y , ∂ U ∂ z ) = F , {\displaystyle \nabla W=-\nabla U=-\left({\frac {\partial U}{\partial x}},{\frac {\partial U}{\partial y}},{\frac {\partial U}{\partial z}}\right)=\mathbf {F} ,} and the force F is said to be "derivable from a potential." Because the potential U defines a force F at every point x in space, the set of forces is called a force field. The power applied to a body by a force field is obtained from the gradient of the work, or potential, in the direction of the velocity V of the body, that is P ( t ) = − ∇ U ⋅ v = F ⋅ v . {\displaystyle P(t)=-\nabla U\cdot \mathbf {v} =\mathbf {F} \cdot \mathbf {v} .} === Work by gravity === In the absence of other forces, gravity results in a constant downward acceleration of every freely moving object. Near Earth's surface the acceleration due to gravity is g = 9.8 m⋅s−2 and the gravitational force on an object of mass m is Fg = mg. It is convenient to imagine this gravitational force concentrated at the center of mass of the object. If an object with weight mg is displaced upwards or downwards a vertical distance y2 − y1, the work W done on the object is: W = F g ( y 2 − y 1 ) = F g Δ y = m g Δ y {\displaystyle W=F_{g}(y_{2}-y_{1})=F_{g}\Delta y=mg\Delta y} where Fg is weight (pounds in imperial units, and newtons in SI units), and Δy is the change in height y. Notice that the work done by gravity depends only on the vertical movement of the object. The presence of friction does not affect the work done on the object by its weight. ==== Gravity in 3D space ==== The force of gravity exerted by a mass M on another mass m is given by F = − G M m r 2 r ^ = − G M m r 3 r , {\displaystyle \mathbf {F} =-{\frac {GMm}{r^{2}}}{\hat {\mathbf {r} }}=-{\frac {GMm}{r^{3}}}\mathbf {r} ,} where r is the position vector from M to m and r̂ is the unit vector in the direction of r. 
Let the mass m move at the velocity v; then the work of gravity on this mass as it moves from position r(t1) to r(t2) is given by W = − ∫ r ( t 1 ) r ( t 2 ) G M m r 3 r ⋅ d r = − ∫ t 1 t 2 G M m r 3 r ⋅ v d t . {\displaystyle W=-\int _{\mathbf {r} (t_{1})}^{\mathbf {r} (t_{2})}{\frac {GMm}{r^{3}}}\mathbf {r} \cdot d\mathbf {r} =-\int _{t_{1}}^{t_{2}}{\frac {GMm}{r^{3}}}\mathbf {r} \cdot \mathbf {v} \,dt.} Notice that the position and velocity of the mass m are given by r = r e r , v = d r d t = r ˙ e r + r θ ˙ e t , {\displaystyle \mathbf {r} =r\mathbf {e} _{r},\qquad \mathbf {v} ={\frac {d\mathbf {r} }{dt}}={\dot {r}}\mathbf {e} _{r}+r{\dot {\theta }}\mathbf {e} _{t},} where er and et are the radial and tangential unit vectors directed relative to the vector from M to m, and we use the fact that d e r / d t = θ ˙ e t . {\displaystyle d\mathbf {e} _{r}/dt={\dot {\theta }}\mathbf {e} _{t}.} Use this to simplify the formula for work of gravity to, W = − ∫ t 1 t 2 G m M r 3 ( r e r ) ⋅ ( r ˙ e r + r θ ˙ e t ) d t = − ∫ t 1 t 2 G m M r 3 r r ˙ d t = G M m r ( t 2 ) − G M m r ( t 1 ) . {\displaystyle W=-\int _{t_{1}}^{t_{2}}{\frac {GmM}{r^{3}}}(r\mathbf {e} _{r})\cdot \left({\dot {r}}\mathbf {e} _{r}+r{\dot {\theta }}\mathbf {e} _{t}\right)dt=-\int _{t_{1}}^{t_{2}}{\frac {GmM}{r^{3}}}r{\dot {r}}dt={\frac {GMm}{r(t_{2})}}-{\frac {GMm}{r(t_{1})}}.} This calculation uses the fact that d d t r − 1 = − r − 2 r ˙ = − r ˙ r 2 . {\displaystyle {\frac {d}{dt}}r^{-1}=-r^{-2}{\dot {r}}=-{\frac {\dot {r}}{r^{2}}}.} The function U = − G M m r , {\displaystyle U=-{\frac {GMm}{r}},} is the gravitational potential function, also known as gravitational potential energy. The negative sign follows the convention that work is gained from a loss of potential energy. === Work by a spring === Consider a spring that exerts a horizontal force F = (−kx, 0, 0) that is proportional to its deflection in the x direction independent of how a body moves. 
The work of this spring on a body moving along the space with the curve X(t) = (x(t), y(t), z(t)), is calculated using its velocity, v = (vx, vy, vz), to obtain W = ∫ 0 t F ⋅ v d t = − ∫ 0 t k x v x d t = − 1 2 k x 2 . {\displaystyle W=\int _{0}^{t}\mathbf {F} \cdot \mathbf {v} dt=-\int _{0}^{t}kxv_{x}dt=-{\frac {1}{2}}kx^{2}.} For convenience, suppose that contact with the spring occurs at t = 0; then the integral of the product of the distance x and the x-velocity, xvxdt, over time t is (1/2)x2. The work is the product of the distance times the spring force, which is also dependent on distance; hence the x2 result. === Work by a gas === The work W {\displaystyle W} done by a body of gas on its surroundings is: W = ∫ a b P d V {\displaystyle W=\int _{a}^{b}P\,dV} where P is pressure, V is volume, and a and b are initial and final volumes. == Work–energy principle == The principle of work and kinetic energy (also known as the work–energy principle) states that the work done by all forces acting on a particle (the work of the resultant force) equals the change in the kinetic energy of the particle. That is, the work W done by the resultant force on a particle equals the change in the particle's kinetic energy E k {\displaystyle E_{\text{k}}} , W = Δ E k = 1 2 m v 2 2 − 1 2 m v 1 2 {\displaystyle W=\Delta E_{\text{k}}={\frac {1}{2}}mv_{2}^{2}-{\frac {1}{2}}mv_{1}^{2}} where v 1 {\displaystyle v_{1}} and v 2 {\displaystyle v_{2}} are the speeds of the particle before and after the work is done, and m is its mass. The derivation of the work–energy principle begins with Newton's second law of motion and the resultant force on a particle. Computation of the scalar product of the force with the velocity of the particle evaluates the instantaneous power added to the system. (Constraints define the direction of movement of the particle by ensuring there is no component of velocity in the direction of the constraint force.
This also means the constraint forces do not add to the instantaneous power.) The time integral of this scalar equation yields work from the instantaneous power, and kinetic energy from the scalar product of acceleration with velocity. The fact that the work–energy principle eliminates the constraint forces underlies Lagrangian mechanics. This section focuses on the work–energy principle as it applies to particle dynamics. In more general systems work can change the potential energy of a mechanical device, the thermal energy in a thermal system, or the electrical energy in an electrical device. Work transfers energy from one place to another or one form to another. === Derivation for a particle moving along a straight line === In the case the resultant force F is constant in both magnitude and direction, and parallel to the velocity of the particle, the particle is moving with constant acceleration a along a straight line. The relation between the net force and the acceleration is given by the equation F = ma (Newton's second law), and the particle displacement s can be expressed by the equation s = v 2 2 − v 1 2 2 a {\displaystyle s={\frac {v_{2}^{2}-v_{1}^{2}}{2a}}} which follows from v 2 2 = v 1 2 + 2 a s {\displaystyle v_{2}^{2}=v_{1}^{2}+2as} (see Equations of motion). The work of the net force is calculated as the product of its magnitude and the particle displacement. 
Substituting the above equations, one obtains: W = F s = m a s = m a v 2 2 − v 1 2 2 a = 1 2 m v 2 2 − 1 2 m v 1 2 = Δ E k {\displaystyle W=Fs=mas=ma{\frac {v_{2}^{2}-v_{1}^{2}}{2a}}={\frac {1}{2}}mv_{2}^{2}-{\frac {1}{2}}mv_{1}^{2}=\Delta E_{\text{k}}} Other derivation: W = F s = m a s = m v 2 2 − v 1 2 2 s s = 1 2 m v 2 2 − 1 2 m v 1 2 = Δ E k {\displaystyle W=Fs=mas=m{\frac {v_{2}^{2}-v_{1}^{2}}{2s}}s={\frac {1}{2}}mv_{2}^{2}-{\frac {1}{2}}mv_{1}^{2}=\Delta E_{\text{k}}} In the general case of rectilinear motion, when the net force F is not constant in magnitude, but is constant in direction, and parallel to the velocity of the particle, the work must be integrated along the path of the particle: W = ∫ t 1 t 2 F ⋅ v d t = ∫ t 1 t 2 F v d t = ∫ t 1 t 2 m a v d t = m ∫ t 1 t 2 v d v d t d t = m ∫ v 1 v 2 v d v = 1 2 m ( v 2 2 − v 1 2 ) . {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot \mathbf {v} dt=\int _{t_{1}}^{t_{2}}F\,v\,dt=\int _{t_{1}}^{t_{2}}ma\,v\,dt=m\int _{t_{1}}^{t_{2}}v\,{\frac {dv}{dt}}\,dt=m\int _{v_{1}}^{v_{2}}v\,dv={\tfrac {1}{2}}m\left(v_{2}^{2}-v_{1}^{2}\right).} === General derivation of the work–energy principle for a particle === For any net force acting on a particle moving along any curvilinear path, it can be demonstrated that its work equals the change in the kinetic energy of the particle by a simple derivation analogous to the equation above. 
It is known as the work–energy principle: W = ∫ t 1 t 2 F ⋅ v d t = m ∫ t 1 t 2 a ⋅ v d t = m 2 ∫ t 1 t 2 d v 2 d t d t = m 2 ∫ v 1 2 v 2 2 d v 2 = m v 2 2 2 − m v 1 2 2 = Δ E k {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot \mathbf {v} dt=m\int _{t_{1}}^{t_{2}}\mathbf {a} \cdot \mathbf {v} dt={\frac {m}{2}}\int _{t_{1}}^{t_{2}}{\frac {dv^{2}}{dt}}\,dt={\frac {m}{2}}\int _{v_{1}^{2}}^{v_{2}^{2}}dv^{2}={\frac {mv_{2}^{2}}{2}}-{\frac {mv_{1}^{2}}{2}}=\Delta E_{\text{k}}} The identity a ⋅ v = 1 2 d v 2 d t {\textstyle \mathbf {a} \cdot \mathbf {v} ={\frac {1}{2}}{\frac {dv^{2}}{dt}}} requires some algebra. From the identity v 2 = v ⋅ v {\textstyle v^{2}=\mathbf {v} \cdot \mathbf {v} } and definition a = d v d t {\textstyle \mathbf {a} ={\frac {d\mathbf {v} }{dt}}} it follows d v 2 d t = d ( v ⋅ v ) d t = d v d t ⋅ v + v ⋅ d v d t = 2 d v d t ⋅ v = 2 a ⋅ v . {\displaystyle {\frac {dv^{2}}{dt}}={\frac {d(\mathbf {v} \cdot \mathbf {v} )}{dt}}={\frac {d\mathbf {v} }{dt}}\cdot \mathbf {v} +\mathbf {v} \cdot {\frac {d\mathbf {v} }{dt}}=2{\frac {d\mathbf {v} }{dt}}\cdot \mathbf {v} =2\mathbf {a} \cdot \mathbf {v} .} The remaining part of the above derivation is just simple calculus, same as in the preceding rectilinear case. === Derivation for a particle in constrained movement === In particle dynamics, a formula equating work applied to a system to its change in kinetic energy is obtained as a first integral of Newton's second law of motion. It is useful to notice that the resultant force used in Newton's laws can be separated into forces that are applied to the particle and forces imposed by constraints on the movement of the particle. Remarkably, the work of a constraint force is zero, therefore only the work of the applied forces need be considered in the work–energy principle. To see this, consider a particle P that follows the trajectory X(t) with a force F acting on it. 
Isolate the particle from its environment to expose constraint forces R, then Newton's Law takes the form F + R = m X ¨ , {\displaystyle \mathbf {F} +\mathbf {R} =m{\ddot {\mathbf {X} }},} where m is the mass of the particle. ==== Vector formulation ==== Note that n dots above a vector indicates its nth time derivative. The scalar product of each side of Newton's law with the velocity vector yields F ⋅ X ˙ = m X ¨ ⋅ X ˙ , {\displaystyle \mathbf {F} \cdot {\dot {\mathbf {X} }}=m{\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }},} because the constraint forces are perpendicular to the particle velocity. Integrate this equation along its trajectory from the point X(t1) to the point X(t2) to obtain ∫ t 1 t 2 F ⋅ X ˙ d t = m ∫ t 1 t 2 X ¨ ⋅ X ˙ d t . {\displaystyle \int _{t_{1}}^{t_{2}}\mathbf {F} \cdot {\dot {\mathbf {X} }}dt=m\int _{t_{1}}^{t_{2}}{\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}dt.} The left side of this equation is the work of the applied force as it acts on the particle along the trajectory from time t1 to time t2. This can also be written as W = ∫ t 1 t 2 F ⋅ X ˙ d t = ∫ X ( t 1 ) X ( t 2 ) F ⋅ d X . {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot {\dot {\mathbf {X} }}dt=\int _{\mathbf {X} (t_{1})}^{\mathbf {X} (t_{2})}\mathbf {F} \cdot d\mathbf {X} .} This integral is computed along the trajectory X(t) of the particle and is therefore path dependent. The right side of the first integral of Newton's equations can be simplified using the following identity 1 2 d d t ( X ˙ ⋅ X ˙ ) = X ¨ ⋅ X ˙ , {\displaystyle {\frac {1}{2}}{\frac {d}{dt}}({\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }})={\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }},} (see product rule for derivation). 
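This product-rule identity can be sanity-checked numerically on any smooth trajectory; the curve X(t) = (t², t³, sin t) below is an arbitrary illustrative choice, with its velocity and acceleration written out by hand:

```python
import math

# Arbitrary smooth trajectory X(t) = (t^2, t^3, sin t) and its exact
# derivatives (an illustrative example, not a curve from the text).
def v(t):  # velocity X'(t)
    return (2 * t, 3 * t * t, math.cos(t))

def a(t):  # acceleration X''(t)
    return (2.0, 6 * t, -math.sin(t))

def dot(u, w):
    return sum(x * y for x, y in zip(u, w))

t0, h = 1.3, 1e-6
# Left side: (1/2) d/dt (X' . X'), via a central finite difference.
lhs = 0.5 * (dot(v(t0 + h), v(t0 + h)) - dot(v(t0 - h), v(t0 - h))) / (2 * h)
# Right side: X'' . X', evaluated directly.
rhs = dot(a(t0), v(t0))
print(abs(lhs - rhs) < 1e-6)  # True
```

The two sides agree to numerical precision, as the identity predicts.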
Now it is integrated explicitly to obtain the change in kinetic energy, Δ K = m ∫ t 1 t 2 X ¨ ⋅ X ˙ d t = m 2 ∫ t 1 t 2 d d t ( X ˙ ⋅ X ˙ ) d t = m 2 X ˙ ⋅ X ˙ ( t 2 ) − m 2 X ˙ ⋅ X ˙ ( t 1 ) = 1 2 m Δ v 2 , {\displaystyle \Delta K=m\int _{t_{1}}^{t_{2}}{\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}dt={\frac {m}{2}}\int _{t_{1}}^{t_{2}}{\frac {d}{dt}}({\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }})dt={\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}(t_{2})-{\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}(t_{1})={\frac {1}{2}}m\Delta \mathbf {v} ^{2},} where the kinetic energy of the particle is defined by the scalar quantity, K = m 2 X ˙ ⋅ X ˙ = 1 2 m v 2 {\displaystyle K={\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}={\frac {1}{2}}m{\mathbf {v} ^{2}}} ==== Tangential and normal components ==== It is useful to resolve the velocity and acceleration vectors into tangential and normal components along the trajectory X(t), such that X ˙ = v T and X ¨ = v ˙ T + v 2 κ N , {\displaystyle {\dot {\mathbf {X} }}=v\mathbf {T} \quad {\text{and}}\quad {\ddot {\mathbf {X} }}={\dot {v}}\mathbf {T} +v^{2}\kappa \mathbf {N} ,} where v = | X ˙ | = X ˙ ⋅ X ˙ . {\displaystyle v=|{\dot {\mathbf {X} }}|={\sqrt {{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}}}.} Then, the scalar product of velocity with acceleration in Newton's second law takes the form Δ K = m ∫ t 1 t 2 v ˙ v d t = m 2 ∫ t 1 t 2 d d t v 2 d t = m 2 v 2 ( t 2 ) − m 2 v 2 ( t 1 ) , {\displaystyle \Delta K=m\int _{t_{1}}^{t_{2}}{\dot {v}}v\,dt={\frac {m}{2}}\int _{t_{1}}^{t_{2}}{\frac {d}{dt}}v^{2}\,dt={\frac {m}{2}}v^{2}(t_{2})-{\frac {m}{2}}v^{2}(t_{1}),} where the kinetic energy of the particle is defined by the scalar quantity, K = m 2 v 2 = m 2 X ˙ ⋅ X ˙ . {\displaystyle K={\frac {m}{2}}v^{2}={\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}.} The result is the work–energy principle for particle dynamics, W = Δ K . 
{\displaystyle W=\Delta K.} This derivation can be generalized to arbitrary rigid body systems. === Moving in a straight line (skid to a stop) === Consider the case of a vehicle moving along a straight horizontal trajectory under the action of a driving force and gravity that sum to F. The constraint forces between the vehicle and the road define R, and we have F + R = m X ¨ . {\displaystyle \mathbf {F} +\mathbf {R} =m{\ddot {\mathbf {X} }}.} For convenience let the trajectory be along the X-axis, so X = (d, 0) and the velocity is V = (v, 0), then R ⋅ V = 0, and F ⋅ V = Fxv, where Fx is the component of F along the X-axis, so F x v = m v ˙ v . {\displaystyle F_{x}v=m{\dot {v}}v.} Integration of both sides yields ∫ t 1 t 2 F x v d t = m 2 v 2 ( t 2 ) − m 2 v 2 ( t 1 ) . {\displaystyle \int _{t_{1}}^{t_{2}}F_{x}vdt={\frac {m}{2}}v^{2}(t_{2})-{\frac {m}{2}}v^{2}(t_{1}).} If Fx is constant along the trajectory, then the integral of velocity is distance, so F x ( d ( t 2 ) − d ( t 1 ) ) = m 2 v 2 ( t 2 ) − m 2 v 2 ( t 1 ) . {\displaystyle F_{x}(d(t_{2})-d(t_{1}))={\frac {m}{2}}v^{2}(t_{2})-{\frac {m}{2}}v^{2}(t_{1}).} As an example consider a car skidding to a stop, where k is the coefficient of friction and w is the weight of the car. Then the force along the trajectory is Fx = −kw. The velocity v of the car can be determined from the length s of the skid using the work–energy principle, k w s = w 2 g v 2 , or v = 2 k s g . {\displaystyle kws={\frac {w}{2g}}v^{2},\quad {\text{or}}\quad v={\sqrt {2ksg}}.} This formula uses the fact that the mass of the vehicle is m = w/g. === Coasting down an inclined surface (gravity racing) === Consider the case of a vehicle that starts at rest and coasts down an inclined surface (such as a mountain road); the work–energy principle helps compute the minimum distance the vehicle must travel to reach a velocity V of, say, 60 mph (88 fps).
Rolling resistance and air drag will slow the vehicle down so the actual distance will be greater than if these forces are neglected. Let the trajectory of the vehicle following the road be X(t) which is a curve in three-dimensional space. The force acting on the vehicle that pushes it down the road is the constant force of gravity F = (0, 0, w), while the force of the road on the vehicle is the constraint force R. Newton's second law yields, F + R = m X ¨ . {\displaystyle \mathbf {F} +\mathbf {R} =m{\ddot {\mathbf {X} }}.} The scalar product of this equation with the velocity, V = (vx, vy, vz), yields w v z = m V ˙ V , {\displaystyle wv_{z}=m{\dot {V}}V,} where V is the magnitude of V. The constraint forces between the vehicle and the road cancel from this equation because R ⋅ V = 0, which means they do no work. Integrate both sides to obtain ∫ t 1 t 2 w v z d t = m 2 V 2 ( t 2 ) − m 2 V 2 ( t 1 ) . {\displaystyle \int _{t_{1}}^{t_{2}}wv_{z}dt={\frac {m}{2}}V^{2}(t_{2})-{\frac {m}{2}}V^{2}(t_{1}).} The weight force w is constant along the trajectory and the integral of the vertical velocity is the vertical distance, therefore, w Δ z = m 2 V 2 . {\displaystyle w\Delta z={\frac {m}{2}}V^{2}.} Recall that V(t1)=0. Notice that this result does not depend on the shape of the road followed by the vehicle. In order to determine the distance along the road assume the downgrade is 6%, which is a steep road. This means the altitude decreases 6 feet for every 100 feet traveled—for angles this small the sin and tan functions are approximately equal. Therefore, the distance s in feet down a 6% grade to reach the velocity V is at least s = Δ z 0.06 = 8.3 V 2 g , or s = 8.3 88 2 32.2 ≈ 2000 f t . {\displaystyle s={\frac {\Delta z}{0.06}}=8.3{\frac {V^{2}}{g}},\quad {\text{or}}\quad s=8.3{\frac {88^{2}}{32.2}}\approx 2000\mathrm {ft} .} This formula uses the fact that the weight of the vehicle is w = mg. 
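Both worked examples above reduce to one-line formulas; a quick numeric sketch in US customary units follows. The 6% grade, g = 32.2 ft/s², and V = 88 fps are taken from the text, while the friction coefficient k = 0.7 and skid length s = 100 ft are assumed values for illustration:

```python
import math

g = 32.2  # acceleration of gravity, ft/s^2

# Skid to a stop: v = sqrt(2 k s g).  k and s are assumed values.
k, s = 0.7, 100.0          # coefficient of friction, skid length (ft)
v_skid = math.sqrt(2 * k * s * g)
print(round(v_skid, 1))    # ~67.1 fps from a 100 ft skid

# Coasting down a 6% grade to reach V = 88 fps (60 mph):
# w * dz = (m/2) V^2  with  w = m g  gives  dz = V^2 / (2 g),
# and the distance along the road is s = dz / 0.06.
V = 88.0
s_grade = (V * V / (2 * g)) / 0.06
print(round(s_grade))      # ~2004 ft, i.e. about 2000 ft as in the text
```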
== Work of forces acting on a rigid body == The work of forces acting at various points on a single rigid body can be calculated from the work of a resultant force and torque. To see this, let the forces F1, F2, ..., Fn act on the points X1, X2, ..., Xn in a rigid body. The trajectories of Xi, i = 1, ..., n are defined by the movement of the rigid body. This movement is given by the set of rotations [A(t)] and the trajectory d(t) of a reference point in the body. Let the coordinates xi i = 1, ..., n define these points in the moving rigid body's reference frame M, so that the trajectories traced in the fixed frame F are given by X i ( t ) = [ A ( t ) ] x i + d ( t ) i = 1 , … , n . {\displaystyle \mathbf {X} _{i}(t)=[A(t)]\mathbf {x} _{i}+\mathbf {d} (t)\quad i=1,\ldots ,n.} The velocity of the points Xi along their trajectories are V i = ω × ( X i − d ) + d ˙ , {\displaystyle \mathbf {V} _{i}={\boldsymbol {\omega }}\times (\mathbf {X} _{i}-\mathbf {d} )+{\dot {\mathbf {d} }},} where ω is the angular velocity vector obtained from the skew symmetric matrix [ Ω ] = A ˙ A T , {\displaystyle [\Omega ]={\dot {A}}A^{\mathsf {T}},} known as the angular velocity matrix. The small amount of work by the forces over the small displacements δri can be determined by approximating the displacement by δr = vδt so δ W = F 1 ⋅ V 1 δ t + F 2 ⋅ V 2 δ t + … + F n ⋅ V n δ t {\displaystyle \delta W=\mathbf {F} _{1}\cdot \mathbf {V} _{1}\delta t+\mathbf {F} _{2}\cdot \mathbf {V} _{2}\delta t+\ldots +\mathbf {F} _{n}\cdot \mathbf {V} _{n}\delta t} or δ W = ∑ i = 1 n F i ⋅ ( ω × ( X i − d ) + d ˙ ) δ t . 
{\displaystyle \delta W=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot ({\boldsymbol {\omega }}\times (\mathbf {X} _{i}-\mathbf {d} )+{\dot {\mathbf {d} }})\delta t.} This formula can be rewritten to obtain δ W = ( ∑ i = 1 n F i ) ⋅ d ˙ δ t + ( ∑ i = 1 n ( X i − d ) × F i ) ⋅ ω δ t = ( F ⋅ d ˙ + T ⋅ ω ) δ t , {\displaystyle \delta W=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\right)\cdot {\dot {\mathbf {d} }}\delta t+\left(\sum _{i=1}^{n}\left(\mathbf {X} _{i}-\mathbf {d} \right)\times \mathbf {F} _{i}\right)\cdot {\boldsymbol {\omega }}\delta t=\left(\mathbf {F} \cdot {\dot {\mathbf {d} }}+\mathbf {T} \cdot {\boldsymbol {\omega }}\right)\delta t,} where F and T are the resultant force and torque applied at the reference point d of the moving frame M in the rigid body. == References == == Bibliography == Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 0-534-40842-7. Tipler, Paul (1991). Physics for Scientists and Engineers: Mechanics (3rd ed., extended version ed.). W. H. Freeman. ISBN 0-87901-432-6. == External links == Work–energy principle
Wikipedia/Work–energy_theorem
Specific energy or massic energy is energy per unit mass. It is also sometimes called gravimetric energy density, which is not to be confused with energy density, which is defined as energy per unit volume. It is used to quantify, for example, stored heat and other thermodynamic properties of substances such as specific internal energy, specific enthalpy, specific Gibbs free energy, and specific Helmholtz free energy. It may also be used for the kinetic energy or potential energy of a body. Specific energy is an intensive property, whereas energy and mass are extensive properties. The SI unit for specific energy is the joule per kilogram (J/kg). Other units still in use worldwide in some contexts are the kilocalorie per gram (Cal/g or kcal/g), mostly in food-related topics, and watt-hours per kilogram (W⋅h/kg) in the field of batteries. In some countries the Imperial unit BTU per pound (Btu/lb) is used in some engineering and applied technical fields. Specific energy has the same units as specific strength, which is related to the maximum specific energy of rotation an object can have without flying apart due to centrifugal force. The concept of specific energy is related to but distinct from the notion of molar energy in chemistry, that is energy per mole of a substance, which uses units such as joules per mole, or the older but still widely used calories per mole. == Table of some non-SI conversions == The following table shows the factors for conversion to J/kg of some non-SI units: For a table giving the specific energy of many different fuels as well as batteries, see the article Energy density. == Ionising radiation == For ionising radiation, the gray is the SI unit of specific energy absorbed by matter known as absorbed dose, from which the SI unit the sievert is calculated for the stochastic health effect on tissues, known as dose equivalent. 
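The non-SI units mentioned above all convert to J/kg by fixed multiplicative factors; a minimal sketch, using standard conversion values (1 Wh = 3600 J, 1 kcal = 4184 J, 1 Btu = 1055.06 J, 1 lb = 0.45359237 kg):

```python
# Conversion factors from common specific-energy units to J/kg.
TO_J_PER_KG = {
    "W*h/kg": 3600.0,
    "kcal/g": 4184.0 * 1000,         # kcal per gram -> J per kg
    "Btu/lb": 1055.06 / 0.45359237,  # ~2326 J/kg
}

for unit, factor in TO_J_PER_KG.items():
    print(f"1 {unit} = {factor:,.0f} J/kg")
# 1 W*h/kg = 3,600 J/kg
# 1 kcal/g = 4,184,000 J/kg
# 1 Btu/lb = 2,326 J/kg
```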
The International Committee for Weights and Measures states: "In order to avoid any risk of confusion between the absorbed dose D and the dose equivalent H, the special names for the respective units should be used, that is, the name gray should be used instead of joules per kilogram for the unit of absorbed dose D and the name sievert instead of joules per kilogram for the unit of dose equivalent H." == Energy density of food == Energy density is the amount of energy per mass or volume of food. The energy density of a food can be determined from the label by dividing the energy per serving (usually in kilojoules or food calories) by the serving size (usually in grams, milliliters or fluid ounces). An energy unit commonly used in nutritional contexts within non-metric countries (e.g. the United States) is the "dietary calorie," "food calorie," or "Calorie" with a capital "C" and is commonly abbreviated as "Cal." A nutritional Calorie is equivalent to a thousand chemical or thermodynamic calories (abbreviated "cal" with a lower case "c") or one kilocalorie (kcal). Because food energy is commonly measured in Calories, the energy density of food is commonly called "caloric density". In the metric system, the energy unit commonly used on food labels is the kilojoule (kJ) or megajoule (MJ). Energy density is thus commonly expressed in metric units of cal/g, kcal/g, J/g, kJ/g, MJ/kg, cal/mL, kcal/mL, J/mL, or kJ/mL. Energy density measures the energy released when the food is metabolized by a healthy organism when it ingests the food (see food energy for calculation). In aerobic environments, this typically requires oxygen as an input and generates waste products such as carbon dioxide and water. Besides alcohol, the only sources of food energy are carbohydrates, fats and proteins, which make up ninety percent of the dry weight of food. Therefore, water content is the most important factor in computing energy density. 
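The label calculation described above is a simple division; for instance (the serving figures below are hypothetical, not from any real label):

```python
# Energy density from a food label: energy per serving / serving size.
energy_per_serving_kcal = 150.0   # "Calories" per serving (hypothetical)
serving_size_g = 30.0             # serving size in grams (hypothetical)

kcal_per_g = energy_per_serving_kcal / serving_size_g
kj_per_g = kcal_per_g * 4.184     # 1 kcal = 4.184 kJ

print(round(kcal_per_g, 1), round(kj_per_g, 1))  # 5.0 kcal/g, 20.9 kJ/g
```

By the classification given below, a food at 5 kcal/g (about 21 kJ/g) would count as energy dense.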
In general, proteins have lower energy densities (≈16 kJ/g) than carbohydrates (≈17 kJ/g), whereas fats provide much higher energy densities (≈38 kJ/g), 2+1⁄4 times as much energy. Fats contain more carbon-carbon and carbon-hydrogen bonds than carbohydrates or proteins, yielding higher energy density. Foods that derive most of their energy from fat have a much higher energy density than those that derive most of their energy from carbohydrates or proteins, even if the water content is the same. Nutrients with a lower absorption, such as fiber or sugar alcohols, lower the energy density of foods as well. A moderate energy density would be 1.6 to 3 calories per gram (7–13 kJ/g); salmon, lean meat, and bread would fall in this category. Foods with high energy density have more than three calories per gram (>13 kJ/g) and include crackers, cheese, chocolate, nuts, and fried foods like potato or tortilla chips. == Fuel == Energy density is sometimes more useful than specific energy for comparing fuels. For example, liquid hydrogen fuel has a higher specific energy (energy per unit mass) than gasoline does, but a much lower volumetric energy density. == Astrodynamics == Specific mechanical energy, rather than simply energy, is often used in astrodynamics, because gravity changes the kinetic and potential specific energies of a vehicle in ways that are independent of the mass of the vehicle, consistent with the conservation of energy in a Newtonian gravitational system. The specific energy of an object such as a meteoroid falling on the Earth from outside the Earth's gravitational well is at least one half the square of the escape velocity of 11.2 km/s. This comes to 63 MJ/kg (15 kcal/g, or 15 tonnes TNT equivalent per tonne). Comets have even more energy, typically moving with respect to the Sun, when in our vicinity, at about the square root of two times the speed of the Earth. This comes to 42 km/s, or a specific energy of 882 MJ/kg. 
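The figures quoted in this section are all instances of specific kinetic energy, ½v²; they can be reproduced directly:

```python
# Specific kinetic energy is v^2 / 2, in J/kg for v in m/s.
def specific_energy_mj_per_kg(v_km_s):
    v = v_km_s * 1000.0     # km/s -> m/s
    return 0.5 * v * v / 1e6  # J/kg -> MJ/kg

print(round(specific_energy_mj_per_kg(11.2)))  # ~63, from escape speed
print(round(specific_energy_mj_per_kg(42.0)))  # 882, comet near Earth's orbit
```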
The speed relative to the Earth may be more or less, depending on direction. Since the speed of the Earth around the Sun is about 30 km/s, a comet's speed relative to the Earth can range from 12 to 72 km/s, the latter corresponding to 2592 MJ/kg. If a comet with this speed fell to the Earth it would gain another 63 MJ/kg, yielding a total of 2655 MJ/kg with a speed of 72.9 km/s. Since the equator is moving at about 0.5 km/s, the impact speed has an upper limit of 73.4 km/s, giving an upper limit for the specific energy of a comet hitting the Earth of about 2690 MJ/kg. If the Hale-Bopp comet (50 km in diameter) had hit Earth, it would have vaporized the oceans and sterilized the surface of Earth. == Miscellaneous == Kinetic energy per unit mass: ⁠1/2⁠v2, where v is the speed (giving J/kg when v is in m/s). See also kinetic energy per unit mass of projectiles. Potential energy with respect to gravity, close to Earth, per unit mass: gh, where g is the acceleration due to gravity (standardized as ≈9.8 m/s2) and h is the height above the reference level (giving J/kg when g is in m/s2 and h is in m). Heat: energies per unit mass are specific heat capacity times temperature difference, and specific melting heat, and specific heat of vaporization == See also == Energy density, which has tables of specific energies of devices and materials Power-to-weight ratio Heat of combustion Specific orbital energy Orders of magnitude (energy) == References == Çengel, Yunus A.; Turner, Robert H. (2005). Fundamentals of Thermal-Fluid Sciences. McGraw Hill. ISBN 0-07-297675-6.
Wikipedia/Specific_energy
The Course of Theoretical Physics is a ten-volume series of books covering theoretical physics that was initiated by Lev Landau and written in collaboration with his student Evgeny Lifshitz starting in the late 1930s. It is said that Landau composed much of the series in his head while in an NKVD prison in 1938–1939. However, almost all of the actual writing of the early volumes was done by Lifshitz, giving rise to the witticism, "not a word of Landau and not a thought of Lifshitz". The first eight volumes were finished in the 1950s, written in Russian and translated into English in the late 1950s by John Stewart Bell, together with John Bradbury Sykes, M. J. Kearsley, and W. H. Reid. The last two volumes were written in the early 1980s. Vladimir Berestetskii and Lev Pitaevskii also contributed to the series. The series is often referred to as "Landau and Lifshitz", "Landafshitz" (Russian: "Ландафшиц"), or "Lanlifshitz" (Russian: "Ланлифшиц") in informal settings. == Impact == The presentation of material is advanced and typically considered suitable for graduate-level study. Despite this specialized character, it is estimated that a million volumes of the Course were sold by 2005. The series has been called "renowned" in Science and "celebrated" in American Scientist. A note in Mathematical Reviews states, "The usefulness and the success of this course have been proved by the great number of successive editions in Russian, English, French, German and other languages." At a centenary celebration of Landau's career, it was observed that the Course had shown "unprecedented longevity." In 1962, Landau and Lifshitz were awarded the Lenin Prize for their work on the Course. This was the first occasion on which the Lenin Prize had been awarded for the teaching of physics. == English editions == The following list does not include reprints and revised editions. === Volume 1 === Landau, Lev D.; Lifshitz, Evgeny M. (1960). Mechanics. Vol. 1 (1st ed.). Pergamon Press. 
ASIN B0006AWV88. Landau, Lev D.; Lifshitz, Evgeny M. (1969). Mechanics. Vol. 1 (2nd ed.). Pergamon Press. ISBN 978-0-201-04146-0. Landau, Lev D.; Lifshitz, Evgeny M. (1976). Mechanics. Vol. 1 (3rd ed.). Butterworth-Heinemann. ISBN 978-0-7506-2896-9. Volume 1 covers classical mechanics without special or general relativity, in the Lagrangian and Hamiltonian formalisms. === Volume 2 === Landau, Lev D.; Lifshitz, Evgeny M. (1951). The Classical Theory of Fields. Vol. 2 (1st ed.). Addison-Wesley. ASIN B0007G5B42. Landau, Lev D.; Lifshitz, Evgeny M. (1959). The Classical Theory of Fields. Vol. 2 (2nd ed.). Pergamon Press. Landau, Lev D.; Lifshitz, Evgeny M. (1971). The Classical Theory of Fields. Vol. 2 (3rd ed.). Pergamon Press. ISBN 978-0-08-016019-1. Landau, Lev D.; Lifshitz, Evgeny M. (1975). The Classical Theory of Fields. Vol. 2 (4th ed.). Butterworth-Heinemann. ISBN 978-0-7506-2768-9. Volume 2 covers relativistic mechanics of particles, and classical field theory for fields, specifically special relativity and electromagnetism, general relativity and gravitation. === Volume 3 === Landau, Lev D.; Lifshitz, Evgeny M. (1958). Quantum Mechanics: Non-Relativistic Theory. Vol. 3 (1st ed.). Pergamon Press. Landau, Lev D.; Lifshitz, Evgeny M. (1965). Quantum Mechanics: Non-Relativistic Theory. Vol. 3 (2nd ed.). Pergamon Press. Landau, Lev D.; Lifshitz, Evgeny M. (1977). Quantum Mechanics: Non-Relativistic Theory. Vol. 3 (3rd ed.). Pergamon Press. ISBN 978-0-08-020940-1. Volume 3 covers quantum mechanics without special relativity. === Volume 4 === Berestetskii, Vladimir B.; Lifshitz, Evgeny M.; Pitaevskii, Lev P. (1971). Relativistic Quantum Theory. Vol. 4 (1st ed.). Pergamon Press. ISBN 978-0-08-017175-3. Berestetskii, Vladimir B.; Lifshitz, Evgeny M.; Pitaevskii, Lev P. (1982). Quantum Electrodynamics. Vol. 4 (2nd ed.). Butterworth-Heinemann. ISBN 978-0-7506-3371-0. The original edition comprised two books, labelled part 1 and part 2. 
The first covered general aspects of relativistic quantum mechanics and relativistic quantum field theory, leading onto quantum electrodynamics. The second continued with quantum electrodynamics and what was then known about the strong and weak interactions. These books were published in the early 1970s, at a time when the strong and weak forces were still not well understood. In the second edition, the corresponding sections were scrapped and replaced with more topics in the well-established quantum electrodynamics, and the two parts were unified into one, thus providing a one-volume exposition on relativistic quantum field theory with the electromagnetic interaction as the prototype of a quantum field theory. === Volume 5 === Statistical Physics. Vol. 5 (1st ed.). 1951. Early version: Landau, Lev D. (1938). Statistical Physics. Clarendon Press. ASIN B00085BKZG. Statistical Physics. Vol. 5 (2nd ed.). 1968. Landau, Lev D.; Lifshitz, Evgeny M. (1980). Statistical Physics. Vol. 5 (3rd ed.). Butterworth-Heinemann. ISBN 978-0-7506-3372-7. Volume 5 covers general statistical mechanics and thermodynamics and applications, including chemical reactions, phase transitions, and condensed matter physics. === Volume 6 === Landau, Lev D.; Lifshitz, Evgeny M. (1959). Fluid Mechanics. Vol. 6 (1st ed.). Pergamon Press. ISBN 978-0-08-009104-4. Landau, Lev D.; Lifshitz, Evgeny M. (1987). Fluid Mechanics. Vol. 6 (2nd ed.). Butterworth-Heinemann. ISBN 978-0-08-033933-7. Volume 6 covers fluid mechanics in a condensed but varied exposition, from ideal to viscous fluids, includes a chapter on relativistic fluid mechanics, and another on superfluids. === Volume 7 === Landau, Lev D.; Lifshitz, Evgeny M. (1959). Theory of Elasticity. Vol. 7 (1st ed.). Pergamon Press. Landau, Lev D.; Lifshitz, Evgeny M. (1970). Theory of Elasticity. Vol. 7 (2nd ed.). Pergamon Press. ISBN 978-0-08-006465-9. Landau, Lev D.; Lifshitz, Evgeny M. (1986). 
Theory of Elasticity. Vol. 7 (3rd ed.). Butterworth-Heinemann. ISBN 978-0-7506-2633-0. Volume 7 covers elasticity theory of solids, including viscous solids, vibrations and waves in crystals with dislocations, and a chapter on the mechanics of liquid crystals. === Volume 8 === Landau, Lev D.; Lifshitz, Evgeny M. (1960). Electrodynamics of Continuous Media. Vol. 8 (1st ed.). Pergamon Press. ISBN 978-0-08-009105-1. Landau, Lev D.; Lifshitz, Evgeny M.; Pitaevskii, Lev P. (1984). Electrodynamics of Continuous Media. Vol. 8 (2nd ed.). Butterworth-Heinemann. ISBN 978-0-7506-2634-7. Volume 8 covers electromagnetism in materials, and includes a variety of topics in condensed matter physics, a chapter on magnetohydrodynamics, and another on nonlinear optics. === Volume 9 === Lifshitz, Evgeny M.; Pitaevskii, Lev P. (1980). Statistical Physics, Part 2: Theory of the Condensed State. Vol. 9 (1st ed.). Butterworth-Heinemann. ISBN 978-0-7506-2636-1. Volume 9 builds on the original statistical physics book, with more applications to condensed matter theory. === Volume 10 === Lifshitz, Evgeny M.; Pitaevskii, Lev P. (1981). Physical Kinetics. Vol. 10 (1st ed.). Pergamon Press. ISBN 978-0-7506-2635-4. Volume 10 presents various applications of kinetic theory to condensed matter theory, and to metals, insulators, and phase transitions. == See also == Lectures on Theoretical Physics List of textbooks on classical and quantum mechanics List of textbooks in thermodynamics and statistical mechanics List of textbooks in electromagnetism The Theoretical Minimum == Notes == == External links == Internet Archive: "Internet Archive". Retrieved 2013-11-02. (for volumes 1, 2, 3, 6, 7, 8) and "Internet Archive". Retrieved 2013-11-02. (for volume 4), and "Internet Archive". Internet Archive. 1969. Retrieved 2016-08-10. (for volume 5). Britannica Online: Course of Theoretical Physics Internet Archive: Landau-Lifschitz Vol. 1-10
Wikipedia/Course_of_Theoretical_Physics
In mathematics, the Legendre transformation (or Legendre transform), first introduced by Adrien-Marie Legendre in 1787 when studying the minimal surface problem, is an involutive transformation on real-valued functions that are convex on a real variable. Specifically, if a real-valued multivariable function is convex on one of its independent real variables, then the Legendre transform with respect to this variable is applicable to the function. In physical problems, the Legendre transform is used to convert functions of one quantity (such as position, pressure, or temperature) into functions of the conjugate quantity (momentum, volume, and entropy, respectively). In this way, it is commonly used in classical mechanics to derive the Hamiltonian formalism out of the Lagrangian formalism (or vice versa) and in thermodynamics to derive the thermodynamic potentials, as well as in the solution of differential equations of several variables. For sufficiently smooth functions on the real line, the Legendre transform f ∗ {\displaystyle f^{*}} of a function f {\displaystyle f} can be specified, up to an additive constant, by the condition that the functions' first derivatives are inverse functions of each other. This can be expressed in Euler's derivative notation as D f ( ⋅ ) = ( D f ∗ ) − 1 ( ⋅ ) , {\displaystyle Df(\cdot )=\left(Df^{*}\right)^{-1}(\cdot )~,} where D {\displaystyle D} is an operator of differentiation, ⋅ {\displaystyle \cdot } represents an argument or input to the associated function, ( ϕ ) − 1 ( ⋅ ) {\displaystyle (\phi )^{-1}(\cdot )} is an inverse function such that ( ϕ ) − 1 ( ϕ ( x ) ) = x {\displaystyle (\phi )^{-1}(\phi (x))=x} , or equivalently, as f ′ ( f ∗ ′ ( x ∗ ) ) = x ∗ {\displaystyle f'(f^{*\prime }(x^{*}))=x^{*}} and f ∗ ′ ( f ′ ( x ) ) = x {\displaystyle f^{*\prime }(f'(x))=x} in Lagrange's notation. 
The generalization of the Legendre transformation to affine spaces and non-convex functions is known as the convex conjugate (also called the Legendre–Fenchel transformation), which can be used to construct a function's convex hull. == Definition == === Definition in one-dimensional real space === Let I ⊂ R {\displaystyle I\subset \mathbb {R} } be an interval, and f : I → R {\displaystyle f:I\to \mathbb {R} } a convex function; then the Legendre transform of f {\displaystyle f} is the function f ∗ : I ∗ → R {\displaystyle f^{*}:I^{*}\to \mathbb {R} } defined by f ∗ ( x ∗ ) = sup x ∈ I ( x ∗ x − f ( x ) ) , I ∗ = { x ∗ ∈ R : sup x ∈ I ( x ∗ x − f ( x ) ) < ∞ } {\displaystyle f^{*}(x^{*})=\sup _{x\in I}(x^{*}x-f(x)),\ \ \ \ I^{*}=\left\{x^{*}\in \mathbb {R} :\sup _{x\in I}(x^{*}x-f(x))<\infty \right\}} where sup {\textstyle \sup } denotes the supremum over I {\displaystyle I} , e.g., x {\textstyle x} in I {\textstyle I} is chosen such that x ∗ x − f ( x ) {\textstyle x^{*}x-f(x)} is maximized at each x ∗ {\textstyle x^{*}} , or x ∗ {\textstyle x^{*}} is such that x ∗ x − f ( x ) {\displaystyle x^{*}x-f(x)} has a bounded value throughout I {\textstyle I} (e.g., when f ( x ) {\displaystyle f(x)} is a linear function). The function f ∗ {\displaystyle f^{*}} is called the convex conjugate function of f {\displaystyle f} . For historical reasons (rooted in analytic mechanics), the conjugate variable is often denoted p {\displaystyle p} , instead of x ∗ {\displaystyle x^{*}} . If the convex function f {\displaystyle f} is defined on the whole line and is everywhere differentiable, then f ∗ ( p ) = sup x ∈ I ( p x − f ( x ) ) = ( p x − f ( x ) ) | x = ( f ′ ) − 1 ( p ) {\displaystyle f^{*}(p)=\sup _{x\in I}(px-f(x))=\left(px-f(x)\right)|_{x=(f')^{-1}(p)}} can be interpreted as the negative of the y {\displaystyle y} -intercept of the tangent line to the graph of f {\displaystyle f} that has slope p {\displaystyle p} . 
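The defining supremum can be evaluated numerically by maximizing over a grid. The sketch below is only an illustration of the definition (the grid, the test function f(x) = x², and the tolerances are choices made here, not part of the text); for this f the exact transform is f*(p) = p²/4.

```python
import numpy as np

def legendre_transform(f, p, xs):
    """Approximate f*(p) = sup_x (p*x - f(x)) by maximizing over the grid xs."""
    return np.max(p * xs - f(xs))

# Illustrative convex function f(x) = x**2; its exact transform is f*(p) = p**2 / 4.
xs = np.linspace(-10.0, 10.0, 20001)
f = lambda x: x**2

for p in (-3.0, 0.0, 1.5, 4.0):
    assert abs(legendre_transform(f, p, xs) - p**2 / 4) < 1e-5
```

Because the objective p·x − f(x) is concave in x, the grid maximum is within O(Δx²) of the true supremum whenever the maximizer lies inside the grid.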
=== Definition in n-dimensional real space === The generalization to convex functions f : X → R {\displaystyle f:X\to \mathbb {R} } on a convex set X ⊂ R n {\displaystyle X\subset \mathbb {R} ^{n}} is straightforward: f ∗ : X ∗ → R {\displaystyle f^{*}:X^{*}\to \mathbb {R} } has domain X ∗ = { x ∗ ∈ R n : sup x ∈ X ( ⟨ x ∗ , x ⟩ − f ( x ) ) < ∞ } {\displaystyle X^{*}=\left\{x^{*}\in \mathbb {R} ^{n}:\sup _{x\in X}(\langle x^{*},x\rangle -f(x))<\infty \right\}} and is defined by f ∗ ( x ∗ ) = sup x ∈ X ( ⟨ x ∗ , x ⟩ − f ( x ) ) , x ∗ ∈ X ∗ , {\displaystyle f^{*}(x^{*})=\sup _{x\in X}(\langle x^{*},x\rangle -f(x)),\quad x^{*}\in X^{*}~,} where ⟨ x ∗ , x ⟩ {\displaystyle \langle x^{*},x\rangle } denotes the dot product of x ∗ {\displaystyle x^{*}} and x {\displaystyle x} . The Legendre transformation is an application of the duality relationship between points and lines. The functional relationship specified by f {\displaystyle f} can be represented equally well as a set of ( x , y ) {\displaystyle (x,y)} points, or as a set of tangent lines specified by their slope and intercept values. === Understanding the Legendre transform in terms of derivatives === For a differentiable convex function f {\displaystyle f} on the real line with the first derivative f ′ {\displaystyle f'} and its inverse ( f ′ ) − 1 {\displaystyle (f')^{-1}} , the Legendre transform of f {\displaystyle f} , f ∗ {\displaystyle f^{*}} , can be specified, up to an additive constant, by the condition that the functions' first derivatives are inverse functions of each other, i.e., f ′ = ( ( f ∗ ) ′ ) − 1 {\displaystyle f'=((f^{*})')^{-1}} and ( f ∗ ) ′ = ( f ′ ) − 1 {\displaystyle (f^{*})'=(f')^{-1}} . 
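This characterization gives a direct recipe for computing the transform: with g = (f′)⁻¹, evaluate f*(p) = p·g(p) − f(g(p)). A minimal sketch (the choice f(x) = eˣ, for which g(p) = ln p and f*(p) = p(ln p − 1), is an illustration made here, not taken from the text):

```python
import math

# For f(x) = exp(x): f'(x) = exp(x), so g = (f')^{-1} is the natural logarithm.
f = math.exp
g = math.log

def legendre_via_derivative(p):
    """f*(p) = p*g(p) - f(g(p)), evaluated at the critical point x = g(p)."""
    x = g(p)
    return p * x - f(x)

# Check against the closed form f*(p) = p*(ln p - 1); note (f*)'(p) = ln p = g(p).
for p in (0.5, 1.0, math.e, 10.0):
    assert abs(legendre_via_derivative(p) - p * (math.log(p) - 1)) < 1e-9
```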
To see this, first note that if f {\displaystyle f} , as a convex function on the real line, is differentiable and x ¯ {\displaystyle {\overline {x}}} is a critical point of the function x ↦ p ⋅ x − f ( x ) {\displaystyle x\mapsto p\cdot x-f(x)} , then the supremum is achieved at x ¯ {\textstyle {\overline {x}}} (by convexity; see the first figure of this page). Therefore, the Legendre transform of f {\displaystyle f} is f ∗ ( p ) = p ⋅ x ¯ − f ( x ¯ ) {\displaystyle f^{*}(p)=p\cdot {\overline {x}}-f({\overline {x}})} . Then, suppose that the first derivative f ′ {\displaystyle f'} is invertible and let the inverse be g = ( f ′ ) − 1 {\displaystyle g=(f')^{-1}} . Then for each p {\textstyle p} , the point g ( p ) {\displaystyle g(p)} is the unique critical point x ¯ {\textstyle {\overline {x}}} of the function x ↦ p x − f ( x ) {\displaystyle x\mapsto px-f(x)} (i.e., x ¯ = g ( p ) {\displaystyle {\overline {x}}=g(p)} ) because f ′ ( g ( p ) ) = p {\displaystyle f'(g(p))=p} and the function's first derivative with respect to x {\displaystyle x} at g ( p ) {\displaystyle g(p)} is p − f ′ ( g ( p ) ) = 0 {\displaystyle p-f'(g(p))=0} . Hence we have f ∗ ( p ) = p ⋅ g ( p ) − f ( g ( p ) ) {\displaystyle f^{*}(p)=p\cdot g(p)-f(g(p))} for each p {\textstyle p} . By differentiating with respect to p {\textstyle p} , we find ( f ∗ ) ′ ( p ) = g ( p ) + p ⋅ g ′ ( p ) − f ′ ( g ( p ) ) ⋅ g ′ ( p ) . {\displaystyle (f^{*})'(p)=g(p)+p\cdot g'(p)-f'(g(p))\cdot g'(p).} Since f ′ ( g ( p ) ) = p {\displaystyle f'(g(p))=p} , this simplifies to ( f ∗ ) ′ ( p ) = g ( p ) = ( f ′ ) − 1 ( p ) {\displaystyle (f^{*})'(p)=g(p)=(f')^{-1}(p)} . In other words, ( f ∗ ) ′ {\displaystyle (f^{*})'} and f ′ {\displaystyle f'} are inverses of each other. In general, if h ′ = ( f ′ ) − 1 {\displaystyle h'=(f')^{-1}} is the inverse of f ′ , {\displaystyle f',} then h ′ = ( f ∗ ) ′ {\displaystyle h'=(f^{*})'} , so integration gives f ∗ = h + c . {\displaystyle f^{*}=h+c.} with a constant c .
{\displaystyle c.} In practical terms, given f ( x ) , {\displaystyle f(x),} the parametric plot of x f ′ ( x ) − f ( x ) {\displaystyle xf'(x)-f(x)} versus f ′ ( x ) {\displaystyle f'(x)} amounts to the graph of f ∗ ( p ) {\displaystyle f^{*}(p)} versus p . {\displaystyle p.} In some cases (e.g. thermodynamic potentials, below), a non-standard requirement is used, amounting to an alternative definition of f * with a minus sign, f ( x ) − f ∗ ( p ) = x p . {\displaystyle f(x)-f^{*}(p)=xp.} === Formal definition in physics context === In analytical mechanics and thermodynamics, Legendre transformation is usually defined as follows: suppose f {\displaystyle f} is a function of x {\displaystyle x} ; then we have d f = d f d x d x . {\displaystyle \mathrm {d} f={\frac {\mathrm {d} f}{\mathrm {d} x}}\mathrm {d} x.} Performing the Legendre transformation on this function means that we take p = d f d x {\displaystyle p={\frac {\mathrm {d} f}{\mathrm {d} x}}} as the independent variable, so that the above expression can be written as d f = p d x , {\displaystyle \mathrm {d} f=p\mathrm {d} x,} and according to Leibniz's rule d ( u v ) = u d v + v d u , {\displaystyle \mathrm {d} (uv)=u\mathrm {d} v+v\mathrm {d} u,} we then have d ( x p − f ) = x d p + p d x − d f = x d p , {\displaystyle \mathrm {d} \left(xp-f\right)=x\mathrm {d} p+p\mathrm {d} x-\mathrm {d} f=x\mathrm {d} p,} and taking f ∗ = x p − f , {\displaystyle f^{*}=xp-f,} we have d f ∗ = x d p , {\displaystyle \mathrm {d} f^{*}=x\mathrm {d} p,} which means d f ∗ d p = x . 
{\displaystyle {\frac {\mathrm {d} f^{*}}{\mathrm {d} p}}=x.} When f {\displaystyle f} is a function of n {\displaystyle n} variables x 1 , x 2 , ⋯ , x n {\displaystyle x_{1},x_{2},\cdots ,x_{n}} , then we can perform the Legendre transformation on each one or several variables: we have d f = p 1 d x 1 + p 2 d x 2 + ⋯ + p n d x n , {\displaystyle \mathrm {d} f=p_{1}\mathrm {d} x_{1}+p_{2}\mathrm {d} x_{2}+\cdots +p_{n}\mathrm {d} x_{n},} where p i = ∂ f ∂ x i . {\displaystyle p_{i}={\frac {\partial f}{\partial x_{i}}}.} Then if we want to perform the Legendre transformation on, e.g. x 1 {\displaystyle x_{1}} , then we take p 1 {\displaystyle p_{1}} together with x 2 , ⋯ , x n {\displaystyle x_{2},\cdots ,x_{n}} as independent variables, and with Leibniz's rule we have d ( f − x 1 p 1 ) = − x 1 d p 1 + p 2 d x 2 + ⋯ + p n d x n . {\displaystyle \mathrm {d} (f-x_{1}p_{1})=-x_{1}\mathrm {d} p_{1}+p_{2}\mathrm {d} x_{2}+\cdots +p_{n}\mathrm {d} x_{n}.} So for the function φ ( p 1 , x 2 , ⋯ , x n ) = f ( x 1 , x 2 , ⋯ , x n ) − x 1 p 1 , {\displaystyle \varphi (p_{1},x_{2},\cdots ,x_{n})=f(x_{1},x_{2},\cdots ,x_{n})-x_{1}p_{1},} we have ∂ φ ∂ p 1 = − x 1 , ∂ φ ∂ x 2 = p 2 , ⋯ , ∂ φ ∂ x n = p n . {\displaystyle {\frac {\partial \varphi }{\partial p_{1}}}=-x_{1},\quad {\frac {\partial \varphi }{\partial x_{2}}}=p_{2},\quad \cdots ,\quad {\frac {\partial \varphi }{\partial x_{n}}}=p_{n}.} We can also do this transformation for variables x 2 , ⋯ , x n {\displaystyle x_{2},\cdots ,x_{n}} . If we do it to all the variables, then we have d φ = − x 1 d p 1 − x 2 d p 2 − ⋯ − x n d p n {\displaystyle \mathrm {d} \varphi =-x_{1}\mathrm {d} p_{1}-x_{2}\mathrm {d} p_{2}-\cdots -x_{n}\mathrm {d} p_{n}} where φ = f − x 1 p 1 − x 2 p 2 − ⋯ − x n p n . 
{\displaystyle \varphi =f-x_{1}p_{1}-x_{2}p_{2}-\cdots -x_{n}p_{n}.} In analytical mechanics, this transformation is performed on the variables q ˙ 1 , q ˙ 2 , ⋯ , q ˙ n {\displaystyle {\dot {q}}_{1},{\dot {q}}_{2},\cdots ,{\dot {q}}_{n}} of the Lagrangian L ( q 1 , ⋯ , q n , q ˙ 1 , ⋯ , q ˙ n ) {\displaystyle L(q_{1},\cdots ,q_{n},{\dot {q}}_{1},\cdots ,{\dot {q}}_{n})} to get the Hamiltonian: H ( q 1 , ⋯ , q n , p 1 , ⋯ , p n ) = ∑ i = 1 n p i q ˙ i − L ( q 1 , ⋯ , q n , q ˙ 1 , ⋯ , q ˙ n ) . {\displaystyle H(q_{1},\cdots ,q_{n},p_{1},\cdots ,p_{n})=\sum _{i=1}^{n}p_{i}{\dot {q}}_{i}-L(q_{1},\cdots ,q_{n},{\dot {q}}_{1},\cdots ,{\dot {q}}_{n}).} In thermodynamics, the transformation is performed on variables according to the type of thermodynamic system under study; for example, starting from the cardinal function of state, the internal energy U ( S , V ) {\displaystyle U(S,V)} , we have d U = T d S − p d V , {\displaystyle \mathrm {d} U=T\mathrm {d} S-p\mathrm {d} V,} so we can perform the Legendre transformation on either or both of S , V {\displaystyle S,V} to yield d H = d ( U + p V ) = T d S + V d p {\displaystyle \mathrm {d} H=\mathrm {d} (U+pV)=T\mathrm {d} S+V\mathrm {d} p} d F = d ( U − T S ) = − S d T − p d V {\displaystyle \mathrm {d} F=\mathrm {d} (U-TS)=-S\mathrm {d} T-p\mathrm {d} V} d G = d ( U − T S + p V ) = − S d T + V d p , {\displaystyle \mathrm {d} G=\mathrm {d} (U-TS+pV)=-S\mathrm {d} T+V\mathrm {d} p,} and each of these three expressions has a physical meaning. This definition of the Legendre transformation is the one originally introduced by Legendre in his work of 1787, and it is still applied by physicists today.
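As a minimal illustration of the mechanics case, the sketch below builds the Hamiltonian of a one-dimensional particle from its Lagrangian via H = p·v − L, eliminating the velocity through p = ∂L/∂v = m·v. The mass value and the potential are arbitrary choices made here for the example.

```python
m = 2.0                            # illustrative mass
V = lambda q: 0.5 * q**2           # illustrative potential energy

def lagrangian(q, v):
    return 0.5 * m * v**2 - V(q)

def hamiltonian(q, p):
    """Legendre transform of L in v: H = p*v - L(q, v), with v = p/m from p = m*v."""
    v = p / m
    return p * v - lagrangian(q, v)

# The result should be kinetic plus potential energy, p**2/(2m) + V(q).
for q, p in ((0.5, 1.0), (2.0, -3.0)):
    assert abs(hamiltonian(q, p) - (p**2 / (2 * m) + V(q))) < 1e-12
```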
Indeed, this definition can be made mathematically rigorous if we treat all the variables and functions defined above: for example, f , x 1 , ⋯ , x n , p 1 , ⋯ , p n , {\displaystyle f,x_{1},\cdots ,x_{n},p_{1},\cdots ,p_{n},} as differentiable functions defined on an open set of R n {\displaystyle \mathbb {R} ^{n}} or on a differentiable manifold, and d f , d x i , d p i {\displaystyle \mathrm {d} f,\mathrm {d} x_{i},\mathrm {d} p_{i}} their differentials (which are treated as cotangent vector fields in the context of differentiable manifolds). This definition is equivalent to the modern mathematicians' definition as long as f {\displaystyle f} is differentiable and convex in the variables x 1 , x 2 , ⋯ , x n . {\displaystyle x_{1},x_{2},\cdots ,x_{n}.} == Properties == The Legendre transform of a convex function whose second derivative is everywhere positive is itself a convex function with everywhere-positive second derivative. Proof. Let us show this for a twice-differentiable function f ( x ) {\displaystyle f(x)} with everywhere-positive second derivative and a bijective (invertible) derivative. For a fixed p {\displaystyle p} , let x ¯ {\displaystyle {\bar {x}}} maximize, or make bounded, the function p x − f ( x ) {\displaystyle px-f(x)} over x {\displaystyle x} . Then the Legendre transformation of f {\displaystyle f} is f ∗ ( p ) = p x ¯ − f ( x ¯ ) {\displaystyle f^{*}(p)=p{\bar {x}}-f({\bar {x}})} ; thus f ′ ( x ¯ ) = p {\displaystyle f'({\bar {x}})=p} by the maximizing or bounding condition d d x ( p x − f ( x ) ) = p − f ′ ( x ) = 0 {\displaystyle {\frac {d}{dx}}(px-f(x))=p-f'(x)=0} . Note that x ¯ {\displaystyle {\bar {x}}} depends on p {\displaystyle p} . (This can be seen visually in the first figure of this page above.)
Thus x ¯ = g ( p ) {\displaystyle {\bar {x}}=g(p)} where g ≡ ( f ′ ) − 1 {\displaystyle g\equiv (f')^{-1}} , meaning that g {\displaystyle g} is the inverse of the derivative f ′ {\displaystyle f'} of f {\displaystyle f} (so f ′ ( g ( p ) ) = p {\displaystyle f'(g(p))=p} ). Note that g {\displaystyle g} is also differentiable, with the following derivative (by the inverse function rule): d g ( p ) d p = 1 f ″ ( g ( p ) ) . {\displaystyle {\frac {dg(p)}{dp}}={\frac {1}{f''(g(p))}}~.} Thus, the Legendre transformation f ∗ ( p ) = p g ( p ) − f ( g ( p ) ) {\displaystyle f^{*}(p)=pg(p)-f(g(p))} is the composition of differentiable functions, hence it is differentiable. Applying the product rule and the chain rule with the equality x ¯ = g ( p ) {\displaystyle {\bar {x}}=g(p)} yields d ( f ∗ ) d p = g ( p ) + ( p − f ′ ( g ( p ) ) ) ⋅ d g ( p ) d p = g ( p ) , {\displaystyle {\frac {d(f^{*})}{dp}}=g(p)+\left(p-f'(g(p))\right)\cdot {\frac {dg(p)}{dp}}=g(p),} giving d 2 ( f ∗ ) d p 2 = d g ( p ) d p = 1 f ″ ( g ( p ) ) > 0 , {\displaystyle {\frac {d^{2}(f^{*})}{dp^{2}}}={\frac {dg(p)}{dp}}={\frac {1}{f''(g(p))}}>0,} so f ∗ {\displaystyle f^{*}} is convex, with everywhere-positive second derivative. The Legendre transformation is an involution, i.e., f ∗ ∗ = f {\displaystyle f^{**}=f~} . Proof. By using the above identities f ′ ( x ¯ ) = p {\displaystyle f'({\bar {x}})=p} , x ¯ = g ( p ) {\displaystyle {\bar {x}}=g(p)} , f ∗ ( p ) = p x ¯ − f ( x ¯ ) {\displaystyle f^{*}(p)=p{\bar {x}}-f({\bar {x}})} and its derivative ( f ∗ ) ′ ( p ) = g ( p ) {\displaystyle (f^{*})'(p)=g(p)} , f ∗ ∗ ( y ) = ( y ⋅ p ¯ − f ∗ ( p ¯ ) ) | ( f ∗ ) ′ ( p ¯ ) = y = g ( p ¯ ) ⋅ p ¯ − f ∗ ( p ¯ ) = g ( p ¯ ) ⋅ p ¯ − ( p ¯ g ( p ¯ ) − f ( g ( p ¯ ) ) ) = f ( g ( p ¯ ) ) = f ( y ) .
{\displaystyle {\begin{aligned}f^{**}(y)&{}=\left(y\cdot {\bar {p}}-f^{*}({\bar {p}})\right)|_{(f^{*})'({\bar {p}})=y}\\[5pt]&{}=g({\bar {p}})\cdot {\bar {p}}-f^{*}({\bar {p}})\\[5pt]&{}=g({\bar {p}})\cdot {\bar {p}}-({\bar {p}}g({\bar {p}})-f(g({\bar {p}})))\\[5pt]&{}=f(g({\bar {p}}))\\[5pt]&{}=f(y)~.\end{aligned}}} Note that this derivation does not require the second derivative of the original function f {\displaystyle f} to be everywhere positive. == Identities == As shown above, for a convex function f ( x ) {\displaystyle f(x)} , with x = x ¯ {\displaystyle x={\bar {x}}} maximizing or making p x − f ( x ) {\displaystyle px-f(x)} bounded at each p {\displaystyle p} to define the Legendre transform f ∗ ( p ) = p x ¯ − f ( x ¯ ) {\displaystyle f^{*}(p)=p{\bar {x}}-f({\bar {x}})} and with g ≡ ( f ′ ) − 1 {\displaystyle g\equiv (f')^{-1}} , the following identities hold: f ′ ( x ¯ ) = p {\displaystyle f'({\bar {x}})=p} , x ¯ = g ( p ) {\displaystyle {\bar {x}}=g(p)} , ( f ∗ ) ′ ( p ) = g ( p ) {\displaystyle (f^{*})'(p)=g(p)} . == Examples == === Example 1 === Consider the exponential function f ( x ) = e x , {\displaystyle f(x)=e^{x},} which has the domain I = R {\displaystyle I=\mathbb {R} } . From the definition, the Legendre transform is f ∗ ( x ∗ ) = sup x ∈ R ( x ∗ x − e x ) , x ∗ ∈ I ∗ {\displaystyle f^{*}(x^{*})=\sup _{x\in \mathbb {R} }(x^{*}x-e^{x}),\quad x^{*}\in I^{*}} where I ∗ {\displaystyle I^{*}} remains to be determined. To evaluate the supremum, compute the derivative of x ∗ x − e x {\displaystyle x^{*}x-e^{x}} with respect to x {\displaystyle x} and set it equal to zero: d d x ( x ∗ x − e x ) = x ∗ − e x = 0. {\displaystyle {\frac {d}{dx}}(x^{*}x-e^{x})=x^{*}-e^{x}=0.} The second derivative − e x {\displaystyle -e^{x}} is negative everywhere, so the maximal value is achieved at x = ln ⁡ ( x ∗ ) {\displaystyle x=\ln(x^{*})} .
Thus, the Legendre transform is f ∗ ( x ∗ ) = x ∗ ln ⁡ ( x ∗ ) − e ln ⁡ ( x ∗ ) = x ∗ ( ln ⁡ ( x ∗ ) − 1 ) {\displaystyle f^{*}(x^{*})=x^{*}\ln(x^{*})-e^{\ln(x^{*})}=x^{*}(\ln(x^{*})-1)} and has domain I ∗ = ( 0 , ∞ ) . {\displaystyle I^{*}=(0,\infty ).} This illustrates that the domains of a function and its Legendre transform can be different. To find the Legendre transformation of the Legendre transformation of f {\displaystyle f} , f ∗ ∗ ( x ) = sup x ∗ ∈ R ( x x ∗ − x ∗ ( ln ⁡ ( x ∗ ) − 1 ) ) , x ∈ I , {\displaystyle f^{**}(x)=\sup _{x^{*}\in \mathbb {R} }(xx^{*}-x^{*}(\ln(x^{*})-1)),\quad x\in I,} where the variable x {\displaystyle x} is intentionally used as the argument of the function f ∗ ∗ {\displaystyle f^{**}} to show the involution property of the Legendre transform as f ∗ ∗ = f {\displaystyle f^{**}=f} . We compute 0 = d d x ∗ ( x x ∗ − x ∗ ( ln ⁡ ( x ∗ ) − 1 ) ) = x − ln ⁡ ( x ∗ ) {\displaystyle {\begin{aligned}0&={\frac {d}{dx^{*}}}{\big (}xx^{*}-x^{*}(\ln(x^{*})-1){\big )}=x-\ln(x^{*})\end{aligned}}} thus the maximum occurs at x ∗ = e x {\displaystyle x^{*}=e^{x}} because the second derivative d 2 d x ∗ 2 f ∗ ∗ ( x ) = − 1 x ∗ < 0 {\displaystyle {\frac {d^{2}}{{dx^{*}}^{2}}}f^{**}(x)=-{\frac {1}{x^{*}}}<0} over the domain of f ∗ ∗ {\displaystyle f^{**}} , as I ∗ = ( 0 , ∞ ) . {\displaystyle I^{*}=(0,\infty ).} As a result, f ∗ ∗ {\displaystyle f^{**}} is found as f ∗ ∗ ( x ) = x e x − e x ( ln ⁡ ( e x ) − 1 ) = e x , {\displaystyle {\begin{aligned}f^{**}(x)&=xe^{x}-e^{x}(\ln(e^{x})-1)=e^{x},\end{aligned}}} thereby confirming that f = f ∗ ∗ , {\displaystyle f=f^{**},} as expected. === Example 2 === Let f(x) = cx² defined on R, where c > 0 is a fixed constant. For x* fixed, the function of x, x*x − f(x) = x*x − cx², has the first derivative x* − 2cx and second derivative −2c; there is one stationary point at x = x*/2c, which is always a maximum. Thus, I* = R and f ∗ ( x ∗ ) = x ∗ 2 4 c .
{\displaystyle f^{*}(x^{*})={\frac {{x^{*}}^{2}}{4c}}~.} The first derivatives of f, 2cx, and of f *, x*/(2c), are inverse functions of each other. Clearly, furthermore, f ∗ ∗ ( x ) = 1 4 ( 1 / 4 c ) x 2 = c x 2 , {\displaystyle f^{**}(x)={\frac {1}{4(1/4c)}}x^{2}=cx^{2}~,} namely f ** = f. === Example 3 === Let f(x) = x² for x ∈ (I = [2, 3]). For x* fixed, x*x − f(x) is continuous on the compact interval I, hence it always attains a finite maximum there; it follows that the domain of the Legendre transform of f {\displaystyle f} is I* = R. The stationary point at x = x*/2 (found by setting the first derivative of x*x − f(x) with respect to x {\displaystyle x} equal to zero) is in the domain [2, 3] if and only if 4 ≤ x* ≤ 6. Otherwise the maximum is attained at x = 2 or at x = 3, because the second derivative of x*x − f(x) with respect to x {\displaystyle x} is −2 < 0; for x ∗ < 4 {\displaystyle x^{*}<4} the maximum of x*x − f(x) over x ∈ [ 2 , 3 ] {\displaystyle x\in [2,3]} is attained at x = 2 {\displaystyle x=2} , while for x ∗ > 6 {\displaystyle x^{*}>6} it is attained at x = 3 {\displaystyle x=3} . Thus, it follows that f ∗ ( x ∗ ) = { 2 x ∗ − 4 , x ∗ < 4 x ∗ 2 4 , 4 ≤ x ∗ ≤ 6 , 3 x ∗ − 9 , x ∗ > 6. {\displaystyle f^{*}(x^{*})={\begin{cases}2x^{*}-4,&x^{*}<4\\{\frac {{x^{*}}^{2}}{4}},&4\leq x^{*}\leq 6,\\3x^{*}-9,&x^{*}>6.\end{cases}}} === Example 4 === The function f(x) = cx is convex for every x (strict convexity is not required for the Legendre transformation to be well defined). Clearly x*x − f(x) = (x* − c)x is never bounded from above as a function of x, unless x* − c = 0. Hence f* is defined on I* = {c} and f*(c) = 0. (The definition of the Legendre transform requires the existence of the supremum, which requires upper bounds.) One may check involutivity: of course, x*x − f*(x*) is always bounded as a function of x*∈{c}, hence I** = R.
Then, for all x one has sup x ∗ ∈ { c } ( x x ∗ − f ∗ ( x ∗ ) ) = x c , {\displaystyle \sup _{x^{*}\in \{c\}}(xx^{*}-f^{*}(x^{*}))=xc,} and hence f **(x) = cx = f(x). === Example 5 === As an example of a convex continuous function that is not everywhere differentiable, consider f ( x ) = | x | {\displaystyle f(x)=|x|} . This gives f ∗ ( x ∗ ) = sup x ( x x ∗ − | x | ) = max ( sup x ≥ 0 x ( x ∗ − 1 ) , sup x ≤ 0 x ( x ∗ + 1 ) ) , {\displaystyle f^{*}(x^{*})=\sup _{x}(xx^{*}-|x|)=\max \left(\sup _{x\geq 0}x(x^{*}-1),\,\sup _{x\leq 0}x(x^{*}+1)\right),} and thus f ∗ ( x ∗ ) = 0 {\displaystyle f^{*}(x^{*})=0} on its domain I ∗ = [ − 1 , 1 ] {\displaystyle I^{*}=[-1,1]} . === Example 6: several variables === Let f ( x ) = ⟨ x , A x ⟩ + c {\displaystyle f(x)=\langle x,Ax\rangle +c} be defined on X = Rn, where A is a real, positive definite matrix. Then f is convex, and ⟨ p , x ⟩ − f ( x ) = ⟨ p , x ⟩ − ⟨ x , A x ⟩ − c , {\displaystyle \langle p,x\rangle -f(x)=\langle p,x\rangle -\langle x,Ax\rangle -c,} has gradient p − 2Ax and Hessian −2A, which is negative definite; hence the stationary point x = A−1p/2 is a maximum. We have X* = Rn, and f ∗ ( p ) = 1 4 ⟨ p , A − 1 p ⟩ − c . {\displaystyle f^{*}(p)={\frac {1}{4}}\langle p,A^{-1}p\rangle -c.} == Behavior of differentials under Legendre transforms == The Legendre transform is linked to integration by parts, p dx = d(px) − x dp. Let f(x,y) be a function of two independent variables x and y, with the differential d f = ∂ f ∂ x d x + ∂ f ∂ y d y = p d x + v d y .
{\displaystyle df={\frac {\partial f}{\partial x}}\,dx+{\frac {\partial f}{\partial y}}\,dy=p\,dx+v\,dy.} Assume that the function f is convex in x for all y, so that one may perform the Legendre transform on f in x, with p the variable conjugate to x (for information, there is a relation ∂ f ∂ x | x ¯ = p {\displaystyle {\frac {\partial f}{\partial x}}|_{\bar {x}}=p} where x ¯ {\displaystyle {\bar {x}}} is a point in x maximizing or making p x − f ( x , y ) {\displaystyle px-f(x,y)} bounded for given p and y). Since the new independent variable of the transform with respect to f is p, the differentials dx and dy in df devolve to dp and dy in the differential of the transform, i.e., we build another function with its differential expressed in terms of the new basis dp and dy. We thus consider the function g(p, y) = f − px so that d g = d f − p d x − x d p = − x d p + v d y {\displaystyle dg=df-p\,dx-x\,dp=-x\,dp+v\,dy} x = − ∂ g ∂ p {\displaystyle x=-{\frac {\partial g}{\partial p}}} v = ∂ g ∂ y . {\displaystyle v={\frac {\partial g}{\partial y}}.} The function −g(p, y) is the Legendre transform of f(x, y), where only the independent variable x has been supplanted by p. This is widely used in thermodynamics, as illustrated below. == Applications == === Analytical mechanics === A Legendre transform is used in classical mechanics to derive the Hamiltonian formulation from the Lagrangian formulation, and conversely. A typical Lagrangian has the form L ( v , q ) = 1 2 ⟨ v , M v ⟩ − V ( q ) , {\displaystyle L(v,q)={\tfrac {1}{2}}\langle v,Mv\rangle -V(q),} where ( v , q ) {\displaystyle (v,q)} are coordinates on Rn × Rn, M is a positive definite real matrix, and ⟨ x , y ⟩ = ∑ j x j y j . {\displaystyle \langle x,y\rangle =\sum _{j}x_{j}y_{j}.} For every q fixed, L ( v , q ) {\displaystyle L(v,q)} is a convex function of v {\displaystyle v} , while V ( q ) {\displaystyle V(q)} plays the role of a constant. 
Hence the Legendre transform of L ( v , q ) {\displaystyle L(v,q)} as a function of v {\displaystyle v} is the Hamiltonian function, H ( p , q ) = 1 2 ⟨ p , M − 1 p ⟩ + V ( q ) . {\displaystyle H(p,q)={\tfrac {1}{2}}\langle p,M^{-1}p\rangle +V(q).} In a more general setting, ( v , q ) {\displaystyle (v,q)} are local coordinates on the tangent bundle T M {\displaystyle T{\mathcal {M}}} of a manifold M {\displaystyle {\mathcal {M}}} . For each q, L ( v , q ) {\displaystyle L(v,q)} is a convex function of the tangent space Vq. The Legendre transform gives the Hamiltonian H ( p , q ) {\displaystyle H(p,q)} as a function of the coordinates (p, q) of the cotangent bundle T ∗ M {\displaystyle T^{*}{\mathcal {M}}} ; the inner product used to define the Legendre transform is inherited from the pertinent canonical symplectic structure. In this abstract setting, the Legendre transformation corresponds to the tautological one-form. === Thermodynamics === The strategy behind the use of Legendre transforms in thermodynamics is to shift from a function that depends on a variable to a new (conjugate) function that depends on a new variable, the conjugate of the original one. The new variable is the partial derivative of the original function with respect to the original variable. The new function is the difference between the original function and the product of the old and new variables. Typically, this transformation is useful because it shifts the dependence of, e.g., the energy from an extensive variable to its conjugate intensive variable, which can often be controlled more easily in a physical experiment. 
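The recipe in the preceding paragraph can be made concrete with a purely hypothetical energy function, U(S, V) = S²/V, chosen here only so the algebra closes (it is not a physical equation of state). The conjugate variable is T = ∂U/∂S = 2S/V, and the new function is F = U − TS:

```python
# Hypothetical internal energy U(S, V) = S**2 / V (illustrative only).
U = lambda S, V: S**2 / V
T_of = lambda S, V: 2 * S / V          # conjugate variable T = dU/dS at fixed V

def helmholtz(T, V):
    """F(T, V) = U - T*S, with S eliminated via T = 2*S/V, i.e. S = T*V/2."""
    S = T * V / 2
    return U(S, V) - T * S

# For this toy model the closed form is F = -T**2 * V / 4,
# and the entropy is recovered as S = -dF/dT = T*V/2.
for T, V in ((1.0, 2.0), (3.0, 0.5)):
    assert abs(helmholtz(T, V) - (-T**2 * V / 4)) < 1e-12
```

The sign pattern matches the text: the dependence has shifted from the extensive variable S to its conjugate intensive variable T, and differentiating the new function returns the old variable.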
For example, the internal energy U is an explicit function of the extensive variables entropy S, volume V, and chemical composition Ni (e.g., i = 1 , 2 , 3 , … {\displaystyle i=1,2,3,\ldots } ) U = U ( S , V , { N i } ) , {\displaystyle U=U\left(S,V,\{N_{i}\}\right),} which has a total differential d U = T d S − P d V + ∑ μ i d N i {\displaystyle dU=T\,dS-P\,dV+\sum \mu _{i}\,dN_{i}} where T = ∂ U ∂ S | V , { N i } , P = − ∂ U ∂ V | S , { N i } , μ i = ∂ U ∂ N i | S , V , { N j ≠ i } {\displaystyle T=\left.{\frac {\partial U}{\partial S}}\right\vert _{V,\{N_{i}\}},\quad P=-\left.{\frac {\partial U}{\partial V}}\right\vert _{S,\{N_{i}\}},\quad \mu _{i}=\left.{\frac {\partial U}{\partial N_{i}}}\right\vert _{S,V,\{N_{j\neq i}\}}} . (The subscripts, which list the variables held constant, are not strictly necessary by the definition of partial derivatives, but are left here to clarify the variables.) Stipulating some common reference state, by using the (non-standard) Legendre transform of the internal energy U with respect to volume V, the enthalpy H may be obtained as follows. To get the (standard) Legendre transform U ∗ {\textstyle U^{*}} of the internal energy U with respect to volume V, the function u ( p , S , V , { N i } ) = p V − U {\textstyle u\left(p,S,V,\{{{N}_{i}}\}\right)=pV-U} is defined first; it is then maximized or bounded over V. To do this, the condition ∂ u ∂ V = p − ∂ U ∂ V = 0 → p = ∂ U ∂ V {\textstyle {\frac {\partial u}{\partial V}}=p-{\frac {\partial U}{\partial V}}=0\to p={\frac {\partial U}{\partial V}}} needs to be satisfied, so U ∗ = ∂ U ∂ V V − U {\textstyle U^{*}={\frac {\partial U}{\partial V}}V-U} is obtained. This approach is justified because U is a linear function with respect to V (so a convex function on V) by the definition of extensive variables.
The non-standard Legendre transform here is obtained by negating the standard version, so − U ∗ = H = U − ∂ U ∂ V V = U + P V {\textstyle -U^{*}=H=U-{\frac {\partial U}{\partial V}}V=U+PV} . H is definitely a state function as it is obtained by adding PV (P and V as state variables) to a state function U = U ( S , V , { N i } ) {\textstyle U=U\left(S,V,\{N_{i}\}\right)} , so its differential is an exact differential. Because of d H = T d S + V d P + ∑ μ i d N i {\textstyle dH=T\,dS+V\,dP+\sum \mu _{i}\,dN_{i}} and the fact that it must be an exact differential, H = H ( S , P , { N i } ) {\displaystyle H=H(S,P,\{N_{i}\})} . The enthalpy is suitable for description of processes in which the pressure is controlled from the surroundings. It is likewise possible to shift the dependence of the energy from the extensive variable of entropy, S, to the (often more convenient) intensive variable T, resulting in the Helmholtz and Gibbs free energies. The Helmholtz free energy A, and Gibbs energy G, are obtained by performing Legendre transforms of the internal energy and enthalpy, respectively, A = U − T S , {\displaystyle A=U-TS~,} G = H − T S = U + P V − T S . {\displaystyle G=H-TS=U+PV-TS~.} The Helmholtz free energy is often the most useful thermodynamic potential when temperature and volume are controlled from the surroundings, while the Gibbs energy is often the most useful when temperature and pressure are controlled from the surroundings. === Variable capacitor === As another example from physics, consider a parallel conductive plate capacitor, in which the plates can move relative to one another. Such a capacitor would allow transfer of the electric energy which is stored in the capacitor into external mechanical work, done by the force acting on the plates. One may think of the electric charge as analogous to the "charge" of a gas in a cylinder, with the resulting mechanical force exerted on a piston. 
Compute the force on the plates as a function of x, the distance which separates them. To find the force, compute the potential energy, and then apply the definition of force as the gradient of the potential energy function. The electrostatic potential energy stored in a capacitor of the capacitance C(x) and a positive electric charge +Q or negative charge -Q on each conductive plate is (with using the definition of the capacitance as C = Q V {\textstyle C={\frac {Q}{V}}} ), U ( Q , x ) = 1 2 Q V ( Q , x ) = 1 2 Q 2 C ( x ) , {\displaystyle U(Q,\mathbf {x} )={\frac {1}{2}}QV(Q,\mathbf {x} )={\frac {1}{2}}{\frac {Q^{2}}{C(\mathbf {x} )}},~} where the dependence on the area of the plates, the dielectric constant of the insulation material between the plates, and the separation x are abstracted away as the capacitance C(x). (For a parallel plate capacitor, this is proportional to the area of the plates and inversely proportional to the separation.) The force F between the plates due to the electric field created by the charge separation is then F ( x ) = − d U d x . {\displaystyle \mathbf {F} (\mathbf {x} )=-{\frac {dU}{d\mathbf {x} }}~.} If the capacitor is not connected to any electric circuit, then the electric charges on the plates remain constant and the voltage varies when the plates move with respect to each other, and the force is the negative gradient of the electrostatic potential energy as F ( x ) = 1 2 d C ( x ) d x Q 2 C ( x ) 2 = 1 2 d C ( x ) d x V ( x ) 2 {\displaystyle \mathbf {F} (\mathbf {x} )={\frac {1}{2}}{\frac {dC(\mathbf {x} )}{d\mathbf {x} }}{\frac {Q^{2}}{{C(\mathbf {x} )}^{2}}}={\frac {1}{2}}{\frac {dC(\mathbf {x} )}{d\mathbf {x} }}V(\mathbf {x} )^{2}} where V ( Q , x ) = V ( x ) {\textstyle V(Q,\mathbf {x} )=V(\mathbf {x} )} as the charge is fixed in this configuration. 
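The fixed-charge case can be checked numerically. The sketch below uses the idealized parallel-plate capacitance C(x) = ε₀A/x with made-up values for the area, charge, and separation (all of these are assumptions for the illustration), and compares the finite-difference force −dU/dx with the closed form ½ C′(x) Q²/C(x)²:

```python
eps0, A, Q = 8.854e-12, 1e-2, 1e-8     # illustrative values (SI units)
C = lambda x: eps0 * A / x             # idealized parallel-plate capacitance
U = lambda x: 0.5 * Q**2 / C(x)        # stored energy at fixed charge Q

def force(x, h=1e-9):
    """F = -dU/dx at constant Q, by central finite difference."""
    return -(U(x + h) - U(x - h)) / (2 * h)

x = 1e-3
dCdx = -eps0 * A / x**2                # analytic C'(x)
exact = 0.5 * dCdx * Q**2 / C(x)**2    # the formula from the text
assert abs(force(x) - exact) / abs(exact) < 1e-4
assert exact < 0                       # the plates attract each other
```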
However, suppose instead that the voltage between the plates V is maintained constant as the plate moves by connection to a battery, which is a reservoir for electric charges at a constant potential difference. Then the charge Q {\textstyle Q} is the variable instead of the voltage; Q {\textstyle Q} and V {\textstyle V} are Legendre conjugates of each other. To find the force, first compute the non-standard Legendre transform U ∗ {\textstyle U^{*}} with respect to Q {\textstyle Q} (again using C = Q V {\textstyle C={\frac {Q}{V}}} ), U ∗ = U − ∂ U ∂ Q | x ⋅ Q = U − 1 2 C ( x ) ∂ Q 2 ∂ Q | x ⋅ Q = U − Q V = 1 2 Q V − Q V = − 1 2 Q V = − 1 2 V 2 C ( x ) . {\displaystyle U^{*}=U-\left.{\frac {\partial U}{\partial Q}}\right|_{\mathbf {x} }\cdot Q=U-{\frac {1}{2C(\mathbf {x} )}}\left.{\frac {\partial Q^{2}}{\partial Q}}\right|_{\mathbf {x} }\cdot Q=U-QV={\frac {1}{2}}QV-QV=-{\frac {1}{2}}QV=-{\frac {1}{2}}V^{2}C(\mathbf {x} ).} This transformation is possible because U {\textstyle U} is a convex (quadratic) function of Q {\textstyle Q} . The force now becomes the negative gradient of this Legendre transform, resulting in the same force obtained from the original function U {\textstyle U} , F ( x ) = − d U ∗ d x = 1 2 d C ( x ) d x V 2 . {\displaystyle \mathbf {F} (\mathbf {x} )=-{\frac {dU^{*}}{d\mathbf {x} }}={\frac {1}{2}}{\frac {dC(\mathbf {x} )}{d\mathbf {x} }}V^{2}.} The two conjugate energies U {\textstyle U} and U ∗ {\textstyle U^{*}} happen to stand opposite to each other (their signs are opposite) only because of the linearity of the capacitance, except that now Q is no longer a constant. They reflect the two different pathways of storing energy into the capacitor, resulting in, for instance, the same "pull" between a capacitor's plates. === Probability theory === In large deviations theory, the rate function is defined as the Legendre transformation of the logarithm of the moment generating function of a random variable.
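For instance (an illustration chosen here, not taken from the text), for a standard normal variable the cumulant generating function is Λ(ξ) = ξ²/2, and its Legendre transform, the rate function, is I(a) = a²/2. A grid-based sketch:

```python
import numpy as np

Lambda = lambda xi: xi**2 / 2          # cumulant generating function of N(0, 1)

def rate_function(a, xis=np.linspace(-10.0, 10.0, 20001)):
    """I(a) = sup_xi (xi*a - Lambda(xi)), approximated by maximizing on a grid."""
    return np.max(xis * a - Lambda(xis))

# Exact rate function of the standard normal: I(a) = a**2 / 2.
for a in (-2.0, 0.0, 0.5, 3.0):
    assert abs(rate_function(a) - a**2 / 2) < 1e-5
```

The maximizing ξ is ξ = a, so the approximation is accurate whenever |a| lies well inside the grid.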
An important application of the rate function is in the calculation of tail probabilities of sums of i.i.d. random variables, in particular in Cramér's theorem. If X n {\displaystyle X_{n}} are i.i.d. random variables, let S n = X 1 + ⋯ + X n {\displaystyle S_{n}=X_{1}+\cdots +X_{n}} be the associated random walk and M ( ξ ) {\displaystyle M(\xi )} the moment generating function of X 1 {\displaystyle X_{1}} . For ξ ∈ R {\displaystyle \xi \in \mathbb {R} } , E [ e ξ S n ] = M ( ξ ) n {\displaystyle E[e^{\xi S_{n}}]=M(\xi )^{n}} . Hence, by Markov's inequality, one has for ξ ≥ 0 {\displaystyle \xi \geq 0} and a ∈ R {\displaystyle a\in \mathbb {R} } P ( S n / n > a ) ≤ e − n ξ a M ( ξ ) n = exp ⁡ [ − n ( ξ a − Λ ( ξ ) ) ] {\displaystyle P(S_{n}/n>a)\leq e^{-n\xi a}M(\xi )^{n}=\exp[-n(\xi a-\Lambda (\xi ))]} where Λ ( ξ ) = log ⁡ M ( ξ ) {\displaystyle \Lambda (\xi )=\log M(\xi )} . Since the left-hand side is independent of ξ {\displaystyle \xi } , we may take the infimum of the right-hand side, which leads one to consider the supremum of ξ a − Λ ( ξ ) {\displaystyle \xi a-\Lambda (\xi )} , i.e., the Legendre transform of Λ {\displaystyle \Lambda } , evaluated at x = a {\displaystyle x=a} . === Microeconomics === The Legendre transformation arises naturally in microeconomics in the process of finding the supply S(P) of some product given a fixed market price P and the cost function C(Q), i.e. the cost for the producer to make/mine/etc. Q units of the given product. A simple theory explains the shape of the supply curve based solely on the cost function. Suppose the market price for one unit of our product is P. For a company selling this good, the best strategy is to adjust the production Q so that its profit is maximized. We can maximize the profit profit = revenue − costs = P Q − C ( Q ) {\displaystyle {\text{profit}}={\text{revenue}}-{\text{costs}}=PQ-C(Q)} by differentiating with respect to Q and solving P − C ′ ( Q opt ) = 0.
{\displaystyle P-C'(Q_{\text{opt}})=0.} Qopt represents the optimal quantity Q of goods that the producer is willing to supply, which is indeed the supply itself: S ( P ) = Q opt ( P ) = ( C ′ ) − 1 ( P ) . {\displaystyle S(P)=Q_{\text{opt}}(P)=(C')^{-1}(P).} If we consider the maximal profit as a function of price, profit max ( P ) {\displaystyle {\text{profit}}_{\text{max}}(P)} , we see that it is the Legendre transform of the cost function C ( Q ) {\displaystyle C(Q)} . == Geometric interpretation == For a strictly convex function, the Legendre transformation can be interpreted as a mapping between the graph of the function and the family of tangents of the graph. (For a function of one variable, the tangents are well-defined at all but at most countably many points, since a convex function is differentiable at all but at most countably many points.) The equation of a line with slope p {\displaystyle p} and y {\displaystyle y} -intercept b {\displaystyle b} is given by y = p x + b {\displaystyle y=px+b} . For this line to be tangent to the graph of a function f {\displaystyle f} at the point ( x 0 , f ( x 0 ) ) {\displaystyle \left(x_{0},f(x_{0})\right)} requires f ( x 0 ) = p x 0 + b {\displaystyle f(x_{0})=px_{0}+b} and p = f ′ ( x 0 ) . {\displaystyle p=f'(x_{0}).} Being the derivative of a strictly convex function, the function f ′ {\displaystyle f'} is strictly monotone and thus injective. The second equation can be solved for x 0 = f ′ − 1 ( p ) , {\textstyle x_{0}=f^{\prime -1}(p),} allowing elimination of x 0 {\displaystyle x_{0}} from the first, and solving for the y {\displaystyle y} -intercept b {\displaystyle b} of the tangent as a function of its slope p , {\displaystyle p,} b = f ( x 0 ) − p x 0 = f ( f ′ − 1 ( p ) ) − p ⋅ f ′ − 1 ( p ) = − f ⋆ ( p ) {\textstyle b=f(x_{0})-px_{0}=f\left(f^{\prime -1}(p)\right)-p\cdot f^{\prime -1}(p)=-f^{\star }(p)} where f ⋆ {\displaystyle f^{\star }} denotes the Legendre transform of f . 
{\displaystyle f.} The family of tangent lines of the graph of f {\displaystyle f} parameterized by the slope p {\displaystyle p} is therefore given by y = p x − f ⋆ ( p ) , {\textstyle y=px-f^{\star }(p),} or, written implicitly, by the solutions of the equation F ( x , y , p ) = y + f ⋆ ( p ) − p x = 0 . {\displaystyle F(x,y,p)=y+f^{\star }(p)-px=0~.} The graph of the original function can be reconstructed from this family of lines as the envelope of this family by demanding ∂ F ( x , y , p ) ∂ p = f ⋆ ′ ( p ) − x = 0. {\displaystyle {\frac {\partial F(x,y,p)}{\partial p}}=f^{\star \prime }(p)-x=0.} Eliminating p {\displaystyle p} from these two equations gives y = x ⋅ f ⋆ ′ − 1 ( x ) − f ⋆ ( f ⋆ ′ − 1 ( x ) ) . {\displaystyle y=x\cdot f^{\star \prime -1}(x)-f^{\star }\left(f^{\star \prime -1}(x)\right).} Identifying y {\displaystyle y} with f ( x ) {\displaystyle f(x)} and recognizing the right side of the preceding equation as the Legendre transform of f ⋆ {\displaystyle f^{\star }} yields f ( x ) = f ⋆ ⋆ ( x ) . {\textstyle f(x)=f^{\star \star }(x)~.} == Legendre transformation in more than one dimension == For a differentiable real-valued function on an open convex subset U of Rn, the Legendre conjugate of the pair (U, f) is defined to be the pair (V, g), where V is the image of U under the gradient mapping Df, and g is the function on V given by the formula g ( y ) = ⟨ y , x ⟩ − f ( x ) , x = ( D f ) − 1 ( y ) {\displaystyle g(y)=\left\langle y,x\right\rangle -f(x),\qquad x=\left(Df\right)^{-1}(y)} where ⟨ u , v ⟩ = ∑ k = 1 n u k ⋅ v k {\displaystyle \left\langle u,v\right\rangle =\sum _{k=1}^{n}u_{k}\cdot v_{k}} is the scalar product on Rn. The multidimensional transform can be interpreted as an encoding of the convex hull of the function's epigraph in terms of its supporting hyperplanes. This can be seen as a consequence of the following two observations.
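A small sketch of the multidimensional formula (the matrix below is an arbitrary symmetric positive-definite choice): for the strictly convex quadratic f(x) = ½ xᵀAx, the gradient map is Df(x) = Ax, so x = (Df)⁻¹(y) = A⁻¹y and the conjugate works out to g(y) = ⟨y, x⟩ − f(x) = ½ yᵀA⁻¹y.

```python
import numpy as np

# Legendre conjugate of f(x) = (1/2) x^T A x via the gradient-map definition.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])            # symmetric positive definite (example)

def f(x):
    return 0.5 * x @ A @ x

def g(y):
    x = np.linalg.solve(A, y)        # invert the gradient mapping y = A x
    return y @ x - f(x)              # <y, x> - f(x)

y = np.array([1.0, -2.0])
g_closed = 0.5 * y @ np.linalg.solve(A, y)   # closed form (1/2) y^T A^{-1} y
```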
On the one hand, the hyperplane tangent to the epigraph of f {\displaystyle f} at some point ( x , f ( x ) ) ∈ U × R {\displaystyle (\mathbf {x} ,f(\mathbf {x} ))\in U\times \mathbb {R} } has normal vector ( ∇ f ( x ) , − 1 ) ∈ R n + 1 {\displaystyle (\nabla f(\mathbf {x} ),-1)\in \mathbb {R} ^{n+1}} . On the other hand, any closed convex set C ⊆ R m {\displaystyle C\subseteq \mathbb {R} ^{m}} can be characterized via the set of its supporting hyperplanes by the equations x ⋅ n = h C ( n ) {\displaystyle \mathbf {x} \cdot \mathbf {n} =h_{C}(\mathbf {n} )} , where h C ( n ) {\displaystyle h_{C}(\mathbf {n} )} is the support function of C {\displaystyle C} . But the definition of the Legendre transform via maximization matches precisely that of the support function, that is, f ∗ ( x ) = h epi ⁡ ( f ) ( x , − 1 ) {\displaystyle f^{*}(\mathbf {x} )=h_{\operatorname {epi} (f)}(\mathbf {x} ,-1)} . We thus conclude that the Legendre transform characterizes the epigraph in the sense that the tangent plane to the epigraph at any point ( x , f ( x ) ) {\displaystyle (\mathbf {x} ,f(\mathbf {x} ))} is given explicitly by { z ∈ R n + 1 : z ⋅ ( x , − 1 ) = f ∗ ( x ) } . {\displaystyle \{\mathbf {z} \in \mathbb {R} ^{n+1}:\,\,\mathbf {z} \cdot (\mathbf {x} ,-1)=f^{*}(\mathbf {x} )\}.} Alternatively, if X is a vector space and Y is its dual vector space, then for each point x of X and y of Y, there is a natural identification of the cotangent spaces T*Xx with Y and T*Yy with X. If f is a real differentiable function over X, then its exterior derivative, df, is a section of the cotangent bundle T*X and as such, we can construct a map from X to Y. Similarly, if g is a real differentiable function over Y, then dg defines a map from Y to X. If both maps happen to be inverses of each other, we say we have a Legendre transform. The notion of the tautological one-form is commonly used in this setting.
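A one-dimensional sketch of this pair of mutually inverse maps (the quartic below is an arbitrary strictly convex choice, restricted to x ≥ 0 so that real fractional powers suffice): for f(x) = x⁴/4 the conjugate is g(p) = (3/4) p^(4/3) for p ≥ 0, and the derivative maps df and dg invert each other.

```python
# df: X -> Y and dg: Y -> X for f(x) = x^4/4 and its conjugate
# g(p) = (3/4) p^(4/3) on the nonnegative half-line.
def df(x):
    return x**3               # df(x) = f'(x)

def dg(p):
    return p ** (1.0 / 3.0)   # dg(p) = g'(p) = p^(1/3), valid for p >= 0

# dg(df(x)) should recover x for each sample point
checks = [abs(dg(df(x)) - x) for x in (0.25, 0.5, 1.0, 2.0)]
```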
When the function is not differentiable, the Legendre transform can still be extended, and is known as the Legendre-Fenchel transformation. In this more general setting, a few properties are lost: for example, the Legendre transform is no longer its own inverse (unless there are extra assumptions, like convexity). == Legendre transformation on manifolds == Let M {\textstyle M} be a smooth manifold, let E {\displaystyle E} and π : E → M {\textstyle \pi :E\to M} be a vector bundle on M {\displaystyle M} and its associated bundle projection, respectively. Let L : E → R {\textstyle L:E\to \mathbb {R} } be a smooth function. We think of L {\textstyle L} as a Lagrangian by analogy with the classical case where M = R {\textstyle M=\mathbb {R} } , E = T M = R × R {\textstyle E=TM=\mathbb {R} \times \mathbb {R} } and L ( x , v ) = 1 2 m v 2 − V ( x ) {\textstyle L(x,v)={\frac {1}{2}}mv^{2}-V(x)} for some positive number m ∈ R {\textstyle m\in \mathbb {R} } and function V : M → R {\textstyle V:M\to \mathbb {R} } . As usual, the dual of E {\textstyle E} is denoted by E ∗ {\textstyle E^{*}} . The fiber of π {\textstyle \pi } over x ∈ M {\textstyle x\in M} is denoted E x {\textstyle E_{x}} , and the restriction of L {\textstyle L} to E x {\textstyle E_{x}} is denoted by L | E x : E x → R {\textstyle L|_{E_{x}}:E_{x}\to \mathbb {R} } . The Legendre transformation of L {\textstyle L} is the smooth morphism F L : E → E ∗ {\displaystyle \mathbf {F} L:E\to E^{*}} defined by F L ( v ) = d ( L | E x ) v ∈ E x ∗ {\textstyle \mathbf {F} L(v)=d(L|_{E_{x}})_{v}\in E_{x}^{*}} , where x = π ( v ) {\textstyle x=\pi (v)} . Here we use the fact that since E x {\textstyle E_{x}} is a vector space, T v ( E x ) {\textstyle T_{v}(E_{x})} can be identified with E x {\textstyle E_{x}} .
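For the classical case L(x, v) = ½ mv² − V(x) above, the fiber derivative can be sketched numerically: d(L|_{E_x})_v pairs with a fiber direction w via a derivative of L along the fiber, which here equals m·v·w, recovering the conjugate momentum p = mv. The mass m and potential V below are illustrative choices.

```python
# Fiber derivative of the classical Lagrangian L(x, v) = (1/2) m v^2 - V(x).
m = 2.0

def V(x):
    return x**2               # arbitrary potential; it plays no role in FL

def L(x, v):
    return 0.5 * m * v**2 - V(x)

def fiber_derivative(x, v, w, h=1e-6):
    # central-difference derivative of L restricted to the fiber over x,
    # in the direction w; pairs FL(v) with w
    return (L(x, v + h * w) - L(x, v - h * w)) / (2 * h)

x, v, w = 0.3, 1.5, 1.0
p_times_w = fiber_derivative(x, v, w)   # should equal m * v * w
```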
In other words, F L ( v ) ∈ E x ∗ {\textstyle \mathbf {F} L(v)\in E_{x}^{*}} is the covector that sends w ∈ E x {\textstyle w\in E_{x}} to the directional derivative d d t | t = 0 L ( v + t w ) ∈ R {\textstyle \left.{\frac {d}{dt}}\right|_{t=0}L(v+tw)\in \mathbb {R} } . To describe the Legendre transformation locally, let U ⊆ M {\textstyle U\subseteq M} be a coordinate chart over which E {\textstyle E} is trivial. Picking a trivialization of E {\textstyle E} over U {\textstyle U} , we obtain charts E U ≅ U × R r {\textstyle E_{U}\cong U\times \mathbb {R} ^{r}} and E U ∗ ≅ U × R r {\textstyle E_{U}^{*}\cong U\times \mathbb {R} ^{r}} . In terms of these charts, we have F L ( x ; v 1 , … , v r ) = ( x ; p 1 , … , p r ) {\textstyle \mathbf {F} L(x;v_{1},\dotsc ,v_{r})=(x;p_{1},\dotsc ,p_{r})} , where p i = ∂ L ∂ v i ( x ; v 1 , … , v r ) {\displaystyle p_{i}={\frac {\partial L}{\partial v_{i}}}(x;v_{1},\dotsc ,v_{r})} for all i = 1 , … , r {\textstyle i=1,\dots ,r} . If, as in the classical case, the restriction of L : E → R {\textstyle L:E\to \mathbb {R} } to each fiber E x {\textstyle E_{x}} is strictly convex and bounded below by a positive definite quadratic form minus a constant, then the Legendre transform F L : E → E ∗ {\textstyle \mathbf {F} L:E\to E^{*}} is a diffeomorphism. Suppose that F L {\textstyle \mathbf {F} L} is a diffeomorphism and let H : E ∗ → R {\textstyle H:E^{*}\to \mathbb {R} } be the "Hamiltonian" function defined by H ( p ) = p ⋅ v − L ( v ) , {\displaystyle H(p)=p\cdot v-L(v),} where v = ( F L ) − 1 ( p ) {\textstyle v=(\mathbf {F} L)^{-1}(p)} . Using the natural isomorphism E ≅ E ∗ ∗ {\textstyle E\cong E^{**}} , we may view the Legendre transformation of H {\textstyle H} as a map F H : E ∗ → E {\textstyle \mathbf {F} H:E^{*}\to E} . Then we have ( F L ) − 1 = F H . 
{\displaystyle (\mathbf {F} L)^{-1}=\mathbf {F} H.} == Further properties == === Scaling properties === The Legendre transformation has the following scaling properties: For a > 0, f ( x ) = a ⋅ g ( x ) ⇒ f ⋆ ( p ) = a ⋅ g ⋆ ( p a ) {\displaystyle f(x)=a\cdot g(x)\Rightarrow f^{\star }(p)=a\cdot g^{\star }\left({\frac {p}{a}}\right)} f ( x ) = g ( a ⋅ x ) ⇒ f ⋆ ( p ) = g ⋆ ( p a ) . {\displaystyle f(x)=g(a\cdot x)\Rightarrow f^{\star }(p)=g^{\star }\left({\frac {p}{a}}\right).} It follows that if a function is homogeneous of degree r then its image under the Legendre transformation is a homogeneous function of degree s, where 1/r + 1/s = 1. (Since f(x) = xr/r, with r > 1, implies f*(p) = ps/s.) Thus, the only monomial whose degree is invariant under Legendre transform is the quadratic. === Behavior under translation === f ( x ) = g ( x ) + b ⇒ f ⋆ ( p ) = g ⋆ ( p ) − b {\displaystyle f(x)=g(x)+b\Rightarrow f^{\star }(p)=g^{\star }(p)-b} f ( x ) = g ( x + y ) ⇒ f ⋆ ( p ) = g ⋆ ( p ) − p ⋅ y {\displaystyle f(x)=g(x+y)\Rightarrow f^{\star }(p)=g^{\star }(p)-p\cdot y} === Behavior under inversion === f ( x ) = g − 1 ( x ) ⇒ f ⋆ ( p ) = − p ⋅ g ⋆ ( 1 p ) {\displaystyle f(x)=g^{-1}(x)\Rightarrow f^{\star }(p)=-p\cdot g^{\star }\left({\frac {1}{p}}\right)} === Behavior under linear transformations === Let A : Rn → Rm be a linear transformation. For any convex function f on Rn, one has ( A f ) ⋆ = f ⋆ A ⋆ {\displaystyle (Af)^{\star }=f^{\star }A^{\star }} where A* is the adjoint operator of A defined by ⟨ A x , y ⋆ ⟩ = ⟨ x , A ⋆ y ⋆ ⟩ , {\displaystyle \left\langle Ax,y^{\star }\right\rangle =\left\langle x,A^{\star }y^{\star }\right\rangle ,} and Af is the push-forward of f along A ( A f ) ( y ) = inf { f ( x ) : x ∈ X , A x = y } . 
{\displaystyle (Af)(y)=\inf\{f(x):x\in X,Ax=y\}.} A closed convex function f is symmetric with respect to a given set G of orthogonal linear transformations, f ( A x ) = f ( x ) , ∀ x , ∀ A ∈ G {\displaystyle f(Ax)=f(x),\;\forall x,\;\forall A\in G} if and only if f* is symmetric with respect to G. === Infimal convolution === The infimal convolution of two functions f and g is defined as ( f ⋆ inf g ) ( x ) = inf { f ( x − y ) + g ( y ) | y ∈ R n } . {\displaystyle \left(f\star _{\inf }g\right)(x)=\inf \left\{f(x-y)+g(y)\,|\,y\in \mathbf {R} ^{n}\right\}.} Let f1, ..., fm be proper convex functions on Rn. Then ( f 1 ⋆ inf ⋯ ⋆ inf f m ) ⋆ = f 1 ⋆ + ⋯ + f m ⋆ . {\displaystyle \left(f_{1}\star _{\inf }\cdots \star _{\inf }f_{m}\right)^{\star }=f_{1}^{\star }+\cdots +f_{m}^{\star }.} === Fenchel's inequality === For any function f and its convex conjugate f *, Fenchel's inequality (also known as the Fenchel–Young inequality) holds for every x ∈ X and p ∈ X*, i.e., independent x, p pairs: ⟨ p , x ⟩ ≤ f ( x ) + f ⋆ ( p ) . {\displaystyle \left\langle p,x\right\rangle \leq f(x)+f^{\star }(p).} == See also == Dual curve Projective duality Young's inequality for products Convex conjugate Moreau's theorem Integration by parts Fenchel's duality theorem == References == Courant, Richard; Hilbert, David (2008). Methods of Mathematical Physics. Vol. 2. John Wiley & Sons. ISBN 978-0471504399. Arnol'd, Vladimir Igorevich (1989). Mathematical Methods of Classical Mechanics (2nd ed.). Springer. ISBN 0-387-96890-3. Fenchel, W. (1949). "On conjugate convex functions". Canadian Journal of Mathematics. 1: 73–77. Rockafellar, R. Tyrrell (1996) [1970]. Convex Analysis. Princeton University Press. ISBN 0-691-01586-4. Zia, R. K. P.; Redish, E. F.; McKay, S. R. (2009). "Making sense of the Legendre transform". American Journal of Physics. 77 (7): 614. arXiv:0806.1147. Bibcode:2009AmJPh..77..614Z. doi:10.1119/1.3119512. S2CID 37549350. == Further reading == Nielsen, Frank (2010-09-01).
"Legendre transformation and information geometry" (PDF). Retrieved 2016-01-24. Touchette, Hugo (2005-07-27). "Legendre-Fenchel transforms in a nutshell" (PDF). Retrieved 2016-01-24. Touchette, Hugo (2006-11-21). "Elements of convex analysis" (PDF). Archived from the original (PDF) on 2016-02-01. Retrieved 2016-01-24. == External links == Legendre transform with figures at maze5.net Legendre and Legendre-Fenchel transforms in a step-by-step explanation at onmyphd.com
Wikipedia/Legendre_transform
Classical Mechanics is a textbook written by Herbert Goldstein, a professor at Columbia University. Intended for advanced undergraduate and beginning graduate students, it has been one of the standard references on its subject around the world since its first publication in 1950. == Overview == In the second edition, Goldstein corrected all the errors that had been pointed out, added a new chapter on perturbation theory, a new section on Bertrand's theorem, and another on Noether's theorem. Other arguments and proofs were simplified and supplemented. Before the death of its primary author in 2005, a new (third) edition of the book was released, with the collaboration of Charles P. Poole and John L. Safko from the University of South Carolina. In the third edition, the book discusses at length various mathematically sophisticated reformulations of Newtonian mechanics, namely analytical mechanics, as applied to particles, rigid bodies and continua. In addition, it covers in some detail classical electromagnetism, special relativity, and field theory, both classical and relativistic. There is an appendix on group theory. New to the third edition are a chapter on nonlinear dynamics and chaos, a section on the exact solutions to the three-body problem obtained by Euler and Lagrange, and a discussion of the damped driven pendulum that explains Josephson junctions. This is counterbalanced by the reduction of several existing chapters, motivated by the desire to prevent this edition from exceeding the previous one in length. For example, the discussions of Hermitian and unitary matrices were omitted because they are more relevant to quantum mechanics than to classical mechanics, while those of Routh's procedure and time-independent perturbation theory were reduced. == Table of Contents (3rd Edition) == == Editions == Goldstein, Herbert (1950). Classical Mechanics (1st ed.). Addison-Wesley. Goldstein, Herbert (1951). Classical Mechanics (1st ed.). Addison-Wesley.
ASIN B000OL8LOM. Goldstein, Herbert (1980). Classical Mechanics (2nd ed.). Addison-Wesley. ISBN 978-0-201-02918-5. Goldstein, Herbert; Poole, C. P.; Safko, J. L. (2001). Classical Mechanics (3rd ed.). Addison-Wesley. ISBN 978-0-201-65702-9. == Reception == === First edition === S.L. Quimby of Columbia University noted that the first half of the first edition of the book is dedicated to the development of Lagrangian mechanics with the treatment of velocity-dependent potentials, which are important in electromagnetism, and the use of the Cayley-Klein parameters and matrix algebra for rigid-body dynamics. This is followed by a comprehensive and clear discussion of Hamiltonian mechanics. End-of-chapter references improve the value of the book. Quimby pointed out that although this book is suitable for students preparing for quantum mechanics, it is not helpful for those interested in analytical mechanics because its treatment omits too much. Quimby praised the quality of printing and binding which make the book attractive. In the Journal of the Franklin Institute, Rupen Eskergian noted that the first edition of Classical Mechanics offers a mature take on the subject using vector and tensor notations and with a welcome emphasis on variational methods. This book begins with a review of elementary concepts, then introduces the principle of virtual work, constraints, generalized coordinates, and Lagrangian mechanics. Scattering is treated in the same chapter as central forces and the two-body problem. Unlike most other books on mechanics, this one elaborates upon the virial theorem. The discussion of canonical and contact transformations, the Hamilton-Jacobi theory, and action-angle coordinates is followed by a presentation of geometric optics and wave mechanics. Eskergian believed this book serves as a bridge to modern physics. Writing for The Mathematical Gazette on the first edition, L. 
Rosenhead congratulated Goldstein for a lucid account of classical mechanics leading to modern theoretical physics, which he believed would stand the test of time alongside acknowledged classics such as E.T. Whittaker's Analytical Dynamics and Arnold Sommerfeld's Lectures on Theoretical Physics. This book is self-contained and is suitable for students who have completed courses in mathematics and physics of the first two years of university. End-of-chapter references with comments and some example problems enhance the book. Rosenhead also liked the diagrams, index, and printing. Concerning the second printing of the first edition, Vic Twersky of the Mathematical Research Group at New York University considered the book to be of pedagogical merit because it explains things in a clear and simple manner, and its humor is not forced. Published in the 1950s, this book replaced the outdated and fragmented treatises and supplements typically assigned to beginning graduate students as a modern text on classical mechanics with exercises and examples demonstrating the link between this and other branches of physics, including acoustics, electrodynamics, thermodynamics, geometric optics, and quantum mechanics. It also has a chapter on the mechanics of fields and continua. At the end of each chapter, there is a list of references with the author's candid reviews of each. Twersky said that Goldstein's Classical Mechanics is more suitable for physicists compared to the much older treatise Analytical Dynamics by E.T. Whittaker, which he deemed more appropriate for mathematicians. E. W. Banhagel, an instructor from Detroit, Michigan, observed that despite requiring no more than multivariable and vector calculus, the first edition of Classical Mechanics successfully introduces some sophisticated new ideas in physics to students. Mathematical tools are introduced as needed. He believed that the annotated references at the end of each chapter are of great value. 
=== Third edition === Stephen R. Addison from the University of Central Arkansas commented that while the first edition of Classical Mechanics was essentially a treatise with exercises, the third has become less scholarly and more of a textbook. This book is most useful for students who are interested in learning the necessary material in preparation for quantum mechanics. The presentation of most of the material in the third edition remains unchanged compared to that of the second, though many of the old references and footnotes were removed. Sections on the relations between the action-angle coordinates and the Hamilton-Jacobi equation with the old quantum theory, wave mechanics, and geometric optics were removed. Chapter 7, which deals with special relativity, has been heavily revised and could prove to be more useful to students who want to study general relativity than its equivalent in previous editions. Chapter 11 provides a clear, if somewhat dated, survey of classical chaos. Appendix B could help advanced students refresh their memories but may be too short to learn from. In all, Addison believed that this book remains a classic text on the eighteenth- and nineteenth-century approaches to theoretical mechanics; those interested in a more modern approach – expressed in the language of differential geometry and Lie groups – should refer to Mathematical Methods of Classical Mechanics by Vladimir Arnold. Martin Tiersten from the City University of New York pointed out a serious error in the book that persisted in all three editions and was even promoted to the front cover of the book. Such a closed orbit, depicted in a diagram on page 80 (as Figure 3.7), is impossible for an attractive central force because the path cannot be concave away from the center of force. A similarly erroneous diagram appears on page 91 (as Figure 3.13).
Tiersten suggested that the reason why this error remained unnoticed for so long is because advanced mechanics texts typically do not use vectors in their treatment of central-force problems, in particular the tangential and normal components of the acceleration vector. He wrote, "Because an attractive force is always directed in toward the center of force, the direction toward the center of curvature at the turning points must be toward the center of force." In response, Poole and Safko acknowledged the error and stated they were working on a list of errata. == See also == Newtonian mechanics Classical Mechanics (Kibble and Berkshire) Course of Theoretical Physics (Landau and Lifshitz) List of textbooks on classical and quantum mechanics Introduction to Electrodynamics (Griffiths) Classical Electrodynamics (Jackson) == References == == External links == Errata, corrections, and comments on the third edition. John L. Safko and Charles P. Poole. University of South Carolina.
Wikipedia/Classical_Mechanics_(Goldstein_book)
In electromagnetism, the Lorentz force is the force exerted on a charged particle by electric and magnetic fields. It is the fundamental force that governs the motion of charged particles in electromagnetic fields and underlies many physical phenomena, from the operation of electric motors and particle accelerators to the behavior of plasmas. The force has two components. The electric force acts in the direction of the electric field for positive charges and opposite to it for negative charges, tending to accelerate the particle in a straight line. The magnetic force is perpendicular to both the particle's velocity and the magnetic field, and it causes the particle to move along a curved trajectory, often circular or helical in form, depending on the directions of the fields. Variations on the force law describe the magnetic force on a current-carrying wire (sometimes called Laplace force), and the electromotive force in a wire loop moving through a magnetic field (an aspect of Faraday's law of induction). Historians suggest that the law is implicit in a paper by James Clerk Maxwell, published in 1865. Hendrik Lorentz arrived at a complete derivation in 1895, identifying the contribution of the electric force a few years after Oliver Heaviside correctly identified the contribution of the magnetic force. == Definition == === Charged particle === The force F acting on a particle of electric charge q with instantaneous velocity v, due to an external electric field E and magnetic field B, is given by (SI definition of quantities): F = q ( E + v × B ) {\displaystyle \mathbf {F} =q\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right)} where × is the vector cross product (all boldface quantities are vectors). In terms of Cartesian components, we have: F x = q ( E x + v y B z − v z B y ) , F y = q ( E y + v z B x − v x B z ) , F z = q ( E z + v x B y − v y B x ) .
{\displaystyle {\begin{aligned}F_{x}&=q\left(E_{x}+v_{y}B_{z}-v_{z}B_{y}\right),\\[0.5ex]F_{y}&=q\left(E_{y}+v_{z}B_{x}-v_{x}B_{z}\right),\\[0.5ex]F_{z}&=q\left(E_{z}+v_{x}B_{y}-v_{y}B_{x}\right).\end{aligned}}} In general, the electric and magnetic fields are functions of the position and time. Therefore, explicitly, the Lorentz force can be written as: F ( r ( t ) , r ˙ ( t ) , t , q ) = q [ E ( r , t ) + r ˙ ( t ) × B ( r , t ) ] {\displaystyle \mathbf {F} \left(\mathbf {r} (t),{\dot {\mathbf {r} }}(t),t,q\right)=q\left[\mathbf {E} (\mathbf {r} ,t)+{\dot {\mathbf {r} }}(t)\times \mathbf {B} (\mathbf {r} ,t)\right]} in which r is the position vector of the charged particle, t is time, and the overdot is a time derivative. A positively charged particle will be accelerated in the same linear orientation as the E field, but will curve perpendicularly to both the instantaneous velocity vector v and the B field according to the right-hand rule (in detail, if the fingers of the right hand are extended to point in the direction of v and are then curled to point in the direction of B, then the extended thumb will point in the direction of F). The term qE is called the electric force, while the term q(v × B) is called the magnetic force. According to some definitions, the term "Lorentz force" refers specifically to the formula for the magnetic force, with the total electromagnetic force (including the electric force) given some other (nonstandard) name. This article will not follow this nomenclature: in what follows, the term Lorentz force will refer to the expression for the total force. The magnetic force component of the Lorentz force manifests itself as the force that acts on a current-carrying wire in a magnetic field. In that context, it is also called the Laplace force. 
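The vector and componentwise forms above agree, as a short numerical sketch shows; the charge, field, and velocity values below are arbitrary illustrative numbers in SI units.

```python
import numpy as np

# Lorentz force F = q (E + v x B), vector form vs. Cartesian components.
q = 1.6e-19                        # charge, C (magnitude of electron charge)
E = np.array([0.0, 0.0, 1.0e3])    # electric field, V/m
B = np.array([0.0, 0.5, 0.0])      # magnetic field, T
v = np.array([2.0e5, 0.0, 0.0])    # particle velocity, m/s

F = q * (E + np.cross(v, B))       # vector form

# Componentwise check against F_z = q (E_z + v_x B_y - v_y B_x)
F_z = q * (E[2] + v[0] * B[1] - v[1] * B[0])
```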
The Lorentz force is a force exerted by the electromagnetic field on the charged particle, that is, it is the rate at which linear momentum is transferred from the electromagnetic field to the particle. Associated with it is the power, which is the rate at which energy is transferred from the electromagnetic field to the particle. That power is v ⋅ F = q v ⋅ E . {\displaystyle \mathbf {v} \cdot \mathbf {F} =q\,\mathbf {v} \cdot \mathbf {E} .} Notice that the magnetic field does not contribute to the power because the magnetic force is always perpendicular to the velocity of the particle and does no work. === Continuous charge distribution === For a continuous charge distribution in motion, the Lorentz force equation becomes: d F = d q ( E + v × B ) {\displaystyle \mathrm {d} \mathbf {F} =\mathrm {d} q\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right)} where d F {\displaystyle \mathrm {d} \mathbf {F} } is the force on a small piece of the charge distribution with charge d q {\displaystyle \mathrm {d} q} . If both sides of this equation are divided by the volume of this small piece of the charge distribution d V {\displaystyle \mathrm {d} V} , the result is: f = ρ ( E + v × B ) {\displaystyle \mathbf {f} =\rho \left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right)} where f {\displaystyle \mathbf {f} } is the force density (force per unit volume) and ρ {\displaystyle \rho } is the charge density (charge per unit volume). Next, the current density corresponding to the motion of the charge continuum is J = ρ v {\displaystyle \mathbf {J} =\rho \mathbf {v} } so the continuous analogue to the equation is f = ρ E + J × B {\displaystyle \mathbf {f} =\rho \mathbf {E} +\mathbf {J} \times \mathbf {B} } The total force is the volume integral over the charge distribution: F = ∫ ( ρ E + J × B ) d V .
{\displaystyle \mathbf {F} =\int \left(\rho \mathbf {E} +\mathbf {J} \times \mathbf {B} \right)\mathrm {d} V.} By eliminating ρ {\displaystyle \rho } and J {\displaystyle \mathbf {J} } , using Maxwell's equations, and manipulating using the theorems of vector calculus, this form of the equation can be used to derive the Maxwell stress tensor σ {\displaystyle {\boldsymbol {\sigma }}} ; in turn, this can be combined with the Poynting vector S {\displaystyle \mathbf {S} } to obtain the electromagnetic stress–energy tensor T used in general relativity. In terms of σ {\displaystyle {\boldsymbol {\sigma }}} and S {\displaystyle \mathbf {S} } , another way to write the Lorentz force (per unit volume) is f = ∇ ⋅ σ − 1 c 2 ∂ S ∂ t {\displaystyle \mathbf {f} =\nabla \cdot {\boldsymbol {\sigma }}-{\dfrac {1}{c^{2}}}{\dfrac {\partial \mathbf {S} }{\partial t}}} where ∇ ⋅ {\displaystyle \nabla \cdot } denotes the divergence of the tensor field and c {\displaystyle c} is the speed of light. Rather than the amount of charge and its velocity in electric and magnetic fields, this equation relates the energy flux (flow of energy per unit time per unit area) in the fields to the force exerted on a charge distribution. See Covariant formulation of classical electromagnetism for more details. The density of power associated with the Lorentz force in a material medium is J ⋅ E . {\displaystyle \mathbf {J} \cdot \mathbf {E} .} If we separate the total charge and total current into their free and bound parts, we get that the density of the Lorentz force is f = ( ρ f − ∇ ⋅ P ) E + ( J f + ∇ × M + ∂ P ∂ t ) × B .
{\displaystyle \mathbf {f} =\left(\rho _{f}-\nabla \cdot \mathbf {P} \right)\mathbf {E} +\left(\mathbf {J} _{f}+\nabla \times \mathbf {M} +{\frac {\partial \mathbf {P} }{\partial t}}\right)\times \mathbf {B} .} where: ρ f {\displaystyle \rho _{f}} is the density of free charge; P {\displaystyle \mathbf {P} } is the polarization density; J f {\displaystyle \mathbf {J} _{f}} is the density of free current; and M {\displaystyle \mathbf {M} } is the magnetization density. In this way, the Lorentz force can explain the torque applied to a permanent magnet by the magnetic field. The density of the associated power is ( J f + ∇ × M + ∂ P ∂ t ) ⋅ E . {\displaystyle \left(\mathbf {J} _{f}+\nabla \times \mathbf {M} +{\frac {\partial \mathbf {P} }{\partial t}}\right)\cdot \mathbf {E} .} === Formulation in the Gaussian system === The above-mentioned formulae use the conventions for the definition of the electric and magnetic field used with the SI, which is the most common. However, other conventions with the same physics (i.e. forces on e.g. an electron) are possible and used. In the conventions used with the older CGS-Gaussian units, which are somewhat more common among some theoretical physicists as well as condensed matter experimentalists, one has instead F = q G ( E G + v c × B G ) , {\displaystyle \mathbf {F} =q_{\mathrm {G} }\left(\mathbf {E} _{\mathrm {G} }+{\frac {\mathbf {v} }{c}}\times \mathbf {B} _{\mathrm {G} }\right),} where c is the speed of light. Although this equation looks slightly different, it is equivalent, since one has the following relations: q G = q S I 4 π ε 0 , E G = 4 π ε 0 E S I , B G = 4 π / μ 0 B S I , c = 1 ε 0 μ 0 . 
{\displaystyle q_{\mathrm {G} }={\frac {q_{\mathrm {SI} }}{\sqrt {4\pi \varepsilon _{0}}}},\quad \mathbf {E} _{\mathrm {G} }={\sqrt {4\pi \varepsilon _{0}}}\,\mathbf {E} _{\mathrm {SI} },\quad \mathbf {B} _{\mathrm {G} }={\sqrt {4\pi /\mu _{0}}}\,{\mathbf {B} _{\mathrm {SI} }},\quad c={\frac {1}{\sqrt {\varepsilon _{0}\mu _{0}}}}.} where ε0 is the vacuum permittivity and μ0 the vacuum permeability. In practice, the subscripts "G" and "SI" are omitted, and the convention (and units) in use must be determined from context. == History == Early attempts to quantitatively describe the electromagnetic force were made in the mid-18th century. It was proposed that the force on magnetic poles (by Johann Tobias Mayer and others in 1760) and on electrically charged objects (by Henry Cavendish in 1762) obeyed an inverse-square law. However, in both cases the experimental proof was neither complete nor conclusive. It was not until 1784 that Charles-Augustin de Coulomb, using a torsion balance, was able to definitively show through experiment that this was true. Soon after the discovery in 1820 by Hans Christian Ørsted that a magnetic needle is acted on by a voltaic current, André-Marie Ampère that same year was able to devise through experimentation the formula for the angular dependence of the force between two current elements. In all these descriptions, the force was always described in terms of the properties of the matter involved and the distances between two masses or charges rather than in terms of electric and magnetic fields. The modern concept of electric and magnetic fields first arose in the theories of Michael Faraday, particularly his idea of lines of force, later to be given full mathematical description by Lord Kelvin and James Clerk Maxwell. 
From a modern perspective it is possible to identify in Maxwell's 1865 formulation of his field equations a form of the Lorentz force equation in relation to electric currents, although in the time of Maxwell it was not evident how his equations related to the forces on moving charged objects. J. J. Thomson was the first to attempt to derive from Maxwell's field equations the electromagnetic forces on a moving charged object in terms of the object's properties and external fields. Interested in determining the electromagnetic behavior of the charged particles in cathode rays, Thomson published a paper in 1881 wherein he gave the force on the particles due to an external magnetic field as F = q 2 v × B . {\displaystyle \mathbf {F} ={\frac {q}{2}}\mathbf {v} \times \mathbf {B} .} Thomson derived the correct basic form of the formula, but, because of some miscalculations and an incomplete description of the displacement current, included an incorrect scale-factor of a half in front of the formula. Oliver Heaviside invented the modern vector notation and applied it to Maxwell's field equations; he also (in 1885 and 1889) fixed the mistakes of Thomson's derivation and arrived at the correct form of the magnetic force on a moving charged object. Finally, in 1895, Hendrik Lorentz derived the modern form of the formula for the electromagnetic force, which includes the contributions to the total force from both the electric and the magnetic fields. Lorentz began by abandoning the Maxwellian descriptions of the ether and conduction. Instead, Lorentz made a distinction between matter and the luminiferous aether and sought to apply the Maxwell equations at a microscopic scale. Using Heaviside's version of the Maxwell equations for a stationary ether and applying Lagrangian mechanics (see below), Lorentz arrived at the correct and complete form of the force law that now bears his name. 
== Lorentz force law as the definition of E and B == In many textbook treatments of classical electromagnetism, the Lorentz force law is used as the definition of the electric and magnetic fields E and B. To be specific, the Lorentz force is understood to be the following empirical statement: The electromagnetic force F on a test charge at a given point and time is a certain function of its charge q and velocity v, which can be parameterized by exactly two vectors E and B, in the functional form: F = q ( E + v × B ) {\displaystyle \mathbf {F} =q(\mathbf {E} +\mathbf {v} \times \mathbf {B} )} This is valid even for particles approaching the speed of light (that is, magnitude of v, |v| ≈ c). So the two vector fields E and B are thereby defined throughout space and time, and these are called the "electric field" and "magnetic field". The fields are defined everywhere in space and time by the force a test charge would receive, regardless of whether a charge is actually present to experience that force. == Trajectories of particles due to the Lorentz force == In many cases of practical interest, the motion in a magnetic field of an electrically charged particle (such as an electron or ion in a plasma) can be treated as the superposition of a relatively fast circular motion around a point called the guiding center and a relatively slow drift of this point. The drift speeds may differ for various species depending on their charge states, masses, or temperatures, possibly resulting in electric currents or chemical separation. == Significance of the Lorentz force == While the modern Maxwell's equations describe how electrically charged particles and currents or moving charged particles give rise to electric and magnetic fields, the Lorentz force law completes that picture by describing the force acting on a moving point charge q in the presence of electromagnetic fields. 
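The guiding-centre picture above can be illustrated numerically. The sketch below integrates m dv/dt = q(E + v × B) with the standard Boris scheme in crossed fields and checks that the time-averaged motion reproduces the E × B drift, E × B/|B|². All field values, the charge, mass and step sizes are arbitrary illustrative choices, not taken from the article.

```python
import numpy as np

# Particle in crossed fields E = (Ex, 0, 0), B = (0, 0, Bz): fast gyration
# around the guiding center plus a slow drift v_drift = E x B / |B|^2.
q, m = -1.0, 1.0                 # hypothetical charge and mass (natural units)
E = np.array([0.1, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
dt, steps = 0.05, 4000

x = np.zeros(3)
v = np.array([0.0, 0.2, 0.0])
for _ in range(steps):
    # Boris push: half electric kick, magnetic rotation, half electric kick
    v_minus = v + (q * E / m) * (dt / 2)
    t = (q * B / m) * (dt / 2)
    s = 2 * t / (1 + t @ t)
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v = v_plus + (q * E / m) * (dt / 2)
    x = x + v * dt

drift = np.cross(E, B) / (B @ B)  # expected guiding-center drift velocity
v_avg = x / (steps * dt)          # gyration averages out over many periods
assert np.allclose(v_avg, drift, atol=0.05)
```

Note that the drift is independent of the sign of the charge, which is why crossed fields produce a net current only when species have different collisionalities or the fields are inhomogeneous.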
The Lorentz force law describes the effect of E and B upon a point charge, but such electromagnetic forces are not the entire picture. Charged particles may also be subject to other forces, notably gravity and nuclear forces. Thus, Maxwell's equations do not stand separate from other physical laws, but are coupled to them via the charge and current densities. The response of a point charge to the Lorentz law is one aspect; the generation of E and B by currents and charges is another. In real materials the Lorentz force is inadequate to describe the collective behavior of charged particles, both in principle and as a matter of computation. The charged particles in a material medium not only respond to the E and B fields but also generate these fields. Complex transport equations must be solved to determine the time and spatial response of charges, for example, the Boltzmann equation or the Fokker–Planck equation or the Navier–Stokes equations. For example, see magnetohydrodynamics, fluid dynamics, electrohydrodynamics, superconductivity, stellar evolution. An entire physical apparatus for dealing with these matters has been developed. See for example, Green–Kubo relations and Green's function (many-body theory). == Force on a current-carrying wire == When a wire carrying an electric current is placed in an external magnetic field, each of the moving charges, which comprise the current, experiences the Lorentz force, and together they can create a macroscopic force on the wire (sometimes called the Laplace force). By combining the Lorentz force law above with the definition of electric current, the following equation results, in the case of a straight stationary wire in a homogeneous field: F = I ℓ × B , {\displaystyle \mathbf {F} =I{\boldsymbol {\ell }}\times \mathbf {B} ,} where ℓ is a vector whose magnitude is the length of the wire, and whose direction is along the wire, aligned with the direction of the conventional current I. 
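The straight-wire formula F = Iℓ × B can be checked directly with a cross product; the current, length and field values below are illustrative numbers, not taken from the article.

```python
import numpy as np

# A 2 m wire along x carrying 3 A in a uniform 0.5 T field along z.
I = 3.0                        # current, A
l = np.array([2.0, 0.0, 0.0])  # wire vector, m (along conventional current)
B = np.array([0.0, 0.0, 0.5])  # uniform field, T

F = I * np.cross(l, B)
# x-hat cross z-hat = -y-hat, so |F| = 3 * 2 * 0.5 = 3 N, directed along -y
assert np.allclose(F, [0.0, -3.0, 0.0])
```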
If the wire is not straight, the force on it can be computed by applying this formula to each infinitesimal segment of wire d ℓ {\displaystyle \mathrm {d} {\boldsymbol {\ell }}} , then adding up all these forces by integration. This results in the same formal expression, but ℓ should now be understood as the vector connecting the end points of the curved wire with direction from starting to end point of conventional current. Usually, there will also be a net torque. If, in addition, the magnetic field is inhomogeneous, the net force on a stationary rigid wire carrying a steady current I is given by integration along the wire, F = I ∫ ( d ℓ × B ) . {\displaystyle \mathbf {F} =I\int (\mathrm {d} {\boldsymbol {\ell }}\times \mathbf {B} ).} One application of this is Ampère's force law, which describes how two current-carrying wires can attract or repel each other, since each experiences a Lorentz force from the other's generated magnetic field. Another application is an induction motor. The stator winding AC current generates a moving magnetic field which induces a current in the rotor. The subsequent Lorentz force F {\displaystyle \mathbf {F} } acting on the rotor creates a torque, making the motor spin. Hence, though the Lorentz force law does not apply when the magnetic field B {\displaystyle \mathbf {B} } is generated by the current I {\displaystyle I} , it does apply when the current I {\displaystyle I} is induced by the movement of magnetic field B {\displaystyle \mathbf {B} } . == Electromotive force == The magnetic force (qv × B) component of the Lorentz force is responsible for motional electromotive force (or motional EMF), the phenomenon underlying many electrical generators. When a conductor is moved through a magnetic field, the magnetic field exerts opposite forces on electrons and nuclei in the wire, and this creates the EMF. The term "motional EMF" is applied to this phenomenon, since the EMF is due to the motion of the wire. 
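The motional EMF just described can be made concrete: for a straight rod in a uniform field, the line integral of the force per unit charge, v × B, along the rod reduces to the textbook B L v. The geometry and numbers below are illustrative assumptions.

```python
import numpy as np

# A rod of length L along y moves with velocity v along x through a uniform
# field B along z.  The motional EMF is the line integral of (v x B) along
# the rod; with this orientation v x B points along -y, so the EMF is -B*L*v.
B = np.array([0.0, 0.0, 0.4])    # T
v = np.array([2.5, 0.0, 0.0])    # m/s
L = 0.8                          # rod length, m

rod = np.array([0.0, L, 0.0])    # vector from one end of the rod to the other
emf = np.cross(v, B) @ rod       # integral of a constant field is a dot product

assert np.isclose(emf, -B[2] * L * v[0])   # magnitude B*L*v; sign from orientation
```

Reversing the rod vector (or the field) flips the sign, which is exactly the orientation bookkeeping that Lenz's law encodes.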
In other electrical generators, the magnets move, while the conductors do not. In this case, the EMF is due to the electric force (qE) term in the Lorentz force equation. The electric field in question is created by the changing magnetic field, resulting in an induced EMF called the transformer EMF, as described by the Maxwell–Faraday equation (one of the four modern Maxwell's equations). Both of these EMFs, despite their apparently distinct origins, are described by the same equation, namely, the EMF is the rate of change of magnetic flux through the wire. (This is Faraday's law of induction; see below.) Einstein's special theory of relativity was partially motivated by the desire to better understand this link between the two effects. In fact, the electric and magnetic fields are different facets of the same electromagnetic field, and in moving from one inertial frame to another, the solenoidal vector field portion of the E-field can change in whole or in part to a B-field or vice versa. == Lorentz force and Faraday's law of induction == Given a loop of wire in a magnetic field, Faraday's law of induction states the induced electromotive force (EMF) in the wire is: E = − d Φ B d t {\displaystyle {\mathcal {E}}=-{\frac {\mathrm {d} \Phi _{B}}{\mathrm {d} t}}} where Φ B = ∫ Σ ( t ) B ( r , t ) ⋅ d A , {\displaystyle \Phi _{B}=\int _{\Sigma (t)}\mathbf {B} (\mathbf {r} ,t)\cdot \mathrm {d} \mathbf {A} ,} is the magnetic flux through the loop, B is the magnetic field, Σ(t) is a surface bounded by the closed contour ∂Σ(t), at time t, dA is an infinitesimal vector area element of Σ(t) (magnitude is the area of an infinitesimal patch of surface, direction is orthogonal to that surface patch). The sign of the EMF is determined by Lenz's law. Note that this is valid not only for a stationary wire but also for a moving wire. 
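Faraday's law can be spot-checked numerically for a simple model loop. The shrinking circular loop below, with its particular radius law and field value, is an illustrative assumption, not an example from the article: the flux is Φ(t) = B π r(t)² and the EMF is its negative time derivative.

```python
import numpy as np

Bz = 0.3  # uniform field along z, T (illustrative)

def flux(t):
    """Flux through a circular loop of shrinking radius r(t) = 1 - 0.1 t."""
    r = 1.0 - 0.1 * t
    return Bz * np.pi * r**2

# EMF = -dPhi/dt, evaluated by a central finite difference at t = 2 s
t, h = 2.0, 1e-6
emf = -(flux(t + h) - flux(t - h)) / (2 * h)

# Analytically, -dPhi/dt = 2 * pi * Bz * 0.1 * r(t)
expected = 2 * np.pi * Bz * 0.1 * (1.0 - 0.1 * t)
assert np.isclose(emf, expected, rtol=1e-4)
```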
From Faraday's law of induction (which is valid for a moving wire, for instance in a motor) and the Maxwell equations, the Lorentz force can be deduced. The reverse is also true: the Lorentz force and the Maxwell equations can be used to derive Faraday's law. Let ∂Σ(t) be the moving wire, moving together without rotation and with constant velocity v, and Σ(t) be the internal surface of the wire. The EMF around the closed path ∂Σ(t) is given by: E = ∮ ∂ Σ ( t ) F q ⋅ d ℓ {\displaystyle {\mathcal {E}}=\oint _{\partial \Sigma (t)}{\frac {\mathbf {F} }{q}}\cdot \mathrm {d} {\boldsymbol {\ell }}} where E ′ ( r , t ) = F / q ( r , t ) {\displaystyle \mathbf {E} '(\mathbf {r} ,t)=\mathbf {F} /q(\mathbf {r} ,t)} is the effective electric field (the force per unit charge) and dℓ is an infinitesimal vector element of the contour ∂Σ(t). Equating both integrals leads to the field theory form of Faraday's law, given by: E = ∮ ∂ Σ ( t ) E ′ ( r , t ) ⋅ d ℓ = − d d t ∫ Σ ( t ) B ( r , t ) ⋅ d A . {\displaystyle {\mathcal {E}}=\oint _{\partial \Sigma (t)}\mathbf {E} '(\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}=-{\frac {\mathrm {d} }{\mathrm {d} t}}\int _{\Sigma (t)}\mathbf {B} (\mathbf {r} ,t)\cdot \mathrm {d} \mathbf {A} .} This result can be compared with the version of Faraday's law of induction that appears in the modern Maxwell's equations, called the (integral form of) Maxwell–Faraday equation: ∮ ∂ Σ ( t ) E ( r , t ) ⋅ d ℓ = − ∫ Σ ( t ) ∂ B ( r , t ) ∂ t ⋅ d A . {\displaystyle \oint _{\partial \Sigma (t)}\mathbf {E} (\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}=-\int _{\Sigma (t)}{\frac {\partial \mathbf {B} (\mathbf {r} ,t)}{\partial t}}\cdot \mathrm {d} \mathbf {A} .} The two equations are equivalent if the wire is not moving. 
In case the circuit is moving with a velocity v {\displaystyle \mathbf {v} } in some direction, then, using the Leibniz integral rule and that div B = 0, gives ∮ ∂ Σ ( t ) E ′ ( r , t ) ⋅ d ℓ = − ∫ Σ ( t ) ∂ B ( r , t ) ∂ t ⋅ d A + ∮ ∂ Σ ( t ) ( v × B ( r , t ) ) ⋅ d ℓ . {\displaystyle \oint _{\partial \Sigma (t)}\mathbf {E} '(\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}=-\int _{\Sigma (t)}{\frac {\partial \mathbf {B} (\mathbf {r} ,t)}{\partial t}}\cdot \mathrm {d} \mathbf {A} +\oint _{\partial \Sigma (t)}\left(\mathbf {v} \times \mathbf {B} (\mathbf {r} ,t)\right)\cdot \mathrm {d} {\boldsymbol {\ell }}.} Substituting the Maxwell–Faraday equation then gives ∮ ∂ Σ ( t ) E ′ ( r , t ) ⋅ d ℓ = ∮ ∂ Σ ( t ) E ( r , t ) ⋅ d ℓ + ∮ ∂ Σ ( t ) ( v × B ( r , t ) ) ⋅ d ℓ {\displaystyle \oint _{\partial \Sigma (t)}\mathbf {E} '(\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}=\oint _{\partial \Sigma (t)}\mathbf {E} (\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}+\oint _{\partial \Sigma (t)}\left(\mathbf {v} \times \mathbf {B} (\mathbf {r} ,t)\right)\cdot \mathrm {d} {\boldsymbol {\ell }}} Since this is valid for any wire position, it implies that F = q E ( r , t ) + q v × B ( r , t ) . {\displaystyle \mathbf {F} =q\,\mathbf {E} (\mathbf {r} ,\,t)+q\,\mathbf {v} \times \mathbf {B} (\mathbf {r} ,\,t).} Faraday's law of induction holds whether the loop of wire is rigid and stationary, or in motion or in process of deformation, and it holds whether the magnetic field is constant in time or changing. However, there are cases where Faraday's law is either inadequate or difficult to use, and application of the underlying Lorentz force law is necessary. See inapplicability of Faraday's law. If the magnetic field is fixed in time and the conducting loop moves through the field, the magnetic flux ΦB linking the loop can change in several ways. For example, if the B-field varies with position, and the loop moves to a location with different B-field, ΦB will change. 
Alternatively, if the loop changes orientation with respect to the B-field, the B ⋅ dA differential element will change because of the different angle between B and dA, also changing ΦB. As a third example, if a portion of the circuit is swept through a uniform, time-independent B-field, and another portion of the circuit is held stationary, the flux linking the entire closed circuit can change due to the shift in relative position of the circuit's component parts with time (surface ∂Σ(t) time-dependent). In all three cases, Faraday's law of induction then predicts the EMF generated by the change in ΦB. Note that the Maxwell–Faraday equation implies that the electric field E is non-conservative when the magnetic field B varies in time: it is not expressible as the gradient of a scalar field, and it is not subject to the gradient theorem, since its curl is not zero. == Lorentz force in terms of potentials == The E and B fields can be replaced by the magnetic vector potential A and (scalar) electrostatic potential ϕ by E = − ∇ ϕ − ∂ A ∂ t B = ∇ × A {\displaystyle {\begin{aligned}\mathbf {E} &=-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}}\\[1ex]\mathbf {B} &=\nabla \times \mathbf {A} \end{aligned}}} where ∇ is the gradient, ∇⋅ is the divergence, and ∇× is the curl. The force becomes F = q [ − ∇ ϕ − ∂ A ∂ t + v × ( ∇ × A ) ] . {\displaystyle \mathbf {F} =q\left[-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}}+\mathbf {v} \times (\nabla \times \mathbf {A} )\right].} Using an identity for the triple product this can be rewritten as F = q [ − ∇ ϕ − ∂ A ∂ t + ∇ ( v ⋅ A ) − ( v ⋅ ∇ ) A ] . 
{\displaystyle \mathbf {F} =q\left[-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}}+\nabla \left(\mathbf {v} \cdot \mathbf {A} \right)-\left(\mathbf {v} \cdot \nabla \right)\mathbf {A} \right].} (Notice that the coordinates and the velocity components should be treated as independent variables, so the del operator acts only on A {\displaystyle \mathbf {A} } , not on v {\displaystyle \mathbf {v} } ; thus, there is no need of using Feynman's subscript notation in the equation above.) Using the chain rule, the convective derivative of A {\displaystyle \mathbf {A} } is: d A d t = ∂ A ∂ t + ( v ⋅ ∇ ) A {\displaystyle {\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}={\frac {\partial \mathbf {A} }{\partial t}}+(\mathbf {v} \cdot \nabla )\mathbf {A} } so that the above expression becomes: F = q [ − ∇ ( ϕ − v ⋅ A ) − d A d t ] . {\displaystyle \mathbf {F} =q\left[-\nabla (\phi -\mathbf {v} \cdot \mathbf {A} )-{\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}\right].} With v = ẋ and d d t [ ∂ ∂ x ˙ ( ϕ − x ˙ ⋅ A ) ] = − d A d t , {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left[{\frac {\partial }{\partial {\dot {\mathbf {x} }}}}\left(\phi -{\dot {\mathbf {x} }}\cdot \mathbf {A} \right)\right]=-{\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}},} we can put the equation into the convenient Euler–Lagrange form where ∇ x = x ^ ∂ ∂ x + y ^ ∂ ∂ y + z ^ ∂ ∂ z {\displaystyle \nabla _{\mathbf {x} }={\hat {x}}{\dfrac {\partial }{\partial x}}+{\hat {y}}{\dfrac {\partial }{\partial y}}+{\hat {z}}{\dfrac {\partial }{\partial z}}} and ∇ x ˙ = x ^ ∂ ∂ x ˙ + y ^ ∂ ∂ y ˙ + z ^ ∂ ∂ z ˙ . 
{\displaystyle \nabla _{\dot {\mathbf {x} }}={\hat {x}}{\dfrac {\partial }{\partial {\dot {x}}}}+{\hat {y}}{\dfrac {\partial }{\partial {\dot {y}}}}+{\hat {z}}{\dfrac {\partial }{\partial {\dot {z}}}}.} == Lorentz force and analytical mechanics == The Lagrangian for a charged particle of mass m and charge q in an electromagnetic field equivalently describes the dynamics of the particle in terms of its energy, rather than the force exerted on it. The classical expression is given by: L = m 2 r ˙ ⋅ r ˙ + q A ⋅ r ˙ − q ϕ {\displaystyle L={\frac {m}{2}}\mathbf {\dot {r}} \cdot \mathbf {\dot {r}} +q\mathbf {A} \cdot \mathbf {\dot {r}} -q\phi } where A and ϕ are the potential fields as above. The quantity V = q ( ϕ − A ⋅ r ˙ ) {\displaystyle V=q(\phi -\mathbf {A} \cdot \mathbf {\dot {r}} )} can be identified as a generalized, velocity-dependent potential energy and, accordingly, F {\displaystyle \mathbf {F} } as a non-conservative force. Using the Lagrangian, the equation for the Lorentz force given above can be obtained again. The relativistic Lagrangian is L = − m c 2 1 − ( r ˙ c ) 2 + q A ( r ) ⋅ r ˙ − q ϕ ( r ) {\displaystyle L=-mc^{2}{\sqrt {1-\left({\frac {\dot {\mathbf {r} }}{c}}\right)^{2}}}+q\mathbf {A} (\mathbf {r} )\cdot {\dot {\mathbf {r} }}-q\phi (\mathbf {r} )} The action is the relativistic arclength of the path of the particle in spacetime, minus the potential energy contribution, plus an extra contribution which quantum mechanically is an extra phase a charged particle gets when it is moving along a vector potential. 
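The triple-product identity used in the potential formulation above, v × (∇ × A) = ∇(v ⋅ A) − (v ⋅ ∇)A with v treated as position-independent, can be spot-checked numerically with finite differences. The example field A and the point of evaluation below are arbitrary choices for illustration.

```python
import numpy as np

def A(p):
    """An arbitrary smooth test vector potential A(x, y, z)."""
    x, y, z = p
    return np.array([y * z, x * x, np.sin(x) + y])

def jacobian(F, p, h=1e-6):
    """J[i, j] = dF_i/dx_j by central differences."""
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3)
        e[j] = h
        J[:, j] = (F(p + e) - F(p - e)) / (2 * h)
    return J

v = np.array([1.0, 2.0, -0.5])   # treated as constant, as the derivation requires
p = np.array([0.3, -0.7, 1.2])

J = jacobian(A, p)
curl_A = np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])
lhs = np.cross(v, curl_A)                # v x (curl A)
rhs = J.T @ v - J @ v                    # grad(v . A) - (v . grad) A
assert np.allclose(lhs, rhs, atol=1e-6)
```

Here `J.T @ v` is ∇(v ⋅ A), since its j-th component is Σᵢ vᵢ ∂Aᵢ/∂xⱼ, and `J @ v` is (v ⋅ ∇)A.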
== Relativistic form of the Lorentz force == === Covariant form of the Lorentz force === ==== Field tensor ==== Using the metric signature (1, −1, −1, −1), the Lorentz force for a charge q can be written in covariant form: where pα is the four-momentum, defined as p α = ( p 0 , p 1 , p 2 , p 3 ) = ( γ m c , p x , p y , p z ) , {\displaystyle p^{\alpha }=\left(p_{0},p_{1},p_{2},p_{3}\right)=\left(\gamma mc,p_{x},p_{y},p_{z}\right),} τ the proper time of the particle, Fαβ the contravariant electromagnetic tensor F α β = ( 0 − E x / c − E y / c − E z / c E x / c 0 − B z B y E y / c B z 0 − B x E z / c − B y B x 0 ) {\displaystyle F^{\alpha \beta }={\begin{pmatrix}0&-E_{x}/c&-E_{y}/c&-E_{z}/c\\E_{x}/c&0&-B_{z}&B_{y}\\E_{y}/c&B_{z}&0&-B_{x}\\E_{z}/c&-B_{y}&B_{x}&0\end{pmatrix}}} and U is the covariant 4-velocity of the particle, defined as: U β = ( U 0 , U 1 , U 2 , U 3 ) = γ ( c , − v x , − v y , − v z ) , {\displaystyle U_{\beta }=\left(U_{0},U_{1},U_{2},U_{3}\right)=\gamma \left(c,-v_{x},-v_{y},-v_{z}\right),} in which γ ( v ) = 1 1 − v 2 c 2 = 1 1 − v x 2 + v y 2 + v z 2 c 2 {\displaystyle \gamma (v)={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}={\frac {1}{\sqrt {1-{\frac {v_{x}^{2}+v_{y}^{2}+v_{z}^{2}}{c^{2}}}}}}} is the Lorentz factor. The fields are transformed to a frame moving with constant relative velocity by: F ′ μ ν = Λ μ α Λ ν β F α β , {\displaystyle F'^{\mu \nu }={\Lambda ^{\mu }}_{\alpha }{\Lambda ^{\nu }}_{\beta }F^{\alpha \beta }\,,} where Λμα is the Lorentz transformation tensor. ==== Translation to vector notation ==== The α = 1 component (x-component) of the force is d p 1 d τ = q U β F 1 β = q ( U 0 F 10 + U 1 F 11 + U 2 F 12 + U 3 F 13 ) . {\displaystyle {\frac {\mathrm {d} p^{1}}{\mathrm {d} \tau }}=qU_{\beta }F^{1\beta }=q\left(U_{0}F^{10}+U_{1}F^{11}+U_{2}F^{12}+U_{3}F^{13}\right).} Substituting the components of the covariant electromagnetic tensor F yields d p 1 d τ = q [ U 0 ( E x c ) + U 2 ( − B z ) + U 3 ( B y ) ] . 
{\displaystyle {\frac {\mathrm {d} p^{1}}{\mathrm {d} \tau }}=q\left[U_{0}\left({\frac {E_{x}}{c}}\right)+U_{2}(-B_{z})+U_{3}(B_{y})\right].} Using the components of covariant four-velocity yields d p 1 d τ = q γ [ c ( E x c ) + ( − v y ) ( − B z ) + ( − v z ) ( B y ) ] = q γ ( E x + v y B z − v z B y ) = q γ [ E x + ( v × B ) x ] . {\displaystyle {\frac {\mathrm {d} p^{1}}{\mathrm {d} \tau }}=q\gamma \left[c\left({\frac {E_{x}}{c}}\right)+(-v_{y})(-B_{z})+(-v_{z})(B_{y})\right]=q\gamma \left(E_{x}+v_{y}B_{z}-v_{z}B_{y}\right)=q\gamma \left[E_{x}+\left(\mathbf {v} \times \mathbf {B} \right)_{x}\right]\,.} The calculation for α = 2, 3 (force components in the y and z directions) yields similar results, so collecting the three equations into one: d p d τ = q γ ( E + v × B ) , {\displaystyle {\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} \tau }}=q\gamma \left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right),} and since differentials in coordinate time dt and proper time dτ are related by the Lorentz factor, d t = γ ( v ) d τ , {\displaystyle dt=\gamma (v)\,d\tau ,} so we arrive at d p d t = q ( E + v × B ) . {\displaystyle {\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}}=q\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right).} This is precisely the Lorentz force law, however, it is important to note that p is the relativistic expression, p = γ ( v ) m 0 v . {\displaystyle \mathbf {p} =\gamma (v)m_{0}\mathbf {v} \,.} === Lorentz force in spacetime algebra (STA) === The electric and magnetic fields are dependent on the velocity of an observer, so the relativistic form of the Lorentz force law can best be exhibited starting from a coordinate-independent expression for the electromagnetic and magnetic fields F {\displaystyle {\mathcal {F}}} , and an arbitrary time-direction, γ 0 {\displaystyle \gamma _{0}} . 
This can be settled through spacetime algebra (or the geometric algebra of spacetime), a type of Clifford algebra defined on a pseudo-Euclidean space, as E = ( F ⋅ γ 0 ) γ 0 {\displaystyle \mathbf {E} =\left({\mathcal {F}}\cdot \gamma _{0}\right)\gamma _{0}} and i B = ( F ∧ γ 0 ) γ 0 {\displaystyle i\mathbf {B} =\left({\mathcal {F}}\wedge \gamma _{0}\right)\gamma _{0}} F {\displaystyle {\mathcal {F}}} is a spacetime bivector (an oriented plane segment, just like a vector is an oriented line segment), which has six degrees of freedom corresponding to boosts (rotations in spacetime planes) and rotations (rotations in space-space planes). The dot product with the vector γ 0 {\displaystyle \gamma _{0}} pulls a vector (in the space algebra) from the translational part, while the wedge-product creates a trivector (in the space algebra) that is dual to a vector, namely the usual magnetic field vector. The relativistic velocity is given by the (time-like) changes in a time-position vector v = x ˙ {\displaystyle v={\dot {x}}} , where v 2 = 1 , {\displaystyle v^{2}=1,} (which shows our choice for the metric) and the velocity is v = c v ∧ γ 0 / ( v ⋅ γ 0 ) . {\displaystyle \mathbf {v} =cv\wedge \gamma _{0}/(v\cdot \gamma _{0}).} The proper form of the Lorentz force law ('invariant' is an inadequate term because no transformation has been defined) is simply Note that the order is important because between a bivector and a vector the dot product is anti-symmetric. Upon a spacetime split like one can obtain the velocity, and fields as above yielding the usual expression. 
=== Lorentz force in general relativity === In the general theory of relativity the equation of motion for a particle with mass m {\displaystyle m} and charge e {\displaystyle e} , moving in a space with metric tensor g a b {\displaystyle g_{ab}} and electromagnetic field F a b {\displaystyle F_{ab}} , is given as m d u c d s − m 1 2 g a b , c u a u b = e F c b u b , {\displaystyle m{\frac {du_{c}}{ds}}-m{\frac {1}{2}}g_{ab,c}u^{a}u^{b}=eF_{cb}u^{b},} where u a = d x a / d s {\displaystyle u^{a}=dx^{a}/ds} ( d x a {\displaystyle dx^{a}} is taken along the trajectory), g a b , c = ∂ g a b / ∂ x c {\displaystyle g_{ab,c}=\partial g_{ab}/\partial x^{c}} , and d s 2 = g a b d x a d x b {\displaystyle ds^{2}=g_{ab}dx^{a}dx^{b}} . The equation can also be written as m d u c d s − m Γ a b c u a u b = e F c b u b , {\displaystyle m{\frac {du_{c}}{ds}}-m\Gamma _{abc}u^{a}u^{b}=eF_{cb}u^{b},} where Γ a b c {\displaystyle \Gamma _{abc}} is the Christoffel symbol (of the torsion-free metric connection in general relativity), or as m D u c d s = e F c b u b , {\displaystyle m{\frac {Du_{c}}{ds}}=eF_{cb}u^{b},} where D {\displaystyle D} is the covariant differential in general relativity. == Applications == The Lorentz force occurs in many devices, including: Cyclotrons and other circular path particle accelerators Mass spectrometers Velocity filters Magnetrons Lorentz force velocimetry In its manifestation as the Laplace force on an electric current in a conductor, this force occurs in many devices, including: Electric motors Railguns Linear motors Loudspeakers Magnetoplasmadynamic thrusters Electrical generators Homopolar generators Linear alternators == See also == == Notes == === Remarks === === Citations === == References == Darrigol, Olivier (2000). Electrodynamics from Ampère to Einstein. Oxford ; New York: Clarendon Press. ISBN 0-19-850594-9. Feynman, Richard Phillips; Leighton, Robert B.; Sands, Matthew L. (2006). The Feynman lectures on physics. Vol. 2. 
Pearson / Addison-Wesley. ISBN 0-8053-9047-2. Griffiths, David J. (2023). Introduction to Electrodynamics. Cambridge University Press. doi:10.1017/9781009397735. ISBN 978-1-009-39773-5. Jackson, John David (1998). Classical Electrodynamics. New York: John Wiley & Sons. ISBN 978-0-471-30932-1. Purcell, Edward M.; Morin, David J. (2013). Electricity and Magnetism. Cambridge University Press. doi:10.1017/cbo9781139012973. ISBN 978-1-139-01297-3. Sadiku, Matthew N. O. (2018). Elements of electromagnetics (7th ed.). New York/Oxford: Oxford University Press. ISBN 978-0-19-069861-4. Serway, Raymond A.; Jewett, John W. Jr. (2004). Physics for scientists and engineers, with modern physics. Belmont, California: Thomson Brooks/Cole. ISBN 0-534-40846-X. Srednicki, Mark A. (2007). Quantum field theory. Cambridge, England; New York City: Cambridge University Press. ISBN 978-0-521-86449-7. == External links == Lorentz force (demonstration) Interactive Java applet on the magnetic deflection of a particle beam in a homogeneous magnetic field Archived 2011-08-13 at the Wayback Machine by Wolfgang Bauer
Wikipedia/Lorentz_force
In numerical analysis, the ITP method (Interpolate, Truncate and Project method) is the first root-finding algorithm that achieves the superlinear convergence of the secant method while retaining the optimal worst-case performance of the bisection method. It is also the first method with guaranteed average performance strictly better than the bisection method under any continuous distribution. In practice it performs better than traditional interpolation-based and hybrid strategies (Brent's method, Ridders' method, the Illinois algorithm), since it not only converges super-linearly over well-behaved functions but also guarantees fast performance under ill-behaved functions where interpolations fail. The ITP method follows the same structure as standard bracketing strategies, which keep track of upper and lower bounds for the location of the root; but it also keeps track of the region where worst-case performance is kept upper-bounded. As a bracketing strategy, in each iteration the ITP method queries the value of the function at one point and discards the part of the interval between two points where the function value shares the same sign. The queried point is calculated with three steps: it interpolates, finding the regula falsi estimate; it then perturbs/truncates the estimate (similar to Regula falsi § Improvements in regula falsi); and it then projects the perturbed estimate onto an interval in the neighbourhood of the bisection midpoint. The neighbourhood around the bisection point is calculated in each iteration in order to guarantee minmax optimality (Theorem 2.1 of ). 
The method depends on three hyper-parameters κ 1 ∈ ( 0 , ∞ ) , κ 2 ∈ [ 1 , 1 + ϕ ) {\displaystyle \kappa _{1}\in (0,\infty ),\kappa _{2}\in \left[1,1+\phi \right)} and n 0 ∈ [ 0 , ∞ ) {\displaystyle n_{0}\in [0,\infty )} where ϕ {\displaystyle \phi } is the golden ratio 1 2 ( 1 + 5 ) {\displaystyle {\tfrac {1}{2}}(1+{\sqrt {5}})} : the first two control the size of the truncation and the third is a slack variable that controls the size of the interval for the projection step. == Root finding problem == Given a continuous function f {\displaystyle f} defined from [ a , b ] {\displaystyle [a,b]} to R {\displaystyle \mathbb {R} } such that f ( a ) f ( b ) ≤ 0 {\displaystyle f(a)f(b)\leq 0} , where at the cost of one query one can access the value of f ( x ) {\displaystyle f(x)} at any given x {\displaystyle x} , and given a pre-specified target precision ϵ > 0 {\displaystyle \epsilon >0} , a root-finding algorithm is designed to solve the following problem with as few queries as possible: Problem Definition: Find x ^ {\displaystyle {\hat {x}}} such that | x ^ − x ∗ | ≤ ϵ {\displaystyle |{\hat {x}}-x^{*}|\leq \epsilon } , where x ∗ {\displaystyle x^{*}} satisfies f ( x ∗ ) = 0 {\displaystyle f(x^{*})=0} . This problem is very common in numerical analysis, computer science and engineering, and root-finding algorithms are the standard approach to solving it. Often, the root-finding procedure is called by more complex parent algorithms within a larger context, and, for this reason, solving root problems efficiently is of extreme importance, since an inefficient approach might come at a high computational cost when the larger context is taken into account. 
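For comparison with what follows, a minimal bisection baseline for this problem statement can be sketched as below: it returns an estimate within ϵ of a root after exactly ⌈log₂((b − a)/2ϵ)⌉ function queries beyond the endpoints, which is the minmax benchmark the ITP method matches.

```python
import math

def bisect(f, a, b, eps):
    """Bisection baseline: returns x_hat with |x_hat - x*| <= eps for some
    root x* in [a, b], assuming f(a) and f(b) have opposite signs, using
    ceil(log2((b - a) / (2 * eps))) iterations."""
    n = math.ceil(math.log2((b - a) / (2 * eps)))
    ya = f(a)
    for _ in range(n):
        m = (a + b) / 2
        ym = f(m)
        if ya * ym <= 0:      # root bracketed in [a, m]
            b = m
        else:                 # root bracketed in [m, b]
            a, ya = m, ym
    return (a + b) / 2

# Same polynomial as in the worked example later in the article:
x_hat = bisect(lambda x: x**3 - x - 2, 1.0, 2.0, 0.0005)
assert abs(x_hat - 1.5213797068) <= 0.0005
```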
This is what the ITP method attempts to do by simultaneously exploiting interpolation guarantees as well as minmax optimal guarantees of the bisection method that terminates in at most n 1 / 2 ≡ ⌈ log 2 ⁡ ( ( b 0 − a 0 ) / 2 ϵ ) ⌉ {\displaystyle n_{1/2}\equiv \lceil \log _{2}((b_{0}-a_{0})/2\epsilon )\rceil } iterations when initiated on an interval [ a 0 , b 0 ] {\displaystyle [a_{0},b_{0}]} . == The method == Given κ 1 ∈ ( 0 , ∞ ) , κ 2 ∈ [ 1 , 1 + ϕ ) {\displaystyle \kappa _{1}\in (0,\infty ),\kappa _{2}\in \left[1,1+\phi \right)} , n 1 / 2 ≡ ⌈ log 2 ⁡ ( ( b 0 − a 0 ) / 2 ϵ ) ⌉ {\displaystyle n_{1/2}\equiv \lceil \log _{2}((b_{0}-a_{0})/2\epsilon )\rceil } and n 0 ∈ [ 0 , ∞ ) {\displaystyle n_{0}\in [0,\infty )} where ϕ {\displaystyle \phi } is the golden ratio 1 2 ( 1 + 5 ) {\displaystyle {\tfrac {1}{2}}(1+{\sqrt {5}})} , in each iteration j = 0 , 1 , 2 … {\displaystyle j=0,1,2\dots } the ITP method calculates the point x ITP {\displaystyle x_{\text{ITP}}} following three steps: [Interpolation Step] Calculate the bisection and the regula falsi points: x 1 / 2 ≡ a + b 2 {\displaystyle x_{1/2}\equiv {\frac {a+b}{2}}} and x f ≡ b f ( a ) − a f ( b ) f ( a ) − f ( b ) {\displaystyle x_{f}\equiv {\frac {bf(a)-af(b)}{f(a)-f(b)}}} ; [Truncation Step] Perturb the estimator towards the center: x t ≡ x f + σ δ {\displaystyle x_{t}\equiv x_{f}+\sigma \delta } where σ ≡ sign ( x 1 / 2 − x f ) {\displaystyle \sigma \equiv {\text{sign}}(x_{1/2}-x_{f})} and δ ≡ min { κ 1 | b − a | κ 2 , | x 1 / 2 − x f | } {\displaystyle \delta \equiv \min\{\kappa _{1}|b-a|^{\kappa _{2}},|x_{1/2}-x_{f}|\}} ; [Projection Step] Project the estimator to minmax interval: x ITP ≡ x 1 / 2 − σ ρ k {\displaystyle x_{\text{ITP}}\equiv x_{1/2}-\sigma \rho _{k}} where ρ k ≡ min { ϵ 2 n 1 / 2 + n 0 − j − b − a 2 , | x t − x 1 / 2 | } {\displaystyle \rho _{k}\equiv \min \left\{\epsilon 2^{n_{1/2}+n_{0}-j}-{\frac {b-a}{2}},|x_{t}-x_{1/2}|\right\}} . 
The value of the function f ( x ITP ) {\displaystyle f(x_{\text{ITP}})} on this point is queried, and the interval is then reduced to bracket the root by keeping the sub-interval with function values of opposite sign on each end. === The algorithm === The following algorithm (written in pseudocode) assumes the initial values of y a {\displaystyle y_{a}} and y b {\displaystyle y_{b}} are given and satisfy y a < 0 < y b {\displaystyle y_{a}<0<y_{b}} where y a ≡ f ( a ) {\displaystyle y_{a}\equiv f(a)} and y b ≡ f ( b ) {\displaystyle y_{b}\equiv f(b)} ; and, it returns an estimate x ^ {\displaystyle {\hat {x}}} that satisfies | x ^ − x ∗ | ≤ ϵ {\displaystyle |{\hat {x}}-x^{*}|\leq \epsilon } in at most n 1 / 2 + n 0 {\displaystyle n_{1/2}+n_{0}} function evaluations. Input: a , b , ϵ , κ 1 , κ 2 , n 0 , f {\displaystyle a,b,\epsilon ,\kappa _{1},\kappa _{2},n_{0},f} Preprocessing: n 1 / 2 = ⌈ log 2 ⁡ b − a 2 ϵ ⌉ {\displaystyle n_{1/2}=\lceil \log _{2}{\tfrac {b-a}{2\epsilon }}\rceil } , n max = n 1 / 2 + n 0 {\displaystyle n_{\max }=n_{1/2}+n_{0}} , and j = 0 {\displaystyle j=0} ; While ( b − a > 2 ϵ {\displaystyle b-a>2\epsilon } ) Calculating Parameters: x 1 / 2 = a + b 2 {\displaystyle x_{1/2}={\tfrac {a+b}{2}}} , r = ϵ 2 n max − j − ( b − a ) / 2 {\displaystyle r=\epsilon 2^{n_{\max }-j}-(b-a)/2} , δ = κ 1 ( b − a ) κ 2 {\displaystyle \delta =\kappa _{1}(b-a)^{\kappa _{2}}} ; Interpolation: x f = y b a − y a b y b − y a {\displaystyle x_{f}={\tfrac {y_{b}a-y_{a}b}{y_{b}-y_{a}}}} ; Truncation: σ = sign ( x 1 / 2 − x f ) {\displaystyle \sigma ={\text{sign}}(x_{1/2}-x_{f})} ; If δ ≤ | x 1 / 2 − x f | {\displaystyle \delta \leq |x_{1/2}-x_{f}|} then x t = x f + σ δ {\displaystyle x_{t}=x_{f}+\sigma \delta } , Else x t = x 1 / 2 {\displaystyle x_{t}=x_{1/2}} ; Projection: If | x t − x 1 / 2 | ≤ r {\displaystyle |x_{t}-x_{1/2}|\leq r} then x ITP = x t {\displaystyle x_{\text{ITP}}=x_{t}} , Else x ITP = x 1 / 2 − σ r {\displaystyle x_{\text{ITP}}=x_{1/2}-\sigma r} ; 
Updating Interval: y ITP = f ( x ITP ) {\displaystyle y_{\text{ITP}}=f(x_{\text{ITP}})} ; If y ITP > 0 {\displaystyle y_{\text{ITP}}>0} then b = x ITP {\displaystyle b=x_{\text{ITP}}} and y b = y ITP {\displaystyle y_{b}=y_{\text{ITP}}} , Elseif y ITP < 0 {\displaystyle y_{\text{ITP}}<0} then a = x ITP {\displaystyle a=x_{\text{ITP}}} and y a = y ITP {\displaystyle y_{a}=y_{\text{ITP}}} , Else a = x ITP {\displaystyle a=x_{\text{ITP}}} and b = x ITP {\displaystyle b=x_{\text{ITP}}} ; j = j + 1 {\displaystyle j=j+1} ; Output: x ^ = a + b 2 {\displaystyle {\hat {x}}={\tfrac {a+b}{2}}} == Example: Finding the root of a polynomial == Suppose that the ITP method is used to find a root of the polynomial f ( x ) = x 3 − x − 2 . {\displaystyle f(x)=x^{3}-x-2\,.} Using ϵ = 0.0005 , κ 1 = 0.1 , κ 2 = 2 {\displaystyle \epsilon =0.0005,\kappa _{1}=0.1,\kappa _{2}=2} and n 0 = 1 {\displaystyle n_{0}=1} we find that: This example can be compared to Bisection method § Example: Finding the root of a polynomial. The ITP method required less than half as many iterations as the bisection method to obtain a more precise estimate of the root, at no cost to the minmax guarantees. Other methods might attain a similar speed of convergence (such as Ridders' method or Brent's method) but without the minmax guarantees given by the ITP method. == Analysis == The main advantage of the ITP method is that it is guaranteed to require no more iterations than the bisection method when n 0 = 0 {\displaystyle n_{0}=0} . Its average performance is thus guaranteed to be better than that of the bisection method even when interpolation fails. Furthermore, if interpolations do not fail (smooth functions), then it is guaranteed to enjoy the same high order of convergence as interpolation-based methods.
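The pseudocode above translates almost line-for-line into Python. The following is a minimal sketch, assuming f(a) < 0 < f(b) up to a sign flip; the default hyper-parameter values and the sign-normalization wrapper are illustrative choices, not part of the published method:

```python
import math

def itp(f, a, b, eps, kappa1=0.1, kappa2=2.0, n0=1):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    ya, yb = f(a), f(b)
    if ya > 0:  # normalize so that ya < 0 < yb, as the pseudocode assumes
        f, ya, yb = (lambda x, f=f: -f(x)), -ya, -yb
    n_half = math.ceil(math.log2((b - a) / (2 * eps)))
    n_max = n_half + n0
    j = 0
    while b - a > 2 * eps:
        # Calculating parameters
        x_half = (a + b) / 2
        r = eps * 2 ** (n_max - j) - (b - a) / 2
        delta = kappa1 * (b - a) ** kappa2
        # Interpolation: the regula falsi point
        x_f = (yb * a - ya * b) / (yb - ya)
        # Truncation: perturb the estimator toward the midpoint
        sigma = math.copysign(1.0, x_half - x_f)
        x_t = x_f + sigma * delta if delta <= abs(x_half - x_f) else x_half
        # Projection onto the minmax interval
        x_itp = x_t if abs(x_t - x_half) <= r else x_half - sigma * r
        # Updating the interval
        y_itp = f(x_itp)
        if y_itp > 0:
            b, yb = x_itp, y_itp
        elif y_itp < 0:
            a, ya = x_itp, y_itp
        else:
            a = b = x_itp
        j += 1
    return (a + b) / 2
```

On the example above, itp(lambda x: x**3 - x - 2, 1, 2, eps=0.0005) returns an estimate within 0.0005 of the root 1.5213797..., in at most n1/2 + n0 function evaluations.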
=== Worst case performance === Because the ITP method projects the estimator onto the minmax interval with a n 0 {\displaystyle n_{0}} slack, it will require at most n 1 / 2 + n 0 {\displaystyle n_{1/2}+n_{0}} iterations (Theorem 2.1). This is minmax optimal, like the bisection method, when n 0 {\displaystyle n_{0}} is chosen to be n 0 = 0 {\displaystyle n_{0}=0} . === Average performance === Because it does not take more than n 1 / 2 + n 0 {\displaystyle n_{1/2}+n_{0}} iterations, the average number of iterations will always be less than that of the bisection method for any distribution considered when n 0 = 0 {\displaystyle n_{0}=0} (Corollary 2.2). === Asymptotic performance === If the function f ( x ) {\displaystyle f(x)} is twice differentiable and the root x ∗ {\displaystyle x^{*}} is simple, then the intervals produced by the ITP method converge to zero with an order of convergence of κ 2 {\displaystyle {\sqrt {\kappa _{2}}}} if n 0 ≠ 0 {\displaystyle n_{0}\neq 0} or if n 0 = 0 {\displaystyle n_{0}=0} and ( b − a ) / ϵ {\displaystyle (b-a)/\epsilon } is not a power of 2 with the term ϵ 2 n 1 / 2 b − a {\displaystyle {\tfrac {\epsilon 2^{n_{1/2}}}{b-a}}} not too close to zero (Theorem 2.3). == Software == The itp contributed package in R. == See also == Bisection method Ridders' method Regula falsi Brent's method == Notes == == References == == External links == An Improved Bisection Method, by Kudos
Wikipedia/ITP_method
Sidi's generalized secant method is a root-finding algorithm, that is, a numerical method for solving equations of the form f ( x ) = 0 {\displaystyle f(x)=0} . The method was published by Avram Sidi. The method is a generalization of the secant method. Like the secant method, it is an iterative method which requires one evaluation of f {\displaystyle f} in each iteration and no derivatives of f {\displaystyle f} . The method can converge much faster though, with an order which approaches 2 provided that f {\displaystyle f} satisfies the regularity conditions described below. == Algorithm == We call α {\displaystyle \alpha } the root of f {\displaystyle f} , that is, f ( α ) = 0 {\displaystyle f(\alpha )=0} . Sidi's method is an iterative method which generates a sequence { x i } {\displaystyle \{x_{i}\}} of approximations of α {\displaystyle \alpha } . Starting with k + 1 initial approximations x 1 , … , x k + 1 {\displaystyle x_{1},\dots ,x_{k+1}} , the approximation x k + 2 {\displaystyle x_{k+2}} is calculated in the first iteration, the approximation x k + 3 {\displaystyle x_{k+3}} is calculated in the second iteration, etc. Each iteration takes as input the last k + 1 approximations and the value of f {\displaystyle f} at those approximations. Hence the nth iteration takes as input the approximations x n , … , x n + k {\displaystyle x_{n},\dots ,x_{n+k}} and the values f ( x n ) , … , f ( x n + k ) {\displaystyle f(x_{n}),\dots ,f(x_{n+k})} . The number k must be 1 or larger: k = 1, 2, 3, .... It remains fixed during the execution of the algorithm. In order to obtain the starting approximations x 1 , … , x k + 1 {\displaystyle x_{1},\dots ,x_{k+1}} one could carry out a few initializing iterations with a lower value of k. The approximation x n + k + 1 {\displaystyle x_{n+k+1}} is calculated as follows in the nth iteration. 
An interpolating polynomial p n , k ( x ) {\displaystyle p_{n,k}(x)} of degree k is fitted to the k + 1 points ( x n , f ( x n ) ) , … , ( x n + k , f ( x n + k ) ) {\displaystyle (x_{n},f(x_{n})),\dots ,(x_{n+k},f(x_{n+k}))} . With this polynomial, the next approximation x n + k + 1 {\displaystyle x_{n+k+1}} of α {\displaystyle \alpha } is calculated as x n + k + 1 = x n + k − f ( x n + k ) p n , k ′ ( x n + k ) ( 1 ) {\displaystyle x_{n+k+1}=x_{n+k}-{\frac {f(x_{n+k})}{p_{n,k}'(x_{n+k})}}\qquad (1)} with p n , k ′ ( x n + k ) {\displaystyle p_{n,k}'(x_{n+k})} the derivative of p n , k {\displaystyle p_{n,k}} at x n + k {\displaystyle x_{n+k}} . Having calculated x n + k + 1 {\displaystyle x_{n+k+1}} one calculates f ( x n + k + 1 ) {\displaystyle f(x_{n+k+1})} and the algorithm can continue with the (n + 1)th iteration. Clearly, this method requires the function f {\displaystyle f} to be evaluated only once per iteration; it requires no derivatives of f {\displaystyle f} . The iterative cycle is stopped if an appropriate stopping criterion is met. Typically the criterion is that the last calculated approximation is close enough to the sought-after root α {\displaystyle \alpha } . To execute the algorithm effectively, Sidi's method calculates the interpolating polynomial p n , k ( x ) {\displaystyle p_{n,k}(x)} in its Newton form. == Convergence == Sidi showed that if the function f {\displaystyle f} is (k + 1)-times continuously differentiable in an open interval I {\displaystyle I} containing α {\displaystyle \alpha } (that is, f ∈ C k + 1 ( I ) {\displaystyle f\in C^{k+1}(I)} ), α {\displaystyle \alpha } is a simple root of f {\displaystyle f} (that is, f ′ ( α ) ≠ 0 {\displaystyle f'(\alpha )\neq 0} ) and the initial approximations x 1 , … , x k + 1 {\displaystyle x_{1},\dots ,x_{k+1}} are chosen close enough to α {\displaystyle \alpha } , then the sequence { x i } {\displaystyle \{x_{i}\}} converges to α {\displaystyle \alpha } , meaning that the following limit holds: lim n → ∞ x n = α {\displaystyle \lim \limits _{n\to \infty }x_{n}=\alpha } .
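The iteration can be sketched in Python as follows, building the interpolating polynomial from divided differences (its Newton form, as the article notes Sidi's method does) and differentiating it at the latest point. The stopping tolerance, iteration cap, and starting values in the test are illustrative assumptions:

```python
def divided_differences(xs, ys):
    """Newton divided-difference coefficients f[x0], f[x0,x1], ..., f[x0..xk]."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_poly_deriv(xs, coef, t):
    """Derivative at t of the Newton-form polynomial sum_i coef[i] * prod_{j<i}(x - xs[j])."""
    total = 0.0
    for i in range(1, len(xs)):
        s = 0.0
        for m in range(i):
            prod = 1.0
            for j in range(i):
                if j != m:
                    prod *= t - xs[j]
            s += prod
        total += coef[i] * s
    return total

def sidi(f, starts, tol=1e-12, max_iter=40):
    """Sidi's generalized secant method with k + 1 = len(starts) initial points."""
    xs = list(starts)
    k = len(xs) - 1
    for _ in range(max_iter):
        pts = xs[-(k + 1):]          # the last k + 1 approximations
        vals = [f(x) for x in pts]
        coef = divided_differences(pts, vals)
        dp = newton_poly_deriv(pts, coef, pts[-1])   # p'_{n,k}(x_{n+k})
        x_next = pts[-1] - vals[-1] / dp             # formula (1)
        xs.append(x_next)
        if abs(x_next - pts[-1]) < tol:
            break
    return xs[-1]
```

For example, sidi(lambda x: x**3 - 2, [1.0, 1.5, 1.2]) uses k = 2 and converges rapidly to the cube root of 2.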
Sidi furthermore showed that lim n → ∞ x n + 1 − α ∏ i = 0 k ( x n − i − α ) = L = ( − 1 ) k + 1 ( k + 1 ) ! f ( k + 1 ) ( α ) f ′ ( α ) , {\displaystyle \lim _{n\to \infty }{\frac {x_{n+1}-\alpha }{\prod _{i=0}^{k}(x_{n-i}-\alpha )}}=L={\frac {(-1)^{k+1}}{(k+1)!}}{\frac {f^{(k+1)}(\alpha )}{f'(\alpha )}},} and that the sequence converges to α {\displaystyle \alpha } of order ψ k {\displaystyle \psi _{k}} , i.e. lim n → ∞ | x n + 1 − α | | x n − α | ψ k = | L | ( ψ k − 1 ) / k {\displaystyle \lim \limits _{n\to \infty }{\frac {|x_{n+1}-\alpha |}{|x_{n}-\alpha |^{\psi _{k}}}}=|L|^{(\psi _{k}-1)/k}} The order of convergence ψ k {\displaystyle \psi _{k}} is the only positive root of the polynomial s k + 1 − s k − s k − 1 − ⋯ − s − 1 {\displaystyle s^{k+1}-s^{k}-s^{k-1}-\dots -s-1} We have e.g. ψ 1 = ( 1 + 5 ) / 2 {\displaystyle \psi _{1}=(1+{\sqrt {5}})/2} ≈ 1.6180, ψ 2 {\displaystyle \psi _{2}} ≈ 1.8393 and ψ 3 {\displaystyle \psi _{3}} ≈ 1.9276. The order approaches 2 from below if k becomes large: lim k → ∞ ψ k = 2 {\displaystyle \lim \limits _{k\to \infty }\psi _{k}=2} == Related algorithms == Sidi's method reduces to the secant method if we take k = 1. In this case the polynomial p n , 1 ( x ) {\displaystyle p_{n,1}(x)} is the linear approximation of f {\displaystyle f} around α {\displaystyle \alpha } which is used in the nth iteration of the secant method. We can expect that the larger we choose k, the better p n , k ( x ) {\displaystyle p_{n,k}(x)} is an approximation of f ( x ) {\displaystyle f(x)} around x = α {\displaystyle x=\alpha } . Also, the better p n , k ′ ( x ) {\displaystyle p_{n,k}'(x)} is an approximation of f ′ ( x ) {\displaystyle f'(x)} around x = α {\displaystyle x=\alpha } . If we replace p n , k ′ {\displaystyle p_{n,k}'} with f ′ {\displaystyle f'} in (1) we obtain that the next approximation in each iteration is calculated as x n + k + 1 = x n + k − f ( x n + k ) f ′ ( x n + k ) ( 2 ) {\displaystyle x_{n+k+1}=x_{n+k}-{\frac {f(x_{n+k})}{f'(x_{n+k})}}\qquad (2)} This is the Newton–Raphson method.
It starts off with a single approximation x 1 {\displaystyle x_{1}} so we can take k = 0 in (2). It does not require an interpolating polynomial but instead one has to evaluate the derivative f ′ {\displaystyle f'} in each iteration. Depending on the nature of f {\displaystyle f} this may not be possible or practical. Once the interpolating polynomial p n , k ( x ) {\displaystyle p_{n,k}(x)} has been calculated, one can also calculate the next approximation x n + k + 1 {\displaystyle x_{n+k+1}} as a solution of p n , k ( x ) = 0 {\displaystyle p_{n,k}(x)=0} instead of using (1). For k = 1 these two methods are identical: it is the secant method. For k = 2 this method is known as Muller's method. For k = 3 this approach involves finding the roots of a cubic function, which is unattractively complicated. This problem becomes worse for even larger values of k. An additional complication is that the equation p n , k ( x ) = 0 {\displaystyle p_{n,k}(x)=0} will in general have multiple solutions and a prescription has to be given which of these solutions is the next approximation x n + k + 1 {\displaystyle x_{n+k+1}} . Muller does this for the case k = 2 but no such prescriptions appear to exist for k > 2. == References ==
Wikipedia/Sidi's_generalized_secant_method
In mathematics, Graeffe's method or Dandelin–Lobachevsky–Graeffe method is an algorithm for finding all of the roots of a polynomial. It was developed independently by Germinal Pierre Dandelin in 1826 and Lobachevsky in 1834. In 1837 Karl Heinrich Gräffe also discovered the principal idea of the method. The method separates the roots of a polynomial by squaring them repeatedly. This squaring of the roots is done implicitly, that is, only working on the coefficients of the polynomial. Finally, Viète's formulas are used in order to approximate the roots. == Dandelin–Graeffe iteration == Let p(x) be a polynomial of degree n p ( x ) = ( x − x 1 ) ⋯ ( x − x n ) . {\displaystyle p(x)=(x-x_{1})\cdots (x-x_{n}).} Then p ( − x ) = ( − 1 ) n ( x + x 1 ) ⋯ ( x + x n ) . {\displaystyle p(-x)=(-1)^{n}(x+x_{1})\cdots (x+x_{n}).} Let q(x) be the polynomial which has the squares x 1 2 , ⋯ , x n 2 {\displaystyle x_{1}^{2},\cdots ,x_{n}^{2}} as its roots, q ( x ) = ( x − x 1 2 ) ⋯ ( x − x n 2 ) . {\displaystyle q(x)=\left(x-x_{1}^{2}\right)\cdots \left(x-x_{n}^{2}\right).} Then we can write: q ( x 2 ) = ( x 2 − x 1 2 ) ⋯ ( x 2 − x n 2 ) = ( x − x 1 ) ( x + x 1 ) ⋯ ( x − x n ) ( x + x n ) = { ( x − x 1 ) ⋯ ( x − x n ) } × { ( x + x 1 ) ⋯ ( x + x n ) } = p ( x ) × { ( − 1 ) n ( − x − x 1 ) ⋯ ( − x − x n ) } = p ( x ) × { ( − 1 ) n p ( − x ) } = ( − 1 ) n p ( x ) p ( − x ) {\displaystyle {\begin{aligned}q(x^{2})&=\left(x^{2}-x_{1}^{2}\right)\cdots \left(x^{2}-x_{n}^{2}\right)\\&=(x-x_{1})(x+x_{1})\cdots (x-x_{n})(x+x_{n})\\&=\left\{(x-x_{1})\cdots (x-x_{n})\right\}\times \left\{(x+x_{1})\cdots (x+x_{n})\right\}\\&=p(x)\times \left\{(-1)^{n}(-x-x_{1})\cdots (-x-x_{n})\right\}\\&=p(x)\times \left\{(-1)^{n}p(-x)\right\}\\&=(-1)^{n}p(x)p(-x)\end{aligned}}} q(x) can now be computed by algebraic operations on the coefficients of the polynomial p(x) alone.
Let: p ( x ) = x n + a 1 x n − 1 + ⋯ + a n − 1 x + a n q ( x ) = x n + b 1 x n − 1 + ⋯ + b n − 1 x + b n {\displaystyle {\begin{aligned}p(x)&=x^{n}+a_{1}x^{n-1}+\cdots +a_{n-1}x+a_{n}\\q(x)&=x^{n}+b_{1}x^{n-1}+\cdots +b_{n-1}x+b_{n}\end{aligned}}} then the coefficients are related by b k = ( − 1 ) k a k 2 + 2 ∑ j = 0 k − 1 ( − 1 ) j a j a 2 k − j , a 0 = b 0 = 1. {\displaystyle b_{k}=(-1)^{k}a_{k}^{2}+2\sum _{j=0}^{k-1}(-1)^{j}\,a_{j}a_{2k-j},\qquad a_{0}=b_{0}=1.} Graeffe observed that if one separates p(x) into its odd and even parts: p ( x ) = p e ( x 2 ) + x p o ( x 2 ) , {\displaystyle p(x)=p_{e}\left(x^{2}\right)+xp_{o}\left(x^{2}\right),} then one obtains a simplified algebraic expression for q(x): q ( x ) = ( − 1 ) n ( p e ( x ) 2 − x p o ( x ) 2 ) . {\displaystyle q(x)=(-1)^{n}\left(p_{e}(x)^{2}-xp_{o}(x)^{2}\right).} This expression involves the squaring of two polynomials of only half the degree, and is therefore used in most implementations of the method. Iterating this procedure several times separates the roots with respect to their magnitudes. Repeating k times gives a polynomial of degree n: q k ( y ) = y n + a k 1 y n − 1 + ⋯ + a k n − 1 y + a k n {\displaystyle q^{k}(y)=y^{n}+{a^{k}}_{1}\,y^{n-1}+\cdots +{a^{k}}_{n-1}\,y+{a^{k}}_{n}\,} with roots y 1 = x 1 2 k , y 2 = x 2 2 k , … , y n = x n 2 k . {\displaystyle y_{1}=x_{1}^{2^{k}},\,y_{2}=x_{2}^{2^{k}},\,\dots ,\,y_{n}=x_{n}^{2^{k}}.} If the magnitudes of the roots of the original polynomial were separated by some factor ρ > 1 {\displaystyle \rho >1} , that is, | x k | ≥ ρ | x k + 1 | {\displaystyle |x_{k}|\geq \rho |x_{k+1}|} , then the roots of the k-th iterate are separated by a fast growing factor ρ 2 k ≥ 1 + 2 k ( ρ − 1 ) {\displaystyle \rho ^{2^{k}}\geq 1+2^{k}(\rho -1)} . == Classical Graeffe's method == Next the Vieta relations are used a 1 k = − ( y 1 + y 2 + ⋯ + y n ) a 2 k = y 1 y 2 + y 1 y 3 + ⋯ + y n − 1 y n ⋮ a n k = ( − 1 ) n ( y 1 y 2 ⋯ y n ) . 
{\displaystyle {\begin{aligned}a_{\;1}^{k}&=-(y_{1}+y_{2}+\cdots +y_{n})\\a_{\;2}^{k}&=y_{1}y_{2}+y_{1}y_{3}+\cdots +y_{n-1}y_{n}\\&\;\vdots \\a_{\;n}^{k}&=(-1)^{n}(y_{1}y_{2}\cdots y_{n}).\end{aligned}}} If the roots x 1 , … , x n {\displaystyle x_{1},\dots ,x_{n}} are sufficiently separated, say by a factor ρ > 1 {\displaystyle \rho >1} , | x m | ≥ ρ | x m + 1 | {\displaystyle |x_{m}|\geq \rho |x_{m+1}|} , then the iterated powers y 1 , y 2 , . . . , y n {\displaystyle y_{1},y_{2},...,y_{n}} of the roots are separated by the factor ρ 2 k {\displaystyle \rho ^{2^{k}}} , which quickly becomes very big. The coefficients of the iterated polynomial can then be approximated by their leading term, a 1 k ≈ − y 1 {\displaystyle a_{\;1}^{k}\approx -y_{1}} a 2 k ≈ y 1 y 2 {\displaystyle a_{\;2}^{k}\approx y_{1}y_{2}} and so on, implying y 1 ≈ − a 1 k , y 2 ≈ − a 2 k / a 1 k , … y n ≈ − a n k / a n − 1 k . {\displaystyle y_{1}\approx -a_{\;1}^{k},\;y_{2}\approx -a_{\;2}^{k}/a_{\;1}^{k},\;\dots \;y_{n}\approx -a_{\;n}^{k}/a_{\;n-1}^{k}.} Finally, logarithms are used in order to find the absolute values of the roots of the original polynomial. These magnitudes alone are already useful for generating meaningful starting points for other root-finding methods. To also obtain the angle of these roots, several methods have been proposed, the simplest being to successively compute the square root of a (possibly complex) root of q m ( y ) {\displaystyle q^{m}(y)} , m ranging from k to 1, and testing which of the two sign variants is a root of q m − 1 ( x ) {\displaystyle q^{m-1}(x)} . Before continuing to the roots of q m − 2 ( x ) {\displaystyle q^{m-2}(x)} , it might be necessary to numerically improve the accuracy of the root approximations for q m − 1 ( x ) {\displaystyle q^{m-1}(x)} , for instance by Newton's method.
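The squaring step and the magnitude estimates above can be sketched in NumPy as follows. Coefficients are stored lowest degree first, the polynomial is assumed monic, and the iteration count is kept small (an illustrative choice) so the coefficient growth discussed below stays within floating-point range:

```python
import numpy as np

def graeffe_step(c):
    """One Dandelin-Graeffe squaring: q(x) = (-1)^n p(x) p(-x), computed via the
    even/odd split p(x) = pe(x^2) + x po(x^2).  c holds monic p, lowest degree first."""
    n = len(c) - 1
    pe, po = c[0::2], c[1::2]
    ee, oo = np.convolve(pe, pe), np.convolve(po, po)
    q = np.zeros(n + 1)
    q[: len(ee)] += ee                 # pe(x)^2
    q[1 : len(oo) + 1] -= oo           # -x po(x)^2
    return (-1) ** n * q

def root_magnitudes(c, k=5):
    """Estimate |x_1| >= ... >= |x_n| from ratios of the iterated coefficients."""
    c = np.asarray(c, dtype=float)
    for _ in range(k):
        c = graeffe_step(c)
    a = c[::-1]                        # highest degree first, a[0] = 1
    return [abs(a[i] / a[i - 1]) ** (1.0 / 2 ** k) for i in range(1, len(a))]
```

For p(x) = (x − 1)(x − 2)(x − 4) = x³ − 7x² + 14x − 8, the call root_magnitudes([-8, 14, -7, 1]) returns magnitudes very close to [4, 2, 1] after only five squarings, since the separation ratio ρ = 2 has been amplified to 2³².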
Graeffe's method works best for polynomials with simple real roots, though it can be adapted for polynomials with complex roots and coefficients, and roots with higher multiplicity. For instance, it has been observed that for a root x ℓ + 1 = x ℓ + 2 = ⋯ = x ℓ + d {\displaystyle x_{\ell +1}=x_{\ell +2}=\dots =x_{\ell +d}} with multiplicity d, the fractions | ( a ℓ + i m − 1 ) 2 a ℓ + i m | {\displaystyle \left|{\frac {(a_{\;\ell +i}^{m-1})^{2}}{a_{\;\ell +i}^{m}}}\right|} tend to ( d i ) {\displaystyle {\binom {d}{i}}} for i = 0 , 1 , … , d {\displaystyle i=0,1,\dots ,d} . This makes it possible to estimate the multiplicity structure of the set of roots. From a numerical point of view, this method is problematic since the coefficients of the iterated polynomials very quickly span many orders of magnitude, which implies serious numerical errors. A second, more minor concern is that many different polynomials lead to the same Graeffe iterates. == Tangential Graeffe method == This method replaces the numbers by truncated power series of degree 1, also known as dual numbers. Symbolically, this is achieved by introducing an "algebraic infinitesimal" ε {\displaystyle \varepsilon } with the defining property ε 2 = 0 {\displaystyle \varepsilon ^{2}=0} . Then the polynomial p ( x + ε ) = p ( x ) + ε p ′ ( x ) {\displaystyle p(x+\varepsilon )=p(x)+\varepsilon \,p'(x)} has roots x m − ε {\displaystyle x_{m}-\varepsilon } , with powers ( x m − ε ) 2 k = x m 2 k − ε 2 k x m 2 k − 1 = y m + ε y ˙ m . {\displaystyle (x_{m}-\varepsilon )^{2^{k}}=x_{m}^{2^{k}}-\varepsilon \,{2^{k}}\,x_{m}^{2^{k}-1}=y_{m}+\varepsilon \,{\dot {y}}_{m}.} Thus the value of x m {\displaystyle x_{m}} is easily obtained as the fraction x m = − 2 k y m y ˙ m . {\displaystyle x_{m}=-{\tfrac {2^{k}\,y_{m}}{{\dot {y}}_{m}}}.} This kind of computation with infinitesimals is easy to implement, analogous to computation with complex numbers.
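The dual-number arithmetic behind the tangential variant is easy to sketch. The class below implements only the multiplication needed for repeated squaring, and the recovery function reads x back from y and ẏ exactly as in the fraction above; this illustrates the mechanics on a single known value, not the full polynomial iteration:

```python
class Dual:
    """Truncated power series a + b*eps with eps**2 = 0 (a dual number)."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __mul__(self, other):
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + b1*a2)*eps, since eps**2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

def recover(x, k):
    """Square x - eps k times, then read x back from x = -2^k * y / y_dot."""
    z = Dual(x, -1.0)                  # represents the shifted root x - eps
    for _ in range(k):
        z = z * z                      # repeated squaring, as in the Graeffe iteration
    y, y_dot = z.a, z.b                # y = x**(2**k), y_dot = -2**k * x**(2**k - 1)
    return -(2 ** k) * y / y_dot
```

For instance, recover(1.5, 5) squares 1.5 − ε five times and reconstructs 1.5 from the resulting pair (y, ẏ), up to rounding error.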
If one assumes complex coordinates or an initial shift by some randomly chosen complex number, then all roots of the polynomial will be distinct and consequently recoverable with the iteration. == Renormalization == Every polynomial can be scaled in domain and range such that in the resulting polynomial the first and the last coefficient have size one. If the size of the inner coefficients is bounded by M, then the size of the inner coefficients after one stage of the Graeffe iteration is bounded by n M 2 {\displaystyle nM^{2}} . After k stages one gets the bound n 2 k − 1 M 2 k {\displaystyle n^{2^{k}-1}M^{2^{k}}} for the inner coefficients. To overcome the limit posed by the growth of the powers, Malajovich–Zubelli propose to represent coefficients and intermediate results in the kth stage of the algorithm by a scaled polar form c = α e − 2 k r , {\displaystyle c=\alpha \,e^{-2^{k}\,r},} where α = c | c | {\displaystyle \alpha ={\frac {c}{|c|}}} is a complex number of unit length and r = − 2 − k log ⁡ | c | {\displaystyle r=-2^{-k}\log |c|} is a positive real. Splitting off the power 2 k {\displaystyle 2^{k}} in the exponent reduces the absolute value of c to the corresponding dyadic root. Since this preserves the magnitude of the (representation of the) initial coefficients, this process was named renormalization. Multiplication of two numbers of this type is straightforward, whereas addition is performed following the factorization c 3 = c 1 + c 2 = | c 1 | ⋅ ( α 1 + α 2 | c 2 | | c 1 | ) {\displaystyle c_{3}=c_{1}+c_{2}=|c_{1}|\cdot \left(\alpha _{1}+\alpha _{2}{\tfrac {|c_{2}|}{|c_{1}|}}\right)} , where c 1 {\displaystyle c_{1}} is chosen as the larger of both numbers, that is, r 1 < r 2 {\displaystyle r_{1}<r_{2}} . Thus α 3 = s | s | {\displaystyle \alpha _{3}={\tfrac {s}{|s|}}} and r 3 = r 1 + 2 − k log ⁡ | s | {\displaystyle r_{3}=r_{1}+2^{-k}\,\log {|s|}} with s = α 1 + α 2 e 2 k ( r 1 − r 2 ) . 
{\displaystyle s=\alpha _{1}+\alpha _{2}\,e^{2^{k}(r_{1}-r_{2})}.} The coefficients a 0 , a 1 , … , a n {\displaystyle a_{0},a_{1},\dots ,a_{n}} of the final stage k of the Graeffe iteration, for some reasonably large value of k, are represented by pairs ( α m , r m ) {\displaystyle (\alpha _{m},r_{m})} , m = 0 , … , n {\displaystyle m=0,\dots ,n} . By identifying the corners of the convex envelope of the point set { ( m , r m ) : m = 0 , … , n } {\displaystyle \{(m,r_{m}):\;m=0,\dots ,n\}} one can determine the multiplicities of the roots of the polynomial. Combining this renormalization with the tangent iteration one can extract directly from the coefficients at the corners of the envelope the roots of the original polynomial. == See also == Root-finding algorithm == References == Weisstein, Eric W. "Graeffe's Method". MathWorld. Malajovich, Gregorio; Zubelli, Jorge P. (2001). "Tangent Graeffe iteration". Numerische Mathematik. 89 (4): 749–782. CiteSeerX 10.1.1.44.3611. doi:10.1007/s002110100278. S2CID 100025.
Wikipedia/Graeffe's_method
In numerical analysis, Steffensen's method is an iterative method for numerical root-finding named after Johan Frederik Steffensen that is similar to the secant method and to Newton's method. Steffensen's method achieves a quadratic order of convergence without using derivatives, whereas Newton's method converges quadratically but requires derivatives and the secant method does not require derivatives but also converges less quickly than quadratically. Steffensen's method has the drawback that it requires two function evaluations per step, whereas the secant method requires only one evaluation per step, so it is not necessarily the most efficient in terms of computational cost, depending on the number of iterations each requires. Newton's method also requires evaluating two functions per step – the function and its derivative – so for most functions, where calculating the derivative is just as computationally costly as evaluating the original function, its computational cost per step is about the same as that of Steffensen's method. Steffensen's method can be derived as an adaptation of Aitken's delta-squared process applied to fixed-point iteration. Viewed in this way, Steffensen's method naturally generalizes to efficient fixed-point calculation in general Banach spaces, whenever fixed points are guaranteed to exist and fixed-point iteration is guaranteed to converge, although possibly slowly, by the Banach fixed-point theorem. == Simple description == The simplest form of the formula for Steffensen's method occurs when it is used to find a zero of a real function f {\displaystyle f} ; that is, to find the real value x ⋆ {\displaystyle x_{\star }} that satisfies f ( x ⋆ ) = 0 {\displaystyle f(x_{\star })=0} .
Near the solution x ⋆ {\displaystyle x_{\star }} , the derivative of the function, f ′ {\displaystyle f'} , is supposed to approximately satisfy − 1 < f ′ ( x ⋆ ) < 0 {\displaystyle -1<f'(x_{\star })<0} ; this condition ensures that f {\displaystyle f} is an adequate correction-function for x {\displaystyle x} when searching for its own solution, although the method can still work when the condition holds only approximately. For some functions, Steffensen's method can work even if this condition is not met, but in such a case, the starting value x 0 {\displaystyle x_{0}} must be very close to the actual solution x ⋆ {\displaystyle x_{\star }} , and convergence to the solution may be slow. Adjustment of the size of the method's intermediate step, mentioned later, can improve convergence in some of these cases. Given an adequate starting value x 0 {\displaystyle x_{0}} , a sequence of values x 0 , x 1 , x 2 , … , x n , … {\displaystyle x_{0},\ x_{1},\ x_{2},\ \dots ,\ x_{n},\ \dots } can be generated using the formula below. When it works, each value in the sequence is much closer to the solution x ⋆ {\displaystyle x_{\star }} than the prior value. The value x n {\displaystyle x_{n}} from the current step generates the value x n + 1 {\displaystyle x_{n+1}} for the next step, via the formula x n + 1 = x n − f ( x n ) g ( x n ) {\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{g(x_{n})}}} for n = 0 , 1 , 2 , 3 , . . .
{\displaystyle n=0,1,2,3,...} , where the slope function g ( x ) {\displaystyle g(x)} is a composite of the original function f {\displaystyle f} given by the formula g ( x ) = f ( x + f ( x ) ) f ( x ) − 1 {\displaystyle g(x)={\frac {f{\bigl (}x+f(x){\bigr )}}{f(x)}}-1} or perhaps more clearly, g ( x ) = f ( x + h ) − f ( x ) h ≈ d ⁡ f ( x ) d ⁡ x ≡ f ′ ( x ) , {\displaystyle g(x)={\frac {f(x+h)-f(x)}{h}}\approx {\frac {\operatorname {d} f(x)}{\operatorname {d} x}}\equiv f'(x),} where h = f ( x ) {\displaystyle h=f(x)} is a step-size between the last iteration point, x {\displaystyle x} , and an auxiliary point located at x + h {\displaystyle x+h} . Technically, the function g {\displaystyle g} is called the first-order divided difference of f {\displaystyle f} between those two points (it is either a forward-type or backward-type divided difference, depending on the sign of h {\displaystyle h} ). Practically, it is the averaged value of the slope f ′ {\displaystyle f'} of the function f {\displaystyle f} between the last sequence point ( x , y ) = ( x n , f ( x n ) ) {\displaystyle \left(x,y\right)={\bigl (}x_{n},f\left(x_{n}\right){\bigr )}} and the auxiliary point at ( x , y ) = ( x n + h , f ( x n + h ) ) {\displaystyle {\bigl (}x,y{\bigr )}={\bigl (}x_{n}+h,f\left(x_{n}+h\right){\bigr )}} , with the size of the intermediate step (and its direction) given by h = f ( x n ) {\displaystyle h=f(x_{n})} . Because the value of g {\displaystyle g} is an approximation for f ′ {\displaystyle f'} , its value can optionally be checked to see if it meets the condition − 1 < g < 0 {\displaystyle -1<g<0} , which is required to guarantee convergence of Steffensen's algorithm. Although slight non-conformance may not necessarily be dire, any large departure from the condition warns that Steffensen's method is liable to fail, and temporary use of some fallback algorithm is warranted (e.g. the more robust Illinois algorithm, or plain regula falsi). 
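Putting the update formula and the slope function together gives a very short implementation; the following is a minimal sketch, with an illustrative residual-based stopping criterion and iteration cap:

```python
def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Derivative-free root finding: Newton-like steps with the divided
    difference g(x) used in place of f'(x)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        h = fx                          # step size to the auxiliary point x + h
        g = (f(x + h) - fx) / h         # slope of f between x and x + h
        x = x - fx / g                  # x_{n+1} = x_n - f(x_n) / g(x_n)
    return x
```

For f(x) = x² − 2 and x0 = 1.5 this converges to √2 in a handful of iterations; note that each iteration costs the two evaluations f(x) and f(x + h).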
It is only for the purpose of finding h {\displaystyle h} for this auxiliary point that the value of the function f {\displaystyle f} must be an adequate correction to get closer to its own solution, and for that reason must fulfill the requirement that − 1 < f ′ ( x ⋆ ) < 0 {\displaystyle -1<f'(x_{\star })<0} . For all other parts of the calculation, Steffensen's method only requires the function f {\displaystyle f} to be continuous and to actually have a nearby solution. Several modest modifications of the step h {\displaystyle h} in the formula for the slope g {\displaystyle g} exist, such as multiplying it by ⁠ 1 /2⁠ or ⁠ 3 /4⁠, to accommodate functions f {\displaystyle f} that do not quite meet the requirement. == Advantages and drawbacks == The main advantage of Steffensen's method is that it has quadratic convergence like Newton's method – that is, both methods find roots to an equation f {\displaystyle f} just as "quickly". In this case, quickly means that for both methods, the number of correct digits in the answer doubles with each step. But the formula for Newton's method requires evaluation of the function's derivative f ′ {\displaystyle f'} as well as the function f {\displaystyle f} , while Steffensen's method only requires f {\displaystyle f} itself. This is important when the derivative is not easily or efficiently available. The price for the quick convergence is the double function evaluation: Both f ( x n ) {\displaystyle f(x_{n})} and f ( x n + h ) {\displaystyle f(x_{n}+h)} must be calculated, which might be time-consuming if f {\displaystyle f} is complicated. For comparison, the secant method needs only one function evaluation per step. The secant method increases the number of correct digits by "only" a factor of roughly 1.6 per step, but one can do twice as many steps of the secant method within a given time.
Since the secant method can carry out twice as many steps in the same time as Steffensen's method, in practical use the secant method actually converges faster than Steffensen's method, when both algorithms succeed: the secant method achieves a factor of about (1.6)² ≈ 2.6 times as many digits for every two steps (two function evaluations), compared to Steffensen's factor of 2 for every one step (two function evaluations). Similar to most other iterative root-finding algorithms, the crucial weakness in Steffensen's method is choosing an "adequate" starting value x 0 {\displaystyle x_{0}} . If the value of x 0 {\displaystyle x_{0}} is not "close enough" to the actual solution x ⋆ {\displaystyle x_{\star }} , the method may fail, and the sequence of values x 0 , x 1 , x 2 , x 3 , … {\displaystyle x_{0},\,x_{1},\,x_{2},\,x_{3},\,\dots } may either erratically flip-flop between two extremes, or diverge to infinity, or both. == Derivation using Aitken's delta-squared process == A closely related version of Steffensen's method can be derived using Aitken's delta-squared process for convergence acceleration. To compare the following formulae to the formulae in the section above, notice that x n = p − p n {\displaystyle x_{n}=p-p_{n}} . This method assumes starting with a linearly convergent sequence and increases the rate of convergence of that sequence. If the signs of p n , p n + 1 , p n + 2 {\displaystyle p_{n},\,p_{n+1},\,p_{n+2}} agree and p n {\displaystyle p_{n}} is "sufficiently close" to the desired limit of the sequence p {\displaystyle p} , then we can assume p n + 1 − p p n − p ≈ p n + 2 − p p n + 1 − p , {\displaystyle {\frac {p_{n+1}-p}{p_{n}-p}}\approx {\frac {p_{n+2}-p}{p_{n+1}-p}},} so that ( p n + 2 − 2 p n + 1 + p n ) p ≈ p n + 2 p n − p n + 1 2 .
{\displaystyle (p_{n+2}-2p_{n+1}+p_{n})p\approx p_{n+2}p_{n}-p_{n+1}^{2}.} Solving for the desired limit of the sequence p {\displaystyle p} gives: p ≈ p n + 2 p n − p n + 1 2 p n + 2 − 2 p n + 1 + p n {\displaystyle p\approx {\frac {p_{n+2}p_{n}-p_{n+1}^{2}}{p_{n+2}-2p_{n+1}+p_{n}}}} = ( p n 2 + p n p n + 2 − 2 p n p n + 1 ) − ( p n 2 − 2 p n p n + 1 + p n + 1 2 ) p n + 2 − 2 p n + 1 + p n {\displaystyle =~{\frac {\,(\,p_{n}^{2}+p_{n}\,p_{n+2}-2\,p_{n}\,p_{n+1}\,)-(\,p_{n}^{2}-2\,p_{n}\,p_{n+1}+p_{n+1}^{2}\,)\,}{\,p_{n+2}-2\,p_{n+1}+p_{n}\,}}} = p n − ( p n + 1 − p n ) 2 p n + 2 − 2 p n + 1 + p n , {\displaystyle =p_{n}-{\frac {(p_{n+1}-p_{n})^{2}}{p_{n+2}-2p_{n+1}+p_{n}}},} which results in the more rapidly convergent sequence: p ≈ p n + 3 = p n − ( p n + 1 − p n ) 2 p n + 2 − 2 p n + 1 + p n . {\displaystyle p\approx p_{n+3}=p_{n}-{\frac {(p_{n+1}-p_{n})^{2}}{p_{n+2}-2p_{n+1}+p_{n}}}.} == Code example == === In Matlab === Steffensen's method can be implemented in a few lines of MATLAB directly from the formulae above. === In Python === The same method can be implemented equally directly in Python. == Generalization to Banach space == Steffensen's method can also be used to find an input x = x ⋆ {\displaystyle x=x_{\star }} for a different kind of function F {\displaystyle F} that produces output the same as its input: x ⋆ = F ( x ⋆ ) {\displaystyle x_{\star }=F(x_{\star })} for the special value x ⋆ {\displaystyle x_{\star }} . Solutions like x ⋆ {\displaystyle x_{\star }} are called fixed points. Many of these functions can be used to find their own solutions by repeatedly recycling the result back as input, but the rate of convergence can be slow, or the function can fail to converge at all, depending on the individual function. Steffensen's method accelerates this convergence, to make it quadratic.
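The Aitken-accelerated fixed-point form of the method can be sketched in Python as follows; the cosine test function, tolerance, and iteration cap are illustrative assumptions:

```python
import math

def steffensen_fixed_point(F, p0, tol=1e-12, max_iter=100):
    """Find p with p = F(p), accelerating fixed-point iteration with
    Aitken's delta-squared formula."""
    p = p0
    for _ in range(max_iter):
        p1 = F(p)
        p2 = F(p1)
        denom = p2 - 2 * p1 + p
        if denom == 0:                       # the sequence has (numerically) converged
            return p2
        p_next = p - (p1 - p) ** 2 / denom   # the accelerated value p_{n+3}
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p
```

For example, steffensen_fixed_point(math.cos, 0.5) converges to the fixed point of cos, approximately 0.739085, far faster than plain iteration of the cosine.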
For orientation, the root function f {\displaystyle f} and the fixed-point functions are simply related by F ( x ) = x + ε f ( x ) {\displaystyle F(x)=x+\varepsilon f(x)} , where ε {\displaystyle \varepsilon } is some scalar constant small enough in magnitude to make F {\displaystyle F} stable under iteration, but large enough for the non-linearity of the function f {\displaystyle f} to be appreciable; all issues of a more general Banach space vs. the basic real numbers are momentarily ignored for the sake of the comparison. This method for finding fixed points of a real-valued function has been generalized for functions F : X → X {\displaystyle F:X\to X} that map a Banach space X {\displaystyle X} into itself or even more generally F : X → Y {\displaystyle F:X\to Y} that map from one Banach space X {\displaystyle X} into another Banach space Y {\displaystyle Y} . The generalized method assumes that a family of bounded linear operators { G ( u , v ) : u , v ∈ X } {\displaystyle \{G(u,v):u,v\in X\}} associated with u {\displaystyle u} and v {\displaystyle v} can be devised that (locally) satisfies the condition F ( u ) − F ( v ) = G ( u , v ) [ u − v ] . ( 1 ) {\displaystyle F\left(u\right)-F\left(v\right)=G\left(u,v\right){\bigl [}u-v{\bigr ]}.\qquad (1)} If division is possible in the Banach space, then the linear operator G {\displaystyle G} can be obtained from G ( u , v ) = [ F ( u ) − F ( v ) ] [ u − v ] − 1 , {\displaystyle G\left(u,v\right)={\bigl [}F\left(u\right)-F\left(v\right){\bigr ]}{\bigl [}u-v{\bigr ]}^{-1},} which may provide some insight: Expressed in this way, the linear operator G {\displaystyle G} can be more easily seen to be an elaborate version of the divided difference g {\displaystyle g} discussed in the first section, above. The quotient form is shown here for orientation only; it is not required per se. Note also that division within the Banach space is not necessary for the elaborated Steffensen's method to be viable; the only requirement is that the operator G {\displaystyle G} satisfy (1). 
For the basic real number function f {\displaystyle f} , given in the first section, the function simply takes in and puts out real numbers. There, the function g {\displaystyle g} is a divided difference. In the generalized form here, the operator G {\displaystyle G} is the analogue of a divided difference for use in the Banach space. The operator G {\displaystyle G} is roughly equivalent to a matrix whose entries are all functions of vector arguments u {\displaystyle u} and v {\displaystyle v} . Steffensen's method is then very similar to Newton's method, except that it uses the divided difference G ( F ( x ) , x ) {\displaystyle G{\bigl (}F\left(x\right),x{\bigr )}} instead of the derivative F ′ ( x ) {\displaystyle F'(x)} . Note that for arguments x {\displaystyle x} close to some fixed point x ⋆ {\displaystyle x_{\star }} , for fixed-point functions F {\displaystyle F} and their linear operators G {\displaystyle G} meeting condition (1), F ′ ( x ) ≈ G ( F ( x ) , x ) ≈ I {\displaystyle F'(x)\approx G{\bigl (}F\left(x\right),x{\bigr )}\approx I} , where I {\displaystyle I} is the identity operator. In the case that division is possible in the Banach space, the generalized iteration formula is given by x n + 1 = x n + [ I − G ( F ( x n ) , x n ) ] − 1 [ F ( x n ) − x n ] , {\displaystyle x_{n+1}=x_{n}+{\Bigl [}I-G{\bigl (}F\left(x_{n}\right),x_{n}{\bigr )}{\Bigr ]}^{-1}{\Bigl [}F\left(x_{n}\right)-x_{n}{\Bigr ]},} for n = 1 , 2 , 3 , . . . {\displaystyle n=1,\,2,\,3,\,...} . In the more general case in which division may not be possible, the iteration formula requires finding a solution x n + 1 {\displaystyle x_{n+1}} close to x n {\displaystyle x_{n}} for which [ I − G ( F ( x n ) , x n ) ] [ x n + 1 − x n ] = F ( x n ) − x n . 
{\displaystyle {\Bigl [}I-G{\bigl (}F\left(x_{n}\right),x_{n}{\bigr )}{\Bigr ]}{\Bigl [}x_{n+1}-x_{n}{\Bigr ]}=F\left(x_{n}\right)-x_{n}.} Equivalently, one may seek the solution x n + 1 {\displaystyle x_{n+1}} to the somewhat reduced form [ I − G ( F ( x n ) , x n ) ] x n + 1 = [ F ( x n ) − G ( F ( x n ) , x n ) x n ] , {\displaystyle {\Bigl [}I-G{\bigl (}F\left(x_{n}\right),x_{n}{\bigr )}{\Bigr ]}x_{n+1}={\Bigl [}F\left(x_{n}\right)-G{\bigl (}F\left(x_{n}\right),x_{n}{\bigr )}\ x_{n}{\Bigr ]},} with all the values inside square brackets being independent of x n + 1 {\displaystyle x_{n+1}} : the bracketed terms all only depend on x n {\displaystyle x_{n}} . Note, however, that the second form may not be as numerically stable as the first: because the first form involves finding a value for a (hopefully) small difference, it may be numerically more likely to avoid excessively large or erratic changes to the iterated value x n {\displaystyle x_{n}} . If the linear operator G {\displaystyle G} satisfies ‖ G ( u , v ) − G ( x , y ) ‖ ≤ k ( ‖ u − x ‖ + ‖ v − y ‖ ) {\displaystyle {\Bigl \|}G\left(u,v\right)-G\left(x,y\right){\Bigr \|}\leq k{\biggl (}{\Bigl \|}u-x{\Bigr \|}+{\Bigl \|}v-y{\Bigr \|}{\biggr )}} for some positive real constant k {\displaystyle k} , then the method converges quadratically to a fixed point of F {\displaystyle F} if the initial approximation x 0 {\displaystyle x_{0}} is "sufficiently close" to the desired solution x ⋆ {\displaystyle x_{\star }} that satisfies x ⋆ = F ( x ⋆ ) {\displaystyle x_{\star }=F(x_{\star })} . == Notes == == References ==
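The Aitken-accelerated fixed-point iteration derived in the delta-squared section, which in the scalar case coincides with the generalized step x_{n+1} = x_n + [1 − G(F(x_n), x_n)]⁻¹[F(x_n) − x_n], can be sketched in Python. This is a hedged sketch: the function name, tolerance, and iteration cap are illustrative assumptions, not taken from the article's listings.

```python
import math

def steffensen_fixed_point(F, p0, tol=1e-12, max_iter=100):
    """Accelerate the fixed-point iteration p = F(p) using Aitken's
    delta-squared formula: p <- p - (p1 - p)**2 / (p2 - 2*p1 + p).
    In the scalar case this is exactly the generalized Steffensen step
    with G the ordinary divided difference."""
    p = p0
    for _ in range(max_iter):
        p1 = F(p)
        p2 = F(p1)
        denom = p2 - 2.0 * p1 + p
        if denom == 0.0:            # iterates have (numerically) converged
            return p2
        p_next = p - (p1 - p) ** 2 / denom
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    raise ArithmeticError("Steffensen iteration did not converge")

# Example: the fixed point of cos (the Dottie number)
root = steffensen_fixed_point(math.cos, 0.5)
```

Each pass costs two evaluations of F and, as the derivation above shows, roughly squares the error, whereas the plain iteration p ← F(p) only shrinks it linearly.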
Wikipedia/Steffensen's_method
In mathematics, the Lehmer–Schur algorithm (named after Derrick Henry Lehmer and Issai Schur) is a root-finding algorithm for complex polynomials, extending the idea of enclosing roots like in the one-dimensional bisection method to the complex plane. It uses the Schur-Cohn test to test increasingly smaller disks for the presence or absence of roots. == Schur-Cohn algorithm == This algorithm allows one to find the distribution of the roots of a complex polynomial with respect to the unit circle in the complex plane. It is based on two auxiliary polynomials, introduced by Schur. For a complex polynomial p {\displaystyle p} of degree n {\displaystyle n} its reciprocal adjoint polynomial p ∗ {\displaystyle p^{*}} is defined by p ∗ ( z ) = z n p ( z ¯ − 1 ) ¯ {\displaystyle p^{*}(z)=z^{n}{\overline {p({\bar {z}}^{-1})}}} and its Schur Transform T p {\displaystyle Tp} by T p = p ( 0 ) ¯ p − p ∗ ( 0 ) ¯ p ∗ , {\displaystyle Tp={\overline {p(0)}}p-{\overline {p^{*}(0)}}p^{*},} where a bar denotes complex conjugation. So, if p ( z ) = a n z n + ⋯ + a 1 z + a 0 {\displaystyle p(z)=a_{n}z^{n}+\cdots +a_{1}z+a_{0}} with a n ≠ 0 {\displaystyle a_{n}\neq 0} , then p ∗ ( z ) = a ¯ 0 z n + a ¯ 1 z n − 1 + ⋯ + a ¯ n {\displaystyle p^{*}(z)={\bar {a}}_{0}z^{n}+{\bar {a}}_{1}z^{n-1}+\cdots +{\bar {a}}_{n}} , with leading zero-terms, if any, removed. The coefficients of T p {\displaystyle Tp} can therefore be directly expressed in those of p {\displaystyle p} and, since one or more leading coefficients cancel, T p {\displaystyle Tp} has lower degree than p {\displaystyle p} . The roots of p {\displaystyle p} , p ∗ {\displaystyle p^{*}} , and T p {\displaystyle Tp} are related as follows. Lemma Let p {\displaystyle p} be a complex polynomial and δ = ( T p ) ( 0 ) {\displaystyle \delta =(Tp)(0)} . The roots of p ∗ {\displaystyle p^{*}} , including their multiplicities, are the images under inversion in the unit circle of the non-zero roots of p {\displaystyle p} . 
If δ ≠ 0 {\displaystyle \delta \neq 0} , then p , p ∗ {\displaystyle p,\,p^{*}} , and T p {\displaystyle Tp} share roots on the unit circle, including their multiplicities. If δ > 0 {\displaystyle \delta >0} , then p {\displaystyle p} and T p {\displaystyle Tp} have the same number of roots inside the unit circle. If δ < 0 {\displaystyle \delta <0} , then p ∗ {\displaystyle p^{*}} and T p {\displaystyle Tp} have the same number of roots inside the unit circle. Proof For z ≠ 0 {\displaystyle z\neq 0} we have p ∗ ( z ) = z n p ( z / | z | 2 ) ¯ {\displaystyle p^{*}(z)=z^{n}{\overline {p(z/|z|^{2})}}} and, in particular, | p ∗ ( z ) | = | p ( z ) | {\displaystyle |p^{*}(z)|=|p(z)|} for | z | = 1 {\displaystyle |z|=1} . Also δ ≠ 0 {\displaystyle \delta \neq 0} implies | p ( 0 ) | ≠ | p ∗ ( 0 ) | {\displaystyle |p(0)|\neq |p^{*}(0)|} . From this and the definitions above the first two statements follow. The other two statements are a consequence of Rouché's theorem applied on the unit circle to the functions p ( 0 ) ¯ p ( z ) / r ( z ) {\displaystyle {\overline {p(0)}}p(z)/r(z)} and − p ∗ ( 0 ) ¯ p ∗ ( z ) / r ( z ) {\displaystyle -{\overline {p^{*}(0)}}p^{*}(z)/r(z)} , where r {\displaystyle r} is a polynomial that has as its roots the roots of p {\displaystyle p} on the unit circle, with the same multiplicities. □ For a more accessible representation of the lemma, let n p − , n p 0 {\displaystyle n_{p}^{-},n_{p}^{0}} , and n p + {\displaystyle n_{p}^{+}} denote the number of roots of p {\displaystyle p} inside, on, and outside the unit circle respectively and similarly for T p {\displaystyle Tp} . Moreover let d {\displaystyle d} be the difference in degree of p {\displaystyle p} and T p {\displaystyle Tp} . 
Then the lemma implies that ( n p − , n p 0 , n p + ) = ( n T p − , n T p 0 , n T p + + d ) {\displaystyle (n_{p}^{-},\;n_{p}^{0},\;n_{p}^{+})=(n_{Tp}^{-},\;n_{Tp}^{0},\;n_{Tp}^{+}+d)} if δ > 0 {\displaystyle \delta >0} and ( n p − , n p 0 , n p + ) = ( n T p + + d , n T p 0 , n T p − ) {\displaystyle (n_{p}^{-},\;n_{p}^{0},\;n_{p}^{+})=(n_{Tp}^{+}+d,\;n_{Tp}^{0},\;n_{Tp}^{-})} if δ < 0 {\displaystyle \delta <0} (note the interchange of + {\displaystyle ^{+}} and − {\displaystyle ^{-}} ). Now consider the sequence of polynomials T k p {\displaystyle T^{k}p} ( k = 0 , 1 , … ) {\displaystyle (k=0,1,\ldots )} , where T 0 p = p {\displaystyle T^{0}p=p} and T k + 1 p = T ( T k p ) {\displaystyle T^{k+1}p=T(T^{k}p)} . Application of the foregoing to each pair of consecutive members of this sequence gives the following result. Theorem[Schur-Cohn test] Let p {\displaystyle p} be a complex polynomial with T p ≠ 0 {\displaystyle Tp\neq 0} and let K {\displaystyle K} be the smallest number such that T K + 1 p = 0 {\displaystyle T^{K+1}p=0} . Moreover let δ k = ( T k p ) ( 0 ) {\displaystyle \delta _{k}=(T^{k}p)(0)} for k = 1 , … , K {\displaystyle k=1,\ldots ,K} and d k = deg ⁡ T k p {\displaystyle d_{k}=\deg T^{k}p} for k = 0 , … , K {\displaystyle k=0,\ldots ,K} . All roots of p {\displaystyle p} lie inside the unit circle if and only if δ 1 < 0 {\displaystyle \delta _{1}<0} , δ k > 0 {\displaystyle \delta _{k}>0} for k = 2 , … , K {\displaystyle k=2,\ldots ,K} , and d K = 0 {\displaystyle d_{K}=0} . All roots of p {\displaystyle p} lie outside the unit circle if and only if δ k > 0 {\displaystyle \delta _{k}>0} for k = 1 , … , K {\displaystyle k=1,\ldots ,K} and d K = 0 {\displaystyle d_{K}=0} . 
If d K = 0 {\displaystyle d_{K}=0} and if δ k < 0 {\displaystyle \delta _{k}<0} for k = k 0 , k 1 … k m {\displaystyle k=k_{0},k_{1}\ldots k_{m}} (in increasing order) and δ k > 0 {\displaystyle \delta _{k}>0} otherwise, then p {\displaystyle p} has no roots on the unit circle and the number of roots of p {\displaystyle p} inside the unit circle is ∑ i = 0 m ( − 1 ) i d k i − 1 {\displaystyle \sum _{i=0}^{m}(-1)^{i}d_{k_{i}-1}} . More generally, the distribution of the roots of a polynomial p {\displaystyle p} with respect to an arbitrary circle in the complex plane, say one with centre c {\displaystyle c} and radius ρ {\displaystyle \rho } , can be found by application of the Schur-Cohn test to the 'shifted and scaled' polynomial q {\displaystyle q} defined by q ( z ) = p ( c + ρ z ) {\displaystyle q(z)=p(c+\rho \,z)} . Not every scaling factor is allowed, however, for the Schur-Cohn test can be applied to the polynomial q {\displaystyle q} only if none of the following equalities occur: T k q ( 0 ) = 0 {\displaystyle T^{k}q(0)=0} for some k = 1 , … , K {\displaystyle k=1,\ldots ,K} or T K + 1 q = 0 {\displaystyle T^{K+1}q=0} while d K > 0 {\displaystyle d_{K}>0} . Now, the coefficients of the polynomials T k q {\displaystyle T^{k}q} are polynomials in ρ {\displaystyle \rho } and the said equalities result in polynomial equations for ρ {\displaystyle \rho } , which therefore hold for only finitely many values of ρ {\displaystyle \rho } . So a suitable scaling factor can always be found, even arbitrarily close to 1 {\displaystyle 1} . == Lehmer's method == Lehmer's method is as follows. For a given complex polynomial p {\displaystyle p} , with the Schur-Cohn test a circular disk can be found large enough to contain all roots of p {\displaystyle p} . Next this disk can be covered with a set of overlapping smaller disks, one of them placed concentrically and the remaining ones evenly spread over the annulus yet to be covered. 
From this set, using the test again, disks containing no root of p {\displaystyle p} can be removed. With each of the remaining disks this procedure of covering and removal can be repeated, and so on, any number of times, resulting in a set of arbitrarily small disks that together contain all roots of p {\displaystyle p} . The merits of the method are that it consists of repetition of a single procedure and that all roots are found simultaneously, whether they are real or complex, single, multiple or clustered. Also deflation, i.e. removal of roots already found, is not needed and every test starts with the full-precision, original polynomial. And, remarkably, this polynomial never has to be evaluated. However, the smaller the disks become, the more the coefficients of the corresponding 'scaled' polynomials will differ in relative magnitude. This may cause overflow or underflow of computer computations, thus limiting the radii of the disks from below and thereby the precision of the computed roots. To avoid extreme scaling, or just for the sake of efficiency, one may start with testing a number of concentric disks for the number of included roots and thus reduce the region where roots occur to a number of narrow, concentric annuli. Repeating this procedure with another centre and combining the results, the said region becomes the union of intersections of such annuli. Finally, when a small disk is found that contains a single root, that root may be further approximated using other methods, e.g. Newton's method. == References ==
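The Schur-Cohn test at the heart of the method can be sketched in a few lines of Python. The coefficient recurrence (Tp)_k = ā₀aₖ − aₙā_{n−k} follows directly from the definitions of p* and Tp above; the function names and the tolerance used to strip cancelled leading coefficients are illustrative assumptions.

```python
def schur_transform(c):
    """One Schur transform Tp of p with coefficients c[0..n], constant term
    first: (Tp)_k = conj(a0)*a_k - a_n*conj(a_{n-k}).  The leading terms
    cancel, so the degree always drops; cancelled leading zeros are stripped."""
    n = len(c) - 1
    t = [c[0].conjugate() * c[k] - c[n] * c[n - k].conjugate()
         for k in range(n + 1)]
    while t and abs(t[-1]) < 1e-12:
        t.pop()
    return t

def all_roots_inside_unit_disk(c):
    """Schur-Cohn criterion for 'all roots inside the unit circle':
    delta_1 < 0, delta_k > 0 for k = 2..K, and deg T^K p = 0."""
    p = list(c)
    deltas = []
    while len(p) > 1:
        p = schur_transform(p)
        if not p:                   # T^k p vanished with positive degree left:
            return False            # the test is inapplicable; treated as failure
        deltas.append(p[0].real)    # delta_k = (T^k p)(0) = |a0|^2 - |an|^2, real
    return bool(deltas) and deltas[0] < 0 and all(d > 0 for d in deltas[1:])
```

For example, p(z) = z² − 0.8z + 0.15 (roots 0.3 and 0.5) passes the test, while p(z) = z − 2 fails it.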
Wikipedia/Lehmer–Schur_algorithm
A division algorithm is an algorithm which, given two integers N and D (respectively the numerator and the denominator), computes their quotient and/or remainder, the result of Euclidean division. Some are applied by hand, while others are employed by digital circuit designs and software. Division algorithms fall into two main categories: slow division and fast division. Slow division algorithms produce one digit of the final quotient per iteration. Examples of slow division include restoring, non-performing restoring, non-restoring, and SRT division. Fast division methods start with a close approximation to the final quotient and produce twice as many digits of the final quotient on each iteration. Newton–Raphson and Goldschmidt algorithms fall into this category. Variants of these algorithms allow using fast multiplication algorithms. As a result, for large integers, the computer time needed for a division is the same, up to a constant factor, as the time needed for a multiplication, whichever multiplication algorithm is used. Discussion will refer to the form N / D = ( Q , R ) {\displaystyle N/D=(Q,R)} , where
N = numerator (dividend)
D = denominator (divisor)
is the input, and
Q = quotient
R = remainder
is the output. == Division by repeated subtraction == The simplest division algorithm, historically incorporated into a greatest common divisor algorithm presented in Euclid's Elements, Book VII, Proposition 1, finds the remainder given two positive integers using only subtractions and comparisons: The proof that the quotient and remainder exist and are unique (described at Euclidean division) gives rise to a complete division algorithm, applicable to both negative and positive numbers, using additions, subtractions, and comparisons: This procedure always produces R ≥ 0. Although very simple, it takes Ω(Q) steps, and so is exponentially slower than even slow division algorithms like long division. 
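For the non-negative case, the subtraction-only procedure above amounts to the following sketch (the function name and the restriction to N ≥ 0, D > 0 are illustrative simplifications):

```python
def divide_by_repeated_subtraction(N, D):
    """Euclidean division of N >= 0 by D > 0 using only subtraction and
    comparison.  Takes exactly Q iterations, hence the Omega(Q) running time."""
    if D <= 0 or N < 0:
        raise ValueError("this sketch covers the non-negative case only")
    Q, R = 0, N
    while R >= D:
        R -= D
        Q += 1
    return Q, R
```

The loop invariant N = Q·D + R with 0 ≤ R is immediate, and the exit condition adds R < D, which is exactly the uniqueness condition of Euclidean division.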
It is useful if Q is known to be small (being an output-sensitive algorithm), and can serve as an executable specification. == Long division == Long division is the standard algorithm used for pen-and-paper division of multi-digit numbers expressed in decimal notation. It shifts gradually from the left to the right end of the dividend, subtracting the largest possible multiple of the divisor (at the digit level) at each stage; the multiples then become the digits of the quotient, and the final difference is then the remainder. When used with a binary radix, this method forms the basis for the (unsigned) integer division with remainder algorithm below. Short division is an abbreviated form of long division suitable for one-digit divisors. Chunking – also known as the partial quotients method or the hangman method – is a less-efficient form of long division which may be easier to understand. By allowing one to subtract more multiples than what one currently has at each stage, a more freeform variant of long division can be developed as well. === Integer division (unsigned) with remainder === The following algorithm, the binary version of the famous long division, will divide N by D, placing the quotient in Q and the remainder in R. In the following pseudo-code, all values are treated as unsigned integers. ==== Example ==== If we take N=1100₂ (12₁₀) and D=100₂ (4₁₀):
Step 1: Set R=0 and Q=0
Step 2: Take i=3 (one less than the number of bits in N)
Step 3: R=00 (left shifted by 1)
Step 4: R=01 (setting R(0) to N(i))
Step 5: R < D, so skip statement
Step 2: Set i=2
Step 3: R=010
Step 4: R=011
Step 5: R < D, statement skipped
Step 2: Set i=1
Step 3: R=0110
Step 4: R=0110
Step 5: R>=D, statement entered
Step 5b: R=10 (R−D)
Step 5c: Q=10 (setting Q(i) to 1)
Step 2: Set i=0
Step 3: R=100
Step 4: R=100
Step 5: R>=D, statement entered
Step 5b: R=0 (R−D)
Step 5c: Q=11 (setting Q(i) to 1)
end
Q=11₂ (3₁₀) and R=0. 
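The worked example above can be reproduced with a short Python rendering of the unsigned binary algorithm (a sketch; the explicit bit-width parameter and function name are illustrative):

```python
def unsigned_divide(N, D, bits):
    """Binary long division of unsigned integers, mirroring the worked
    example: for each bit i from most to least significant, left-shift the
    partial remainder R, bring down bit i of N, and subtract D when R >= D."""
    if D == 0:
        raise ZeroDivisionError("division by zero")
    Q, R = 0, 0
    for i in range(bits - 1, -1, -1):
        R = (R << 1) | ((N >> i) & 1)   # steps 3-4: shift R, set R(0) to N(i)
        if R >= D:                      # step 5
            R -= D                      # step 5b
            Q |= 1 << i                 # step 5c: set Q(i) to 1
    return Q, R

# The example from the text: N = 1100 in binary (12), D = 100 in binary (4)
q, r = unsigned_divide(0b1100, 0b100, 4)
```

The partial remainder never exceeds 2D − 1 before the conditional subtraction, which is why a single comparison and subtraction per bit suffice.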
== Slow division methods == Slow division methods are all based on a standard recurrence equation R j + 1 = B × R j − q n − ( j + 1 ) × D , {\displaystyle R_{j+1}=B\times R_{j}-q_{n-(j+1)}\times D,} where:
Rj is the j-th partial remainder of the division
B is the radix (base, usually 2 internally in computers and calculators)
q n − (j + 1) is the digit of the quotient in position n−(j+1), where the digit positions are numbered from least-significant 0 to most significant n−1
n is the number of digits in the quotient
D is the divisor
=== Restoring division === Restoring division operates on fixed-point fractional numbers and depends on the assumption 0 < D < N. The quotient digits q are formed from the digit set {0,1}. The basic algorithm for binary (radix 2) restoring division is: Non-performing restoring division is similar to restoring division except that the value of 2R is saved, so D does not need to be added back in for the case of R < 0. === Non-restoring division === Non-restoring division uses the digit set {−1, 1} for the quotient digits instead of {0, 1}. The algorithm is more complex, but has the advantage when implemented in hardware that there is only one decision and addition/subtraction per quotient bit; there is no restoring step after the subtraction, which potentially cuts down the number of operations by up to half and lets it be executed faster. The basic algorithm for binary (radix 2) non-restoring division of non-negative numbers is: Following this algorithm, the quotient is in a non-standard form consisting of digits of −1 and +1. This form needs to be converted to binary to form the final quotient. Example: If the −1 digits of Q {\displaystyle Q} are stored as zeros (0) as is common, then P {\displaystyle P} is Q {\displaystyle Q} and computing M {\displaystyle M} is trivial: perform a ones' complement (bit by bit complement) on the original Q {\displaystyle Q} . 
Finally, quotients computed by this algorithm are always odd, and the remainder in R is in the range −D ≤ R < D. For example, 5 / 2 = 3 R −1. To convert to a positive remainder, do a single restoring step after Q is converted from non-standard form to standard form: The actual remainder is R >> n. (As with restoring division, the low-order bits of R are used up at the same rate as bits of the quotient Q are produced, and it is common to use a single shift register for both.) === SRT division === SRT division is a popular method for division in many microprocessor implementations. The algorithm is named after D. W. Sweeney of IBM, James E. Robertson of University of Illinois, and K. D. Tocher of Imperial College London. They all developed the algorithm independently at approximately the same time (published in February 1957, September 1958, and January 1958 respectively). SRT division is similar to non-restoring division, but it uses a lookup table based on the dividend and the divisor to determine each quotient digit. The most significant difference is that a redundant representation is used for the quotient. For example, when implementing radix-4 SRT division, each quotient digit is chosen from five possibilities: { −2, −1, 0, +1, +2 }. Because of this, the choice of a quotient digit need not be perfect; later quotient digits can correct for slight errors. (For example, the quotient digit pairs (0, +2) and (1, −2) are equivalent, since 0×4+2 = 1×4−2.) This tolerance allows quotient digits to be selected using only a few most-significant bits of the dividend and divisor, rather than requiring a full-width subtraction. This simplification in turn allows a radix higher than 2 to be used. Like non-restoring division, the final steps are a final full-width subtraction to resolve the last quotient bit, and conversion of the quotient to standard binary form. 
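The restoring and non-restoring recurrences described above can be sketched on integer operands. This is a hedged sketch: the text's fixed-point fractional setting is replaced by integers with the divisor pre-aligned, the quotient must fit in the given bit width, and all names are illustrative.

```python
def restoring_divide(N, D, bits):
    """Radix-2 restoring division: trial-subtract the aligned divisor each
    step; on a negative result, restore (add back) and emit quotient bit 0."""
    R, Q = N, 0
    D <<= bits                      # align divisor with the top of R
    for _ in range(bits):
        R = (R << 1) - D            # trial subtraction
        if R >= 0:
            Q = (Q << 1) | 1
        else:
            Q = Q << 1
            R += D                  # restoring step
    return Q, R >> bits

def nonrestoring_divide(N, D, bits):
    """Radix-2 non-restoring division: one add-or-subtract per bit, quotient
    digits in {-1, +1} (the -1 digits stored as 0, as in the text), then
    conversion to standard binary as P - M and a single remainder fix-up."""
    R, digits = N, []
    D <<= bits
    for _ in range(bits):
        if R >= 0:
            digits.append(1)
            R = (R << 1) - D
        else:
            digits.append(0)        # a stored 0 stands for the digit -1
            R = (R << 1) + D
    P = 0
    for d in digits:
        P = (P << 1) | d
    M = ~P & ((1 << bits) - 1)      # ones' complement of P
    Q = P - M                       # quotient in standard form (always odd here)
    if R < 0:                       # single restoring step for the remainder
        Q -= 1
        R += D
    return Q, R >> bits
```

Note how the non-restoring version performs exactly one addition or subtraction per bit, with the sign test alone selecting which, matching the hardware advantage claimed above.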
The Intel Pentium processor's infamous floating-point division bug was caused by an incorrectly coded lookup table. Five of the 1066 entries had been mistakenly omitted. == Fast division methods == === Newton–Raphson division === Newton–Raphson uses Newton's method to find the reciprocal of D {\displaystyle D} and multiply that reciprocal by N {\displaystyle N} to find the final quotient Q {\displaystyle Q} . The steps of Newton–Raphson division are: Calculate an estimate X 0 {\displaystyle X_{0}} for the reciprocal 1 / D {\displaystyle 1/D} of the divisor D {\displaystyle D} . Compute successively more accurate estimates X 1 , X 2 , … , X S {\displaystyle X_{1},X_{2},\ldots ,X_{S}} of the reciprocal. This is where one employs the Newton–Raphson method as such. Compute the quotient by multiplying the dividend by the reciprocal of the divisor: Q = N X S {\displaystyle Q=NX_{S}} . In order to apply Newton's method to find the reciprocal of D {\displaystyle D} , it is necessary to find a function f ( X ) {\displaystyle f(X)} that has a zero at X = 1 / D {\displaystyle X=1/D} . The obvious such function is f ( X ) = D X − 1 {\displaystyle f(X)=DX-1} , but the Newton–Raphson iteration for this is unhelpful, since it cannot be computed without already knowing the reciprocal of D {\displaystyle D} (moreover it attempts to compute the exact reciprocal in one step, rather than allow for iterative improvements). A function that does work is f ( X ) = ( 1 / X ) − D {\displaystyle f(X)=(1/X)-D} , for which the Newton–Raphson iteration gives X i + 1 = X i − f ( X i ) f ′ ( X i ) = X i − 1 / X i − D − 1 / X i 2 = X i + X i ( 1 − D X i ) = X i ( 2 − D X i ) , {\displaystyle X_{i+1}=X_{i}-{f(X_{i}) \over f'(X_{i})}=X_{i}-{1/X_{i}-D \over -1/X_{i}^{2}}=X_{i}+X_{i}(1-DX_{i})=X_{i}(2-DX_{i}),} which can be calculated from X i {\displaystyle X_{i}} using only multiplication and subtraction, or using two fused multiply–adds. 
From a computation point of view, the expressions X i + 1 = X i + X i ( 1 − D X i ) {\displaystyle X_{i+1}=X_{i}+X_{i}(1-DX_{i})} and X i + 1 = X i ( 2 − D X i ) {\displaystyle X_{i+1}=X_{i}(2-DX_{i})} are not equivalent. To obtain a result with a precision of 2n bits while making use of the second expression, one must compute the product between X i {\displaystyle X_{i}} and ( 2 − D X i ) {\displaystyle (2-DX_{i})} with double the given precision of X i {\displaystyle X_{i}} (n bits). In contrast, the product between X i {\displaystyle X_{i}} and ( 1 − D X i ) {\displaystyle (1-DX_{i})} need only be computed with a precision of n bits, because the leading n bits (after the binary point) of ( 1 − D X i ) {\displaystyle (1-DX_{i})} are zeros. If the error is defined as ε i = 1 − D X i {\displaystyle \varepsilon _{i}=1-DX_{i}} , then: ε i + 1 = 1 − D X i + 1 = 1 − D ( X i ( 2 − D X i ) ) = 1 − 2 D X i + D 2 X i 2 = ( 1 − D X i ) 2 = ε i 2 . {\displaystyle {\begin{aligned}\varepsilon _{i+1}&=1-DX_{i+1}\\&=1-D(X_{i}(2-DX_{i}))\\&=1-2DX_{i}+D^{2}X_{i}^{2}\\&=(1-DX_{i})^{2}\\&={\varepsilon _{i}}^{2}.\\\end{aligned}}} This squaring of the error at each iteration step – the so-called quadratic convergence of Newton–Raphson's method – has the effect that the number of correct digits in the result roughly doubles for every iteration, a property that becomes extremely valuable when the numbers involved have many digits (e.g. in the large integer domain). But it also means that the initial convergence of the method can be comparatively slow, especially if the initial estimate X 0 {\displaystyle X_{0}} is poorly chosen. ==== Initial estimate ==== For the subproblem of choosing an initial estimate X 0 {\displaystyle X_{0}} , it is convenient to apply a bit-shift to the divisor D to scale it so that 0.5 ≤ D ≤ 1. Applying the same bit-shift to the numerator N ensures the quotient does not change. 
Once within a bounded range, a simple polynomial approximation can be used to find an initial estimate. The linear approximation with minimum worst-case absolute error on the interval [ 0.5 , 1 ] {\displaystyle [0.5,1]} is: X 0 = 48 17 − 32 17 D . {\displaystyle X_{0}={48 \over 17}-{32 \over 17}D.} The coefficients of the linear approximation T 0 + T 1 D {\displaystyle T_{0}+T_{1}D} are determined as follows. The absolute value of the error is | ε 0 | = | 1 − D ( T 0 + T 1 D ) | {\displaystyle |\varepsilon _{0}|=|1-D(T_{0}+T_{1}D)|} . The minimum of the maximum absolute value of the error is determined by the Chebyshev equioscillation theorem applied to F ( D ) = 1 − D ( T 0 + T 1 D ) {\displaystyle F(D)=1-D(T_{0}+T_{1}D)} . The local minimum of F ( D ) {\displaystyle F(D)} occurs when F ′ ( D ) = 0 {\displaystyle F'(D)=0} , which has solution D = − T 0 / ( 2 T 1 ) {\displaystyle D=-T_{0}/(2T_{1})} . The function at that minimum must be of opposite sign as the function at the endpoints, namely, F ( 1 / 2 ) = F ( 1 ) = − F ( − T 0 / ( 2 T 1 ) ) {\displaystyle F(1/2)=F(1)=-F(-T_{0}/(2T_{1}))} . The two equations in the two unknowns have a unique solution T 0 = 48 / 17 {\displaystyle T_{0}=48/17} and T 1 = − 32 / 17 {\displaystyle T_{1}=-32/17} , and the maximum error is F ( 1 ) = 1 / 17 {\displaystyle F(1)=1/17} . Using this approximation, the absolute value of the error of the initial value is less than | ε 0 | ≤ 1 17 ≈ 0.059. {\displaystyle \vert \varepsilon _{0}\vert \leq {1 \over 17}\approx 0.059.} The best quadratic fit to 1 / D {\displaystyle 1/D} in the interval is X := 140 33 − 64 11 D + 256 99 D 2 . {\displaystyle X:={\frac {140}{33}}-{\frac {64}{11}}D+{\frac {256}{99}}D^{2}.} It is chosen to make the error equal to a re-scaled third order Chebyshev polynomial of the first kind, and gives an absolute value of the error less than or equal to 1/99. 
This improvement is equivalent to log 2 ⁡ ( log ⁡ 99 / log ⁡ 17 ) ≈ 0.7 {\displaystyle \log _{2}(\log 99/\log 17)\approx 0.7} Newton–Raphson iterations, at a computational cost of less than one iteration. It is possible to generate a polynomial fit of degree larger than 2, computing the coefficients using the Remez algorithm. The trade-off is that the initial guess requires more computational cycles but hopefully in exchange for fewer iterations of Newton–Raphson. Since for this method the convergence is exactly quadratic, it follows that, from an initial error ε 0 {\displaystyle \varepsilon _{0}} , S {\displaystyle S} iterations will give an answer accurate to P = − 2 S log 2 ⁡ ε 0 − 1 = 2 S log 2 ⁡ ( 1 / ε 0 ) − 1 {\displaystyle P=-2^{S}\log _{2}\varepsilon _{0}-1=2^{S}\log _{2}(1/\varepsilon _{0})-1} binary places. Typical values are: A quadratic initial estimate plus two iterations is accurate enough for IEEE single precision, but three iterations are marginal for double precision. A linear initial estimate plus four iterations is sufficient for both double and double extended formats. ==== Pseudocode ==== The following computes the quotient of N and D with a precision of P binary places:
Express D as M × 2^e where 1 ≤ M < 2 (standard floating point representation)
D' := D / 2^(e+1)  // scale between 0.5 and 1, can be performed with bit shift / exponent subtraction
N' := N / 2^(e+1)
X := 48/17 − 32/17 × D'  // precompute constants with same precision as D
repeat ⌈ log 2 ⁡ P + 1 log 2 ⁡ 17 ⌉ {\displaystyle \left\lceil \log _{2}{\frac {P+1}{\log _{2}17}}\right\rceil \,} times  // can be precomputed based on fixed P
    X := X + X × (1 - D' × X)
end
return N' × X
For example, for a double-precision floating-point division, this method uses 10 multiplies, 9 adds, and 2 shifts. 
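The pseudocode above translates directly to floating point. In this sketch (for positive D only), `math.frexp` supplies the scaling step, and the iteration count is fixed at four rather than derived from P; the function name is illustrative.

```python
import math

def newton_raphson_divide(N, D, iterations=4):
    """Divide N by D > 0 via Newton-Raphson reciprocal refinement: scale D
    into [0.5, 1), seed with the minimax linear estimate 48/17 - 32/17*D',
    then iterate X <- X + X*(1 - D'*X), squaring the error each step."""
    m, e = math.frexp(D)            # D = m * 2**e with 0.5 <= m < 1
    Np = N / 2.0 ** e               # apply the same scaling to the numerator
    X = 48.0 / 17.0 - 32.0 / 17.0 * m
    for _ in range(iterations):
        X = X + X * (1.0 - m * X)   # error goes from eps to eps**2
    return Np * X
```

With |ε₀| ≤ 1/17, four iterations drive the relative error to ε₀¹⁶, far below double precision, matching the "linear initial estimate plus four iterations" rule of thumb above.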
==== Cubic iteration ==== There is an iteration which uses three multiplications to cube the error: ε i = 1 − D X i {\displaystyle \varepsilon _{i}=1-DX_{i}} Y i = X i ε i {\displaystyle Y_{i}=X_{i}\varepsilon _{i}} X i + 1 = X i + Y i + Y i ε i . {\displaystyle X_{i+1}=X_{i}+Y_{i}+Y_{i}\varepsilon _{i}.} The Yiεi term is new. Expanding out the above, X i + 1 {\displaystyle X_{i+1}} can be written as X i + 1 = X i + X i ε i + X i ε i 2 = X i + X i ( 1 − D X i ) + X i ( 1 − D X i ) 2 = 3 X i − 3 D X i 2 + D 2 X i 3 , {\displaystyle {\begin{aligned}X_{i+1}&=X_{i}+X_{i}\varepsilon _{i}+X_{i}\varepsilon _{i}^{2}\\&=X_{i}+X_{i}(1-DX_{i})+X_{i}(1-DX_{i})^{2}\\&=3X_{i}-3DX_{i}^{2}+D^{2}X_{i}^{3},\end{aligned}}} with the result that the error term ε i + 1 = 1 − D X i + 1 = 1 − 3 D X i + 3 D 2 X i 2 − D 3 X i 3 = ( 1 − D X i ) 3 = ε i 3 . {\displaystyle {\begin{aligned}\varepsilon _{i+1}&=1-DX_{i+1}\\&=1-3DX_{i}+3D^{2}X_{i}^{2}-D^{3}X_{i}^{3}\\&=(1-DX_{i})^{3}\\&=\varepsilon _{i}^{3}.\end{aligned}}} This is 3/2 the computation of the quadratic iteration, but achieves log ⁡ 3 / log ⁡ 2 ≈ 1.585 {\displaystyle \log 3/\log 2\approx 1.585} as much convergence, so is slightly more efficient. Put another way, two iterations of this method raise the error to the ninth power at the same computational cost as three quadratic iterations, which only raise the error to the eighth power. The number of correct bits after S {\displaystyle S} iterations is P = − 3 S log 2 ⁡ ε 0 − 1 = 3 S log 2 ⁡ ( 1 / ε 0 ) − 1 {\displaystyle P=-3^{S}\log _{2}\varepsilon _{0}-1=3^{S}\log _{2}(1/\varepsilon _{0})-1} binary places. Typical values are: A quadratic initial estimate plus two cubic iterations provides ample precision for an IEEE double-precision result. It is also possible to use a mixture of quadratic and cubic iterations. Using at least one quadratic iteration ensures that the error is positive, i.e. 
the reciprocal is underestimated.: 370  This can simplify a following rounding step if an exactly-rounded quotient is required. Using higher degree polynomials in either the initialization or the iteration results in a degradation of performance because the extra multiplications required would be better spent on doing more iterations. === Goldschmidt division === Goldschmidt division (after Robert Elliott Goldschmidt) uses an iterative process of repeatedly multiplying both the dividend and divisor by a common factor Fi, chosen such that the divisor converges to 1. This causes the dividend to converge to the sought quotient Q: Q = N D F 1 F 1 F 2 F 2 F … F … . {\displaystyle Q={\frac {N}{D}}{\frac {F_{1}}{F_{1}}}{\frac {F_{2}}{F_{2}}}{\frac {F_{\ldots }}{F_{\ldots }}}.} The steps for Goldschmidt division are: Generate an estimate for the multiplication factor Fi . Multiply the dividend and divisor by Fi . If the divisor is sufficiently close to 1, return the dividend, otherwise, loop to step 1. Assuming N/D has been scaled so that 0 < D < 1, each Fi is based on D: F i + 1 = 2 − D i . {\displaystyle F_{i+1}=2-D_{i}.} Multiplying the dividend and divisor by the factor yields: N i + 1 D i + 1 = N i D i F i + 1 F i + 1 . {\displaystyle {\frac {N_{i+1}}{D_{i+1}}}={\frac {N_{i}}{D_{i}}}{\frac {F_{i+1}}{F_{i+1}}}.} After a sufficient number k of iterations Q = N k {\displaystyle Q=N_{k}} . The Goldschmidt method is used in AMD Athlon CPUs and later models. It is also known as Anderson Earle Goldschmidt Powers (AEGP) algorithm and is implemented by various IBM processors. Although it converges at the same rate as a Newton–Raphson implementation, one advantage of the Goldschmidt method is that the multiplications in the numerator and in the denominator can be done in parallel. ==== Binomial theorem ==== The Goldschmidt method can be used with factors that allow simplifications by the binomial theorem. 
Assume ⁠ N / D {\displaystyle N/D} ⁠ has been scaled by a power of two such that D ∈ ( 1 2 , 1 ] {\displaystyle D\in \left({\tfrac {1}{2}},1\right]} . We choose D = 1 − x {\displaystyle D=1-x} and F i = 1 + x 2 i {\displaystyle F_{i}=1+x^{2^{i}}} . This yields N 1 − x = N ⋅ ( 1 + x ) 1 − x 2 = N ⋅ ( 1 + x ) ⋅ ( 1 + x 2 ) 1 − x 4 = ⋯ = Q ′ = N ′ = N ⋅ ( 1 + x ) ⋅ ( 1 + x 2 ) ⋅ ⋅ ⋅ ( 1 + x 2 ( n − 1 ) ) D ′ = 1 − x 2 n ≈ 1 {\displaystyle {\frac {N}{1-x}}={\frac {N\cdot (1+x)}{1-x^{2}}}={\frac {N\cdot (1+x)\cdot (1+x^{2})}{1-x^{4}}}=\cdots =Q'={\frac {N'=N\cdot (1+x)\cdot (1+x^{2})\cdot \cdot \cdot (1+x^{2^{(n-1)}})}{D'=1-x^{2^{n}}\approx 1}}} . After n steps ( x ∈ [ 0 , 1 2 ) ) {\displaystyle \left(x\in \left[0,{\tfrac {1}{2}}\right)\right)} , the denominator 1 − x 2 n {\displaystyle 1-x^{2^{n}}} can be rounded to 1 with a relative error ε n = Q ′ − N ′ Q ′ = x 2 n {\displaystyle \varepsilon _{n}={\frac {Q'-N'}{Q'}}=x^{2^{n}}} which is maximum at 2 − 2 n {\displaystyle 2^{-2^{n}}} when x = 1 2 {\displaystyle x={\tfrac {1}{2}}} , thus providing a minimum precision of 2 n {\displaystyle 2^{n}} binary digits. == Large-integer methods == Methods designed for hardware implementation generally do not scale to integers with thousands or millions of decimal digits; these frequently occur, for example, in modular reductions in cryptography. For these large integers, more efficient division algorithms transform the problem to use a small number of multiplications, which can then be done using an asymptotically efficient multiplication algorithm such as the Karatsuba algorithm, Toom–Cook multiplication or the Schönhage–Strassen algorithm. The result is that the computational complexity of the division is of the same order (up to a multiplicative constant) as that of the multiplication. 
Examples include reduction to multiplication by Newton's method as described above, as well as the slightly faster Burnikel-Ziegler division, Barrett reduction and Montgomery reduction algorithms. Newton's method is particularly efficient in scenarios where one must divide by the same divisor many times, since after the initial Newton inversion only one (truncated) multiplication is needed for each division. == Division by a constant == The division by a constant D is equivalent to the multiplication by its reciprocal. Since the denominator is constant, so is its reciprocal (1/D). Thus it is possible to compute the value of (1/D) once at compile time, and at run time perform the multiplication N·(1/D) rather than the division N/D. In floating-point arithmetic the use of (1/D) presents little problem, but in integer arithmetic the reciprocal will always evaluate to zero (assuming |D| > 1). It is not necessary to use specifically (1/D); any value (X/Y) that reduces to (1/D) may be used. For example, for division by 3, the factors 1/3, 2/6, 3/9, or 194/582 could be used. Consequently, if Y were a power of two the division step would reduce to a fast right bit shift. The effect of calculating N/D as (N·X)/Y replaces a division with a multiply and a shift. Note that the parentheses are important, as N·(X/Y) will evaluate to zero. However, unless D itself is a power of two, there is no X and Y that satisfies the conditions above. Fortunately, (N·X)/Y gives exactly the same result as N/D in integer arithmetic even when (X/Y) is not exactly equal to 1/D, but "close enough" that the error introduced by the approximation is in the bits that are discarded by the shift operation. Barrett reduction uses powers of 2 for the value of Y to make division by Y a simple right shift. 
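As a sketch of the multiply-and-shift replacement described above (the function and constant names here are mine, with Python integers standing in for fixed-width registers), the compile-time constant for division by 3 can be chosen as X = ⌈2^k/D⌉ with Y = 2^k:

```python
D, k = 3, 33                      # divisor and shift amount
X = (2**k + D - 1) // D           # X = ceil(2^k / D)

def div_by_3(n):
    """Divide a 32-bit unsigned n by 3 via multiply-and-shift."""
    return (n * X) >> k

# (n * X) >> k equals n // 3 for every 32-bit value of n,
# because the approximation error lands in the shifted-out bits.
```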
As a concrete fixed-point arithmetic example, for 32-bit unsigned integers, division by 3 can be replaced with a multiply by 2863311531/2^33, that is, a multiplication by 2863311531 (hexadecimal 0xAAAAAAAB) followed by a right shift of 33 bits. The value of 2863311531 is calculated as 2^33/3, then rounded up. Likewise, division by 10 can be expressed as a multiplication by 3435973837 (0xCCCCCCCD) followed by division by 2^35 (or a right shift of 35 bits).: 230–234  OEIS provides sequences of the constants for multiplication as A346495 and for the right shift as A346496. For general x-bit unsigned integer division where the divisor D is not a power of 2, the following identity converts the division into two x-bit additions/subtractions, one x-bit by x-bit multiplication (where only the upper half of the result is used) and several shifts, after precomputing k = x + ⌈ log 2 ⁡ D ⌉ {\displaystyle k=x+\lceil \log _{2}{D}\rceil } and a = ⌈ 2 k D ⌉ − 2 x {\displaystyle a=\left\lceil {\frac {2^{k}}{D}}\right\rceil -2^{x}} : ⌊ N D ⌋ = ⌊ ⌊ N − b 2 ⌋ + b 2 k − x − 1 ⌋ where b = ⌊ N a 2 x ⌋ {\displaystyle \left\lfloor {\frac {N}{D}}\right\rfloor =\left\lfloor {\frac {\left\lfloor {\frac {N-b}{2}}\right\rfloor +b}{2^{k-x-1}}}\right\rfloor {\text{ where }}b=\left\lfloor {\frac {Na}{2^{x}}}\right\rfloor } In some cases, division by a constant can be accomplished in even less time by converting the "multiply by a constant" into a series of shifts and adds or subtracts. Of particular interest is division by 10, for which the exact quotient is obtained, with remainder if required. == Rounding error == When a division operation is performed, the exact quotient q {\displaystyle q} and remainder r {\displaystyle r} are approximated to fit within the computer’s precision limits. The Division Algorithm states: [ a = b q + r ] {\displaystyle [a=bq+r]} where 0 ≤ r < | b | {\displaystyle 0\leq r<|b|} . 
In floating-point arithmetic, the quotient q {\displaystyle q} is represented as q ~ {\displaystyle {\tilde {q}}} and the remainder r {\displaystyle r} as r ~ {\displaystyle {\tilde {r}}} , introducing rounding errors ϵ q {\displaystyle \epsilon _{q}} and ϵ r {\displaystyle \epsilon _{r}} : [ q ~ = q + ϵ q ] [ r ~ = r + ϵ r ] {\displaystyle [{\tilde {q}}=q+\epsilon _{q}][{\tilde {r}}=r+\epsilon _{r}]} This rounding causes a small error, which can propagate and accumulate through subsequent calculations. Such errors are particularly pronounced in iterative processes and when subtracting nearly equal values, an effect known as loss of significance. To mitigate these errors, techniques such as the use of guard digits or higher precision arithmetic are employed. == See also == Galley division Multiplication algorithm Pentium FDIV bug == Notes == == References == == Further reading == Savard, John J. G. (2018) [2006]. "Advanced Arithmetic Techniques". quadibloc. Archived from the original on 2018-07-03. Retrieved 2018-07-16.
Wikipedia/Division_algorithm
Square root algorithms compute the non-negative square root S {\displaystyle {\sqrt {S}}} of a positive real number S {\displaystyle S} . Since all square roots of natural numbers, other than of perfect squares, are irrational, square roots can usually only be computed to some finite precision: these algorithms typically construct a series of increasingly accurate approximations. Most square root computation methods are iterative: after choosing a suitable initial estimate of S {\displaystyle {\sqrt {S}}} , an iterative refinement is performed until some termination criterion is met. One refinement scheme is Heron's method, a special case of Newton's method. If division is much more costly than multiplication, it may be preferable to compute the inverse square root instead. Other methods are available to compute the square root digit by digit, or using Taylor series. Rational approximations of square roots may be calculated using continued fraction expansions. The method employed depends on the needed accuracy, and the available tools and computational power. The methods may be roughly classified as those suitable for mental calculation, those usually requiring at least paper and pencil, and those which are implemented as programs to be executed on a digital electronic computer or other computing device. Algorithms may take into account convergence (how many iterations are required to achieve a specified precision), computational complexity of individual operations (i.e. division) or iterations, and error propagation (the accuracy of the final result). A few methods, like paper-and-pencil synthetic division and series expansion, do not require a starting value. In some applications, an integer square root is required, which is the square root rounded or truncated to the nearest integer (a modified procedure may be employed in this case). 
== History == Procedures for finding square roots (particularly the square root of 2) have been known since at least the period of ancient Babylon in the 17th century BCE. Babylonian mathematicians calculated the square root of 2 to three sexagesimal "digits" after the 1, but it is not known exactly how. They knew how to approximate a hypotenuse using a 2 + b 2 ≈ a + b 2 2 a {\displaystyle {\sqrt {a^{2}+b^{2}}}\approx a+{\frac {b^{2}}{2a}}} (giving for example 41 60 + 15 3600 {\displaystyle {\frac {41}{60}}+{\frac {15}{3600}}} for the diagonal of a gate whose height is 40 60 {\displaystyle {\frac {40}{60}}} rods and whose width is 10 60 {\displaystyle {\frac {10}{60}}} rods) and they may have used a similar approach for finding the approximation of 2 . {\displaystyle {\sqrt {2}}.} Heron's method from first century Egypt was the first ascertainable algorithm for computing square root. Modern analytic methods began to be developed after introduction of the Arabic numeral system to western Europe in the early Renaissance. Today, nearly all computing devices have a fast and accurate square root function, either as a programming language construct, a compiler intrinsic or library function, or as a hardware operator, based on one of the described procedures. == Initial estimate == Many iterative square root algorithms require an initial seed value. The seed must be a non-zero positive number; it should be between 1 and S {\displaystyle S} , the number whose square root is desired, because the square root must be in that range. If the seed is far away from the root, the algorithm will require more iterations. If one initializes with x 0 = 1 {\displaystyle x_{0}=1} (or S {\displaystyle S} ), then approximately 1 2 | log 2 ⁡ S | {\displaystyle {\tfrac {1}{2}}\vert \log _{2}S\vert } iterations will be wasted just getting the order of magnitude of the root. It is therefore useful to have a rough estimate, which may have limited accuracy but is easy to calculate. 
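As an aside, the Babylonian hypotenuse rule quoted in the history section above is easy to verify exactly with rational arithmetic (this check is illustrative only, not a historical procedure):

```python
from fractions import Fraction

a = Fraction(40, 60)              # height of the gate, in rods
b = Fraction(10, 60)              # width of the gate, in rods
approx = a + b**2 / (2 * a)       # sqrt(a^2 + b^2) ~ a + b^2/(2a)
tablet = Fraction(41, 60) + Fraction(15, 3600)  # value on the tablet
assert approx == tablet           # both equal 33/48 = 0.6875
```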
In general, the better the initial estimate, the faster the convergence. For Newton's method, a seed somewhat larger than the root will converge slightly faster than a seed somewhat smaller than the root. Typically, an estimate is defined over an arbitrary interval known to contain the root (such as [ x 0 , S / x 0 ] {\displaystyle [x_{0},S/x_{0}]} ). The estimate is a specific value of a functional approximation to f ( x ) = x {\displaystyle f(x)={\sqrt {x}}} over the interval. Obtaining a better estimate involves either obtaining tighter bounds on the interval, or finding a better functional approximation to f ( x ) {\displaystyle f(x)} . The latter usually means using a higher order polynomial in the approximation, though not all approximations are polynomial. Common methods of estimating include scalar, linear, hyperbolic and logarithmic. A decimal base is usually used for mental or paper-and-pencil estimating. A binary base is more suitable for computer estimates. In estimating, the exponent and mantissa are usually treated separately, as the number would be expressed in scientific notation. 
Better estimates divide the range into two or more intervals, but scalar estimates have inherently low accuracy. For two intervals, divided geometrically, the square root S = a × 10 n {\displaystyle {\sqrt {S}}={\sqrt {a}}\times 10^{n}} can be estimated as S ≈ { 2 ⋅ 10 n if a < 10 , 6 ⋅ 10 n if a ≥ 10. {\displaystyle {\sqrt {S}}\approx {\begin{cases}2\cdot 10^{n}&{\text{if }}a<10,\\6\cdot 10^{n}&{\text{if }}a\geq 10.\end{cases}}} This estimate has maximum absolute error of 4 ⋅ 10 n {\displaystyle 4\cdot 10^{n}} at a = 100, and maximum relative error of 100% at a = 1. For example, for S = 125348 {\displaystyle S=125348} factored as 12.5348 × 10 4 {\displaystyle 12.5348\times 10^{4}} , the estimate is S ≈ 6 ⋅ 10 2 = 600 {\displaystyle {\sqrt {S}}\approx 6\cdot 10^{2}=600} . 125348 = 354.0 {\displaystyle {\sqrt {125348}}=354.0} , an absolute error of 246 and relative error of almost 70%. ==== Linear estimates ==== A better estimate, and the standard method used, is a linear approximation to the function y = x 2 {\displaystyle y=x^{2}} over a small arc. If, as above, powers of the base are factored out of the number S {\displaystyle S} and the interval reduced to [ 1 , 100 ] {\displaystyle [1,100]} , a secant line spanning the arc, or a tangent line somewhere along the arc may be used as the approximation, but a least-squares regression line intersecting the arc will be more accurate. A least-squares regression line minimizes the average difference between the estimate and the value of the function. Its equation is y = 8.7 x − 10 {\displaystyle y=8.7x-10} . Reordering, x = 0.115 y + 1.15 {\displaystyle x=0.115y+1.15} . Rounding the coefficients for ease of computation, S ≈ ( a / 10 + 1.2 ) ⋅ 10 n {\displaystyle {\sqrt {S}}\approx (a/10+1.2)\cdot 10^{n}} That is the best estimate on average that can be achieved with a single-piece linear approximation of the function y = x² in the interval [ 1 , 100 ] {\displaystyle [1,100]} . 
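A sketch of the rounded linear estimate above (the factoring loop and function name are my own) is:

```python
def linear_estimate(S):
    """Seed estimate sqrt(S) ~ (a/10 + 1.2) * 10**n,
    where S = a * 10**(2n) and 1 <= a < 100."""
    a, n = S, 0
    while a >= 100:               # pull out even powers of ten
        a /= 100
        n += 1
    while a < 1:
        a *= 100
        n -= 1
    return (a / 10 + 1.2) * 10**n

# e.g. linear_estimate(125348) uses a = 12.5348, n = 2
```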
It has a maximum absolute error of 1.2 at a=100, and maximum relative error of 30% at S=1 and 10. To divide by 10, subtract one from the exponent of a {\displaystyle a} , or figuratively move the decimal point one digit to the left. For this formulation, any additive constant 1 plus a small increment will make a satisfactory estimate, so remembering the exact number isn't a burden. The approximation (rounded or not) using a single line spanning the range [ 1 , 100 ] {\displaystyle [1,100]} is less than one significant digit of precision; the relative error is greater than 1/2², so less than 2 bits of information are provided. The accuracy is severely limited because the range is two orders of magnitude, quite large for this kind of estimation. A much better estimate can be obtained by a piece-wise linear approximation: multiple line segments, each approximating some subarc of the original. The more line segments used, the better the approximation. The most common way is to use tangent lines; the critical choices are how to divide the arc and where to place the tangent points. An efficacious way to divide the arc from y = 1 to y = 100 is geometrically: for two intervals, the bounds of the intervals are the square root of the bounds of the original interval, 1×100, i.e. [1, √100] and [√100, 100]. For three intervals, the bounds are the cube roots of 100: [1, ∛100], [∛100, (∛100)²], and [(∛100)², 100], etc. For two intervals, √100 = 10, a very convenient number. Tangent lines are easy to derive, and are located at x = √(1·√10) and x = √(√10·10). Their equations are: y = 3.56 x − 3.16 {\displaystyle y=3.56x-3.16} and y = 11.2 x − 31.6 {\displaystyle y=11.2x-31.6} . Inverting, the square roots are: x = 0.28 y + 0.89 {\displaystyle x=0.28y+0.89} and x = .089 y + 2.8 {\displaystyle x=.089y+2.8} . Thus for S = a ⋅ 10 2 n {\displaystyle S=a\cdot 10^{2n}} : S ≈ { ( 0.28 a + 0.89 ) ⋅ 10 n if a < 10 , ( .089 a + 2.8 ) ⋅ 10 n if a ≥ 10. 
{\displaystyle {\sqrt {S}}\approx {\begin{cases}(0.28a+0.89)\cdot 10^{n}&{\text{if }}a<10,\\(.089a+2.8)\cdot 10^{n}&{\text{if }}a\geq 10.\end{cases}}} The maximum absolute errors occur at the high points of the intervals, at a=10 and 100, and are 0.54 and 1.7 respectively. The maximum relative errors are at the endpoints of the intervals, at a=1, 10 and 100, and are 17% in each case. 17% or 0.17 is larger than 1/10, so the method yields less than a decimal digit of accuracy. ==== Hyperbolic estimates ==== In some cases, hyperbolic estimates may be efficacious, because a hyperbola is also a convex curve and may lie along an arc of y = x² better than a line. Hyperbolic estimates are more computationally complex, because they necessarily require a floating division. A near-optimal hyperbolic approximation to x² on the interval [ 1 , 100 ] {\displaystyle [1,100]} is y = 190/(10−x) − 20. Transposing, the square root is x = 10 − 190/(y+20). Thus for S = a ⋅ 10 2 n {\displaystyle S=a\cdot 10^{2n}} : S ≈ ( 10 − 190 a + 20 ) ⋅ 10 n {\displaystyle {\sqrt {S}}\approx \left(10-{\frac {190}{a+20}}\right)\cdot 10^{n}} The division need be accurate to only one decimal digit, because the estimate overall is only that accurate, and can be done mentally. This hyperbolic estimate is better on average than scalar or linear estimates. It has maximum absolute error of 1.58 at a = 100 and maximum relative error at a = 10, where the estimate of 3.67 is 16.0% higher than the root of 3.16. If instead one performed Newton-Raphson iterations beginning with an estimate of 10, it would take two iterations to get to 3.66, matching the hyperbolic estimate. For a more typical case like 75, the hyperbolic estimate of 8.00 is only 7.6% low, and 5 Newton-Raphson iterations starting at 75 would be required to obtain a more accurate result. 
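Under the same factoring of S into a·10^(2n), the hyperbolic estimate above can be sketched as (function name and factoring loop are mine):

```python
def hyperbolic_estimate(S):
    """Seed estimate sqrt(S) ~ (10 - 190/(a + 20)) * 10**n,
    where S = a * 10**(2n) and 1 <= a < 100."""
    a, n = S, 0
    while a >= 100:
        a /= 100
        n += 1
    while a < 1:
        a *= 100
        n -= 1
    return (10 - 190 / (a + 20)) * 10**n

# hyperbolic_estimate(75) gives 8.0, about 7.6% below sqrt(75) = 8.66
```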
==== Arithmetic estimates ==== A method analogous to piece-wise linear approximation, but using only arithmetic instead of algebraic equations, uses the multiplication tables in reverse: the square root of a number between 1 and 100 is between 1 and 10, so if we know 25 is a perfect square (5 × 5), and 36 is a perfect square (6 × 6), then the square root of a number greater than or equal to 25 but less than 36 begins with a 5. Similarly for numbers between other squares. This method will yield a correct first digit, but it is not accurate to one digit: the first digit of the square root of 35, for example, is 5, but the square root of 35 is almost 6. A better way is to divide the range into intervals halfway between the squares. So any number between 25 and halfway to 36, which is 30.5, estimate 5; any number greater than 30.5 up to 36, estimate 6. The procedure only requires a little arithmetic to find a boundary number in the middle of two products from the multiplication table. Here is a reference table of those boundaries: The final operation is to multiply the estimate k by the power of ten divided by 2, so for S = a ⋅ 10 2 n {\displaystyle S=a\cdot 10^{2n}} , S ≈ k ⋅ 10 n {\displaystyle {\sqrt {S}}\approx k\cdot 10^{n}} The method implicitly yields one significant digit of accuracy, since it rounds to the best first digit. The method can be extended to 3 significant digits in most cases, by interpolating between the nearest squares bounding the operand. 
If k 2 ≤ a < ( k + 1 ) 2 {\displaystyle k^{2}\leq a<(k+1)^{2}} , then a {\displaystyle {\sqrt {a}}} is approximately k plus a fraction, the difference between a and k² divided by the difference between the two squares: a ≈ k + R {\displaystyle {\sqrt {a}}\approx k+R} where R = ( a − k 2 ) ( k + 1 ) 2 − k 2 {\displaystyle R={\frac {(a-k^{2})}{(k+1)^{2}-k^{2}}}} The final operation, as above, is to multiply the result by the power of ten divided by 2; S = a ⋅ 10 n ≈ ( k + R ) ⋅ 10 n {\displaystyle {\sqrt {S}}={\sqrt {a}}\cdot 10^{n}\approx (k+R)\cdot 10^{n}} k is a decimal digit and R is a fraction that must be converted to decimal. It usually has only a single digit in the numerator, and one or two digits in the denominator, so the conversion to decimal can be done mentally. Example: find the square root of 75. 75 = 75 × 10^(2·0), so a is 75 and n is 0. From the multiplication tables, the square root of the mantissa must be 8 point something because a is between 8×8 = 64 and 9×9 = 81, so k is 8; something is the decimal representation of R. The fraction R is 75 − k² = 11, the numerator, and 81 − k² = 17, the denominator. 11/17 is a little less than 12/18 = 2/3 = .67, so guess .66 (it's okay to guess here, the error is very small). The final estimate is 8 + .66 = 8.66. √75 to three significant digits is 8.66, so the estimate is good to 3 significant digits. Not all such estimates using this method will be so accurate, but they will be close. === Binary estimates === When working in the binary numeral system (as computers do internally), by expressing S {\displaystyle S} as a × 2 2 n {\displaystyle a\times 2^{2n}} where 0.1 2 ≤ a < 10 2 {\displaystyle 0.1_{2}\leq a<10_{2}} , the square root S = a × 2 n {\displaystyle {\sqrt {S}}={\sqrt {a}}\times 2^{n}} can be estimated as S ≈ ( 0.485 + 0.485 ⋅ a ) ⋅ 2 n {\displaystyle {\sqrt {S}}\approx (0.485+0.485\cdot a)\cdot 2^{n}} which is the least-squares regression line to 3 significant digit coefficients. 
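In code, the binary regression estimate above pairs naturally with Python's math.frexp, which already splits a float into a mantissa in [0.5, 1) and an exponent; the only extra step (my addition) is making the exponent even:

```python
import math

def binary_estimate(S):
    """Seed estimate sqrt(S) ~ (0.485 + 0.485*a) * 2**n,
    where S = a * 2**(2n) and 0.5 <= a < 2."""
    m, e = math.frexp(S)          # S = m * 2**e with 0.5 <= m < 1
    if e % 2:                     # force an even exponent 2n
        m, e = 2 * m, e - 1
    return (0.485 + 0.485 * m) * 2 ** (e // 2)
```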
a {\displaystyle {\sqrt {a}}} has maximum absolute error of 0.0408 at a {\displaystyle a} =2, and maximum relative error of 3.0% at a {\displaystyle a} =1. A computationally convenient rounded estimate (because the coefficients are powers of 2) is: S ≈ ( 0.5 + 0.5 ⋅ a ) ⋅ 2 n {\displaystyle {\sqrt {S}}\approx (0.5+0.5\cdot a)\cdot 2^{n}} which has maximum absolute error of 0.086 at 2 and maximum relative error of 6.1% at a = 0.5 and a = 2.0. For S = 125348 = 1 1110 1001 1010 0100 2 = 1.1110 1001 1010 0100 2 × 2 16 {\displaystyle S=125348=1\;1110\;1001\;1010\;0100_{2}=1.1110\;1001\;1010\;0100_{2}\times 2^{16}\,} , the binary approximation gives S ≈ ( 0.5 + 0.5 ⋅ a ) ⋅ 2 8 = 1.0111 0100 1101 0010 2 ⋅ 1 0000 0000 2 = 1.456 ⋅ 256 = 372.8 {\displaystyle {\sqrt {S}}\approx (0.5+0.5\cdot a)\cdot 2^{8}=1.0111\;0100\;1101\;0010_{2}\cdot 1\;0000\;0000_{2}=1.456\cdot 256=372.8} . 125348 = 354.0 {\displaystyle {\sqrt {125348}}=354.0} , so the estimate has an absolute error of 19 and relative error of 5.3%. The relative error is a little less than 1/2⁴, so the estimate is good to 4+ bits. An estimate for a {\displaystyle a} good to 8 bits can be obtained by table lookup on the high 8 bits of a {\displaystyle a} , remembering that the high bit is implicit in most floating point representations, and the bottom bit of the 8 should be rounded. The table is 256 bytes of precomputed 8-bit square root values. For example, for the index 11101101₂ representing 1.8515625₁₀, the entry is 10101110₂ representing 1.359375₁₀, the square root of 1.8515625₁₀ to 8-bit precision (2+ decimal digits). == Heron's method == The first explicit algorithm for approximating S {\displaystyle \ {\sqrt {S~}}\ } is known as Heron's method, after the first-century Greek mathematician Hero of Alexandria who described the method in his AD 60 work Metrica. 
This method is also called the Babylonian method (not to be confused with the Babylonian method for approximating hypotenuses), although there is no evidence that the method was known to Babylonians. Given a positive real number S {\displaystyle S} , let x0 > 0 be any positive initial estimate. Heron's method consists in iteratively computing x n + 1 = 1 2 ( x n + S x n ) , {\displaystyle x_{n+1}={\frac {1}{2}}\left(x_{n}+{\frac {S}{x_{n}}}\right),} until the desired accuracy is achieved. The sequence ( x 0 , x 1 , x 2 , x 3 , … ) {\displaystyle \ {\bigl (}\ x_{0},\ x_{1},\ x_{2},\ x_{3},\ \ldots \ {\bigr )}\ } defined by this equation converges to lim n → ∞ x n = S . {\displaystyle \ \lim _{n\to \infty }x_{n}={\sqrt {S~}}~.} This is equivalent to using Newton's method to solve x 2 − S = 0 {\displaystyle x^{2}-S=0} . This algorithm is quadratically convergent: the number of correct digits of x n {\displaystyle x_{n}} roughly doubles with each iteration. === Derivation === The basic idea is that if x {\displaystyle \ x\ } is an overestimate to the square root of a non-negative real number S {\displaystyle \ S\ } then S x {\displaystyle \ {\tfrac {\ S\ }{x}}\ } will be an underestimate, and vice versa, so the average of these two numbers may reasonably be expected to provide a better approximation (though the formal proof of that assertion depends on the inequality of arithmetic and geometric means that shows this average is always an overestimate of the square root, as noted in the article on square roots, thus assuring convergence). 
More precisely, if x {\displaystyle \ x\ } is our initial guess of S {\displaystyle \ {\sqrt {S~}}\ } and ε {\displaystyle \ \varepsilon \ } is the error in our estimate such that S = ( x + ε ) 2 , {\displaystyle \ S=\left(x+\varepsilon \right)^{2}\ ,} then we can expand the binomial as: ( x + ε ) 2 = x 2 + 2 x ε + ε 2 {\displaystyle \ {\bigl (}\ x+\varepsilon \ {\bigr )}^{2}=x^{2}+2x\varepsilon +\varepsilon ^{2}} and solve for the error term ε = S − x 2 2 x + ε ≈ S − x 2 2 x , {\displaystyle \varepsilon ={\frac {\ S-x^{2}\ }{\ 2x+\varepsilon \ }}\approx {\frac {\ S-x^{2}\ }{2x}}\ ,} if we suppose that ε ≪ x {\displaystyle \ \varepsilon \ll x~} Therefore, we can compensate for the error and update our old estimate as x + ε ≈ x + S − x 2 2 x = S + x 2 2 x = S x + x 2 ≡ x r e v i s e d . {\displaystyle \ x+\varepsilon \ \approx \ x+{\frac {\ S-x^{2}\ }{2x}}\ =\ {\frac {\ S+x^{2}\ }{2x}}\ =\ {\frac {\ {\frac {S}{\ x\ }}+x\ }{2}}\ \equiv \ x_{\mathsf {revised}}~.} Since the computed error was not exact, this is not the actual answer, but becomes our new guess to use in the next round of correction. The process of updating is iterated until desired accuracy is obtained. This algorithm works equally well in the p-adic numbers, but cannot be used to identify real square roots with p-adic square roots; one can, for example, construct a sequence of rational numbers by this method that converges to +3 in the reals, but to −3 in the 2-adics. 
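Heron's iteration above translates directly into code; this is a minimal sketch (the stopping rule, agreement of successive iterates to a relative tolerance, is one common choice, not part of the original method):

```python
def heron_sqrt(S, x0, rel_tol=1e-12):
    """Iterate x <- (x + S/x)/2 until successive estimates agree
    to within rel_tol. Assumes S > 0 and x0 > 0."""
    x = x0
    while True:
        nxt = 0.5 * (x + S / x)
        if abs(nxt - x) <= rel_tol * nxt:
            return nxt
        x = nxt
```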
=== Example === To calculate S {\displaystyle {\sqrt {S\,}}} for S = 125348 {\displaystyle S=125348} to seven significant figures, use the rough estimation method above to get x 0 = 6 ⋅ 10 2 = 600 x 1 = 1 2 ( x 0 + S x 0 ) = 1 2 ( 600 .1 + 125348 600 ) = 404.457 ≈ 400 x 2 = 1 2 ( x 1 + S x 1 ) = 1 2 ( 400 .1 + 125348 400 ) = 356.685 ≈ 360 x 3 = 1 2 ( x 2 + S x 2 ) = 1 2 ( 360 .1 + 125348 360 ) = 354.094 ≈ 354.1 x 4 = 1 2 ( x 3 + S x 3 ) = 1 2 ( 354.1 + 125348 354.1 ) = 354.045199 {\displaystyle {\begin{alignedat}{5}x_{0}&=6\cdot 10^{2}&&&&=600\\[0.3em]x_{1}&={\frac {1}{2}}\left(x_{0}+{\frac {S}{x_{0}}}\right)&&={\frac {1}{2}}\left(600{\phantom {.1}}+{\frac {125348}{600}}\right)&&=404.457\approx 400\\[0.3em]x_{2}&={\frac {1}{2}}\left(x_{1}+{\frac {S}{x_{1}}}\right)&&={\frac {1}{2}}\left(400{\phantom {.1}}+{\frac {125348}{400}}\right)&&=356.685\approx 360\\[0.3em]x_{3}&={\frac {1}{2}}\left(x_{2}+{\frac {S}{x_{2}}}\right)&&={\frac {1}{2}}\left(360{\phantom {.1}}+{\frac {125348}{360}}\right)&&=354.094\approx 354.1\\[0.3em]x_{4}&={\frac {1}{2}}\left(x_{3}+{\frac {S}{x_{3}}}\right)&&={\frac {1}{2}}\left(354.1+{\frac {125348}{354.1}}\right)&&=354.045199\end{alignedat}}} Therefore 125348 ≈ 354.0452 {\displaystyle {\sqrt {\,125348\,}}\approx 354.0452} to seven significant figures. (The true value is 354.0451948551....) Notice that early iterations only needed to be computed to 1, 2 or 4 places to produce an accurate final answer. === Convergence === Suppose that x 0 > 0 a n d S > 0 . {\displaystyle \ x_{0}>0~~{\mathsf {and}}~~S>0~.} Then for any natural number n : x n > 0 . {\displaystyle \ n:x_{n}>0~.} Let the relative error in x n {\displaystyle \ x_{n}\ } be defined by ε n = x n S − 1 > − 1 {\displaystyle \ \varepsilon _{n}={\frac {~x_{n}\ }{\ {\sqrt {S~}}\ }}-1>-1\ } and thus x n = S ⋅ ( 1 + ε n ) . {\displaystyle \ x_{n}={\sqrt {S~}}\cdot \left(1+\varepsilon _{n}\right)~.} Then it can be shown that ε n + 1 = ε n 2 2 ( 1 + ε n ) ≥ 0 . 
{\displaystyle \ \varepsilon _{n+1}={\frac {\varepsilon _{n}^{2}}{2(1+\varepsilon _{n})}}\geq 0~.} And thus that ε n + 2 ≤ min { ε n + 1 2 2 , ε n + 1 2 } {\displaystyle \ \varepsilon _{n+2}\leq \min \left\{\ {\frac {\ \varepsilon _{n+1}^{2}\ }{2}},{\frac {\ \varepsilon _{n+1}\ }{2}}\ \right\}\ } and consequently that convergence is assured, and quadratic. ==== Worst case for convergence ==== If using the rough estimate above with the Babylonian method, then the least accurate cases in ascending order are as follows: S = 1 ; x 0 = 2 ; x 1 = 1.250 ; ε 1 = 0.250 . S = 10 ; x 0 = 2 ; x 1 = 3.500 ; ε 1 < 0.107 . S = 10 ; x 0 = 6 ; x 1 = 3.833 ; ε 1 < 0.213 . S = 100 ; x 0 = 6 ; x 1 = 11.333 ; ε 1 < 0.134 . {\displaystyle {\begin{aligned}S&=\ 1\ ;&x_{0}&=\ 2\ ;&x_{1}&=\ 1.250\ ;&\varepsilon _{1}&=\ 0.250~.\\S&=\ 10\ ;&x_{0}&=\ 2\ ;&x_{1}&=\ 3.500\ ;&\varepsilon _{1}&<\ 0.107~.\\S&=\ 10\ ;&x_{0}&=\ 6\ ;&x_{1}&=\ 3.833\ ;&\varepsilon _{1}&<\ 0.213~.\\S&=\ 100\ ;&x_{0}&=\ 6\ ;&x_{1}&=\ 11.333\ ;&\varepsilon _{1}&<\ 0.134~.\end{aligned}}} Thus in any case, ε 1 ≤ 2 − 2 . ε 2 < 2 − 5 < 10 − 1 . ε 3 < 2 − 11 < 10 − 3 . ε 4 < 2 − 23 < 10 − 6 . ε 5 < 2 − 47 < 10 − 14 . ε 6 < 2 − 95 < 10 − 28 . ε 7 < 2 − 191 < 10 − 57 . ε 8 < 2 − 383 < 10 − 115 . {\displaystyle {\begin{aligned}\varepsilon _{1}&\leq 2^{-2}.\\\varepsilon _{2}&<2^{-5}<10^{-1}~.\\\varepsilon _{3}&<2^{-11}<10^{-3}~.\\\varepsilon _{4}&<2^{-23}<10^{-6}~.\\\varepsilon _{5}&<2^{-47}<10^{-14}~.\\\varepsilon _{6}&<2^{-95}<10^{-28}~.\\\varepsilon _{7}&<2^{-191}<10^{-57}~.\\\varepsilon _{8}&<2^{-383}<10^{-115}~.\end{aligned}}} Rounding errors will slow the convergence. It is recommended to keep at least one extra digit beyond the desired accuracy of the x n {\displaystyle \ x_{n}\ } being calculated, to avoid significant round-off error. == Bakhshali method == This method for finding an approximation to a square root was described in an Ancient Indian manuscript, called the Bakhshali manuscript. 
It is algebraically equivalent to two iterations of Heron's method and thus quartically convergent, meaning that the number of correct digits of the approximation roughly quadruples with each iteration. The original presentation, using modern notation, is as follows: To calculate S {\displaystyle {\sqrt {S}}} , let x 0 2 {\displaystyle x_{0}^{2}} be the initial approximation to S {\displaystyle S} . Then, successively iterate as: a n = S − x n 2 2 x n , x n + 1 = x n + a n , x n + 2 = x n + 1 − a n 2 2 x n + 1 . {\displaystyle {\begin{aligned}a_{n}&={\frac {S-x_{n}^{2}}{2x_{n}}},\\x_{n+1}&=x_{n}+a_{n},\\x_{n+2}&=x_{n+1}-{\frac {a_{n}^{2}}{2x_{n+1}}}.\end{aligned}}} The values x n + 1 {\displaystyle x_{n+1}} and x n + 2 {\displaystyle x_{n+2}} are exactly the same as those computed by Heron's method. To see this, the second Heron's method step would compute x n + 2 = x n + 1 2 + S 2 x n + 1 = x n + 1 + S − x n + 1 2 2 x n + 1 {\displaystyle x_{n+2}={\frac {x_{n+1}^{2}+S}{2x_{n+1}}}=x_{n+1}+{\frac {S-x_{n+1}^{2}}{2x_{n+1}}}} and we can use the definitions of x n + 1 {\displaystyle x_{n+1}} and a n {\displaystyle a_{n}} to rearrange the numerator into: S − x n + 1 2 = S − ( x n + a n ) 2 = S − x n 2 − 2 x n a n − a n 2 = S − x n 2 − ( S − x n 2 ) − a n 2 = − a n 2 . {\displaystyle {\begin{aligned}S-x_{n+1}^{2}&=S-(x_{n}+a_{n})^{2}\\&=S-x_{n}^{2}-2x_{n}a_{n}-a_{n}^{2}\\&=S-x_{n}^{2}-(S-x_{n}^{2})-a_{n}^{2}\\&=-a_{n}^{2}.\end{aligned}}} This can be used to construct a rational approximation to the square root by beginning with an integer. If x 0 = N {\displaystyle x_{0}=N} is an integer chosen so N 2 {\displaystyle N^{2}} is close to S {\displaystyle S} , and d = S − N 2 {\displaystyle d=S-N^{2}} is the difference whose absolute value is minimized, then the first iteration can be written as: S ≈ N + d 2 N − d 2 8 N 3 + 4 N d = 8 N 4 + 8 N 2 d + d 2 8 N 3 + 4 N d = N 4 + 6 N 2 S + S 2 4 N 3 + 4 N S = N 2 ( N 2 + 6 S ) + S 2 4 N ( N 2 + S ) . 
{\displaystyle {\sqrt {S}}\approx N+{\frac {d}{2N}}-{\frac {d^{2}}{8N^{3}+4Nd}}={\frac {8N^{4}+8N^{2}d+d^{2}}{8N^{3}+4Nd}}={\frac {N^{4}+6N^{2}S+S^{2}}{4N^{3}+4NS}}={\frac {N^{2}(N^{2}+6S)+S^{2}}{4N(N^{2}+S)}}.} The Bakhshali method can be generalized to the computation of an arbitrary root, including fractional roots. One might think the second half of the Bakhshali method could be used as a simpler form of Heron's iteration and used repeatedly, e.g. a n + 1 = − a n 2 2 x n + 1 , x n + 2 = x n + 1 + a n + 1 , a n + 2 = − a n + 1 2 2 x n + 2 , x n + 3 = x n + 2 + a n + 2 , etc. {\displaystyle {\begin{aligned}a_{n+1}&={\frac {-a_{n}^{2}}{2x_{n+1}}},&x_{n+2}&=x_{n+1}+a_{n+1},\\a_{n+2}&={\frac {-a_{n+1}^{2}}{2x_{n+2}}},&x_{n+3}&=x_{n+2}+a_{n+2},{\text{ etc.}}\end{aligned}}} however, this is numerically unstable. Without any reference to the original input value S {\displaystyle S} , the accuracy is limited by that of the original computation of a n {\displaystyle a_{n}} , and that rapidly becomes inadequate. 
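Since one Bakhshali step is algebraically two Heron steps, the iteration is easy to check numerically. A minimal C sketch (the function name is ours, not from the manuscript):

```c
#include <math.h>

/* One step of the Bakhshali iteration for sqrt(s) from the current
 * estimate x.  Algebraically this equals two Heron steps, so the
 * number of correct digits roughly quadruples per call. */
double bakhshali_step(double s, double x) {
    double a  = (s - x * x) / (2.0 * x);
    double x1 = x + a;                    /* first Heron step  */
    return x1 - (a * a) / (2.0 * x1);     /* second Heron step */
}
```

Starting from x = 600 for S = 125348 (the example below, but without the manuscript's rounding of a to −200), one call gives about 357.187, and a second call is already accurate to about nine significant digits.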
=== Example === Using the same example S = 125348 {\displaystyle S=125348} as in the Heron's method example, the first iteration gives x 0 = 600 a 0 = 125348 − 600 2 2 × 600 = − 195.5433 ≈ − 200 x 1 = 600 + ( − 200 ) = 400 x 2 = 400 − ( − 200 ) 2 2 × 400 = 350 {\displaystyle {\begin{alignedat}{3}x_{0}&=600\\[1ex]a_{0}&={\frac {125348-600^{2}}{2\times 600}}&&=-195.5433\approx -200\\[1ex]x_{1}&=600+(-200)&&={\phantom {-}}400\\[1ex]x_{2}&=400-{\frac {(-200)^{2}}{2\times 400}}&&={\phantom {-}}350\end{alignedat}}} Likewise the second iteration gives a 2 = 125348 − 350 2 2 × 350 = 4.06857 x 3 = 350 + 4.06857 = 354.06857 x 4 = 354.06857 − 4.06857 2 2 × 354.06857 = 354.045194 {\displaystyle {\begin{alignedat}{3}a_{2}&={\frac {125348-350^{2}}{2\times 350}}&&={\phantom {00}}4.06857\\[1ex]x_{3}&=350+4.06857&&=354.06857\\[1ex]x_{4}&=354.06857-{\frac {4.06857^{2}}{2\times 354.06857}}&&=354.045194\end{alignedat}}} Unlike in Heron's method, x 3 {\displaystyle x_{3}} must be computed to 8 digits because the formula for x 4 {\displaystyle x_{4}} does not correct any error in x 3 {\displaystyle x_{3}} . == Digit-by-digit calculation == This is a method to find each digit of the square root in a sequence. This method is based on the binomial theorem and is basically an inverse algorithm solving ( x + y ) 2 = x 2 + 2 x y + y 2 {\displaystyle (x+y)^{2}=x^{2}+2xy+y^{2}} . It is slower than the Babylonian method, but it has several advantages: It can be easier for manual calculations. Every digit of the root found is known to be correct, i.e., it does not have to be changed later. If the square root has an expansion that terminates, the algorithm terminates after the last digit is found. Thus, it can be used to check whether a given integer is a square number. The algorithm works for any base, and naturally, the way it proceeds depends on the base chosen. Disadvantages are: It becomes unmanageable for higher roots.
It does not tolerate inaccurate guesses or sub-calculations; such errors lead to every following digit of the result being wrong, unlike with Newton's method, which self-corrects any approximation errors. While digit-by-digit calculation is efficient enough on paper, it is much too expensive for software implementations. Each iteration involves larger numbers, requiring more memory, but only advances the answer by one correct digit. Thus the algorithm takes more time for each additional digit. Napier's bones include an aid for the execution of this algorithm. The shifting nth root algorithm is a generalization of this method. === Basic principle === First, consider the case of finding the square root of a number S, that is the square of a base-10 two-digit number XY, where X is the tens digit and Y is the units digit. Specifically: S = ( 10 X + Y ) 2 = 100 X 2 + 20 X Y + Y 2 . {\displaystyle S=\left(10X+Y\right)^{2}=100X^{2}+20XY+Y^{2}.} S will consist of 3 or 4 decimal digits. Now to start the digit-by-digit algorithm, we split the digits of S into two groups of two digits, starting from the right. This means that the first group will be of 1 or 2 digits. Then we determine the value of X as the largest digit such that X2 is less than or equal to the first group. We then compute the difference between the first group and X2 and start the second iteration by concatenating the second group to it. This is equivalent to subtracting 100 X 2 {\displaystyle 100X^{2}} from S, and we're left with S ′ = 20 X Y + Y 2 {\displaystyle S'=20XY+Y^{2}} . We divide S' by 10, then divide it by 2X and keep the integer part to try and guess Y. We concatenate 2X with the tentative Y and multiply it by Y.
If our guess is correct, this is equivalent to computing: ( 10 ( 2 X ) + Y ) Y = 20 X Y + Y 2 = S ′ , {\displaystyle (10(2X)+Y)Y=20XY+Y^{2}=S',} and so the remainder, that is the difference between S' and the result, is zero; if the result is higher than S' , we lower our guess by 1 and try again until the remainder is 0. Since this is a simple case where the answer is a perfect square root XY, the algorithm stops here. The same idea can be extended to any arbitrary square root computation next. Suppose we are able to find the square root of S by expressing it as a sum of n positive numbers such that S = ( a 1 + a 2 + a 3 + ⋯ + a n ) 2 . {\displaystyle S=\left(a_{1}+a_{2}+a_{3}+\dots +a_{n}\right)^{2}.} By repeatedly applying the basic identity ( x + y ) 2 = x 2 + 2 x y + y 2 , {\displaystyle (x+y)^{2}=x^{2}+2xy+y^{2},} the right-hand-side term can be expanded as ( a 1 + a 2 + a 3 + ⋯ + a n ) 2 = a 1 2 + 2 a 1 a 2 + a 2 2 + 2 ( a 1 + a 2 ) a 3 + a 3 2 + ⋯ + a n − 1 2 + 2 ( ∑ i = 1 n − 1 a i ) a n + a n 2 = a 1 2 + [ 2 a 1 + a 2 ] a 2 + [ 2 ( a 1 + a 2 ) + a 3 ] a 3 + ⋯ + [ 2 ( ∑ i = 1 n − 1 a i ) + a n ] a n . {\displaystyle {\begin{aligned}&(a_{1}+a_{2}+a_{3}+\dotsb +a_{n})^{2}\\=&\,a_{1}^{2}+2a_{1}a_{2}+a_{2}^{2}+2(a_{1}+a_{2})a_{3}+a_{3}^{2}+\dots +a_{n-1}^{2}+2\left(\sum _{i=1}^{n-1}a_{i}\right)a_{n}+a_{n}^{2}\\=&\,a_{1}^{2}+[2a_{1}+a_{2}]a_{2}+[2(a_{1}+a_{2})+a_{3}]a_{3}+\dots +\left[2\left(\sum _{i=1}^{n-1}a_{i}\right)+a_{n}\right]a_{n}.\end{aligned}}} This expression allows us to find the square root by sequentially guessing the values of a i {\displaystyle a_{i}} s. Suppose that the numbers a 1 , … , a m − 1 {\displaystyle a_{1},\ldots ,a_{m-1}} have already been guessed, then the m-th term of the right-hand-side of the above summation is given by Y m = [ 2 P m − 1 + a m ] a m , {\displaystyle Y_{m}=\left[2P_{m-1}+a_{m}\right]a_{m},} where P m − 1 = ∑ i = 1 m − 1 a i {\textstyle P_{m-1}=\sum _{i=1}^{m-1}a_{i}} is the approximate square root found so far. 
Now each new guess a m {\displaystyle a_{m}} should satisfy the recursion X m = X m − 1 − Y m , {\displaystyle X_{m}=X_{m-1}-Y_{m},} where X m {\displaystyle X_{m}} is the sum of all the terms after Y m {\displaystyle Y_{m}} , i.e. the remainder, such that X m ≥ 0 {\displaystyle X_{m}\geq 0} for all 1 ≤ m ≤ n , {\displaystyle 1\leq m\leq n,} with initialization X 0 = S . {\displaystyle X_{0}=S.} When X n = 0 , {\displaystyle X_{n}=0,} the exact square root has been found; if not, then the sum of the a i {\displaystyle a_{i}} s gives a suitable approximation of the square root, with X n {\displaystyle X_{n}} being the approximation error. For example, in the decimal number system we have S = ( a 1 ⋅ 10 n − 1 + a 2 ⋅ 10 n − 2 + ⋯ + a n − 1 ⋅ 10 + a n ) 2 , {\displaystyle S=\left(a_{1}\cdot 10^{n-1}+a_{2}\cdot 10^{n-2}+\cdots +a_{n-1}\cdot 10+a_{n}\right)^{2},} where 10 n − i {\displaystyle 10^{n-i}} are place holders and the coefficients a i ∈ { 0 , 1 , 2 , … , 9 } {\displaystyle a_{i}\in \{0,1,2,\ldots ,9\}} . At any m-th stage of the square root calculation, the approximate root found so far, P m − 1 {\displaystyle P_{m-1}} and the summation term Y m {\displaystyle Y_{m}} are given by P m − 1 = ∑ i = 1 m − 1 a i ⋅ 10 n − i = 10 n − m + 1 ∑ i = 1 m − 1 a i ⋅ 10 m − i − 1 , {\displaystyle P_{m-1}=\sum _{i=1}^{m-1}a_{i}\cdot 10^{n-i}=10^{n-m+1}\sum _{i=1}^{m-1}a_{i}\cdot 10^{m-i-1},} Y m = [ 2 P m − 1 + a m ⋅ 10 n − m ] a m ⋅ 10 n − m = [ 20 ∑ i = 1 m − 1 a i ⋅ 10 m − i − 1 + a m ] a m ⋅ 10 2 ( n − m ) . {\displaystyle Y_{m}=\left[2P_{m-1}+a_{m}\cdot 10^{n-m}\right]a_{m}\cdot 10^{n-m}=\left[20\sum _{i=1}^{m-1}a_{i}\cdot 10^{m-i-1}+a_{m}\right]a_{m}\cdot 10^{2(n-m)}.} Here since the place value of Y m {\displaystyle Y_{m}} is an even power of 10, we only need to work with the pair of most significant digits of the remainder X m − 1 {\displaystyle X_{m-1}} , whose first term is Y m {\displaystyle Y_{m}} , at any m-th stage. The section below codifies this procedure. 
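A C sketch of this base-10 procedure for integer arguments (illustrative only; the function name and the integer-only scope are our choices):

```c
#include <stdint.h>

/* Decimal digit-by-digit integer square root.  At each stage, take the
 * largest digit a with Y = (20*p + a) * a * place not exceeding the
 * remainder x, where p is the root found so far. */
uint32_t isqrt10(uint32_t s) {
    uint64_t place = 1;                /* highest even power of 10 <= s */
    while (place * 100 <= s)
        place *= 100;

    uint64_t p = 0;                    /* root found so far, P_m        */
    uint64_t x = s;                    /* remainder, X_m                */
    while (place > 0) {
        uint32_t a = 0;                /* next digit a_m                */
        while ((20 * p + a + 1) * (a + 1) * place <= x)
            a++;
        x -= (20 * p + a) * a * place; /* subtract Y_m                  */
        p = 10 * p + a;
        place /= 100;
    }
    return (uint32_t)p;                /* floor(sqrt(s))                */
}
```

For example, isqrt10(1522756) walks through the digit pairs 1 52 27 56 and returns 1234, matching the worked decimal example below.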
It is obvious that a similar method can be used to compute the square root in number systems other than the decimal number system. For instance, finding the digit-by-digit square root in the binary number system is quite efficient since the value of a i {\displaystyle a_{i}} is searched from a smaller set of binary digits {0,1}. This makes the computation faster since at each stage the value of Y m {\displaystyle Y_{m}} is either Y m = 0 {\displaystyle Y_{m}=0} for a m = 0 {\displaystyle a_{m}=0} or Y m = 2 P m − 1 + 1 {\displaystyle Y_{m}=2P_{m-1}+1} for a m = 1 {\displaystyle a_{m}=1} . The fact that we have only two possible options for a m {\displaystyle a_{m}} also makes the process of deciding the value of a m {\displaystyle a_{m}} at m-th stage of calculation easier. This is because we only need to check if Y m ≤ X m − 1 {\displaystyle Y_{m}\leq X_{m-1}} for a m = 1. {\displaystyle a_{m}=1.} If this condition is satisfied, then we take a m = 1 {\displaystyle a_{m}=1} ; if not then a m = 0. {\displaystyle a_{m}=0.} Also, the fact that multiplication by 2 is done by left bit-shifts helps in the computation. === Decimal (base 10) === Write the original number in decimal form. The numbers are written similar to the long division algorithm, and, as in long division, the root will be written on the line above. Now separate the digits into pairs, starting from the decimal point and going both left and right. The decimal point of the root will be above the decimal point of the square. One digit of the root will appear above each pair of digits of the square. Beginning with the left-most pair of digits, do the following procedure for each pair: Starting on the left, bring down the most significant (leftmost) pair of digits not yet used (if all the digits have been used, write "00") and write them to the right of the remainder from the previous step (on the first step, there will be no remainder). In other words, multiply the remainder by 100 and add the two digits. 
This will be the current value c. Find p, y and x, as follows: Let p be the part of the root found so far, ignoring any decimal point. (For the first step, p = 0.) Determine the greatest digit x such that x ( 20 p + x ) ≤ c {\displaystyle x(20p+x)\leq c} . We will use a new variable y = x(20p + x). Note: 20p + x is simply twice p, with the digit x appended to the right. Note: x can be found by guessing what c/(20·p) is and doing a trial calculation of y, then adjusting x upward or downward as necessary. Place the digit x {\displaystyle x} as the next digit of the root, i.e., above the two digits of the square you just brought down. Thus the next p will be the old p times 10 plus x. Subtract y from c to form a new remainder. If the remainder is zero and there are no more digits to bring down, then the algorithm has terminated. Otherwise go back to step 1 for another iteration. ==== Examples ==== Find the square root of 152.2756. 1 2. 3 4 / \/ 01 52.27 56 01 1*1 <= 1 < 2*2 x=1 01 y = x*x = 1*1 = 1 00 52 22*2 <= 52 < 23*3 x=2 00 44 y = (20+x)*x = 22*2 = 44 08 27 243*3 <= 827 < 244*4 x=3 07 29 y = (240+x)*x = 243*3 = 729 98 56 2464*4 <= 9856 < 2465*5 x=4 98 56 y = (2460+x)*x = 2464*4 = 9856 00 00 Algorithm terminates: Answer=12.34 === Binary numeral system (base 2) === This section uses the formalism from the digit-by-digit calculation section above, with the slight variation that we let N 2 = ( a n + ⋯ + a 0 ) 2 {\displaystyle N^{2}=(a_{n}+\dotsb +a_{0})^{2}} , with each a m = 2 m {\displaystyle a_{m}=2^{m}} or a m = 0 {\displaystyle a_{m}=0} . We iterate all 2 m {\displaystyle 2^{m}} , from 2 n {\displaystyle 2^{n}} down to 2 0 {\displaystyle 2^{0}} , and build up an approximate solution P m = a n + a n − 1 + … + a m {\displaystyle P_{m}=a_{n}+a_{n-1}+\ldots +a_{m}} , the sum of all a i {\displaystyle a_{i}} for which we have determined the value. 
To determine if a m {\displaystyle a_{m}} equals 2 m {\displaystyle 2^{m}} or 0 {\displaystyle 0} , we let P m = P m + 1 + 2 m {\displaystyle P_{m}=P_{m+1}+2^{m}} . If P m 2 ≤ N 2 {\displaystyle P_{m}^{2}\leq N^{2}} (i.e. the square of our approximate solution including 2 m {\displaystyle 2^{m}} does not exceed the target square) then a m = 2 m {\displaystyle a_{m}=2^{m}} , otherwise a m = 0 {\displaystyle a_{m}=0} and P m = P m + 1 {\displaystyle P_{m}=P_{m+1}} . To avoid squaring P m {\displaystyle P_{m}} in each step, we store the difference X m = N 2 − P m 2 {\displaystyle X_{m}=N^{2}-P_{m}^{2}} and incrementally update it by setting X m = X m + 1 − Y m {\displaystyle X_{m}=X_{m+1}-Y_{m}} with Y m = P m 2 − P m + 1 2 = 2 P m + 1 a m + a m 2 {\displaystyle Y_{m}=P_{m}^{2}-P_{m+1}^{2}=2P_{m+1}a_{m}+a_{m}^{2}} . Initially, we set a n = P n = 2 n {\displaystyle a_{n}=P_{n}=2^{n}} for the largest n {\displaystyle n} with ( 2 n ) 2 = 4 n ≤ N 2 {\displaystyle (2^{n})^{2}=4^{n}\leq N^{2}} . 
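In C, this basic scheme, computing Y_m explicitly at each step, can be sketched as follows (a hypothetical helper, not the article's own function):

```c
#include <stdint.h>

/* Binary digit-by-digit square root.  x tracks the remainder
 * X_m = N^2 - P_m^2, so the candidate bit a_m = 2^m is accepted
 * exactly when Y_m = 2*P*a + a^2 still fits into the remainder. */
uint32_t isqrt_bin(uint32_t n) {
    uint32_t p = 0;                 /* root bits found so far, P_m   */
    uint64_t x = n;                 /* remainder X_m, initially N^2  */
    int m = 15;                     /* highest possible root bit     */
    while (m > 0 && ((uint64_t)1 << (2 * m)) > n)
        m--;                        /* largest n with 4^n <= N^2     */
    for (; m >= 0; m--) {
        uint64_t a = (uint64_t)1 << m;             /* candidate a_m */
        uint64_t y = 2 * (uint64_t)p * a + a * a;  /* Y_m           */
        if (y <= x) {               /* accept the bit                */
            x -= y;
            p |= (uint32_t)a;
        }
    }
    return p;                       /* floor(sqrt(n))                */
}
```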
As an extra optimization, we store P m + 1 2 m + 1 {\displaystyle P_{m+1}2^{m+1}} and ( 2 m ) 2 {\displaystyle (2^{m})^{2}} , the two terms of Y m {\displaystyle Y_{m}} in case that a m {\displaystyle a_{m}} is nonzero, in separate variables c m {\displaystyle c_{m}} , d m {\displaystyle d_{m}} : c m = P m + 1 2 m + 1 {\displaystyle c_{m}=P_{m+1}2^{m+1}} d m = ( 2 m ) 2 {\displaystyle d_{m}=(2^{m})^{2}} Y m = { c m + d m if a m = 2 m 0 if a m = 0 {\displaystyle Y_{m}={\begin{cases}c_{m}+d_{m}&{\text{if }}a_{m}=2^{m}\\0&{\text{if }}a_{m}=0\end{cases}}} c m {\displaystyle c_{m}} and d m {\displaystyle d_{m}} can be efficiently updated in each step: c m − 1 = P m 2 m = ( P m + 1 + a m ) 2 m = P m + 1 2 m + a m 2 m = { c m / 2 + d m if a m = 2 m c m / 2 if a m = 0 {\displaystyle c_{m-1}=P_{m}2^{m}=(P_{m+1}+a_{m})2^{m}=P_{m+1}2^{m}+a_{m}2^{m}={\begin{cases}c_{m}/2+d_{m}&{\text{if }}a_{m}=2^{m}\\c_{m}/2&{\text{if }}a_{m}=0\end{cases}}} d m − 1 = d m 4 {\displaystyle d_{m-1}={\frac {d_{m}}{4}}} Note that: c − 1 = P 0 2 0 = P 0 = N , {\displaystyle c_{-1}=P_{0}2^{0}=P_{0}=N,} which is the final result of the computation, and the value a C implementation of this algorithm would return. Faster algorithms, in binary and decimal or any other base, can be realized by using lookup tables—in effect trading more storage space for reduced run time. == Exponential identity == Pocket calculators typically implement good routines to compute the exponential function and the natural logarithm, and then compute the square root of S using the identity obtained from the properties of logarithms ( ln ⁡ x n = n ln ⁡ x {\displaystyle \ln x^{n}=n\ln x} ) and exponentials ( e ln ⁡ x = x {\displaystyle e^{\ln x}=x} ): S = e 1 2 ln ⁡ S . {\displaystyle {\sqrt {S}}=e^{{\frac {1}{2}}\ln S}.} The denominator in the fraction corresponds to the nth root. In the case above the denominator is 2, hence the equation specifies that the square root is to be found.
The same identity is used when computing square roots with logarithm tables or slide rules. == A two-variable iterative method == This method is applicable for finding the square root of 0 < S < 3 {\displaystyle 0<S<3\,\!} and converges best for S ≈ 1 {\displaystyle S\approx 1} . This, however, is no real limitation for a computer-based calculation, as in base 2 floating-point and fixed-point representations, it is trivial to multiply S {\displaystyle S\,\!} by an integer power of 4, and therefore S {\displaystyle {\sqrt {S}}} by the corresponding power of 2, by changing the exponent or by shifting, respectively. Therefore, S {\displaystyle S\,\!} can be moved to the range 1 2 ≤ S < 2 {\textstyle {\tfrac {1}{2}}\leq S<2} . Moreover, the following method does not employ general divisions, but only additions, subtractions, multiplications, and divisions by powers of two, which are again trivial to implement. A disadvantage of the method is that numerical errors accumulate, in contrast to single variable iterative methods such as the Babylonian one. The initialization step of this method is a 0 = S c 0 = S − 1 {\displaystyle {\begin{aligned}a_{0}&=S\\c_{0}&=S-1\end{aligned}}} while the iterative steps read a n + 1 = a n − a n c n / 2 c n + 1 = c n 2 ( c n − 3 ) / 4 {\displaystyle {\begin{aligned}a_{n+1}&=a_{n}-a_{n}c_{n}/2\\c_{n+1}&=c_{n}^{2}(c_{n}-3)/4\end{aligned}}} Then, a n → S {\displaystyle a_{n}\to {\sqrt {S}}} (while c n → 0 {\displaystyle c_{n}\to 0} ). The convergence of c n {\displaystyle c_{n}\,\!} , and therefore also of a n {\displaystyle a_{n}\,\!} , is quadratic. The proof of the method is rather easy. First, rewrite the iterative definition of c n {\displaystyle c_{n}} as 1 + c n + 1 = ( 1 + c n ) ( 1 − 1 2 c n ) 2 . 
{\displaystyle 1+c_{n+1}=(1+c_{n})(1-{\tfrac {1}{2}}c_{n})^{2}.} Then it is straightforward to prove by induction that S ( 1 + c n ) = a n 2 {\displaystyle S(1+c_{n})=a_{n}^{2}} and therefore the convergence of a n {\displaystyle a_{n}\,\!} to the desired result S {\displaystyle {\sqrt {S}}} is ensured by the convergence of c n {\displaystyle c_{n}\,\!} to 0, which in turn follows from − 1 < c 0 < 2 {\displaystyle -1<c_{0}<2\,\!} . This method was developed around 1950 by M. V. Wilkes, D. J. Wheeler and S. Gill for use on EDSAC, one of the first electronic computers. The method was later generalized, allowing the computation of non-square roots. == Iterative methods for reciprocal square roots == The following are iterative methods for finding the reciprocal square root of S which is 1 / S {\displaystyle 1/{\sqrt {S}}} . Once it has been found, find S {\displaystyle {\sqrt {S}}} by simple multiplication: S = S ⋅ ( 1 / S ) {\displaystyle {\sqrt {S}}=S\cdot (1/{\sqrt {S}})} . These iterations involve only multiplication, and not division. They are therefore faster than the Babylonian method. However, they are not stable. If the initial value is not close to the reciprocal square root, the iterations will diverge away from it rather than converge to it. It can therefore be advantageous to perform an iteration of the Babylonian method on a rough estimate before starting to apply these methods. Applying Newton's method to the equation ( 1 / x 2 ) − S = 0 {\displaystyle (1/x^{2})-S=0} produces a method that converges quadratically using three multiplications per step: x n + 1 = x n 2 ⋅ ( 3 − S ⋅ x n 2 ) = x n ⋅ ( 3 2 − S 2 ⋅ x n 2 ) . {\displaystyle x_{n+1}={\frac {x_{n}}{2}}\cdot (3-S\cdot x_{n}^{2})=x_{n}\cdot \left({\frac {3}{2}}-{\frac {S}{2}}\cdot x_{n}^{2}\right).} Another iteration is obtained by Halley's method, which is the Householder's method of order two. 
This converges cubically, but involves five multiplications per iteration: y n = S ⋅ x n 2 , {\displaystyle y_{n}=S\cdot x_{n}^{2},} and x n + 1 = x n 8 ⋅ ( 15 − y n ⋅ ( 10 − 3 ⋅ y n ) ) = x n ⋅ ( 15 8 − y n ⋅ ( 10 8 − 3 8 ⋅ y n ) ) . {\displaystyle x_{n+1}={\frac {x_{n}}{8}}\cdot (15-y_{n}\cdot (10-3\cdot y_{n}))=x_{n}\cdot \left({\frac {15}{8}}-y_{n}\cdot \left({\frac {10}{8}}-{\frac {3}{8}}\cdot y_{n}\right)\right).} If doing fixed-point arithmetic, the multiplication by 3 and division by 8 can be implemented using shifts and adds. If using floating-point, Halley's method can be reduced to four multiplications per iteration by precomputing 3 / 8 S {\textstyle {\sqrt {3/8}}S} and adjusting all the other constants to compensate: y n = 3 8 S ⋅ x n 2 , {\displaystyle y_{n}={\sqrt {\frac {3}{8}}}S\cdot x_{n}^{2},} and x n + 1 = x n ⋅ ( 15 8 − y n ⋅ ( 25 6 − y n ) ) . {\displaystyle x_{n+1}=x_{n}\cdot \left({\frac {15}{8}}-y_{n}\cdot \left({\sqrt {\frac {25}{6}}}-y_{n}\right)\right).} === Goldschmidt's algorithm === Goldschmidt's algorithm is an extension of Goldschmidt division, named after Robert Elliot Goldschmidt, which can be used to calculate square roots. Some computers use Goldschmidt's algorithm to simultaneously calculate S {\displaystyle {\sqrt {S}}} and 1 / S {\displaystyle 1/{\sqrt {S}}} . Goldschmidt's algorithm finds S {\displaystyle {\sqrt {S}}} faster than Newton-Raphson iteration on a computer with a fused multiply–add instruction and either a pipelined floating-point unit or two independent floating-point units.
The first way of writing Goldschmidt's algorithm begins b 0 = S {\displaystyle b_{0}=S} Y 0 ≈ 1 / S {\displaystyle Y_{0}\approx 1/{\sqrt {S}}} (typically using a table lookup) y 0 = Y 0 {\displaystyle y_{0}=Y_{0}} x 0 = S y 0 {\displaystyle x_{0}=Sy_{0}} and iterates b n + 1 = b n Y n 2 Y n + 1 = 1 2 ( 3 − b n + 1 ) x n + 1 = x n Y n + 1 y n + 1 = y n Y n + 1 {\displaystyle {\begin{aligned}b_{n+1}&=b_{n}Y_{n}^{2}\\Y_{n+1}&={\tfrac {1}{2}}(3-b_{n+1})\\x_{n+1}&=x_{n}Y_{n+1}\\y_{n+1}&=y_{n}Y_{n+1}\end{aligned}}} until b i {\displaystyle b_{i}} is sufficiently close to 1, or a fixed number of iterations has been performed. The iterations converge to lim n → ∞ x n = S , {\displaystyle \lim _{n\to \infty }x_{n}={\sqrt {S}},} and lim n → ∞ y n = 1 / S . {\displaystyle \lim _{n\to \infty }y_{n}=1/{\sqrt {S}}.} Note that it is possible to omit either x n {\displaystyle x_{n}} or y n {\displaystyle y_{n}} from the computation, and if both are desired then x n = S y n {\displaystyle x_{n}=Sy_{n}} may be used at the end rather than computing it through in each iteration. A second form, using fused multiply-add operations, begins y 0 ≈ 1 / S {\displaystyle y_{0}\approx 1/{\sqrt {S}}} (typically using a table lookup) x 0 = S y 0 {\displaystyle x_{0}=Sy_{0}} h 0 = 1 2 y 0 {\displaystyle h_{0}={\tfrac {1}{2}}y_{0}} and iterates r n = 0.5 − x n h n x n + 1 = x n + x n r n h n + 1 = h n + h n r n {\displaystyle {\begin{aligned}r_{n}&=0.5-x_{n}h_{n}\\x_{n+1}&=x_{n}+x_{n}r_{n}\\h_{n+1}&=h_{n}+h_{n}r_{n}\end{aligned}}} until r i {\displaystyle r_{i}} is sufficiently close to 0, or a fixed number of iterations has been performed. This converges to lim n → ∞ x n = S , {\displaystyle \lim _{n\to \infty }x_{n}={\sqrt {S}},} and lim n → ∞ 2 h n = 1 / S . {\displaystyle \lim _{n\to \infty }2h_{n}=1/{\sqrt {S}}.} == Taylor series == If N is an approximation to S {\displaystyle {\sqrt {S}}} , a better approximation can be found by using the Taylor series of the square root function: N 2 + d = N ∑ n = 0 ∞ ( − 1 ) n ( 2 n ) !
( 1 − 2 n ) n ! 2 4 n d n N 2 n = N ( 1 + d 2 N 2 − d 2 8 N 4 + d 3 16 N 6 − 5 d 4 128 N 8 + ⋯ ) {\displaystyle {\sqrt {N^{2}+d}}=N\sum _{n=0}^{\infty }{\frac {(-1)^{n}(2n)!}{(1-2n)n!^{2}4^{n}}}{\frac {d^{n}}{N^{2n}}}=N\left(1+{\frac {d}{2N^{2}}}-{\frac {d^{2}}{8N^{4}}}+{\frac {d^{3}}{16N^{6}}}-{\frac {5d^{4}}{128N^{8}}}+\cdots \right)} As an iterative method, the order of convergence is equal to the number of terms used. With two terms, it is identical to the Babylonian method. With three terms, each iteration takes almost as many operations as the Bakhshali approximation, but converges more slowly. Therefore, this is not a particularly efficient way of calculation. To maximize the rate of convergence, choose N so that | d | N 2 {\displaystyle {\frac {|d|}{N^{2}}}\,} is as small as possible. == Continued fraction expansion == The continued fraction representation of a real number can be used instead of its decimal or binary expansion and this representation has the property that the square root of any rational number (which is not already a perfect square) has a periodic, repeating expansion, similar to how rational numbers have repeating expansions in the decimal notation system. Quadratic irrationals (numbers of the form a + b c {\displaystyle {\frac {a+{\sqrt {b}}}{c}}} , where a, b and c are integers), and in particular, square roots of integers, have periodic continued fractions. Sometimes what is desired is finding not the numerical value of a square root, but rather its continued fraction expansion, and hence its rational approximation. Let S be the positive number for which we are required to find the square root. Then assuming a to be a number that serves as an initial guess and r to be the remainder term, we can write S = a 2 + r . {\displaystyle S=a^{2}+r.} Since we have S − a 2 = ( S + a ) ( S − a ) = r {\displaystyle S-a^{2}=({\sqrt {S}}+a)({\sqrt {S}}-a)=r} , we can express the square root of S as S = a + r a + S . 
{\displaystyle {\sqrt {S}}=a+{\frac {r}{a+{\sqrt {S}}}}.} By applying this expression for S {\displaystyle {\sqrt {S}}} to the denominator term of the fraction, we have: S = a + r a + ( a + r a + S ) = a + r 2 a + r a + S . {\displaystyle {\sqrt {S}}=a+{\frac {r}{a+(a+{\frac {r}{a+{\sqrt {S}}}})}}=a+{\frac {r}{2a+{\frac {r}{a+{\sqrt {S}}}}}}.} Proceeding this way, we get a generalized continued fraction for the square root as S = a + r 2 a + r 2 a + r 2 a + ⋱ {\displaystyle {\sqrt {S}}=a+{\cfrac {r}{2a+{\cfrac {r}{2a+{\cfrac {r}{2a+\ddots }}}}}}} The first step to evaluating such a fraction to obtain a root is to do numerical substitutions for the root of the number desired, and number of denominators selected. For example, in canonical form, r {\displaystyle r} is 1 and for √2, a {\displaystyle a} is 1, so the numerical continued fraction for 3 denominators is: 2 ≈ 1 + 1 2 + 1 2 + 1 2 {\displaystyle {\sqrt {2}}\approx 1+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2}}}}}}} Step 2 is to reduce the continued fraction from the bottom up, one denominator at a time, to yield a rational fraction whose numerator and denominator are integers. The reduction proceeds thus (taking the first three denominators): 1 + 1 2 + 1 2 + 1 2 = 1 + 1 2 + 1 5 2 = 1 + 1 2 + 2 5 = 1 + 1 12 5 = 1 + 5 12 = 17 12 {\displaystyle {\begin{aligned}1+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2}}}}}}&=1+{\cfrac {1}{2+{\cfrac {1}{\frac {5}{2}}}}}\\&=1+{\cfrac {1}{2+{\cfrac {2}{5}}}}=1+{\cfrac {1}{\frac {12}{5}}}\\&=1+{\cfrac {5}{12}}={\frac {17}{12}}\end{aligned}}} Finally (step 3), divide the numerator by the denominator of the rational fraction to obtain the approximate value of the root: 17 ÷ 12 = 1.42 {\displaystyle 17\div 12=1.42} rounded to three digits of precision. The actual value of √2 is 1.41 to three significant digits. The relative error is 0.17%, so the rational fraction is good to almost three digits of precision. 
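The bottom-up reduction of step 2 is mechanical, and a floating-point version fits in a few lines of C (function name ours):

```c
/* Evaluate sqrt(S) ~ a + r/(2a + r/(2a + ...)) with k denominators,
 * folding the generalized continued fraction from the bottom up,
 * where S = a^2 + r. */
double cf_sqrt(double a, double r, int k) {
    double denom = 2.0 * a;              /* innermost denominator */
    for (int i = 1; i < k; i++)
        denom = 2.0 * a + r / denom;     /* fold one level        */
    return a + r / denom;
}
```

cf_sqrt(1, 1, 3) reproduces 17/12 for the square root of 2, and cf_sqrt(1, 1, 4) gives the next convergent 41/29.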
Taking more denominators gives successively better approximations: four denominators yields the fraction 41 29 = 1.4137 {\displaystyle {\frac {41}{29}}=1.4137} , good to almost 4 digits of precision, etc. The following are examples of square roots, their simple continued fractions, and their first terms — called convergents — up to and including denominator 99: In general, the larger the denominator of a rational fraction, the better the approximation. It can also be shown that truncating a continued fraction yields a rational fraction that is the best approximation to the root among all fractions with denominator less than or equal to the denominator of that fraction — e.g., no fraction with a denominator less than or equal to 70 is as good an approximation to √2 as 99/70. == Approximations that depend on the floating point representation == A number is represented in a floating point format as m × b p {\displaystyle m\times b^{p}} which is also called scientific notation. Its square root is m × b p / 2 {\displaystyle {\sqrt {m}}\times b^{p/2}} and similar formulae would apply for cube roots and logarithms. On the face of it, this is no improvement in simplicity, but suppose that only an approximation is required: then just b p / 2 {\displaystyle b^{p/2}} is good to an order of magnitude. Next, recognise that some powers, p, will be odd, thus for 3141.59 = 3.14159×103, rather than deal with fractional powers of the base, multiply the mantissa by the base and subtract one from the power to make it even. The adjusted representation will become the equivalent of 31.4159×102 so that the square root will be √31.4159×101. If the integer part of the adjusted mantissa is taken, there can only be the values 1 to 99, and that could be used as an index into a table of 99 pre-computed square roots to complete the estimate.
A computer using base sixteen would require a larger table, but one using base two would require only three entries: the possible bits of the integer part of the adjusted mantissa are 01 (the power being even, so there was no shift, remembering that a normalised floating point number always has a non-zero high-order digit) or, if the power was odd, 10 or 11, these being the first two bits of the original mantissa. Thus, 6.25 = 110.01 in binary, normalised to 1.1001 × 22, an even power, so the paired bits of the mantissa are 01, while .625 = 0.101 in binary normalises to 1.01 × 2−1, an odd power, so the adjustment is to 10.1 × 2−2 and the paired bits are 10. Notice that the low order bit of the power is echoed in the high order bit of the pairwise mantissa. An even power has its low-order bit zero and the adjusted mantissa will start with 0, whereas for an odd power that bit is one and the adjusted mantissa will start with 1. Thus, when the power is halved, it is as if its low order bit is shifted out to become the first bit of the pairwise mantissa. A table with only three entries could be enlarged by incorporating additional bits of the mantissa. However, with computers, rather than calculate an interpolation into a table, it is often better to find some simpler calculation giving equivalent results. Everything now depends on the exact details of the format of the representation, plus what operations are available to access and manipulate the parts of the number. For example, Fortran offers an EXPONENT(x) function to obtain the power. Effort expended in devising a good initial approximation is to be recouped by thereby avoiding the additional iterations of the refinement process that would have been needed for a poor approximation. Since these avoided iterations are few and each is cheap (one iteration requires a divide, an add, and a halving), the constraint on the cost of the initial approximation is severe.
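As a concrete instance of such a cheap starting value, the IEEE bit-manipulation trick described in the next paragraphs condenses to a few lines of C; the bias −0x4B0D2 is the error-minimizing value quoted in the text:

```c
#include <stdint.h>
#include <string.h>

/* Approximate sqrt(x) for a normal positive float by treating the
 * bit pattern as a scaled base-2 logarithm: halve it and re-bias the
 * exponent.  (1<<29) - (1<<22) restores the exponent bias after
 * halving, and -0x4B0D2 trims the maximum relative error to about
 * +/-3.5% (the value quoted in the text). */
float approx_sqrtf(float x) {
    uint32_t i;
    memcpy(&i, &x, sizeof i);          /* bits as integer (no UB)  */
    i = (i >> 1) + ((1u << 29) - (1u << 22) - 0x4B0D2u);
    float y;
    memcpy(&y, &i, sizeof y);          /* back to float            */
    return y;
}
```

The exponent's low bit shifts into the mantissa here, exactly as the pairwise-mantissa argument above predicts.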
Many computers follow the IEEE (or sufficiently similar) representation, and a very rapid approximation to the square root can be obtained for starting Newton's method. The technique that follows is based on the fact that the floating point format (in base two) approximates the base-2 logarithm. That is log 2 ⁡ ( m × 2 p ) = p + log 2 ⁡ ( m ) {\displaystyle \log _{2}(m\times 2^{p})=p+\log _{2}(m)} So for a 32-bit single precision floating point number in IEEE format (where notably, the power has a bias of 127 added for the represented form) you can get the approximate logarithm by interpreting its binary representation as a 32-bit integer, scaling it by 2 − 23 {\displaystyle 2^{-23}} , and removing a bias of 127, i.e. x int ⋅ 2 − 23 − 127 ≈ log 2 ⁡ ( x ) . {\displaystyle x_{\text{int}}\cdot 2^{-23}-127\approx \log _{2}(x).} For example, 1.0 is represented by a hexadecimal number 0x3F800000, which would represent 1065353216 = 127 ⋅ 2 23 {\displaystyle 1065353216=127\cdot 2^{23}} if taken as an integer. Using the formula above you get 1065353216 ⋅ 2 − 23 − 127 = 0 {\displaystyle 1065353216\cdot 2^{-23}-127=0} , as expected from log 2 ⁡ ( 1.0 ) {\displaystyle \log _{2}(1.0)} . In a similar fashion you get 0.5 from 1.5 (0x3FC00000). To get the square root, divide the logarithm by 2 and convert the value back. The following program demonstrates the idea. The exponent's lowest bit is intentionally allowed to propagate into the mantissa. One way to justify the steps in this program is to assume b {\displaystyle b} is the exponent bias and n {\displaystyle n} is the number of explicitly stored bits in the mantissa and then show that ( ( 1 2 ( x int / 2 n − b ) ) + b ) ⋅ 2 n = 1 2 ( x int − 2 n ) + ( 1 2 ( b + 1 ) ) ⋅ 2 n . 
{\displaystyle \left(\left({\tfrac {1}{2}}\left(x_{\text{int}}/2^{n}-b\right)\right)+b\right)\cdot 2^{n}={\tfrac {1}{2}}\left(x_{\text{int}}-2^{n}\right)+\left({\tfrac {1}{2}}\left(b+1\right)\right)\cdot 2^{n}.} The three mathematical operations forming the core of the above function can be expressed in a single line. An additional adjustment can be added to reduce the maximum relative error. So, the three operations, not including the cast, can be rewritten as the single integer expression (x_int >> 1) + (b << (n − 1)) + a, where a is a bias for adjusting the approximation errors. For example, with a = 0 the results are accurate for even powers of 2 (e.g. 1.0), but for other numbers the results will be slightly too big (e.g. 1.5 for 2.0 instead of 1.414... with 6% error). With a = −0x4B0D2, the maximum relative error is minimized to ±3.5%. If the approximation is to be used as an initial guess for Newton's method applied to the equation ( 1 / x 2 ) − S = 0 {\displaystyle (1/x^{2})-S=0} , then the reciprocal form shown in the following section is preferred. === Reciprocal of the square root === A variant of the above routine, which computes the reciprocal of the square root, i.e., x − 1 / 2 {\displaystyle x^{-1/2}} , instead, was written by Greg Walsh. The integer-shift approximation produced a relative error of less than 4%, and the error dropped further to 0.15% with one iteration of Newton's method on the following line. In computer graphics it is a very efficient way to normalize a vector. Some VLSI hardware implements inverse square root using a second degree polynomial estimation followed by a Goldschmidt iteration. == Negative or complex square == If S < 0, then its principal square root is S = | S | i . {\displaystyle {\sqrt {S}}={\sqrt {\vert S\vert }}\,\,i\,.} If S = a+bi where a and b are real and b ≠ 0, then its principal square root is S = | S | + a 2 + sgn ⁡ ( b ) | S | − a 2 i .
{\displaystyle {\sqrt {S}}={\sqrt {\frac {\vert S\vert +a}{2}}}\,+\,\operatorname {sgn}(b){\sqrt {\frac {\vert S\vert -a}{2}}}\,\,i\,.} This can be verified by squaring the root. Here | S | = a 2 + b 2 {\displaystyle \vert S\vert ={\sqrt {a^{2}+b^{2}}}} is the modulus of S. The principal square root of a complex number is defined to be the root with the non-negative real part. == See also == Alpha max plus beta min algorithm nth root algorithm Fast inverse square root == Notes == == References == == Bibliography == == External links == Weisstein, Eric W. "Square root algorithms". MathWorld. Square roots by subtraction Integer Square Root Algorithm by Andrija Radović Personal Calculator Algorithms I : Square Roots (William E. Egbert), Hewlett-Packard Journal (May 1977) : page 22 Calculator to learn the square root
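A minimal Python sketch of the integer-shift square root described above, reinterpreting the bits with struct; the constant 0x1FC00000 is 127 << 22, obtained from the identity above with bias b = 127 and n = 23 mantissa bits (no error-reducing adjustment a is applied here):

```python
import struct

def approx_sqrt(x):
    """Approximate sqrt(x) for positive x by halving the base-2 logarithm
    encoded in the IEEE 754 single-precision bit pattern of x."""
    i = struct.unpack('>I', struct.pack('>f', x))[0]  # bits as a 32-bit integer
    i = (i >> 1) + 0x1FC00000   # halve the "logarithm" and restore the bias
    return struct.unpack('>f', struct.pack('>I', i))[0]
```

As stated in the text, even powers of 2 come out exact while other inputs come out slightly large: approx_sqrt(2.0) returns 1.5 rather than 1.414...; folding a bias such as a = −0x4B0D2 into the constant tightens the maximum relative error.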
Wikipedia/Heron's_method
The Aberth method, or Aberth–Ehrlich method or Ehrlich–Aberth method, named after Oliver Aberth and Louis W. Ehrlich, is a root-finding algorithm developed in 1967 for simultaneous approximation of all the roots of a univariate polynomial. This method converges cubically, an improvement over the Durand–Kerner method, another algorithm for approximating all roots at once, which converges quadratically. (However, both algorithms converge linearly at multiple zeros.) This method is used in MPSolve, which is the reference software for approximating all roots of a polynomial to an arbitrary precision. == Description == Let p ( x ) = p n x n + p n − 1 x n − 1 + ⋯ + p 1 x + p 0 {\displaystyle p(x)=p_{n}x^{n}+p_{n-1}x^{n-1}+\cdots +p_{1}x+p_{0}} be a univariate polynomial of degree n {\displaystyle n} with real or complex coefficients. Then there exist complex numbers z 1 ∗ , z 2 ∗ , … , z n ∗ {\displaystyle z_{1}^{*},\,z_{2}^{*},\dots ,z_{n}^{*}} , the roots of p ( x ) {\displaystyle p(x)} , that give the factorization: p ( x ) = p n ⋅ ( x − z 1 ∗ ) ⋅ ( x − z 2 ∗ ) ⋯ ( x − z n ∗ ) . {\displaystyle p(x)=p_{n}\cdot (x-z_{1}^{*})\cdot (x-z_{2}^{*})\cdots (x-z_{n}^{*}).} Although those numbers are unknown, upper and lower bounds for their absolute values are computable from the coefficients of the polynomial. Now one can pick n {\displaystyle n} distinct numbers in the complex plane—randomly or evenly distributed—such that their absolute values are within the same bounds. (Also, if the zeros are symmetrical, the starting points must not be exactly symmetrical along the same axis, as this can prevent convergence.) A set of such numbers is called an initial approximation of the set of roots of p ( x ) {\displaystyle p(x)} . This approximation can be iteratively improved using the following procedure. Let z 1 , … , z n ∈ C {\displaystyle z_{1},\dots ,z_{n}\in \mathbb {C} } be the current approximations of the zeros of p ( x ) {\displaystyle p(x)} . 
Then offset numbers w 1 , … , w n ∈ C {\displaystyle w_{1},\dots ,w_{n}\in \mathbb {C} } are computed as w k = p ( z k ) p ′ ( z k ) 1 − p ( z k ) p ′ ( z k ) ⋅ ∑ j ≠ k 1 z k − z j , {\displaystyle w_{k}={\frac {\frac {p(z_{k})}{p'(z_{k})}}{1-{\frac {p(z_{k})}{p'(z_{k})}}\cdot \sum _{j\neq k}{\frac {1}{z_{k}-z_{j}}}}},} where p ′ ( z k ) {\displaystyle p'(z_{k})} is the derivative of the polynomial p {\displaystyle p} evaluated at the point z k {\displaystyle z_{k}} . The next set of approximations of roots of p ( x ) {\displaystyle p(x)} is then z 1 − w 1 , … , z n − w n {\displaystyle z_{1}-w_{1},\dots ,z_{n}-w_{n}} . One can measure the quality of the current approximation by the values of the polynomial or by the size of the offsets. Conceptually, this method uses an electrostatic analogy, modeling the approximated zeros as movable negative point charges, which converge toward the true zeros, represented by fixed positive point charges. A direct application of Newton's method to each approximated zero will often cause multiple starting points to incorrectly converge to the same root. The Aberth method avoids this by also modeling the repulsive effect the movable charges have on each other. In this way, when a movable charge has converged on a zero, the opposite charges cancel out, so that other movable charges are no longer attracted to that location, encouraging them to converge to other "unoccupied" zeros. (Stieltjes also modeled the positions of zeros of polynomials as solutions to electrostatic problems.) Inside the formula of the Aberth method one can find elements of Newton's method and the Durand–Kerner method. Details for an efficient implementation, especially on the choice of good initial approximations, can be found in Bini (1996).
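The update formula above translates directly into Python with complex arithmetic; in this sketch the circle of starting points inside the Cauchy root bound, and its slight rotation to avoid symmetric starting values, are simple illustrative choices rather than the tuned initial approximations of Bini (1996):

```python
import cmath

def aberth(coeffs, tol=1e-12, max_iter=1000):
    """All roots of p(x) = coeffs[0]*x^n + ... + coeffs[-1] (Aberth-Ehrlich)."""
    n = len(coeffs) - 1
    dcoeffs = [c * (n - i) for i, c in enumerate(coeffs[:-1])]  # p'(x)
    def horner(cs, x):
        r = 0j
        for c in cs:
            r = r * x + c
        return r
    # starting points on a circle inside the Cauchy root bound, rotated
    # slightly so they are not symmetric about the real axis
    bound = 1 + max(abs(c / coeffs[0]) for c in coeffs[1:])
    z = [bound * cmath.exp(2j * cmath.pi * (k + 0.35) / n) for k in range(n)]
    for _ in range(max_iter):
        offsets = []
        for k in range(n):
            ratio = horner(coeffs, z[k]) / horner(dcoeffs, z[k])  # Newton term
            repulsion = sum(1 / (z[k] - z[j]) for j in range(n) if j != k)
            offsets.append(ratio / (1 - ratio * repulsion))       # w_k
        z = [zk - wk for zk, wk in zip(z, offsets)]
        if max(abs(w) for w in offsets) < tol:
            break
    return z
```

For p(x) = x³ − 6x² + 11x − 6 the iteration settles on the three roots 1, 2 and 3.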
The updates of the roots may be executed as a simultaneous Jacobi-like iteration where first all new approximations are computed from the old approximations or as a sequential Gauss–Seidel-like iteration that uses each new approximation from the time it is computed. A very similar method is the Newton–Maehly method. It computes the zeros one after another, but instead of an explicit deflation it divides by the already acquired linear factors on the fly. The Aberth method is like the Newton–Maehly method for computing the last root, proceeding as if the other roots were already known. == Derivation from Newton's method == The iteration formula is the univariate Newton iteration for the function F ( x ) = p ( x ) ∏ j = 1 ; j ≠ k n ( x − z j ) {\displaystyle F(x)={\frac {p(x)}{\prod _{j=1;\,j\neq k}^{n}(x-z_{j})}}} If the values z 1 , … , z n {\displaystyle z_{1},\dots ,z_{n}} are already close to the roots of p ( x ) {\displaystyle p(x)} , then the rational function F ( x ) {\displaystyle F(x)} is almost linear with a dominant root close to z k {\displaystyle z_{k}} and poles at z 1 , … , z k − 1 , z k + 1 , … , z n {\displaystyle z_{1},\dots ,z_{k-1},z_{k+1},\dots ,z_{n}} that direct the Newton iteration away from the roots of p(x) that are close to them. That is, the corresponding basins of attraction get rather small, while the root close to z k {\displaystyle z_{k}} has a wide region of attraction.
The Newton step F ( x ) F ′ ( x ) {\displaystyle {\tfrac {F(x)}{F'(x)}}} in the univariate case is the reciprocal value to the logarithmic derivative F ′ ( x ) F ( x ) = d d x ln ⁡ | F ( x ) | = d d x ( ln ⁡ | p ( x ) | − ∑ j = 1 ; j ≠ k n ln ⁡ | x − z j | ) = p ′ ( x ) p ( x ) − ∑ j = 1 ; j ≠ k n 1 x − z j {\displaystyle {\begin{aligned}{\frac {F'(x)}{F(x)}}&={\frac {d}{dx}}\ln |F(x)|\\&={\frac {d}{dx}}{\big (}\ln |p(x)|-\sum _{j=1;\,j\neq k}^{n}\ln |x-z_{j}|{\big )}\\&={\frac {p'(x)}{p(x)}}-\sum _{j=1;\,j\neq k}^{n}{\frac {1}{x-z_{j}}}\end{aligned}}} Thus, the new approximation is computed as z k ′ = z k − F ( z k ) F ′ ( z k ) = z k − 1 p ′ ( z k ) p ( z k ) − ∑ j = 1 ; j ≠ k n 1 z k − z j , {\displaystyle z_{k}'=z_{k}-{\frac {F(z_{k})}{F'(z_{k})}}=z_{k}-{\frac {1}{{\frac {p'(z_{k})}{p(z_{k})}}-\sum _{j=1;\,j\neq k}^{n}{\frac {1}{z_{k}-z_{j}}}}}\,,} which is the update formula of the Aberth–Ehrlich method. == Literature == == See also == MPSolve A package for numerical computation of polynomial roots. Free usage for scientific purpose.
Wikipedia/Aberth_method
Newton–Krylov methods are numerical methods for solving non-linear problems using Krylov subspace linear solvers. Generalising the Newton method to systems of multiple variables, the iteration formula includes a Jacobian matrix. Solving this directly would involve calculation of the Jacobian's inverse; moreover, the Jacobian matrix itself is often difficult or impossible to calculate. It may be possible to solve the Newton iteration formula without the inverse using a Krylov subspace method, such as the Generalized minimal residual method (GMRES). (Depending on the system, a preconditioner might be required.) The result is a Newton–Krylov method. The Jacobian itself might be too difficult to compute, but the GMRES method does not require the Jacobian itself, only the result of multiplying given vectors by the Jacobian. Often this can be computed efficiently via difference formulae. Solving the Newton iteration formula in this manner, the result is a Jacobian-Free Newton–Krylov (JFNK) method. == References == == External links == Open source code (MATLAB/Octave, Fortran90), further description of the method
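The key trick, computing a Jacobian-vector product by a difference formula instead of assembling the Jacobian, can be sketched as follows; the test system and step size are illustrative choices:

```python
def jac_vec(F, u, v, eps=1e-7):
    """Matrix-free approximation of J(u) @ v, where J is the Jacobian of F:
    J(u) v ~= (F(u + eps*v) - F(u)) / eps."""
    Fu = F(u)
    Fv = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(Fv, Fu)]

# Example system F(x, y) = (x^2 + y, x*y), whose Jacobian is [[2x, 1], [y, x]]
def F(u):
    x, y = u
    return [x * x + y, x * y]
```

A Krylov solver such as GMRES only ever asks for products J v, so passing jac_vec in place of an assembled Jacobian is what makes the method "Jacobian-free". At u = (1, 2) and v = (1, 0) the product is approximately (2, 2), matching the analytic Jacobian.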
Wikipedia/Newton–Krylov_method
In numerical analysis, Brent's method is a hybrid root-finding algorithm combining the bisection method, the secant method and inverse quadratic interpolation. It has the reliability of bisection but it can be as quick as some of the less-reliable methods. The algorithm tries to use the potentially fast-converging secant method or inverse quadratic interpolation if possible, but it falls back to the more robust bisection method if necessary. Brent's method is due to Richard Brent and builds on an earlier algorithm by Theodorus Dekker. Consequently, the method is also known as the Brent–Dekker method. Modern improvements on Brent's method include Chandrupatla's method, which is simpler and faster for functions that are flat around their roots; Ridders' method, which performs exponential interpolations instead of quadratic providing a simpler closed formula for the iterations; and the ITP method which is a hybrid between regula-falsi and bisection that achieves optimal worst-case and asymptotic guarantees. == Dekker's method == The idea to combine the bisection method with the secant method goes back to Dekker (1969). Suppose that one wants to solve the equation f(x) = 0. As with the bisection method, one needs to initialize Dekker's method with two points, say a0 and b0, such that f(a0) and f(b0) have opposite signs. If f is continuous on [a0, b0], the intermediate value theorem guarantees the existence of a solution between a0 and b0. Three points are involved in every iteration: bk is the current iterate, i.e., the current guess for the root of f. ak is the "contrapoint", i.e., a point such that f(ak) and f(bk) have opposite signs, so the interval [ak, bk] contains the solution. Furthermore, |f(bk)| should be less than or equal to |f(ak)|, so that bk is a better guess for the unknown solution than ak. bk−1 is the previous iterate (for the first iteration, one sets bk−1 = a0). Two provisional values for the next iterate are computed. 
The first one is given by linear interpolation, also known as the secant method: s = { b k − b k − b k − 1 f ( b k ) − f ( b k − 1 ) f ( b k ) , if f ( b k ) ≠ f ( b k − 1 ) m otherwise {\displaystyle s={\begin{cases}b_{k}-{\frac {b_{k}-b_{k-1}}{f(b_{k})-f(b_{k-1})}}f(b_{k}),&{\mbox{if }}f(b_{k})\neq f(b_{k-1})\\m&{\mbox{otherwise }}\end{cases}}} and the second one is given by the bisection method m = a k + b k 2 . {\displaystyle m={\frac {a_{k}+b_{k}}{2}}.} If the result of the secant method, s, lies strictly between bk and m, then it becomes the next iterate (bk+1 = s), otherwise the midpoint is used (bk+1 = m). Then, the value of the new contrapoint is chosen such that f(ak+1) and f(bk+1) have opposite signs. If f(ak) and f(bk+1) have opposite signs, then the contrapoint remains the same: ak+1 = ak. Otherwise, f(bk+1) and f(bk) have opposite signs, so the new contrapoint becomes ak+1 = bk. Finally, if |f(ak+1)| < |f(bk+1)|, then ak+1 is probably a better guess for the solution than bk+1, and hence the values of ak+1 and bk+1 are exchanged. This ends the description of a single iteration of Dekker's method. Dekker's method performs well if the function f is reasonably well-behaved. However, there are circumstances in which every iteration employs the secant method, but the iterates bk converge very slowly (in particular, |bk − bk−1| may be arbitrarily small). Dekker's method requires far more iterations than the bisection method in this case. == Brent's method == Brent (1973) proposed a small modification to avoid the problem with Dekker's method. He inserts an additional test which must be satisfied before the result of the secant method is accepted as the next iterate. 
Two inequalities must be simultaneously satisfied: Given a specific numerical tolerance δ {\displaystyle \delta } , if the previous step used the bisection method, the inequality | δ | < | b k − b k − 1 | {\textstyle |\delta |<|b_{k}-b_{k-1}|} must hold to perform interpolation, otherwise the bisection method is performed and its result used for the next iteration. If the previous step performed interpolation, then the inequality | δ | < | b k − 1 − b k − 2 | {\textstyle |\delta |<|b_{k-1}-b_{k-2}|} is used instead to choose between interpolation (when the inequality holds) and the bisection method (when it does not). Also, if the previous step used the bisection method, the inequality | s − b k | < 1 2 | b k − b k − 1 | {\textstyle |s-b_{k}|<{\begin{matrix}{\frac {1}{2}}\end{matrix}}|b_{k}-b_{k-1}|} must hold, otherwise the bisection method is performed and its result used for the next iteration. If the previous step performed interpolation, then the inequality | s − b k | < 1 2 | b k − 1 − b k − 2 | {\textstyle |s-b_{k}|<{\begin{matrix}{\frac {1}{2}}\end{matrix}}|b_{k-1}-b_{k-2}|} is used instead. This modification ensures that at the kth iteration, a bisection step will be performed in at most 2 log 2 ⁡ ( | b k − 1 − b k − 2 | / δ ) {\displaystyle 2\log _{2}(|b_{k-1}-b_{k-2}|/\delta )} additional iterations, because the above conditions force consecutive interpolation step sizes to halve every two iterations, and after at most 2 log 2 ⁡ ( | b k − 1 − b k − 2 | / δ ) {\displaystyle 2\log _{2}(|b_{k-1}-b_{k-2}|/\delta )} iterations, the step size will be smaller than δ {\displaystyle \delta } , which invokes a bisection step. Brent proved that his method requires at most N^2 iterations, where N denotes the number of iterations for the bisection method. If the function f is well-behaved, then Brent's method will usually proceed by either inverse quadratic or linear interpolation, in which case it will converge superlinearly.
Furthermore, Brent's method uses inverse quadratic interpolation instead of linear interpolation (as used by the secant method). If f(bk), f(ak) and f(bk−1) are distinct, it slightly increases the efficiency. As a consequence, the condition for accepting s (the value proposed by either linear interpolation or inverse quadratic interpolation) has to be changed: s has to lie between (3ak + bk) / 4 and bk. == Algorithm ==

input a, b, and (a pointer to) a function for f
calculate f(a)
calculate f(b)
if f(a)f(b) ≥ 0 then
    exit function because the root is not bracketed
end if
if |f(a)| < |f(b)| then
    swap (a,b)
end if
c := a
set mflag
repeat until f(b or s) = 0 or |b − a| is small enough (convergence)
    if f(a) ≠ f(c) and f(b) ≠ f(c) then
        {\textstyle s:={\frac {af(b)f(c)}{(f(a)-f(b))(f(a)-f(c))}}+{\frac {bf(a)f(c)}{(f(b)-f(a))(f(b)-f(c))}}+{\frac {cf(a)f(b)}{(f(c)-f(a))(f(c)-f(b))}}} (inverse quadratic interpolation)
    else
        {\textstyle s:=b-f(b){\frac {b-a}{f(b)-f(a)}}} (secant method)
    end if
    if (condition 1) s is not between (3a + b)/4 and b,
    or (condition 2) mflag is set and |s−b| ≥ |b−c|/2,
    or (condition 3) mflag is cleared and |s−b| ≥ |c−d|/2,
    or (condition 4) mflag is set and |b−c| < |δ|,
    or (condition 5) mflag is cleared and |c−d| < |δ|,
    then
        {\textstyle s:={\frac {a+b}{2}}} (bisection method)
        set mflag
    else
        clear mflag
    end if
    calculate f(s)
    d := c (d is assigned for the first time here; it won't be used above on the first iteration because mflag is set)
    c := b
    if f(a)f(s) < 0 then b := s else a := s end if
    if |f(a)| < |f(b)| then swap (a,b) end if
end repeat
output b or s (return the root)

== Example == Suppose that we are seeking a zero of the function defined by f(x) = (x + 3)(x − 1)^2.
We take [a0, b0] = [−4, 4/3] as our initial interval. We have f(a0) = −25 and f(b0) = 0.48148 (all numbers in this section are rounded), so the conditions f(a0) f(b0) < 0 and |f(b0)| ≤ |f(a0)| are satisfied. In the first iteration, we use linear interpolation between (b−1, f(b−1)) = (a0, f(a0)) = (−4, −25) and (b0, f(b0)) = (1.33333, 0.48148), which yields s = 1.23256. This lies between (3a0 + b0) / 4 and b0, so this value is accepted. Furthermore, f(1.23256) = 0.22891, so we set a1 = a0 and b1 = s = 1.23256. In the second iteration, we use inverse quadratic interpolation between (a1, f(a1)) = (−4, −25) and (b0, f(b0)) = (1.33333, 0.48148) and (b1, f(b1)) = (1.23256, 0.22891). This yields 1.14205, which lies between (3a1 + b1) / 4 and b1. Furthermore, the inequality |1.14205 − b1| ≤ |b0 − b−1| / 2 is satisfied, so this value is accepted. Furthermore, f(1.14205) = 0.083582, so we set a2 = a1 and b2 = 1.14205. In the third iteration, we use inverse quadratic interpolation between (a2, f(a2)) = (−4, −25) and (b1, f(b1)) = (1.23256, 0.22891) and (b2, f(b2)) = (1.14205, 0.083582). This yields 1.09032, which lies between (3a2 + b2) / 4 and b2. But here Brent's additional condition kicks in: the inequality |1.09032 − b2| ≤ |b1 − b0| / 2 is not satisfied, so this value is rejected. Instead, the midpoint m = −1.42897 of the interval [a2, b2] is computed. We have f(m) = 9.26891, so we set a3 = a2 and b3 = −1.42897. In the fourth iteration, we use inverse quadratic interpolation between (a3, f(a3)) = (−4, −25) and (b2, f(b2)) = (1.14205, 0.083582) and (b3, f(b3)) = (−1.42897, 9.26891). This yields 1.15448, which is not in the interval between (3a3 + b3) / 4 and b3. Hence, it is replaced by the midpoint m = −2.71449. We have f(m) = 3.93934, so we set a4 = a3 and b4 = −2.71449. In the fifth iteration, inverse quadratic interpolation yields −3.45500, which lies in the required interval.
However, the previous iteration was a bisection step, so the inequality |−3.45500 − b4| ≤ |b4 − b3| / 2 needs to be satisfied. This inequality is false, so we use the midpoint m = −3.35724. We have f(m) = −6.78239, so m becomes the new contrapoint (a5 = −3.35724) and the iterate remains the same (b5 = b4). In the sixth iteration, we cannot use inverse quadratic interpolation because b5 = b4. Hence, we use linear interpolation between (a5, f(a5)) = (−3.35724, −6.78239) and (b5, f(b5)) = (−2.71449, 3.93934). The result is s = −2.95064, which satisfies all the conditions. But since the iterate did not change in the previous step, we reject this result and fall back to bisection. We update s = −3.03587, and f(s) = −0.58418. In the seventh iteration, we can again use inverse quadratic interpolation. The result is s = −3.00219, which satisfies all the conditions. Now, f(s) = −0.03515, so we set a7 = b6 and b7 = −3.00219 (a7 and b7 are exchanged so that the condition |f(b7)| ≤ |f(a7)| is satisfied). (Correct: linear interpolation ⁠ s = − 2.99436 , f ( s ) = 0.089961 {\displaystyle s=-2.99436,f(s)=0.089961} ⁠) In the eighth iteration, we cannot use inverse quadratic interpolation because a7 = b6. Linear interpolation yields s = −2.99994, which is accepted. (Correct: ⁠ s = − 2.9999 , f ( s ) = 0.0016 {\displaystyle s=-2.9999,f(s)=0.0016} ⁠) In the following iterations, the root x = −3 is approached rapidly: b9 = −3 + 6·10^−8 and b10 = −3 − 3·10^−15. (Correct: Iter 9 : f(s) = −1.4 × 10^−7, Iter 10 : f(s) = 6.96 × 10^−12) == Implementations == Brent (1973) published an Algol 60 implementation. Netlib contains a Fortran translation of this implementation with slight modifications. The PARI/GP method solve implements the method. Other implementations of the algorithm (in C++, C, and Fortran) can be found in the Numerical Recipes books. The Apache Commons Math library implements the algorithm in Java.
The SciPy optimize module implements the algorithm in Python. The Modelica Standard Library implements the algorithm in Modelica. The uniroot function implements the algorithm in R. The fzero function implements the algorithm in MATLAB. The Boost C++ libraries implement two algorithms based on Brent's method in the Math toolkit: function minimization at minima.hpp, with an example locating function minima, and root finding via TOMS748, a more modern and efficient algorithm than Brent's original, which Boost.Math root finding uses internally, with examples. The Optim.jl package implements the algorithm in Julia. The Emmy computer algebra system (written in Clojure) implements a variant of the algorithm designed for univariate function minimization. A root-finding library in C# is hosted on Code Project. == References == Brent, R. P. (1973), "Chapter 4: An Algorithm with Guaranteed Convergence for Finding a Zero of a Function", Algorithms for Minimization without Derivatives, Englewood Cliffs, NJ: Prentice-Hall, ISBN 0-13-022335-2 Dekker, T. J. (1969), "Finding a zero by means of successive linear interpolation", in Dejon, B.; Henrici, P. (eds.), Constructive Aspects of the Fundamental Theorem of Algebra, London: Wiley-Interscience, ISBN 978-0-471-20300-1 == Further reading == Atkinson, Kendall E. (1989). "Section 2.8". An Introduction to Numerical Analysis (2nd ed.). John Wiley and Sons. ISBN 0-471-50023-2. Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007). "Section 9.3. Van Wijngaarden–Dekker–Brent Method". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. Archived from the original on 2011-08-11. Retrieved 2012-02-28. Alefeld, G. E.; Potra, F. A.; Shi, Yixun (September 1995). "Algorithm 748: Enclosing Zeros of Continuous Functions".
ACM Transactions on Mathematical Software. 21 (3): 327–344. doi:10.1145/210089.210111. S2CID 207192624. == External links == zeroin.f at Netlib. module brent in C++ (also C, Fortran, Matlab) Archived 2018-04-05 at the Wayback Machine by John Burkardt GSL implementation. Boost C++ implementation. Python (Scipy) implementation
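The pseudocode above translates almost line for line into Python. This sketch uses a single tolerance for both δ and the convergence test (a simplification) and recovers the root x = −3 of the worked example:

```python
def brent(f, a, b, delta=1e-12, max_iter=500):
    """Brent's method following the pseudocode in the article above."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("root is not bracketed")
    if abs(fa) < abs(fb):
        a, b, fa, fb = b, a, fb, fa
    c, fc = a, fa
    d = c                      # unused on the first pass (mflag is set)
    mflag = True
    for _ in range(max_iter):
        if fa != fc and fb != fc:
            # inverse quadratic interpolation
            s = (a * fb * fc / ((fa - fb) * (fa - fc))
                 + b * fa * fc / ((fb - fa) * (fb - fc))
                 + c * fa * fb / ((fc - fa) * (fc - fb)))
        else:
            # secant method (fa and fb have opposite signs, so fa != fb)
            s = b - fb * (b - a) / (fb - fa)
        # conditions 1-5: fall back to bisection when any of them holds
        if (not (min((3 * a + b) / 4, b) < s < max((3 * a + b) / 4, b))
                or (mflag and abs(s - b) >= abs(b - c) / 2)
                or (not mflag and abs(s - b) >= abs(c - d) / 2)
                or (mflag and abs(b - c) < delta)
                or (not mflag and abs(c - d) < delta)):
            s = (a + b) / 2
            mflag = True
        else:
            mflag = False
        fs = f(s)
        d, c, fc = c, b, fb
        if fa * fs < 0:
            b, fb = s, fs
        else:
            a, fa = s, fs
        if abs(fa) < abs(fb):
            a, b, fa, fb = b, a, fb, fa
        if fb == 0 or fs == 0 or abs(b - a) < delta:
            return b
    return b
```

Since the bracket [−4, 4/3] changes sign only at −3 (the root at 1 is a double root), the method converges there, as in the worked example.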
Wikipedia/Brent's_method
In mathematics and science, a nonlinear system (or a non-linear system) is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists since most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems. Typically, the behavior of a nonlinear system is described in mathematics by a nonlinear system of equations, which is a set of simultaneous equations in which the unknowns (or the unknown functions in the case of differential equations) appear as variables of a polynomial of degree higher than one or in the argument of a function which is not a polynomial of degree one. In other words, in a nonlinear system of equations, the equation(s) to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Systems can be defined as nonlinear, regardless of whether known linear functions appear in the equations. In particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives, even if nonlinear in terms of the other variables appearing in it. As nonlinear dynamical equations are difficult to solve, nonlinear systems are commonly approximated by linear equations (linearization). This works well up to some accuracy and some range for the input values, but some interesting phenomena such as solitons, chaos, and singularities are hidden by linearization. It follows that some aspects of the dynamic behavior of a nonlinear system can appear to be counterintuitive, unpredictable or even chaotic. Although such chaotic behavior may resemble random behavior, it is in fact not random. 
For example, some aspects of the weather are seen to be chaotic, where simple changes in one part of the system produce complex effects throughout. This nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology. Some authors use the term nonlinear science for the study of nonlinear systems. This term is disputed by others: Using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals. == Definition == In mathematics, a linear map (or linear function) f ( x ) {\displaystyle f(x)} is one which satisfies both of the following properties: Additivity or superposition principle: f ( x + y ) = f ( x ) + f ( y ) ; {\displaystyle \textstyle f(x+y)=f(x)+f(y);} Homogeneity: f ( α x ) = α f ( x ) . {\displaystyle \textstyle f(\alpha x)=\alpha f(x).} Additivity implies homogeneity for any rational α, and, for continuous functions, for any real α. For a complex α, homogeneity does not follow from additivity. For example, an antilinear map is additive but not homogeneous. The conditions of additivity and homogeneity are often combined in the superposition principle f ( α x + β y ) = α f ( x ) + β f ( y ) {\displaystyle f(\alpha x+\beta y)=\alpha f(x)+\beta f(y)} An equation written as f ( x ) = C {\displaystyle f(x)=C} is called linear if f ( x ) {\displaystyle f(x)} is a linear map (as defined above) and nonlinear otherwise. The equation is called homogeneous if C = 0 {\displaystyle C=0} and f ( x ) {\displaystyle f(x)} is a homogeneous function. The definition f ( x ) = C {\displaystyle f(x)=C} is very general in that x {\displaystyle x} can be any sensible mathematical object (number, vector, function, etc.), and the function f ( x ) {\displaystyle f(x)} can literally be any mapping, including integration or differentiation with associated constraints (such as boundary values). 
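The additivity and homogeneity conditions above can be spot-checked numerically; the sample points and scalars below are arbitrary choices, and such a finite check is of course necessary rather than sufficient:

```python
def looks_linear(f, samples=(-2.0, -0.5, 1.0, 3.0), tol=1e-9):
    """Check additivity f(x+y) = f(x) + f(y) and homogeneity f(a*x) = a*f(x)
    on a handful of sample points."""
    for x in samples:
        for y in samples:
            if abs(f(x + y) - (f(x) + f(y))) > tol:
                return False
        for a in (-3.0, 0.5, 2.0):
            if abs(f(a * x) - a * f(x)) > tol:
                return False
    return True
```

The map x ↦ 2.5x passes, while the affine map x ↦ x + 1 and the nonlinear map x ↦ x² both fail, matching the definition of a linear map.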
If f ( x ) {\displaystyle f(x)} contains differentiation with respect to x {\displaystyle x} , the result will be a differential equation. == Nonlinear systems of equations == A nonlinear system of equations consists of a set of equations in several variables such that at least one of them is not a linear equation. For a single equation of the form f ( x ) = 0 , {\displaystyle f(x)=0,} many methods have been designed; see Root-finding algorithm. In the case where f is a polynomial, one has a polynomial equation such as x 2 + x − 1 = 0. {\displaystyle x^{2}+x-1=0.} The general root-finding algorithms apply to polynomial roots, but, generally, they do not find all the roots, and when they fail to find a root, this does not imply that there are no roots. Specific methods for polynomials allow finding all roots or the real roots; see real-root isolation. Solving systems of polynomial equations, that is, finding the common zeros of a set of several polynomials in several variables, is a difficult problem for which elaborate algorithms have been designed, such as Gröbner basis algorithms. For the general case of a system of equations formed by equating several differentiable functions to zero, the main method is Newton's method and its variants. Generally they may provide a solution, but do not provide any information on the number of solutions. == Nonlinear recurrence relations == A nonlinear recurrence relation defines successive terms of a sequence as a nonlinear function of preceding terms. Examples of nonlinear recurrence relations are the logistic map and the relations that define the various Hofstadter sequences. Nonlinear discrete models that represent a wide class of nonlinear recurrence relationships include the NARMAX (Nonlinear Autoregressive Moving Average with eXogenous inputs) model and the related nonlinear system identification and analysis procedures.
These approaches can be used to study a wide class of complex nonlinear behaviors in the time, frequency, and spatio-temporal domains. == Nonlinear differential equations == A system of differential equations is said to be nonlinear if it is not a system of linear equations. Problems involving nonlinear differential equations are extremely diverse, and methods of solution or analysis are problem dependent. Examples of nonlinear differential equations are the Navier–Stokes equations in fluid dynamics and the Lotka–Volterra equations in biology. One of the greatest difficulties of nonlinear problems is that it is not generally possible to combine known solutions into new solutions. In linear problems, for example, a family of linearly independent solutions can be used to construct general solutions through the superposition principle. A good example of this is one-dimensional heat transport with Dirichlet boundary conditions, the solution of which can be written as a time-dependent linear combination of sinusoids of differing frequencies; this makes solutions very flexible. It is often possible to find several very specific solutions to nonlinear equations; however, the lack of a superposition principle prevents the construction of new solutions. === Ordinary differential equations === First order ordinary differential equations are often exactly solvable by separation of variables, especially for autonomous equations. For example, the nonlinear equation d u d x = − u 2 {\displaystyle {\frac {du}{dx}}=-u^{2}} has u = 1 x + C {\displaystyle u={\frac {1}{x+C}}} as a general solution (and also the special solution u = 0 , {\displaystyle u=0,} corresponding to the limit of the general solution when C tends to infinity). The equation is nonlinear because it may be written as d u d x + u 2 = 0 {\displaystyle {\frac {du}{dx}}+u^{2}=0} and the left-hand side of the equation is not a linear function of u {\displaystyle u} and its derivatives.
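The claimed general solution of the separable equation above can be spot-checked numerically with a central difference derivative; the choice C = 2 and the sample points are arbitrary:

```python
def u(x, C=2.0):
    # candidate general solution u = 1/(x + C) of du/dx = -u^2
    return 1.0 / (x + C)

def dudx(x, h=1e-6, C=2.0):
    # central-difference approximation of du/dx
    return (u(x + h, C) - u(x - h, C)) / (2.0 * h)
```

At any sample point the residual du/dx + u² is numerically zero, confirming that u = 1/(x + C) solves the equation for that C.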
Note that if the u 2 {\displaystyle u^{2}} term were replaced with u {\displaystyle u} , the problem would be linear (the exponential decay problem). Second and higher order ordinary differential equations (more generally, systems of nonlinear equations) rarely yield closed-form solutions, though implicit solutions and solutions involving nonelementary integrals are encountered. Common methods for the qualitative analysis of nonlinear ordinary differential equations include:

Examination of any conserved quantities, especially in Hamiltonian systems
Examination of dissipative quantities (see Lyapunov function) analogous to conserved quantities
Linearization via Taylor expansion
Change of variables into something easier to study
Bifurcation theory
Perturbation methods (can be applied to algebraic equations too)
Existence of solutions of finite duration, which can happen under specific conditions for some non-linear ordinary differential equations.

=== Partial differential equations === The most common basic approach to studying nonlinear partial differential equations is to change the variables (or otherwise transform the problem) so that the resulting problem is simpler (possibly linear). Sometimes, the equation may be transformed into one or more ordinary differential equations, as seen in separation of variables, which is always useful whether or not the resulting ordinary differential equation(s) is solvable. Another common (though less mathematical) tactic, often exploited in fluid and heat mechanics, is to use scale analysis to simplify a general, natural equation in a certain specific boundary value problem. For example, the (very) nonlinear Navier–Stokes equations can be simplified into one linear partial differential equation in the case of transient, laminar, one dimensional flow in a circular pipe; the scale analysis provides conditions under which the flow is laminar and one dimensional and also yields the simplified equation.
Other methods include examining the characteristics and using the methods outlined above for ordinary differential equations. === Pendula === A classic, extensively studied nonlinear problem is the dynamics of a frictionless pendulum under the influence of gravity. Using Lagrangian mechanics, it may be shown that the motion of a pendulum can be described by the dimensionless nonlinear equation d 2 θ d t 2 + sin ⁡ ( θ ) = 0 {\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\sin(\theta )=0} where gravity points "downwards" and θ {\displaystyle \theta } is the angle the pendulum forms with its rest position, as shown in the figure at right. One approach to "solving" this equation is to use d θ / d t {\displaystyle d\theta /dt} as an integrating factor, which would eventually yield ∫ d θ C 0 + 2 cos ⁡ ( θ ) = t + C 1 {\displaystyle \int {\frac {d\theta }{\sqrt {C_{0}+2\cos(\theta )}}}=t+C_{1}} which is an implicit solution involving an elliptic integral. This "solution" generally does not have many uses because most of the nature of the solution is hidden in the nonelementary integral (nonelementary unless C 0 = 2 {\displaystyle C_{0}=2} ). Another way to approach the problem is to linearize any nonlinearity (the sine function term in this case) at the various points of interest through Taylor expansions. For example, the linearization at θ = 0 {\displaystyle \theta =0} , called the small angle approximation, is d 2 θ d t 2 + θ = 0 {\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\theta =0} since sin ⁡ ( θ ) ≈ θ {\displaystyle \sin(\theta )\approx \theta } for θ ≈ 0 {\displaystyle \theta \approx 0} . This is a simple harmonic oscillator corresponding to oscillations of the pendulum near the bottom of its path. 
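The small angle approximation can be checked numerically. The sketch below integrates the full equation θ'' + sin(θ) = 0 with a classical fourth-order Runge–Kutta step (the integrator, step size, and release angles are illustrative choices, not part of the original text) and compares θ(t) against the harmonic-oscillator solution θ0·cos(t) for a pendulum released from rest.

```python
import math

def simulate_pendulum(theta0, t_end=1.0, dt=1e-3):
    """Integrate theta'' + sin(theta) = 0 with RK4, starting at rest
    from angle theta0; returns theta(t_end)."""
    def deriv(theta, omega):
        return omega, -math.sin(theta)

    theta, omega = theta0, 0.0
    for _ in range(int(round(t_end / dt))):
        k1t, k1w = deriv(theta, omega)
        k2t, k2w = deriv(theta + 0.5 * dt * k1t, omega + 0.5 * dt * k1w)
        k3t, k3w = deriv(theta + 0.5 * dt * k2t, omega + 0.5 * dt * k2w)
        k4t, k4w = deriv(theta + dt * k3t, omega + dt * k3w)
        theta += dt / 6.0 * (k1t + 2 * k2t + 2 * k3t + k4t)
        omega += dt / 6.0 * (k1w + 2 * k2w + 2 * k3w + k4w)
    return theta
```

For θ0 = 0.1 the result stays very close to 0.1·cos(1); for θ0 = 2 the pendulum falls visibly more slowly than the linearized model predicts, since sin(θ) < θ for 0 < θ < π.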
Another linearization would be at θ = π {\displaystyle \theta =\pi } , corresponding to the pendulum being straight up: d 2 θ d t 2 + π − θ = 0 {\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\pi -\theta =0} since sin ⁡ ( θ ) ≈ π − θ {\displaystyle \sin(\theta )\approx \pi -\theta } for θ ≈ π {\displaystyle \theta \approx \pi } . The solution to this problem involves hyperbolic sinusoids; note that, unlike the small angle approximation, this approximation is unstable, meaning that | θ | {\displaystyle |\theta |} will usually grow without limit, though bounded solutions are possible. This corresponds to the difficulty of balancing a pendulum upright; it is literally an unstable state. One more interesting linearization is possible around θ = π / 2 {\displaystyle \theta =\pi /2} , around which sin ⁡ ( θ ) ≈ 1 {\displaystyle \sin(\theta )\approx 1} : d 2 θ d t 2 + 1 = 0. {\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+1=0.} This corresponds to a free fall problem. A very useful qualitative picture of the pendulum's dynamics may be obtained by piecing together such linearizations, as seen in the figure at right. Other techniques may be used to find (exact) phase portraits and approximate periods. == Types of nonlinear dynamic behaviors ==
Amplitude death – any oscillations present in the system cease due to some kind of interaction with other system or feedback by the same system
Chaos – values of a system cannot be predicted indefinitely far into the future, and fluctuations are aperiodic
Multistability – the presence of two or more stable states
Solitons – self-reinforcing solitary waves
Limit cycles – asymptotic periodic orbits to which destabilized fixed points are attracted
Self-oscillations – feedback oscillations taking place in open dissipative physical systems
== Examples of nonlinear equations == == See also == == References == == Further reading == == External links == Command and Control Research Program (CCRP) New England Complex Systems Institute: Concepts in Complex Systems Nonlinear Dynamics I: Chaos at MIT's OpenCourseWare Nonlinear Model Library – (in MATLAB) a Database of Physical Systems The Center for Nonlinear Studies at Los Alamos National Laboratory
Wikipedia/System_of_nonlinear_equations
In numerical analysis, Laguerre's method is a root-finding algorithm tailored to polynomials. In other words, Laguerre's method can be used to numerically solve the equation p(x) = 0 for a given polynomial p(x). One of the most useful properties of this method is that it is, from extensive empirical study, very close to being a "sure-fire" method, meaning that it is almost guaranteed to always converge to some root of the polynomial, no matter what initial guess is chosen. However, for computer computation, more efficient methods are known, with which it is guaranteed to find all roots (see Root-finding algorithm § Roots of polynomials) or all real roots (see Real-root isolation). This method is named in honour of the French mathematician Edmond Laguerre. == Definition == The algorithm of the Laguerre method to find one root of a polynomial p(x) of degree n is:
Choose an initial guess x0
For k = 0, 1, 2, ...
If p ( x k ) {\displaystyle p(x_{k})} is very small, exit the loop
Calculate G = p ′ ( x k ) p ( x k ) {\displaystyle G={\frac {p'(x_{k})}{p(x_{k})}}}
Calculate H = G 2 − p ″ ( x k ) p ( x k ) {\displaystyle H=G^{2}-{\frac {p''(x_{k})}{p(x_{k})}}}
Calculate a = n G ± ( n − 1 ) ( n H − G 2 ) {\displaystyle a={\frac {n}{G\pm {\sqrt {(n-1)(nH-G^{2})}}}}} , where the sign is chosen to give the denominator with the larger absolute value, to avoid catastrophic cancellation
Set x k + 1 = x k − a {\displaystyle x_{k+1}=x_{k}-a}
Repeat until a is small enough or the maximum number of iterations has been reached
If a root has been found, the corresponding linear factor can be removed from p. This deflation step reduces the degree of the polynomial by one, so that eventually, approximations for all roots of p can be found. Note however that deflation can lead to approximate factors that differ significantly from the corresponding exact factors. This error is least if the roots are found in the order of increasing magnitude.
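The loop above translates almost line for line into code. The sketch below is a minimal version, assuming coefficients are listed highest degree first and doing all arithmetic in complex numbers so the square root is always defined (the tolerance and iteration cap are illustrative):

```python
import cmath

def horner(coeffs, x):
    """Evaluate a polynomial given by coeffs (highest degree first) at x."""
    result = 0j
    for c in coeffs:
        result = result * x + c
    return result

def laguerre(coeffs, x0=0.0, tol=1e-12, max_iter=100):
    """Find one root of the polynomial with the given coefficients."""
    n = len(coeffs) - 1
    d1 = [c * (n - i) for i, c in enumerate(coeffs[:-1])]  # coefficients of p'
    d2 = [c * (n - 1 - i) for i, c in enumerate(d1[:-1])]  # coefficients of p''
    x = complex(x0)
    for _ in range(max_iter):
        p = horner(coeffs, x)
        if abs(p) < tol:
            return x
        G = horner(d1, x) / p
        H = G * G - horner(d2, x) / p
        root = cmath.sqrt((n - 1) * (n * H - G * G))
        # choose the sign giving the larger |denominator| to avoid cancellation
        denom = G + root if abs(G + root) >= abs(G - root) else G - root
        a = n / denom
        x -= a
        if abs(a) < tol:
            return x
    return x
```

From x0 = 1, laguerre([1, 0, -2]) lands on √2 essentially in one step (for a quadratic the "drastic assumptions" of the derivation hold exactly), and a real starting point such as laguerre([1, 0, 1], x0=0.5) still reaches the complex root i.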
== Derivation == The fundamental theorem of algebra states that every nth degree polynomial p {\displaystyle p} can be written in the form p ( x ) = C ( x − x 1 ) ( x − x 2 ) ⋯ ( x − x n ) , {\displaystyle p(x)=C\left(x-x_{1}\right)\left(x-x_{2}\right)\cdots \left(x-x_{n}\right),} so that x 1 , x 2 , … , x n {\displaystyle x_{1},\ x_{2},\ \ldots ,\ x_{n}} are the roots of the polynomial. If we take the natural logarithm of the absolute value of both sides, we find that ln ⁡ | p ( x ) | = ln ⁡ | C | + ln ⁡ | x − x 1 | + ln ⁡ | x − x 2 | + ⋯ + ln ⁡ | x − x n | . {\displaystyle \ln {\bigl |}p(x){\bigr |}=\ln {\bigl |}C{\bigr |}+\ln {\bigl |}x-x_{1}{\bigr |}+\ln {\bigl |}x-x_{2}{\bigr |}+\cdots +\ln {\bigl |}x-x_{n}{\bigr |}.} Denote the logarithmic derivative by G = d d x ln ⁡ | p ( x ) | = 1 x − x 1 + 1 x − x 2 + ⋯ + 1 x − x n = p ′ ( x ) p ( x ) , {\displaystyle {\begin{aligned}G&={\frac {\operatorname {d} }{\operatorname {d} x}}\ln {\Bigl |}p(x){\Bigr |}={\frac {1}{x-x_{1}}}+{\frac {1}{x-x_{2}}}+\cdots +{\frac {1}{x-x_{n}}}\\&={\frac {p'(x)}{p(x)}},\end{aligned}}} and the negated second derivative by H = − d 2 d x 2 ln ⁡ | p ( x ) | = 1 ( x − x 1 ) 2 + 1 ( x − x 2 ) 2 + ⋯ + 1 ( x − x n ) 2 = ( p ′ ( x ) p ( x ) ) 2 − p ″ ( x ) p ( x ) . {\displaystyle {\begin{aligned}H&=-{\frac {\operatorname {d} ^{2}}{\operatorname {d} x^{2}}}\ln {\Bigl |}p(x){\Bigr |}={\frac {1}{(x-x_{1})^{2}}}+{\frac {1}{(x-x_{2})^{2}}}+\cdots +{\frac {1}{(x-x_{n})^{2}}}\\&=\left({\frac {p'(x)}{p(x)}}\right)^{2}-{\frac {p''(x)}{p(x)}}.\end{aligned}}} We then make what Acton (1970) calls a "drastic set of assumptions": that the root we are looking for, say, x 1 {\displaystyle x_{1}} is a short distance, a , {\displaystyle a,} away from our guess x , {\displaystyle x,} and all the other roots are all clustered together, at some further distance b .
{\displaystyle b.} If we denote these distances by a ≡ x − x 1 {\displaystyle a\equiv x-x_{1}} and b ≈ x − x 2 ≈ x − x 3 ≈ ⋯ ≈ x − x n , {\displaystyle b\approx x-x_{2}\approx x-x_{3}\approx \cdots \approx x-x_{n},} or exactly, b ≡ h a r m o n i c m e a n ⁡ { x − x 2 , x − x 3 , … x − x n } {\displaystyle b\equiv \operatorname {harmonic\ mean} {\Bigl \{}x-x_{2},\ x-x_{3},\ \ldots \ x-x_{n}{\Bigr \}}} then our equation for G {\displaystyle \ G\ } may be written as G = 1 a + n − 1 b {\displaystyle G={\frac {1}{a}}+{\frac {n-1}{b}}} and the expression for H {\displaystyle H} becomes H = 1 a 2 + n − 1 b 2 . {\displaystyle H={\frac {1}{a^{2}}}+{\frac {n-1}{b^{2}}}.} Solving these equations for a , {\displaystyle a,} we find that a = n G ± ( n − 1 ) ( n H − G 2 ) , {\displaystyle a={\frac {n}{G\pm {\sqrt {{\bigl (}n-1{\bigr )}{\bigl (}nH-G^{2}{\bigr )}}}}},} where in this case, the square root of the (possibly) complex number is chosen to produce largest absolute value of the denominator and make a {\displaystyle \ a\ } as small as possible; equivalently, it satisfies: R e ⁡ { G ¯ ( n − 1 ) ( n H − G 2 ) } > 0 , {\displaystyle \operatorname {\mathcal {R_{e}}} {\biggl \{}{\overline {G}}{\sqrt {\left(n-1\right)\left(nH-G^{2}\right)}}{\biggr \}}>0,} where R e {\displaystyle {\mathcal {R_{e}}}} denotes real part of a complex number, and G ¯ {\displaystyle {\overline {G}}} is the complex conjugate of G ; {\displaystyle G;} or a = p ( x ) p ′ ( x ) ⋅ { 1 n + n − 1 n 1 − n n − 1 p ( x ) p ″ ( x ) p ′ ( x ) 2 } − 1 , {\displaystyle a={\frac {p(x)}{p'(x)}}\cdot {\Biggl \{}{\frac {1}{n}}+{\frac {n-1}{n}}{\sqrt {1-{\frac {n}{n-1}}{\frac {p(x)\ p''(x)}{p'(x)^{2}}}}}{\Biggr \}}^{-1},} where the square root of a complex number is chosen to have a non-negative real part. 
For small values of p ( x ) {\displaystyle p(x)} this formula differs from the offset of the third order Halley's method by an error of O ⁡ { ( p ( x ) ) 3 } , {\displaystyle \operatorname {\mathcal {O}} {\bigl \{}(p(x))^{3}{\bigr \}},} so convergence close to a root will be cubic as well. === Fallback === If the "drastic set of assumptions" does not work well for some particular polynomial p(x), then p(x) can be transformed into a related polynomial r for which the assumptions are viable, e.g. by first shifting the origin towards a suitable complex number w, giving a second polynomial q(x) = p(x − w) whose distinct roots have clearly distinct magnitudes, if necessary (which it will be if some roots are complex conjugates). After that, a third polynomial r is obtained from q(x) by repeatedly applying the root-squaring transformation from Graeffe's method, enough times to make the smaller roots significantly smaller than the largest root (and so, clustered comparatively nearer to zero). The approximate root from Graeffe's method can then be used to start the new iteration for Laguerre's method on r. An approximate root for p(x) may then be obtained straightforwardly from that for r. If we make the even more extreme assumption that the terms in G {\displaystyle G} corresponding to the roots x 2 , x 3 , … , x n {\displaystyle x_{2},\ x_{3},\ \ldots ,\ x_{n}} are negligibly small compared to the term corresponding to the root x 1 , {\displaystyle x_{1},} this leads to Newton's method. == Properties == If x 1 {\displaystyle x_{1}} is a simple root of the polynomial p ( x ) , {\displaystyle p(x),} then Laguerre's method converges cubically whenever the initial guess, x ( 0 ) , {\displaystyle x^{(0)},} is close enough to the root x 1 . {\displaystyle x_{1}.} On the other hand, when x 1 {\displaystyle x_{1}} is a multiple root, convergence is merely linear, with the penalty of calculating values for the polynomial and its first and second derivatives at each stage of the iteration.
A major advantage of Laguerre's method is that it is almost guaranteed to converge to some root of the polynomial no matter where the initial approximation is chosen. This is in contrast to other methods such as the Newton–Raphson method and Steffensen's method, which notoriously fail to converge for poorly chosen initial guesses. Laguerre's method may even converge to a complex root of the polynomial, because the radicand of the square root in the formula for the correction a , {\displaystyle a,} given above may be negative – manageable so long as complex numbers can be conveniently accommodated in the calculation. This may be considered an advantage or a liability depending on the application in which the method is being used. Empirical evidence shows that convergence failure is extremely rare, making this a good candidate for a general purpose polynomial root finding algorithm. However, given the fairly limited theoretical understanding of the algorithm, many numerical analysts are hesitant to use it as a default, and prefer better understood methods such as the Jenkins–Traub algorithm, for which more solid theory has been developed and whose limits are known. The algorithm is fairly simple to use, compared to other "sure-fire" methods, and simple enough for hand calculation, aided by a pocket calculator, if a computer is not available. The speed at which the method converges means that one is only very rarely required to compute more than a few iterations to get high accuracy.
Wikipedia/Laguerre's_method
In numerical analysis, the secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function f. The secant method can be thought of as a finite-difference approximation of Newton's method, so it is considered a quasi-Newton method. Historically, it is an evolution of the method of false position, which predates Newton's method by over 3000 years. == The method == The secant method is an iterative numerical method for finding a zero of a function f. Given two initial values x0 and x1, the method proceeds according to the recurrence relation x n = x n − 1 − f ( x n − 1 ) x n − 1 − x n − 2 f ( x n − 1 ) − f ( x n − 2 ) = x n − 2 f ( x n − 1 ) − x n − 1 f ( x n − 2 ) f ( x n − 1 ) − f ( x n − 2 ) . {\displaystyle x_{n}=x_{n-1}-f(x_{n-1}){\frac {x_{n-1}-x_{n-2}}{f(x_{n-1})-f(x_{n-2})}}={\frac {x_{n-2}f(x_{n-1})-x_{n-1}f(x_{n-2})}{f(x_{n-1})-f(x_{n-2})}}.} This is a nonlinear second-order recurrence that is well-defined given f and the two initial values x0 and x1. Ideally, the initial values should be chosen close to the desired zero. == Derivation of the method == Starting with initial values x0 and x1, we construct a line through the points (x0, f(x0)) and (x1, f(x1)), as shown in the picture above. In point–slope form, the equation of this line is y = f ( x 1 ) − f ( x 0 ) x 1 − x 0 ( x − x 1 ) + f ( x 1 ) . {\displaystyle y={\frac {f(x_{1})-f(x_{0})}{x_{1}-x_{0}}}(x-x_{1})+f(x_{1}).} The root of this linear function, that is, the value of x such that y = 0, is x = x 1 − f ( x 1 ) x 1 − x 0 f ( x 1 ) − f ( x 0 ) . {\displaystyle x=x_{1}-f(x_{1}){\frac {x_{1}-x_{0}}{f(x_{1})-f(x_{0})}}.} We then use this new value of x as x2 and repeat the process, using x1 and x2 instead of x0 and x1.
We continue this process, solving for x3, x4, etc., until we reach a sufficiently high level of precision (a sufficiently small difference between xn and xn−1): x 2 = x 1 − f ( x 1 ) x 1 − x 0 f ( x 1 ) − f ( x 0 ) , x 3 = x 2 − f ( x 2 ) x 2 − x 1 f ( x 2 ) − f ( x 1 ) , ⋮ x n = x n − 1 − f ( x n − 1 ) x n − 1 − x n − 2 f ( x n − 1 ) − f ( x n − 2 ) . {\displaystyle {\begin{aligned}x_{2}&=x_{1}-f(x_{1}){\frac {x_{1}-x_{0}}{f(x_{1})-f(x_{0})}},\\[6pt]x_{3}&=x_{2}-f(x_{2}){\frac {x_{2}-x_{1}}{f(x_{2})-f(x_{1})}},\\[6pt]&\,\,\,\vdots \\[6pt]x_{n}&=x_{n-1}-f(x_{n-1}){\frac {x_{n-1}-x_{n-2}}{f(x_{n-1})-f(x_{n-2})}}.\end{aligned}}} == Convergence == The iterates x n {\displaystyle x_{n}} of the secant method converge to a root of f {\displaystyle f} if the initial values x 0 {\displaystyle x_{0}} and x 1 {\displaystyle x_{1}} are sufficiently close to the root and f {\displaystyle f} is well-behaved. When f {\displaystyle f} is twice continuously differentiable and the root in question is a simple root, i.e., it has multiplicity 1, the order of convergence is the golden ratio φ = ( 1 + 5 ) / 2 ≈ 1.618. {\displaystyle \varphi =(1+{\sqrt {5}})/2\approx 1.618.} This convergence is superlinear but subquadratic. If the initial values are not close enough to the root or f {\displaystyle f} is not well-behaved, then there is no guarantee that the secant method converges at all. There is no general definition of "close enough", but the criterion for convergence has to do with how "wiggly" the function is on the interval between the initial values. For example, if f {\displaystyle f} is differentiable on that interval and there is a point where f ′ = 0 {\displaystyle f'=0} on the interval, then the algorithm may not converge. == Comparison with other root-finding methods == The secant method does not require or guarantee that the root remains bracketed by sequential iterates, like the bisection method does, and hence it does not always converge. 
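The recurrence above takes only a few lines to implement. The sketch below uses the function f(x) = x² − 612 and the starting points x0 = 10, x1 = 30 from the Computational example section, with an illustrative tolerance on the step size:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant iteration:
    x_n = x_{n-1} - f(x_{n-1}) * (x_{n-1} - x_{n-2}) / (f(x_{n-1}) - f(x_{n-2}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            raise ZeroDivisionError("secant line is horizontal; pick other points")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

root = secant(lambda x: x * x - 612, 10.0, 30.0)  # converges toward sqrt(612)
```

Because convergence is only superlinear (order φ ≈ 1.618 rather than 2), a handful of extra iterations is needed compared with Newton's method, but each one is cheaper since no derivative is evaluated.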
The false position method (or regula falsi) uses the same formula as the secant method. However, it does not apply the formula on x n − 1 {\displaystyle x_{n-1}} and x n − 2 {\displaystyle x_{n-2}} , like the secant method, but on x n − 1 {\displaystyle x_{n-1}} and on the last iterate x k {\displaystyle x_{k}} such that f ( x k ) {\displaystyle f(x_{k})} and f ( x n − 1 ) {\displaystyle f(x_{n-1})} have a different sign. This means that the false position method always converges; however, only with a linear order of convergence. Bracketing with a super-linear order of convergence as the secant method can be attained with improvements to the false position method (see Regula falsi § Improvements in regula falsi) such as the ITP method or the Illinois method. The recurrence formula of the secant method can be derived from the formula for Newton's method x n = x n − 1 − f ( x n − 1 ) f ′ ( x n − 1 ) {\displaystyle x_{n}=x_{n-1}-{\frac {f(x_{n-1})}{f'(x_{n-1})}}} by using the finite-difference approximation, for a small ϵ = x n − 1 − x n − 2 {\displaystyle \epsilon =x_{n-1}-x_{n-2}} : f ′ ( x n − 1 ) = lim ϵ → 0 f ( x n − 1 ) − f ( x n − 1 − ϵ ) ϵ ≈ f ( x n − 1 ) − f ( x n − 2 ) x n − 1 − x n − 2 {\displaystyle f'(x_{n-1})=\lim _{\epsilon \rightarrow 0}{\frac {f(x_{n-1})-f(x_{n-1}-\epsilon )}{\epsilon }}\approx {\frac {f(x_{n-1})-f(x_{n-2})}{x_{n-1}-x_{n-2}}}} The secant method can be interpreted as a method in which the derivative is replaced by an approximation and is thus a quasi-Newton method. If we compare Newton's method with the secant method, we see that Newton's method converges faster (order 2 against order the golden ratio φ ≈ 1.6). However, Newton's method requires the evaluation of both f {\displaystyle f} and its derivative f ′ {\displaystyle f'} at every step, while the secant method only requires the evaluation of f {\displaystyle f} . Therefore, the secant method may sometimes be faster in practice. 
For instance, if we assume that evaluating f {\displaystyle f} takes as much time as evaluating its derivative and we neglect all other costs, we can do two steps of the secant method (decreasing the logarithm of the error by a factor φ² ≈ 2.6) for the same cost as one step of Newton's method (decreasing the logarithm of the error by a factor of 2), so the secant method is faster. In higher dimensions, the full set of partial derivatives required for Newton's method, that is, the Jacobian matrix, may become much more expensive to calculate than the function itself. If, however, we consider parallel processing for the evaluation of the derivative or derivatives, Newton's method can be faster in clock time though still costing more computational operations overall. == Generalization == Broyden's method is a generalization of the secant method to more than one dimension. The following graph shows the function f in red and the last secant line in bold blue. In the graph, the x intercept of the secant line seems to be a good approximation of the root of f. == Computational example == The secant method can be implemented in a few lines of the Python programming language and applied to find a root of the function f(x) = x² − 612 with initial points x 0 = 10 {\displaystyle x_{0}=10} and x 1 = 30 {\displaystyle x_{1}=30} . It is very important to use a good stopping criterion: otherwise, due to the limited numerical precision of floating point numbers, the algorithm can return inaccurate results if run for too many iterations. For example, the loop can stop when one of these is reached first: abs(x0 - x1) < tol, or abs(x0/x1-1) < tol, or abs(f(x1)) < tol. == Notes == == See also == False position method == References == Avriel, Mordecai (1976). Nonlinear Programming: Analysis and Methods. Prentice Hall. pp. 220–221. ISBN 0-13-623603-0. Allen, Myron B.; Isaacson, Eli L. (1998). Numerical analysis for applied science. John Wiley & Sons. pp. 188–195.
ISBN 978-0-471-55266-6. == External links == Secant Method Notes, PPT, Mathcad, Maple, Mathematica, Matlab at Holistic Numerical Methods Institute Weisstein, Eric W. "Secant Method". MathWorld.
Wikipedia/Secant_method
Square root algorithms compute the non-negative square root S {\displaystyle {\sqrt {S}}} of a positive real number S {\displaystyle S} . Since all square roots of natural numbers, other than of perfect squares, are irrational, square roots can usually only be computed to some finite precision: these algorithms typically construct a series of increasingly accurate approximations. Most square root computation methods are iterative: after choosing a suitable initial estimate of S {\displaystyle {\sqrt {S}}} , an iterative refinement is performed until some termination criterion is met. One refinement scheme is Heron's method, a special case of Newton's method. If division is much more costly than multiplication, it may be preferable to compute the inverse square root instead. Other methods are available to compute the square root digit by digit, or using Taylor series. Rational approximations of square roots may be calculated using continued fraction expansions. The method employed depends on the needed accuracy, and the available tools and computational power. The methods may be roughly classified as those suitable for mental calculation, those usually requiring at least paper and pencil, and those which are implemented as programs to be executed on a digital electronic computer or other computing device. Algorithms may take into account convergence (how many iterations are required to achieve a specified precision), computational complexity of individual operations (i.e. division) or iterations, and error propagation (the accuracy of the final result). A few methods like paper-and-pencil synthetic division and series expansion, do not require a starting value. In some applications, an integer square root is required, which is the square root rounded or truncated to the nearest integer (a modified procedure may be employed in this case). 
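Heron's method mentioned above is short enough to state in full. The sketch below uses the arithmetic mean of S and 1 as a default seed (an illustrative choice; any positive seed works, and a closer one converges faster) and a relative step size as the termination criterion:

```python
def heron_sqrt(S, x0=None, tol=1e-12, max_iter=100):
    """Heron's (Babylonian) method: repeatedly replace x by the average
    of x and S/x.  This is Newton's method applied to f(x) = x**2 - S."""
    if S < 0:
        raise ValueError("S must be non-negative")
    if S == 0:
        return 0.0
    # By AM-GM, (S + 1)/2 >= sqrt(S), so this seed is always an overestimate.
    x = x0 if x0 is not None else 0.5 * (S + 1.0)
    for _ in range(max_iter):
        nxt = 0.5 * (x + S / x)
        if abs(nxt - x) <= tol * nxt:
            return nxt
        x = nxt
    return x
```

Since each iteration roughly doubles the number of correct digits, heron_sqrt(2) reaches machine precision in a handful of steps, consistent with the quadratic convergence of Newton's method.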
== History == Procedures for finding square roots (particularly the square root of 2) have been known since at least the period of ancient Babylon in the 17th century BCE. Babylonian mathematicians calculated the square root of 2 to three sexagesimal "digits" after the 1, but it is not known exactly how. They knew how to approximate a hypotenuse using a 2 + b 2 ≈ a + b 2 2 a {\displaystyle {\sqrt {a^{2}+b^{2}}}\approx a+{\frac {b^{2}}{2a}}} (giving for example 41 60 + 15 3600 {\displaystyle {\frac {41}{60}}+{\frac {15}{3600}}} for the diagonal of a gate whose height is 40 60 {\displaystyle {\frac {40}{60}}} rods and whose width is 10 60 {\displaystyle {\frac {10}{60}}} rods) and they may have used a similar approach for finding the approximation of 2 . {\displaystyle {\sqrt {2}}.} Heron's method from first century Egypt was the first ascertainable algorithm for computing square root. Modern analytic methods began to be developed after introduction of the Arabic numeral system to western Europe in the early Renaissance. Today, nearly all computing devices have a fast and accurate square root function, either as a programming language construct, a compiler intrinsic or library function, or as a hardware operator, based on one of the described procedures. == Initial estimate == Many iterative square root algorithms require an initial seed value. The seed must be a non-zero positive number; it should be between 1 and S {\displaystyle S} , the number whose square root is desired, because the square root must be in that range. If the seed is far away from the root, the algorithm will require more iterations. If one initializes with x 0 = 1 {\displaystyle x_{0}=1} (or S {\displaystyle S} ), then approximately 1 2 | log 2 ⁡ S | {\displaystyle {\tfrac {1}{2}}\vert \log _{2}S\vert } iterations will be wasted just getting the order of magnitude of the root. It is therefore useful to have a rough estimate, which may have limited accuracy but is easy to calculate. 
In general, the better the initial estimate, the faster the convergence. For Newton's method, a seed somewhat larger than the root will converge slightly faster than a seed somewhat smaller than the root. In general, an estimate is pursuant to an arbitrary interval known to contain the root (such as [ x 0 , S / x 0 ] {\displaystyle [x_{0},S/x_{0}]} ). The estimate is a specific value of a functional approximation to f ( x ) = x {\displaystyle f(x)={\sqrt {x}}} over the interval. Obtaining a better estimate involves either obtaining tighter bounds on the interval, or finding a better functional approximation to f ( x ) {\displaystyle f(x)} . The latter usually means using a higher order polynomial in the approximation, though not all approximations are polynomial. Common methods of estimating include scalar, linear, hyperbolic and logarithmic. A decimal base is usually used for mental or paper-and-pencil estimating. A binary base is more suitable for computer estimates. In estimating, the exponent and mantissa are usually treated separately, as the number would be expressed in scientific notation. === Decimal estimates === Typically the number S {\displaystyle S} is expressed in scientific notation as a × 10 2 n {\displaystyle a\times 10^{2n}} where 1 ≤ a < 100 {\displaystyle 1\leq a<100} and n is an integer, and the range of possible square roots is a × 10 n {\displaystyle {\sqrt {a}}\times 10^{n}} where 1 ≤ a < 10 {\displaystyle 1\leq {\sqrt {a}}<10} . ==== Scalar estimates ==== Scalar methods divide the range into intervals, and the estimate in each interval is represented by a single scalar number. If the range is considered as a single interval, the arithmetic mean (5.5) or geometric mean ( 10 ≈ 3.16 {\displaystyle {\sqrt {10}}\approx 3.16} ) times 10 n {\displaystyle 10^{n}} are plausible estimates. The absolute and relative error for these will differ. In general, a single scalar will be very inaccurate. 
Better estimates divide the range into two or more intervals, but scalar estimates have inherently low accuracy. For two intervals, divided geometrically, the square root S = a × 10 n {\displaystyle {\sqrt {S}}={\sqrt {a}}\times 10^{n}} can be estimated as S ≈ { 2 ⋅ 10 n if a < 10 , 6 ⋅ 10 n if a ≥ 10. {\displaystyle {\sqrt {S}}\approx {\begin{cases}2\cdot 10^{n}&{\text{if }}a<10,\\6\cdot 10^{n}&{\text{if }}a\geq 10.\end{cases}}} This estimate has maximum absolute error of 4 ⋅ 10 n {\displaystyle 4\cdot 10^{n}} at a = 100, and maximum relative error of 100% at a = 1. For example, for S = 125348 {\displaystyle S=125348} factored as 12.5348 × 10 4 {\displaystyle 12.5348\times 10^{4}} , the estimate is S ≈ 6 ⋅ 10 2 = 600 {\displaystyle {\sqrt {S}}\approx 6\cdot 10^{2}=600} . 125348 = 354.0 {\displaystyle {\sqrt {125348}}=354.0} , an absolute error of 246 and relative error of almost 70%. ==== Linear estimates ==== A better estimate, and the standard method used, is a linear approximation to the function y = x 2 {\displaystyle y=x^{2}} over a small arc. If, as above, powers of the base are factored out of the number S {\displaystyle S} and the interval reduced to [ 1 , 100 ] {\displaystyle [1,100]} , a secant line spanning the arc, or a tangent line somewhere along the arc may be used as the approximation, but a least-squares regression line intersecting the arc will be more accurate. A least-squares regression line minimizes the average difference between the estimate and the value of the function. Its equation is y = 8.7 x − 10 {\displaystyle y=8.7x-10} . Reordering, x = 0.115 y + 1.15 {\displaystyle x=0.115y+1.15} . Rounding the coefficients for ease of computation, S ≈ ( a / 10 + 1.2 ) ⋅ 10 n {\displaystyle {\sqrt {S}}\approx (a/10+1.2)\cdot 10^{n}} That is the best estimate on average that can be achieved with a single piece linear approximation of the function y=x2 in the interval [ 1 , 100 ] {\displaystyle [1,100]} . 
It has a maximum absolute error of 1.2 at a = 100, and maximum relative error of 30% at a = 1 and a = 10. To divide by 10, subtract one from the exponent of a {\displaystyle a} , or figuratively move the decimal point one digit to the left. For this formulation, any additive constant 1 plus a small increment will make a satisfactory estimate so remembering the exact number isn't a burden. The approximation (rounded or not) using a single line spanning the range [ 1 , 100 ] {\displaystyle [1,100]} is less than one significant digit of precision; the relative error is greater than 1/22, so less than 2 bits of information are provided. The accuracy is severely limited because the range is two orders of magnitude, quite large for this kind of estimation. A much better estimate can be obtained by a piece-wise linear approximation: multiple line segments, each approximating some subarc of the original. The more line segments used, the better the approximation. The most common way is to use tangent lines; the critical choices are how to divide the arc and where to place the tangent points. An efficacious way to divide the arc from y = 1 to y = 100 is geometrically: for two intervals, the bounds of the intervals are the square root of the bounds of the original interval [1,100], i.e. [1, √100] and [√100, 100]. For three intervals, the bounds are powers of the cube root of 100: [1, ∛100], [∛100, (∛100)²], and [(∛100)², 100], etc. For two intervals, √100 = 10, a very convenient number. Tangent lines are easy to derive, and are located at the geometric means of the bounds of each interval, x = √(1·√10) ≈ 1.78 and x = √(√10·10) ≈ 5.62. Their equations are: y = 3.56 x − 3.16 {\displaystyle y=3.56x-3.16} and y = 11.2 x − 31.6 {\displaystyle y=11.2x-31.6} . Inverting, the square roots are: x = 0.28 y + 0.89 {\displaystyle x=0.28y+0.89} and x = .089 y + 2.8 {\displaystyle x=.089y+2.8} . Thus for S = a ⋅ 10 2 n {\displaystyle S=a\cdot 10^{2n}} : S ≈ { ( 0.28 a + 0.89 ) ⋅ 10 n if a < 10 , ( .089 a + 2.8 ) ⋅ 10 n if a ≥ 10.
{\displaystyle {\sqrt {S}}\approx {\begin{cases}(0.28a+0.89)\cdot 10^{n}&{\text{if }}a<10,\\(.089a+2.8)\cdot 10^{n}&{\text{if }}a\geq 10.\end{cases}}} The maximum absolute errors occur at the high points of the intervals, at a=10 and 100, and are 0.54 and 1.7 respectively. The maximum relative errors are at the endpoints of the intervals, at a=1, 10 and 100, and are 17% in all three cases. 17% or 0.17 is larger than 1/10, so the method yields less than a decimal digit of accuracy. ==== Hyperbolic estimates ==== In some cases, hyperbolic estimates may be efficacious, because a hyperbola is also a convex curve and may lie along an arc of y = x2 better than a line. Hyperbolic estimates are more computationally complex, because they necessarily require a floating division. A near-optimal hyperbolic approximation to x2 on the interval [ 1 , 100 ] {\displaystyle [1,100]} is y = 190/(10−x) − 20. Transposing, the square root is x = 10 − 190/(y+20). Thus for S = a ⋅ 10 2 n {\displaystyle S=a\cdot 10^{2n}} : S ≈ ( 10 − 190 a + 20 ) ⋅ 10 n {\displaystyle {\sqrt {S}}\approx \left(10-{\frac {190}{a+20}}\right)\cdot 10^{n}} The division need be accurate to only one decimal digit, because the estimate overall is only that accurate, and can be done mentally. This hyperbolic estimate is better on average than scalar or linear estimates. It has maximum absolute error of 1.58 at a = 100 and maximum relative error at a = 10, where the estimate of 3.67 is 16.0% higher than the root of 3.16. If instead one performed Newton–Raphson iterations beginning with an estimate of 10, it would take two iterations to get to 3.66, matching the hyperbolic estimate. For a more typical case like 75, the hyperbolic estimate of 8.00 is only 7.6% low, and 5 Newton–Raphson iterations starting at 75 would be required to obtain a more accurate result.
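Both decimal estimates above are easy to mechanize. In the sketch below, the normalization of S into a·10^(2n) with 1 ≤ a < 100 is written out as an explicit helper loop (an implementation detail assumed here, not part of the original text):

```python
def normalize_100(S):
    """Return (a, n) with S = a * 10**(2*n) and 1 <= a < 100."""
    a, n = float(S), 0
    while a >= 100.0:
        a /= 100.0
        n += 1
    while a < 1.0:
        a *= 100.0
        n -= 1
    return a, n

def sqrt_estimate_linear(S):
    """Single-line least-squares estimate: sqrt(S) ~ (a/10 + 1.2) * 10**n."""
    a, n = normalize_100(S)
    return (a / 10.0 + 1.2) * 10.0 ** n

def sqrt_estimate_hyperbolic(S):
    """Hyperbolic estimate: sqrt(S) ~ (10 - 190/(a + 20)) * 10**n."""
    a, n = normalize_100(S)
    return (10.0 - 190.0 / (a + 20.0)) * 10.0 ** n
```

For S = 75 the linear estimate gives 8.7 and the hyperbolic one gives exactly 8.0, against √75 ≈ 8.66, matching the worked figures in the text; across a spread of inputs the relative errors stay roughly within the 30% and high-teens-percent bounds discussed above.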
==== Arithmetic estimates ==== A method analogous to piece-wise linear approximation but using only arithmetic instead of algebraic equations, uses the multiplication tables in reverse: the square root of a number between 1 and 100 is between 1 and 10, so if we know 25 is a perfect square (5 × 5), and 36 is a perfect square (6 × 6), then the square root of a number greater than or equal to 25 but less than 36, begins with a 5. Similarly for numbers between other squares. This method will yield a correct first digit, but it is not accurate to one digit: the first digit of the square root of 35 for example, is 5, but the square root of 35 is almost 6. A better way is to divide the range into intervals halfway between the squares. So any number between 25 and halfway to 36, which is 30.5, estimate 5; any number greater than 30.5 up to 36, estimate 6. The procedure only requires a little arithmetic to find a boundary number in the middle of two products from the multiplication table. Here is a reference table of those boundaries:

  a is in the range    estimate k
  [1, 2.5)              1
  [2.5, 6.5)            2
  [6.5, 12.5)           3
  [12.5, 20.5)          4
  [20.5, 30.5)          5
  [30.5, 42.5)          6
  [42.5, 56.5)          7
  [56.5, 72.5)          8
  [72.5, 90.5)          9
  [90.5, 100)          10

The final operation is to multiply the estimate k by the power of ten divided by 2, so for S = a ⋅ 10 2 n {\displaystyle S=a\cdot 10^{2n}} , S ≈ k ⋅ 10 n {\displaystyle {\sqrt {S}}\approx k\cdot 10^{n}} The method implicitly yields one significant digit of accuracy, since it rounds to the best first digit. The method can be extended to 3 significant digits in most cases, by interpolating between the nearest squares bounding the operand.
If k 2 ≤ a < ( k + 1 ) 2 {\displaystyle k^{2}\leq a<(k+1)^{2}} , then a {\displaystyle {\sqrt {a}}} is approximately k plus a fraction, the difference between a and k² divided by the difference between the two squares: a ≈ k + R {\displaystyle {\sqrt {a}}\approx k+R} where R = ( a − k 2 ) ( k + 1 ) 2 − k 2 {\displaystyle R={\frac {(a-k^{2})}{(k+1)^{2}-k^{2}}}} The final operation, as above, is to multiply the result by the power of ten divided by 2; S = a ⋅ 10 n ≈ ( k + R ) ⋅ 10 n {\displaystyle {\sqrt {S}}={\sqrt {a}}\cdot 10^{n}\approx (k+R)\cdot 10^{n}} k is a decimal digit and R is a fraction that must be converted to decimal. It usually has only a single digit in the numerator, and one or two digits in the denominator, so the conversion to decimal can be done mentally. Example: find the square root of 75. 75 = 75 × 10^(2·0), so a is 75 and n is 0. From the multiplication tables, the square root of the mantissa must be 8 point something because a is between 8×8 = 64 and 9×9 = 81, so k is 8; something is the decimal representation of R. The fraction R is 75 − k² = 11, the numerator, and 81 − k² = 17, the denominator. 11/17 is a little less than 12/18 = 2/3 = .67, so guess .66 (it's okay to guess here, the error is very small). The final estimate is 8 + .66 = 8.66. √75 to three significant digits is 8.66, so the estimate is good to 3 significant digits. Not all such estimates using this method will be so accurate, but they will be close. === Binary estimates === When working in the binary numeral system (as computers do internally), by expressing S {\displaystyle S} as a × 2 2 n {\displaystyle a\times 2^{2n}} where 0.1 2 ≤ a < 10 2 {\displaystyle 0.1_{2}\leq a<10_{2}} , the square root S = a × 2 n {\displaystyle {\sqrt {S}}={\sqrt {a}}\times 2^{n}} can be estimated as S ≈ ( 0.485 + 0.485 ⋅ a ) ⋅ 2 n {\displaystyle {\sqrt {S}}\approx (0.485+0.485\cdot a)\cdot 2^{n}} which is the least-squares regression line to 3 significant digit coefficients.
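A C sketch of this binary estimate, using the standard frexp/ldexp decomposition to obtain the form a × 2^(2n) (the function name is an assumption, not from the article):

```c
#include <math.h>

/* Binary estimate: write S = a * 2^(2n) with 0.5 <= a < 2, then apply the
   least-squares line sqrt(S) ~= (0.485 + 0.485*a) * 2^n from the text.
   Assumes S > 0. */
double sqrt_bin_estimate(double S)
{
    int e;
    double a = frexp(S, &e);   /* S = a * 2^e with 0.5 <= a < 1 */
    if (e % 2 != 0) {          /* force an even exponent ... */
        a *= 2.0;              /* ... so that 1 <= a < 2 and e = 2n */
        e -= 1;
    }
    return (0.485 + 0.485 * a) * ldexp(1.0, e / 2);
}
```

At a = 1 this gives 0.97 (3% low), and at a = 2 it gives 1.455, reproducing the 0.0408 absolute error quoted in the text.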
a {\displaystyle {\sqrt {a}}} has maximum absolute error of 0.0408 at a {\displaystyle a} =2, and maximum relative error of 3.0% at a {\displaystyle a} =1. A computationally convenient rounded estimate (because the coefficients are powers of 2) is: S ≈ ( 0.5 + 0.5 ⋅ a ) ⋅ 2 n {\displaystyle {\sqrt {S}}\approx (0.5+0.5\cdot a)\cdot 2^{n}} which has maximum absolute error of 0.086 at 2 and maximum relative error of 6.1% at a = 0.5 and a = 2.0. For S = 125348 = 1 1110 1001 1010 0100 2 = 1.1110 1001 1010 0100 2 × 2 16 {\displaystyle S=125348=1\;1110\;1001\;1010\;0100_{2}=1.1110\;1001\;1010\;0100_{2}\times 2^{16}\,} , the binary approximation gives S ≈ ( 0.5 + 0.5 ⋅ a ) ⋅ 2 8 = 1.0111 0100 1101 0010 2 ⋅ 1 0000 0000 2 = 1.456 ⋅ 256 = 372.8 {\displaystyle {\sqrt {S}}\approx (0.5+0.5\cdot a)\cdot 2^{8}=1.0111\;0100\;1101\;0010_{2}\cdot 1\;0000\;0000_{2}=1.456\cdot 256=372.8} . 125348 = 354.0 {\displaystyle {\sqrt {125348}}=354.0} , so the estimate has an absolute error of 19 and relative error of 5.3%. The relative error is a little less than 1/24, so the estimate is good to 4+ bits. An estimate for a {\displaystyle a} good to 8 bits can be obtained by table lookup on the high 8 bits of a {\displaystyle a} , remembering that the high bit is implicit in most floating point representations, and the bottom bit of the 8 should be rounded. The table is 256 bytes of precomputed 8-bit square root values. For example, for the index 111011012 representing 1.851562510, the entry is 101011102 representing 1.35937510, the square root of 1.851562510 to 8 bit precision (2+ decimal digits). == Heron's method == The first explicit algorithm for approximating S {\displaystyle \ {\sqrt {S~}}\ } is known as Heron's method, after the first-century Greek mathematician Hero of Alexandria who described the method in his AD 60 work Metrica. 
This method is also called the Babylonian method (not to be confused with the Babylonian method for approximating hypotenuses), although there is no evidence that the method was known to Babylonians. Given a positive real number S {\displaystyle S} , let x0 > 0 be any positive initial estimate. Heron's method consists in iteratively computing x n + 1 = 1 2 ( x n + S x n ) , {\displaystyle x_{n+1}={\frac {1}{2}}\left(x_{n}+{\frac {S}{x_{n}}}\right),} until the desired accuracy is achieved. The sequence ( x 0 , x 1 , x 2 , x 3 , … ) {\displaystyle \ {\bigl (}\ x_{0},\ x_{1},\ x_{2},\ x_{3},\ \ldots \ {\bigr )}\ } defined by this equation converges to lim n → ∞ x n = S . {\displaystyle \ \lim _{n\to \infty }x_{n}={\sqrt {S~}}~.} This is equivalent to using Newton's method to solve x 2 − S = 0 {\displaystyle x^{2}-S=0} . This algorithm is quadratically convergent: the number of correct digits of x n {\displaystyle x_{n}} roughly doubles with each iteration. === Derivation === The basic idea is that if x {\displaystyle \ x\ } is an overestimate to the square root of a non-negative real number S {\displaystyle \ S\ } then S x {\displaystyle \ {\tfrac {\ S\ }{x}}\ } will be an underestimate, and vice versa, so the average of these two numbers may reasonably be expected to provide a better approximation (though the formal proof of that assertion depends on the inequality of arithmetic and geometric means that shows this average is always an overestimate of the square root, as noted in the article on square roots, thus assuring convergence). 
More precisely, if x {\displaystyle \ x\ } is our initial guess of S {\displaystyle \ {\sqrt {S~}}\ } and ε {\displaystyle \ \varepsilon \ } is the error in our estimate such that S = ( x + ε ) 2 , {\displaystyle \ S=\left(x+\varepsilon \right)^{2}\ ,} then we can expand the binomial as: ( x + ε ) 2 = x 2 + 2 x ε + ε 2 {\displaystyle \ {\bigl (}\ x+\varepsilon \ {\bigr )}^{2}=x^{2}+2x\varepsilon +\varepsilon ^{2}} and solve for the error term ε = S − x 2 2 x + ε ≈ S − x 2 2 x , {\displaystyle \varepsilon ={\frac {\ S-x^{2}\ }{\ 2x+\varepsilon \ }}\approx {\frac {\ S-x^{2}\ }{2x}}\ ,} if we suppose that ε ≪ x {\displaystyle \ \varepsilon \ll x~} Therefore, we can compensate for the error and update our old estimate as x + ε ≈ x + S − x 2 2 x = S + x 2 2 x = S x + x 2 ≡ x r e v i s e d . {\displaystyle \ x+\varepsilon \ \approx \ x+{\frac {\ S-x^{2}\ }{2x}}\ =\ {\frac {\ S+x^{2}\ }{2x}}\ =\ {\frac {\ {\frac {S}{\ x\ }}+x\ }{2}}\ \equiv \ x_{\mathsf {revised}}~.} Since the computed error was not exact, this is not the actual answer, but becomes our new guess to use in the next round of correction. The process of updating is iterated until desired accuracy is obtained. This algorithm works equally well in the p-adic numbers, but cannot be used to identify real square roots with p-adic square roots; one can, for example, construct a sequence of rational numbers by this method that converges to +3 in the reals, but to −3 in the 2-adics. 
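In code, the iteration reduces to a few lines; here is a minimal C sketch (the function name and the relative-change stopping rule are illustrative assumptions):

```c
#include <math.h>

/* Heron's (Babylonian) method: repeatedly average x with S/x.
   Converges quadratically for any positive starting guess x0;
   stops when the relative change drops below tol. */
double heron_sqrt(double S, double x0, double tol)
{
    double x = x0;
    for (;;) {
        double next = 0.5 * (x + S / x);
        if (fabs(next - x) <= tol * next)
            return next;
        x = next;
    }
}
```

Starting from the rough estimate 600 used in the worked example below, heron_sqrt(125348.0, 600.0, 1e-12) reaches 354.04519... in a handful of iterations.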
=== Example === To calculate S {\displaystyle {\sqrt {S\,}}} for S = 125348 {\displaystyle S=125348} to seven significant figures, use the rough estimation method above to get x 0 = 6 ⋅ 10 2 = 600 x 1 = 1 2 ( x 0 + S x 0 ) = 1 2 ( 600 .1 + 125348 600 ) = 404.457 ≈ 400 x 2 = 1 2 ( x 1 + S x 1 ) = 1 2 ( 400 .1 + 125348 400 ) = 356.685 ≈ 360 x 3 = 1 2 ( x 2 + S x 2 ) = 1 2 ( 360 .1 + 125348 360 ) = 354.094 ≈ 354.1 x 4 = 1 2 ( x 3 + S x 3 ) = 1 2 ( 354.1 + 125348 354.1 ) = 354.045199 {\displaystyle {\begin{alignedat}{5}x_{0}&=6\cdot 10^{2}&&&&=600\\[0.3em]x_{1}&={\frac {1}{2}}\left(x_{0}+{\frac {S}{x_{0}}}\right)&&={\frac {1}{2}}\left(600{\phantom {.1}}+{\frac {125348}{600}}\right)&&=404.457\approx 400\\[0.3em]x_{2}&={\frac {1}{2}}\left(x_{1}+{\frac {S}{x_{1}}}\right)&&={\frac {1}{2}}\left(400{\phantom {.1}}+{\frac {125348}{400}}\right)&&=356.685\approx 360\\[0.3em]x_{3}&={\frac {1}{2}}\left(x_{2}+{\frac {S}{x_{2}}}\right)&&={\frac {1}{2}}\left(360{\phantom {.1}}+{\frac {125348}{360}}\right)&&=354.094\approx 354.1\\[0.3em]x_{4}&={\frac {1}{2}}\left(x_{3}+{\frac {S}{x_{3}}}\right)&&={\frac {1}{2}}\left(354.1+{\frac {125348}{354.1}}\right)&&=354.045199\end{alignedat}}} Therefore 125348 ≈ 354.0452 {\displaystyle {\sqrt {\,125348\,}}\approx 354.0452} to seven significant figures. (The true value is 354.0451948551....) Notice that early iterations only needed to be computed to 1, 2 or 4 places to produce an accurate final answer. === Convergence === Suppose that x 0 > 0 a n d S > 0 . {\displaystyle \ x_{0}>0~~{\mathsf {and}}~~S>0~.} Then for any natural number n : x n > 0 . {\displaystyle \ n:x_{n}>0~.} Let the relative error in x n {\displaystyle \ x_{n}\ } be defined by ε n = x n S − 1 > − 1 {\displaystyle \ \varepsilon _{n}={\frac {~x_{n}\ }{\ {\sqrt {S~}}\ }}-1>-1\ } and thus x n = S ⋅ ( 1 + ε n ) . {\displaystyle \ x_{n}={\sqrt {S~}}\cdot \left(1+\varepsilon _{n}\right)~.} Then it can be shown that ε n + 1 = ε n 2 2 ( 1 + ε n ) ≥ 0 . 
{\displaystyle \ \varepsilon _{n+1}={\frac {\varepsilon _{n}^{2}}{2(1+\varepsilon _{n})}}\geq 0~.} And thus that ε n + 2 ≤ min { ε n + 1 2 2 , ε n + 1 2 } {\displaystyle \ \varepsilon _{n+2}\leq \min \left\{\ {\frac {\ \varepsilon _{n+1}^{2}\ }{2}},{\frac {\ \varepsilon _{n+1}\ }{2}}\ \right\}\ } and consequently that convergence is assured, and quadratic. ==== Worst case for convergence ==== If using the rough estimate above with the Babylonian method, then the least accurate cases in ascending order are as follows: S = 1 ; x 0 = 2 ; x 1 = 1.250 ; ε 1 = 0.250 . S = 10 ; x 0 = 2 ; x 1 = 3.500 ; ε 1 < 0.107 . S = 10 ; x 0 = 6 ; x 1 = 3.833 ; ε 1 < 0.213 . S = 100 ; x 0 = 6 ; x 1 = 11.333 ; ε 1 < 0.134 . {\displaystyle {\begin{aligned}S&=\ 1\ ;&x_{0}&=\ 2\ ;&x_{1}&=\ 1.250\ ;&\varepsilon _{1}&=\ 0.250~.\\S&=\ 10\ ;&x_{0}&=\ 2\ ;&x_{1}&=\ 3.500\ ;&\varepsilon _{1}&<\ 0.107~.\\S&=\ 10\ ;&x_{0}&=\ 6\ ;&x_{1}&=\ 3.833\ ;&\varepsilon _{1}&<\ 0.213~.\\S&=\ 100\ ;&x_{0}&=\ 6\ ;&x_{1}&=\ 11.333\ ;&\varepsilon _{1}&<\ 0.134~.\end{aligned}}} Thus in any case, ε 1 ≤ 2 − 2 . ε 2 < 2 − 5 < 10 − 1 . ε 3 < 2 − 11 < 10 − 3 . ε 4 < 2 − 23 < 10 − 6 . ε 5 < 2 − 47 < 10 − 14 . ε 6 < 2 − 95 < 10 − 28 . ε 7 < 2 − 191 < 10 − 57 . ε 8 < 2 − 383 < 10 − 115 . {\displaystyle {\begin{aligned}\varepsilon _{1}&\leq 2^{-2}.\\\varepsilon _{2}&<2^{-5}<10^{-1}~.\\\varepsilon _{3}&<2^{-11}<10^{-3}~.\\\varepsilon _{4}&<2^{-23}<10^{-6}~.\\\varepsilon _{5}&<2^{-47}<10^{-14}~.\\\varepsilon _{6}&<2^{-95}<10^{-28}~.\\\varepsilon _{7}&<2^{-191}<10^{-57}~.\\\varepsilon _{8}&<2^{-383}<10^{-115}~.\end{aligned}}} Rounding errors will slow the convergence. It is recommended to keep at least one extra digit beyond the desired accuracy of the x n {\displaystyle \ x_{n}\ } being calculated, to avoid significant round-off error. == Bakhshali method == This method for finding an approximation to a square root was described in an Ancient Indian manuscript, called the Bakhshali manuscript. 
It is algebraically equivalent to two iterations of Heron's method and thus quartically convergent, meaning that the number of correct digits of the approximation roughly quadruples with each iteration. The original presentation, using modern notation, is as follows: To calculate S {\displaystyle {\sqrt {S}}} , let x 0 2 {\displaystyle x_{0}^{2}} be the initial approximation to S {\displaystyle S} . Then, successively iterate as: a n = S − x n 2 2 x n , x n + 1 = x n + a n , x n + 2 = x n + 1 − a n 2 2 x n + 1 . {\displaystyle {\begin{aligned}a_{n}&={\frac {S-x_{n}^{2}}{2x_{n}}},\\x_{n+1}&=x_{n}+a_{n},\\x_{n+2}&=x_{n+1}-{\frac {a_{n}^{2}}{2x_{n+1}}}.\end{aligned}}} The values x n + 1 {\displaystyle x_{n+1}} and x n + 2 {\displaystyle x_{n+2}} are exactly the same as those computed by Heron's method. To see this, the second Heron's method step would compute x n + 2 = x n + 1 2 + S 2 x n + 1 = x n + 1 + S − x n + 1 2 2 x n + 1 {\displaystyle x_{n+2}={\frac {x_{n+1}^{2}+S}{2x_{n+1}}}=x_{n+1}+{\frac {S-x_{n+1}^{2}}{2x_{n+1}}}} and we can use the definitions of x n + 1 {\displaystyle x_{n+1}} and a n {\displaystyle a_{n}} to rearrange the numerator into: S − x n + 1 2 = S − ( x n + a n ) 2 = S − x n 2 − 2 x n a n − a n 2 = S − x n 2 − ( S − x n 2 ) − a n 2 = − a n 2 . {\displaystyle {\begin{aligned}S-x_{n+1}^{2}&=S-(x_{n}+a_{n})^{2}\\&=S-x_{n}^{2}-2x_{n}a_{n}-a_{n}^{2}\\&=S-x_{n}^{2}-(S-x_{n}^{2})-a_{n}^{2}\\&=-a_{n}^{2}.\end{aligned}}} This can be used to construct a rational approximation to the square root by beginning with an integer. If x 0 = N {\displaystyle x_{0}=N} is an integer chosen so N 2 {\displaystyle N^{2}} is close to S {\displaystyle S} , and d = S − N 2 {\displaystyle d=S-N^{2}} is the difference whose absolute value is minimized, then the first iteration can be written as: S ≈ N + d 2 N − d 2 8 N 3 + 4 N d = 8 N 4 + 8 N 2 d + d 2 8 N 3 + 4 N d = N 4 + 6 N 2 S + S 2 4 N 3 + 4 N S = N 2 ( N 2 + 6 S ) + S 2 4 N ( N 2 + S ) . 
{\displaystyle {\sqrt {S}}\approx N+{\frac {d}{2N}}-{\frac {d^{2}}{8N^{3}+4Nd}}={\frac {8N^{4}+8N^{2}d+d^{2}}{8N^{3}+4Nd}}={\frac {N^{4}+6N^{2}S+S^{2}}{4N^{3}+4NS}}={\frac {N^{2}(N^{2}+6S)+S^{2}}{4N(N^{2}+S)}}.} The Bakhshali method can be generalized to the computation of an arbitrary root, including fractional roots. One might think the second half of the Bakhshali method could be used as a simpler form of Heron's iteration and used repeatedly, e.g. a n + 1 = − a n 2 2 x n + 1 , x n + 2 = x n + 1 + a n + 1 , a n + 2 = − a n + 1 2 2 x n + 2 , x n + 3 = x n + 2 + a n + 2 , etc. {\displaystyle {\begin{aligned}a_{n+1}&={\frac {-a_{n}^{2}}{2x_{n+1}}},&x_{n+2}&=x_{n+1}+a_{n+1},\\a_{n+2}&={\frac {-a_{n+1}^{2}}{2x_{n+2}}},&x_{n+3}&=x_{n+2}+a_{n+2},{\text{ etc.}}\end{aligned}}} however, this is numerically unstable. Without any reference to the original input value S {\displaystyle S} , the accuracy is limited by that of the original computation of a n {\displaystyle a_{n}} , and that rapidly becomes inadequate. 
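A C sketch of the Bakhshali iteration as presented above (the function name and the fixed iteration count are illustrative assumptions):

```c
/* Bakhshali iteration: each pass is algebraically equivalent to two steps
   of Heron's method, so the number of correct digits roughly quadruples
   per pass.  x0 must be a positive initial guess. */
double bakhshali_sqrt(double S, double x0, int iterations)
{
    double x = x0;
    for (int i = 0; i < iterations; i++) {
        double a  = (S - x * x) / (2.0 * x);  /* a_n   */
        double x1 = x + a;                    /* x_n+1 */
        x = x1 - (a * a) / (2.0 * x1);        /* x_n+2 */
    }
    return x;
}
```

Two passes from x0 = 600 already give √125348 to about six significant digits (the worked example below reaches a similar result while rounding intermediate values for mental arithmetic).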
=== Example === Using the same example S = 125348 {\displaystyle S=125348} as in the Heron's method example, the first iteration gives x 0 = 600 a 0 = 125348 − 600 2 2 × 600 = − 195.5433 ≈ − 200 x 1 = 600 + ( − 200 ) = 400 x 2 = 400 − ( − 200 ) 2 2 × 400 = 350 {\displaystyle {\begin{alignedat}{3}x_{0}&=600\\[1ex]a_{0}&={\frac {125348-600^{2}}{2\times 600}}&&=-195.5433\approx -200\\[1ex]x_{1}&=600+(-200)&&={\phantom {-}}400\\[1ex]x_{2}&=400-{\frac {(-200)^{2}}{2\times 400}}&&={\phantom {-}}350\end{alignedat}}} Likewise the second iteration gives a 2 = 125348 − 350 2 2 × 350 = 4.06857 x 3 = 350 + 4.06857 = 354.06857 x 4 = 354.06857 − 4.06857 2 2 × 354.06857 = 354.045194 {\displaystyle {\begin{alignedat}{3}a_{2}&={\frac {125348-350^{2}}{2\times 350}}&&={\phantom {00}}4.06857\\[1ex]x_{3}&=350+4.06857&&=354.06857\\[1ex]x_{4}&=354.06857-{\frac {4.06857^{2}}{2\times 354.06857}}&&=354.045194\end{alignedat}}} Unlike in Heron's method, x 3 {\displaystyle x_{3}} must be computed to 8 digits because the formula for x 4 {\displaystyle x_{4}} does not correct any error in x 3 {\displaystyle x_{3}} . == Digit-by-digit calculation == This is a method to find each digit of the square root in a sequence. This method is based on the binomial theorem and is basically an inverse algorithm solving ( x + y ) 2 = x 2 + 2 x y + y 2 {\displaystyle (x+y)^{2}=x^{2}+2xy+y^{2}} . It is slower than the Babylonian method, but it has several advantages: It can be easier for manual calculations. Every digit of the root found is known to be correct, i.e., it does not have to be changed later. If the square root has an expansion that terminates, the algorithm terminates after the last digit is found. Thus, it can be used to check whether a given integer is a square number. The algorithm works for any base, and naturally, the way it proceeds depends on the base chosen. Disadvantages are: It becomes unmanageable for higher roots.
It does not tolerate inaccurate guesses or sub-calculations; such errors lead to every following digit of the result being wrong, unlike with Newton's method, which self-corrects any approximation errors. While digit-by-digit calculation is efficient enough on paper, it is much too expensive for software implementations. Each iteration involves larger numbers, requiring more memory, but only advances the answer by one correct digit. Thus the algorithm takes more time for each additional digit. Napier's bones include an aid for the execution of this algorithm. The shifting nth root algorithm is a generalization of this method. === Basic principle === First, consider the case of finding the square root of a number S, that is, the square of a base-10 two-digit number XY, where X is the tens digit and Y is the units digit. Specifically: S = ( 10 X + Y ) 2 = 100 X 2 + 20 X Y + Y 2 . {\displaystyle S=\left(10X+Y\right)^{2}=100X^{2}+20XY+Y^{2}.} S will consist of 3 or 4 decimal digits. Now to start the digit-by-digit algorithm, we split the digits of S in two groups of two digits, starting from the right. This means that the first group will be of 1 or 2 digits. Then we determine the value of X as the largest digit such that X² is less than or equal to the first group. We then compute the difference between the first group and X² and start the second iteration by concatenating the second group to it. This is equivalent to subtracting 100 X 2 {\displaystyle 100X^{2}} from S, and we're left with S ′ = 20 X Y + Y 2 {\displaystyle S'=20XY+Y^{2}} . We divide S' by 10, then divide it by 2X and keep the integer part to try and guess Y. We concatenate 2X with the tentative Y and multiply it by Y.
If our guess is correct, this is equivalent to computing: ( 10 ( 2 X ) + Y ) Y = 20 X Y + Y 2 = S ′ , {\displaystyle (10(2X)+Y)Y=20XY+Y^{2}=S',} and so the remainder, that is the difference between S' and the result, is zero; if the result is higher than S' , we lower our guess by 1 and try again until the remainder is 0. Since this is a simple case where the answer is a perfect square root XY, the algorithm stops here. The same idea can be extended to any arbitrary square root computation next. Suppose we are able to find the square root of S by expressing it as a sum of n positive numbers such that S = ( a 1 + a 2 + a 3 + ⋯ + a n ) 2 . {\displaystyle S=\left(a_{1}+a_{2}+a_{3}+\dots +a_{n}\right)^{2}.} By repeatedly applying the basic identity ( x + y ) 2 = x 2 + 2 x y + y 2 , {\displaystyle (x+y)^{2}=x^{2}+2xy+y^{2},} the right-hand-side term can be expanded as ( a 1 + a 2 + a 3 + ⋯ + a n ) 2 = a 1 2 + 2 a 1 a 2 + a 2 2 + 2 ( a 1 + a 2 ) a 3 + a 3 2 + ⋯ + a n − 1 2 + 2 ( ∑ i = 1 n − 1 a i ) a n + a n 2 = a 1 2 + [ 2 a 1 + a 2 ] a 2 + [ 2 ( a 1 + a 2 ) + a 3 ] a 3 + ⋯ + [ 2 ( ∑ i = 1 n − 1 a i ) + a n ] a n . {\displaystyle {\begin{aligned}&(a_{1}+a_{2}+a_{3}+\dotsb +a_{n})^{2}\\=&\,a_{1}^{2}+2a_{1}a_{2}+a_{2}^{2}+2(a_{1}+a_{2})a_{3}+a_{3}^{2}+\dots +a_{n-1}^{2}+2\left(\sum _{i=1}^{n-1}a_{i}\right)a_{n}+a_{n}^{2}\\=&\,a_{1}^{2}+[2a_{1}+a_{2}]a_{2}+[2(a_{1}+a_{2})+a_{3}]a_{3}+\dots +\left[2\left(\sum _{i=1}^{n-1}a_{i}\right)+a_{n}\right]a_{n}.\end{aligned}}} This expression allows us to find the square root by sequentially guessing the values of a i {\displaystyle a_{i}} s. Suppose that the numbers a 1 , … , a m − 1 {\displaystyle a_{1},\ldots ,a_{m-1}} have already been guessed, then the m-th term of the right-hand-side of the above summation is given by Y m = [ 2 P m − 1 + a m ] a m , {\displaystyle Y_{m}=\left[2P_{m-1}+a_{m}\right]a_{m},} where P m − 1 = ∑ i = 1 m − 1 a i {\textstyle P_{m-1}=\sum _{i=1}^{m-1}a_{i}} is the approximate square root found so far. 
Now each new guess a m {\displaystyle a_{m}} should satisfy the recursion X m = X m − 1 − Y m , {\displaystyle X_{m}=X_{m-1}-Y_{m},} where X m {\displaystyle X_{m}} is the sum of all the terms after Y m {\displaystyle Y_{m}} , i.e. the remainder, such that X m ≥ 0 {\displaystyle X_{m}\geq 0} for all 1 ≤ m ≤ n , {\displaystyle 1\leq m\leq n,} with initialization X 0 = S . {\displaystyle X_{0}=S.} When X n = 0 , {\displaystyle X_{n}=0,} the exact square root has been found; if not, then the sum of the a i {\displaystyle a_{i}} s gives a suitable approximation of the square root, with X n {\displaystyle X_{n}} being the approximation error. For example, in the decimal number system we have S = ( a 1 ⋅ 10 n − 1 + a 2 ⋅ 10 n − 2 + ⋯ + a n − 1 ⋅ 10 + a n ) 2 , {\displaystyle S=\left(a_{1}\cdot 10^{n-1}+a_{2}\cdot 10^{n-2}+\cdots +a_{n-1}\cdot 10+a_{n}\right)^{2},} where 10 n − i {\displaystyle 10^{n-i}} are place holders and the coefficients a i ∈ { 0 , 1 , 2 , … , 9 } {\displaystyle a_{i}\in \{0,1,2,\ldots ,9\}} . At any m-th stage of the square root calculation, the approximate root found so far, P m − 1 {\displaystyle P_{m-1}} and the summation term Y m {\displaystyle Y_{m}} are given by P m − 1 = ∑ i = 1 m − 1 a i ⋅ 10 n − i = 10 n − m + 1 ∑ i = 1 m − 1 a i ⋅ 10 m − i − 1 , {\displaystyle P_{m-1}=\sum _{i=1}^{m-1}a_{i}\cdot 10^{n-i}=10^{n-m+1}\sum _{i=1}^{m-1}a_{i}\cdot 10^{m-i-1},} Y m = [ 2 P m − 1 + a m ⋅ 10 n − m ] a m ⋅ 10 n − m = [ 20 ∑ i = 1 m − 1 a i ⋅ 10 m − i − 1 + a m ] a m ⋅ 10 2 ( n − m ) . {\displaystyle Y_{m}=\left[2P_{m-1}+a_{m}\cdot 10^{n-m}\right]a_{m}\cdot 10^{n-m}=\left[20\sum _{i=1}^{m-1}a_{i}\cdot 10^{m-i-1}+a_{m}\right]a_{m}\cdot 10^{2(n-m)}.} Here since the place value of Y m {\displaystyle Y_{m}} is an even power of 10, we only need to work with the pair of most significant digits of the remainder X m − 1 {\displaystyle X_{m-1}} , whose first term is Y m {\displaystyle Y_{m}} , at any m-th stage. The section below codifies this procedure. 
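Before specializing to hand calculation, the decimal recursion above can be illustrated with a compact integer sketch in C (the function name is an assumption; the base-10 section below describes the same procedure for paper-and-pencil use):

```c
/* Decimal digit-by-digit integer square root: bring down digit pairs,
   and for each pair find the largest digit x with x*(20*p + x) <= c,
   where p is the root found so far.  Returns floor(sqrt(s)). */
unsigned long isqrt_decimal(unsigned long s)
{
    unsigned long p = 0, rem = 0, pow100 = 1;
    while (pow100 <= s / 100)        /* highest power of 100 <= s */
        pow100 *= 100;
    while (pow100 > 0) {
        rem = rem * 100 + (s / pow100) % 100;  /* bring down next pair: c */
        unsigned long x = 9;
        while (x * (20 * p + x) > rem)         /* largest valid digit */
            x--;
        rem -= x * (20 * p + x);               /* subtract y = x(20p + x) */
        p = 10 * p + x;
        pow100 /= 100;
    }
    return p;   /* rem holds the final remainder X_n */
}
```

Fractional inputs can be handled by first scaling by an even power of 10, mirroring the pairing of digits around the decimal point described below.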
It is obvious that a similar method can be used to compute the square root in number systems other than the decimal number system. For instance, finding the digit-by-digit square root in the binary number system is quite efficient since the value of a i {\displaystyle a_{i}} is searched from a smaller set of binary digits {0,1}. This makes the computation faster since at each stage the value of Y m {\displaystyle Y_{m}} is either Y m = 0 {\displaystyle Y_{m}=0} for a m = 0 {\displaystyle a_{m}=0} or Y m = 2 P m − 1 + 1 {\displaystyle Y_{m}=2P_{m-1}+1} for a m = 1 {\displaystyle a_{m}=1} . The fact that we have only two possible options for a m {\displaystyle a_{m}} also makes the process of deciding the value of a m {\displaystyle a_{m}} at m-th stage of calculation easier. This is because we only need to check if Y m ≤ X m − 1 {\displaystyle Y_{m}\leq X_{m-1}} for a m = 1. {\displaystyle a_{m}=1.} If this condition is satisfied, then we take a m = 1 {\displaystyle a_{m}=1} ; if not then a m = 0. {\displaystyle a_{m}=0.} Also, the fact that multiplication by 2 is done by left bit-shifts helps in the computation. === Decimal (base 10) === Write the original number in decimal form. The numbers are written similar to the long division algorithm, and, as in long division, the root will be written on the line above. Now separate the digits into pairs, starting from the decimal point and going both left and right. The decimal point of the root will be above the decimal point of the square. One digit of the root will appear above each pair of digits of the square. Beginning with the left-most pair of digits, do the following procedure for each pair: Starting on the left, bring down the most significant (leftmost) pair of digits not yet used (if all the digits have been used, write "00") and write them to the right of the remainder from the previous step (on the first step, there will be no remainder). In other words, multiply the remainder by 100 and add the two digits. 
This will be the current value c. Find p, y and x, as follows: Let p be the part of the root found so far, ignoring any decimal point. (For the first step, p = 0.) Determine the greatest digit x such that x ( 20 p + x ) ≤ c {\displaystyle x(20p+x)\leq c} . We will use a new variable y = x(20p + x). Note: 20p + x is simply twice p, with the digit x appended to the right. Note: x can be found by guessing what c/(20·p) is and doing a trial calculation of y, then adjusting x upward or downward as necessary. Place the digit x {\displaystyle x} as the next digit of the root, i.e., above the two digits of the square you just brought down. Thus the next p will be the old p times 10 plus x. Subtract y from c to form a new remainder. If the remainder is zero and there are no more digits to bring down, then the algorithm has terminated. Otherwise go back to step 1 for another iteration. ==== Examples ==== Find the square root of 152.2756.

      1  2.  3  4
     /
   \/  01 52.27 56
       01               1*1 <= 1 < 2*2                 x = 1
       01               y = x*x = 1*1 = 1
       00 52            22*2 <= 52 < 23*3              x = 2
       00 44            y = (20+x)*x = 22*2 = 44
          08 27         243*3 <= 827 < 244*4           x = 3
          07 29         y = (240+x)*x = 243*3 = 729
             98 56      2464*4 <= 9856 < 2465*5        x = 4
             98 56      y = (2460+x)*x = 2464*4 = 9856
             00 00      Algorithm terminates: Answer = 12.34

=== Binary numeral system (base 2) === This section uses the formalism from the digit-by-digit calculation section above, with the slight variation that we let N 2 = ( a n + ⋯ + a 0 ) 2 {\displaystyle N^{2}=(a_{n}+\dotsb +a_{0})^{2}} , with each a m = 2 m {\displaystyle a_{m}=2^{m}} or a m = 0 {\displaystyle a_{m}=0} . We iterate all 2 m {\displaystyle 2^{m}} , from 2 n {\displaystyle 2^{n}} down to 2 0 {\displaystyle 2^{0}} , and build up an approximate solution P m = a n + a n − 1 + … + a m {\displaystyle P_{m}=a_{n}+a_{n-1}+\ldots +a_{m}} , the sum of all a i {\displaystyle a_{i}} for which we have determined the value.
To determine if a m {\displaystyle a_{m}} equals 2 m {\displaystyle 2^{m}} or 0 {\displaystyle 0} , we let P m = P m + 1 + 2 m {\displaystyle P_{m}=P_{m+1}+2^{m}} . If P m 2 ≤ N 2 {\displaystyle P_{m}^{2}\leq N^{2}} (i.e. the square of our approximate solution including 2 m {\displaystyle 2^{m}} does not exceed the target square) then a m = 2 m {\displaystyle a_{m}=2^{m}} , otherwise a m = 0 {\displaystyle a_{m}=0} and P m = P m + 1 {\displaystyle P_{m}=P_{m+1}} . To avoid squaring P m {\displaystyle P_{m}} in each step, we store the difference X m = N 2 − P m 2 {\displaystyle X_{m}=N^{2}-P_{m}^{2}} and incrementally update it by setting X m = X m + 1 − Y m {\displaystyle X_{m}=X_{m+1}-Y_{m}} with Y m = P m 2 − P m + 1 2 = 2 P m + 1 a m + a m 2 {\displaystyle Y_{m}=P_{m}^{2}-P_{m+1}^{2}=2P_{m+1}a_{m}+a_{m}^{2}} . Initially, we set a n = P n = 2 n {\displaystyle a_{n}=P_{n}=2^{n}} for the largest n {\displaystyle n} with ( 2 n ) 2 = 4 n ≤ N 2 {\displaystyle (2^{n})^{2}=4^{n}\leq N^{2}} . 
As an extra optimization, we store P m + 1 2 m + 1 {\displaystyle P_{m+1}2^{m+1}} and ( 2 m ) 2 {\displaystyle (2^{m})^{2}} , the two terms of Y m {\displaystyle Y_{m}} in case that a m {\displaystyle a_{m}} is nonzero, in separate variables c m {\displaystyle c_{m}} , d m {\displaystyle d_{m}} : c m = P m + 1 2 m + 1 {\displaystyle c_{m}=P_{m+1}2^{m+1}} d m = ( 2 m ) 2 {\displaystyle d_{m}=(2^{m})^{2}} Y m = { c m + d m if a m = 2 m 0 if a m = 0 {\displaystyle Y_{m}={\begin{cases}c_{m}+d_{m}&{\text{if }}a_{m}=2^{m}\\0&{\text{if }}a_{m}=0\end{cases}}} c m {\displaystyle c_{m}} and d m {\displaystyle d_{m}} can be efficiently updated in each step: c m − 1 = P m 2 m = ( P m + 1 + a m ) 2 m = P m + 1 2 m + a m 2 m = { c m / 2 + d m if a m = 2 m c m / 2 if a m = 0 {\displaystyle c_{m-1}=P_{m}2^{m}=(P_{m+1}+a_{m})2^{m}=P_{m+1}2^{m}+a_{m}2^{m}={\begin{cases}c_{m}/2+d_{m}&{\text{if }}a_{m}=2^{m}\\c_{m}/2&{\text{if }}a_{m}=0\end{cases}}} d m − 1 = d m 4 {\displaystyle d_{m-1}={\frac {d_{m}}{4}}} Note that: c − 1 = P 0 2 0 = P 0 = N , {\displaystyle c_{-1}=P_{0}2^{0}=P_{0}=N,} which is the final result returned in the function below. An implementation of this algorithm in C: Faster algorithms, in binary and decimal or any other base, can be realized by using lookup tables—in effect trading more storage space for reduced run time. == Exponential identity == Pocket calculators typically implement good routines to compute the exponential function and the natural logarithm, and then compute the square root of S using the identity found using the properties of logarithms ( ln ⁡ x n = n ln ⁡ x {\displaystyle \ln x^{n}=n\ln x} ) and exponentials ( e ln ⁡ x = x {\displaystyle e^{\ln x}=x} ): S = e 1 2 ln ⁡ S . {\displaystyle {\sqrt {S}}=e^{{\frac {1}{2}}\ln S}.} The denominator in the fraction corresponds to the nth root. In the case above the denominator is 2, hence the equation specifies that the square root is to be found. 
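The C listing referred to above is not reproduced in this extract; the following is a standard formulation of the binary digit-by-digit method with the c and d variables as described (a sketch for 32-bit unsigned integers):

```c
#include <stdint.h>

/* Binary digit-by-digit square root: returns floor(sqrt(n)).
   x holds the remainder X_m, c holds c_m = P_{m+1} * 2^(m+1),
   and d holds d_m = (2^m)^2, as in the description above. */
uint32_t int_sqrt(uint32_t n)
{
    uint32_t x = n;            /* running remainder */
    uint32_t c = 0;            /* c_m */
    uint32_t d = 1u << 30;     /* d_m: start from the highest power of 4 */

    while (d > n)              /* find the largest 4^m not exceeding n */
        d >>= 2;

    while (d != 0) {
        if (x >= c + d) {      /* the bit 2^m belongs in the root */
            x -= c + d;        /* X_m = X_{m+1} - Y_m */
            c = (c >> 1) + d;  /* c_{m-1} = c_m/2 + d_m */
        } else {
            c >>= 1;           /* c_{m-1} = c_m/2 */
        }
        d >>= 2;               /* d_{m-1} = d_m/4 */
    }
    return c;                  /* c_{-1} = P_0 = floor(sqrt(n)) */
}
```

Note that the loop never squares the partial root: each step costs only shifts, adds, and a comparison, which is what makes the method attractive in hardware and on small processors.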
The same identity is used when computing square roots with logarithm tables or slide rules. == A two-variable iterative method == This method is applicable for finding the square root of 0 < S < 3 {\displaystyle 0<S<3\,\!} and converges best for S ≈ 1 {\displaystyle S\approx 1} . This, however, is no real limitation for a computer-based calculation, as in base 2 floating-point and fixed-point representations, it is trivial to multiply S {\displaystyle S\,\!} by an integer power of 4, and therefore S {\displaystyle {\sqrt {S}}} by the corresponding power of 2, by changing the exponent or by shifting, respectively. Therefore, S {\displaystyle S\,\!} can be moved to the range 1 2 ≤ S < 2 {\textstyle {\tfrac {1}{2}}\leq S<2} . Moreover, the following method does not employ general divisions, but only additions, subtractions, multiplications, and divisions by powers of two, which are again trivial to implement. A disadvantage of the method is that numerical errors accumulate, in contrast to single variable iterative methods such as the Babylonian one. The initialization step of this method is a 0 = S c 0 = S − 1 {\displaystyle {\begin{aligned}a_{0}&=S\\c_{0}&=S-1\end{aligned}}} while the iterative steps read a n + 1 = a n − a n c n / 2 c n + 1 = c n 2 ( c n − 3 ) / 4 {\displaystyle {\begin{aligned}a_{n+1}&=a_{n}-a_{n}c_{n}/2\\c_{n+1}&=c_{n}^{2}(c_{n}-3)/4\end{aligned}}} Then, a n → S {\displaystyle a_{n}\to {\sqrt {S}}} (while c n → 0 {\displaystyle c_{n}\to 0} ). The convergence of c n {\displaystyle c_{n}\,\!} , and therefore also of a n {\displaystyle a_{n}\,\!} , is quadratic. The proof of the method is rather easy. First, rewrite the iterative definition of c n {\displaystyle c_{n}} as 1 + c n + 1 = ( 1 + c n ) ( 1 − 1 2 c n ) 2 . 
{\displaystyle 1+c_{n+1}=(1+c_{n})(1-{\tfrac {1}{2}}c_{n})^{2}.} Then it is straightforward to prove by induction that S ( 1 + c n ) = a n 2 {\displaystyle S(1+c_{n})=a_{n}^{2}} and therefore the convergence of a n {\displaystyle a_{n}\,\!} to the desired result S {\displaystyle {\sqrt {S}}} is ensured by the convergence of c n {\displaystyle c_{n}\,\!} to 0, which in turn follows from − 1 < c 0 < 2 {\displaystyle -1<c_{0}<2\,\!} . This method was developed around 1950 by M. V. Wilkes, D. J. Wheeler and S. Gill for use on EDSAC, one of the first electronic computers. The method was later generalized, allowing the computation of non-square roots. == Iterative methods for reciprocal square roots == The following are iterative methods for finding the reciprocal square root of S which is 1 / S {\displaystyle 1/{\sqrt {S}}} . Once it has been found, find S {\displaystyle {\sqrt {S}}} by simple multiplication: S = S ⋅ ( 1 / S ) {\displaystyle {\sqrt {S}}=S\cdot (1/{\sqrt {S}})} . These iterations involve only multiplication, and not division. They are therefore faster than the Babylonian method. However, they are not stable. If the initial value is not close to the reciprocal square root, the iterations will diverge away from it rather than converge to it. It can therefore be advantageous to perform an iteration of the Babylonian method on a rough estimate before starting to apply these methods. Applying Newton's method to the equation ( 1 / x 2 ) − S = 0 {\displaystyle (1/x^{2})-S=0} produces a method that converges quadratically using three multiplications per step: x n + 1 = x n 2 ⋅ ( 3 − S ⋅ x n 2 ) = x n ⋅ ( 3 2 − S 2 ⋅ x n 2 ) . {\displaystyle x_{n+1}={\frac {x_{n}}{2}}\cdot (3-S\cdot x_{n}^{2})=x_{n}\cdot \left({\frac {3}{2}}-{\frac {S}{2}}\cdot x_{n}^{2}\right).} Another iteration is obtained by Halley's method, which is the Householder's method of order two. 
This converges cubically, but involves five multiplications per iteration: y n = S ⋅ x n 2 , {\displaystyle y_{n}=S\cdot x_{n}^{2},} and x n + 1 = x n 8 ⋅ ( 15 − y n ⋅ ( 10 − 3 ⋅ y n ) ) = x n ⋅ ( 15 8 − y n ⋅ ( 10 8 − 3 8 ⋅ y n ) ) . {\displaystyle x_{n+1}={\frac {x_{n}}{8}}\cdot (15-y_{n}\cdot (10-3\cdot y_{n}))=x_{n}\cdot \left({\frac {15}{8}}-y_{n}\cdot \left({\frac {10}{8}}-{\frac {3}{8}}\cdot y_{n}\right)\right).} If doing fixed-point arithmetic, the multiplication by 3 and division by 8 can be implemented using shifts and adds. If using floating-point, Halley's method can be reduced to four multiplications per iteration by precomputing 3 / 8 S {\textstyle {\sqrt {3/8}}S} and adjusting all the other constants to compensate: y n = 3 8 S ⋅ x n 2 , {\displaystyle y_{n}={\sqrt {\frac {3}{8}}}S\cdot x_{n}^{2},} and x n + 1 = x n ⋅ ( 15 8 − y n ⋅ ( 25 6 − y n ) ) . {\displaystyle x_{n+1}=x_{n}\cdot \left({\frac {15}{8}}-y_{n}\cdot \left({\sqrt {\frac {25}{6}}}-y_{n}\right)\right).} === Goldschmidt's algorithm === Goldschmidt's algorithm is an extension of Goldschmidt division, named after Robert Elliot Goldschmidt, which can be used to calculate square roots. Some computers use Goldschmidt's algorithm to simultaneously calculate S {\displaystyle {\sqrt {S}}} and 1 / S {\displaystyle 1/{\sqrt {S}}} . Goldschmidt's algorithm finds S {\displaystyle {\sqrt {S}}} faster than Newton-Raphson iteration on a computer with a fused multiply–add instruction and either a pipelined floating-point unit or two independent floating-point units.
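Before turning to the details of Goldschmidt's algorithm, the Newton and Halley reciprocal-square-root iterations above can be sketched in code. This is a minimal illustration in plain Python floats (the function names are ours, not from any library); as noted above, both iterations diverge unless the starting value is already close to 1/√S.

```python
def rsqrt_newton(S, x, steps=5):
    # Newton step for (1/x^2) - S = 0: quadratic convergence,
    # three multiplications per step and no division.
    for _ in range(steps):
        x = x * (1.5 - 0.5 * S * x * x)
    return x

def rsqrt_halley(S, x, steps=4):
    # Halley (Householder order-2) step: cubic convergence,
    # five multiplications per step.
    for _ in range(steps):
        y = S * x * x
        x = (x / 8.0) * (15.0 - y * (10.0 - 3.0 * y))
    return x

# Both need a starting value near 1/sqrt(S); a poor start diverges.
approx = rsqrt_newton(2.0, 0.7)   # ≈ 0.7071067811865475
```

√S is then recovered by the single multiplication S · (1/√S), as described above.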
The first way of writing Goldschmidt's algorithm begins b 0 = S {\displaystyle b_{0}=S} Y 0 ≈ 1 / S {\displaystyle Y_{0}\approx 1/{\sqrt {S}}} (typically using a table lookup) y 0 = Y 0 {\displaystyle y_{0}=Y_{0}} x 0 = S y 0 {\displaystyle x_{0}=Sy_{0}} and iterates b n + 1 = b n Y n 2 Y n + 1 = 1 2 ( 3 − b n + 1 ) x n + 1 = x n Y n + 1 y n + 1 = y n Y n + 1 {\displaystyle {\begin{aligned}b_{n+1}&=b_{n}Y_{n}^{2}\\Y_{n+1}&={\tfrac {1}{2}}(3-b_{n+1})\\x_{n+1}&=x_{n}Y_{n+1}\\y_{n+1}&=y_{n}Y_{n+1}\end{aligned}}} until b i {\displaystyle b_{i}} is sufficiently close to 1, or a fixed number of iterations. The iterations converge to lim n → ∞ x n = S , {\displaystyle \lim _{n\to \infty }x_{n}={\sqrt {S}},} and lim n → ∞ y n = 1 / S . {\displaystyle \lim _{n\to \infty }y_{n}=1/{\sqrt {S}}.} Note that it is possible to omit either x n {\displaystyle x_{n}} or y n {\displaystyle y_{n}} from the computation, and if both are desired then x n = S y n {\displaystyle x_{n}=Sy_{n}} may be used at the end rather than computing it in each iteration. A second form, using fused multiply-add operations, begins y 0 ≈ 1 / S {\displaystyle y_{0}\approx 1/{\sqrt {S}}} (typically using a table lookup) x 0 = S y 0 {\displaystyle x_{0}=Sy_{0}} h 0 = 1 2 y 0 {\displaystyle h_{0}={\tfrac {1}{2}}y_{0}} and iterates r n = 0.5 − x n h n x n + 1 = x n + x n r n h n + 1 = h n + h n r n {\displaystyle {\begin{aligned}r_{n}&=0.5-x_{n}h_{n}\\x_{n+1}&=x_{n}+x_{n}r_{n}\\h_{n+1}&=h_{n}+h_{n}r_{n}\end{aligned}}} until r i {\displaystyle r_{i}} is sufficiently close to 0, or a fixed number of iterations. This converges to lim n → ∞ x n = S , {\displaystyle \lim _{n\to \infty }x_{n}={\sqrt {S}},} and lim n → ∞ 2 h n = 1 / S . {\displaystyle \lim _{n\to \infty }2h_{n}=1/{\sqrt {S}}.} == Taylor series == If N is an approximation to S {\displaystyle {\sqrt {S}}} , a better approximation can be found by using the Taylor series of the square root function: N 2 + d = N ∑ n = 0 ∞ ( − 1 ) n ( 2 n ) !
( 1 − 2 n ) n ! 2 4 n d n N 2 n = N ( 1 + d 2 N 2 − d 2 8 N 4 + d 3 16 N 6 − 5 d 4 128 N 8 + ⋯ ) {\displaystyle {\sqrt {N^{2}+d}}=N\sum _{n=0}^{\infty }{\frac {(-1)^{n}(2n)!}{(1-2n)n!^{2}4^{n}}}{\frac {d^{n}}{N^{2n}}}=N\left(1+{\frac {d}{2N^{2}}}-{\frac {d^{2}}{8N^{4}}}+{\frac {d^{3}}{16N^{6}}}-{\frac {5d^{4}}{128N^{8}}}+\cdots \right)} As an iterative method, the order of convergence is equal to the number of terms used. With two terms, it is identical to the Babylonian method. With three terms, each iteration takes almost as many operations as the Bakhshali approximation, but converges more slowly. Therefore, this is not a particularly efficient way of calculation. To maximize the rate of convergence, choose N so that | d | N 2 {\displaystyle {\frac {|d|}{N^{2}}}\,} is as small as possible. == Continued fraction expansion == The continued fraction representation of a real number can be used instead of its decimal or binary expansion and this representation has the property that the square root of any rational number (which is not already a perfect square) has a periodic, repeating expansion, similar to how rational numbers have repeating expansions in the decimal notation system. Quadratic irrationals (numbers of the form a + b c {\displaystyle {\frac {a+{\sqrt {b}}}{c}}} , where a, b and c are integers), and in particular, square roots of integers, have periodic continued fractions. Sometimes what is desired is finding not the numerical value of a square root, but rather its continued fraction expansion, and hence its rational approximation. Let S be the positive number for which we are required to find the square root. Then assuming a to be a number that serves as an initial guess and r to be the remainder term, we can write S = a 2 + r . {\displaystyle S=a^{2}+r.} Since we have S − a 2 = ( S + a ) ( S − a ) = r {\displaystyle S-a^{2}=({\sqrt {S}}+a)({\sqrt {S}}-a)=r} , we can express the square root of S as S = a + r a + S . 
{\displaystyle {\sqrt {S}}=a+{\frac {r}{a+{\sqrt {S}}}}.} By applying this expression for S {\displaystyle {\sqrt {S}}} to the denominator term of the fraction, we have: S = a + r a + ( a + r a + S ) = a + r 2 a + r a + S . {\displaystyle {\sqrt {S}}=a+{\frac {r}{a+(a+{\frac {r}{a+{\sqrt {S}}}})}}=a+{\frac {r}{2a+{\frac {r}{a+{\sqrt {S}}}}}}.} Proceeding this way, we get a generalized continued fraction for the square root as S = a + r 2 a + r 2 a + r 2 a + ⋱ {\displaystyle {\sqrt {S}}=a+{\cfrac {r}{2a+{\cfrac {r}{2a+{\cfrac {r}{2a+\ddots }}}}}}} The first step to evaluating such a fraction to obtain a root is to do numerical substitutions for the root of the number desired, and number of denominators selected. For example, in canonical form, r {\displaystyle r} is 1 and for √2, a {\displaystyle a} is 1, so the numerical continued fraction for 3 denominators is: 2 ≈ 1 + 1 2 + 1 2 + 1 2 {\displaystyle {\sqrt {2}}\approx 1+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2}}}}}}} Step 2 is to reduce the continued fraction from the bottom up, one denominator at a time, to yield a rational fraction whose numerator and denominator are integers. The reduction proceeds thus (taking the first three denominators): 1 + 1 2 + 1 2 + 1 2 = 1 + 1 2 + 1 5 2 = 1 + 1 2 + 2 5 = 1 + 1 12 5 = 1 + 5 12 = 17 12 {\displaystyle {\begin{aligned}1+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2}}}}}}&=1+{\cfrac {1}{2+{\cfrac {1}{\frac {5}{2}}}}}\\&=1+{\cfrac {1}{2+{\cfrac {2}{5}}}}=1+{\cfrac {1}{\frac {12}{5}}}\\&=1+{\cfrac {5}{12}}={\frac {17}{12}}\end{aligned}}} Finally (step 3), divide the numerator by the denominator of the rational fraction to obtain the approximate value of the root: 17 ÷ 12 = 1.42 {\displaystyle 17\div 12=1.42} rounded to three digits of precision. The actual value of √2 is 1.41 to three significant digits. The relative error is 0.17%, so the rational fraction is good to almost three digits of precision. 
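The three steps above can be carried out with exact rational arithmetic. The following sketch (the function name is ours) uses Python's fractions module and evaluates n denominators of the generalized continued fraction from the innermost denominator outward:

```python
from fractions import Fraction

def sqrt_convergent(S, a, n):
    # sqrt(S) = a + r/(2a + r/(2a + ...)), with r = S - a*a.
    # Reduce n denominators bottom-up; the innermost is just 2a.
    r = S - a * a
    x = Fraction(2 * a)
    for _ in range(n - 1):
        x = 2 * a + Fraction(r) / x
    return a + Fraction(r) / x

print(sqrt_convergent(2, 1, 3))   # 17/12, as in the worked example
```

With four denominators the same call returns 41/29, matching the next convergent quoted below.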
Taking more denominators gives successively better approximations: four denominators yields the fraction 41 29 = 1.4137 {\displaystyle {\frac {41}{29}}=1.4137} , good to almost 4 digits of precision, etc. The following are examples of square roots, their simple continued fractions, and their first convergents — up to and including denominator 99: In general, the larger the denominator of a rational fraction, the better the approximation. It can also be shown that truncating a continued fraction yields the best rational approximation to the root among all fractions whose denominator is less than or equal to that of the truncated fraction — e.g., no fraction with a denominator less than or equal to 70 is as good an approximation to √2 as 99/70. == Approximations that depend on the floating point representation == A number is represented in a floating point format as m × b p {\displaystyle m\times b^{p}} which is also called scientific notation. Its square root is m × b p / 2 {\displaystyle {\sqrt {m}}\times b^{p/2}} and similar formulae would apply for cube roots and logarithms. On the face of it, this is no improvement in simplicity, but suppose that only an approximation is required: then just b p / 2 {\displaystyle b^{p/2}} is good to an order of magnitude. Next, recognise that some powers, p, will be odd; thus for 3141.59 = 3.14159×103, rather than deal with fractional powers of the base, multiply the mantissa by the base and subtract one from the power to make it even. The adjusted representation will become the equivalent of 31.4159×102 so that the square root will be √31.4159×101. If the integer part of the adjusted mantissa is taken, there can only be the values 1 to 99, and that could be used as an index into a table of 99 pre-computed square roots to complete the estimate.
A computer using base sixteen would require a larger table, but one using base two would require only three entries: the possible bits of the integer part of the adjusted mantissa are 01 (the power being even, so there was no shift; recall that a normalised floating point number always has a non-zero high-order digit) or, if the power was odd, 10 or 11, these being the first two bits of the original mantissa. Thus, 6.25 = 110.01 in binary normalises to 1.1001 × 22, an even power, so the paired bits of the mantissa are 01; while .625 = 0.101 in binary normalises to 1.01 × 2−1, an odd power, so the adjustment is to 10.1 × 2−2 and the paired bits are 10. Notice that the low order bit of the power is echoed in the high order bit of the pairwise mantissa. An even power has its low-order bit zero and the adjusted mantissa will start with 0, whereas for an odd power that bit is one and the adjusted mantissa will start with 1. Thus, when the power is halved, it is as if its low order bit is shifted out to become the first bit of the pairwise mantissa. A table with only three entries could be enlarged by incorporating additional bits of the mantissa. However, with computers, rather than calculate an interpolation into a table, it is often better to find some simpler calculation giving equivalent results. Everything now depends on the exact details of the format of the representation, plus what operations are available to access and manipulate the parts of the number. For example, Fortran offers an EXPONENT(x) function to obtain the power. Effort expended in devising a good initial approximation is to be recouped by thereby avoiding the additional iterations of the refinement process that would have been needed for a poor approximation. Since these are few (one iteration requires a divide, an add, and a halving) the constraint is severe.
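A minimal sketch of the binary three-entry scheme just described follows. The function name and table granularity are our own, and Python's math.frexp stands in for direct access to the exponent field of the representation:

```python
import math

# Precomputed square roots indexed by the integer part (1, 2 or 3)
# of the adjusted mantissa -- the three-entry table described above.
TABLE = {1: 1.0, 2: math.sqrt(2.0), 3: math.sqrt(3.0)}

def rough_sqrt(s):
    m, p = math.frexp(s)        # s = m * 2**p with 0.5 <= m < 1
    m, p = m * 2.0, p - 1       # renormalise so that 1 <= m < 2
    if p % 2:                   # odd power: shift a bit into the mantissa
        m, p = m * 2.0, p - 1   # now 2 <= m < 4 and p is even
    return TABLE[int(m)] * 2.0 ** (p // 2)
```

For 6.25 this yields 2.0 (paired bits 10.01 → table entry 2... i.e. integer part 1 after pairing) against the true 2.5; a real implementation would read the exponent bits directly and enlarge the table for a tighter estimate.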
Many computers follow the IEEE (or sufficiently similar) representation, and a very rapid approximation to the square root can be obtained for starting Newton's method. The technique that follows is based on the fact that the floating point format (in base two) approximates the base-2 logarithm. That is log 2 ⁡ ( m × 2 p ) = p + log 2 ⁡ ( m ) {\displaystyle \log _{2}(m\times 2^{p})=p+\log _{2}(m)} So for a 32-bit single precision floating point number in IEEE format (where notably, the power has a bias of 127 added for the represented form) you can get the approximate logarithm by interpreting its binary representation as a 32-bit integer, scaling it by 2 − 23 {\displaystyle 2^{-23}} , and removing a bias of 127, i.e. x int ⋅ 2 − 23 − 127 ≈ log 2 ⁡ ( x ) . {\displaystyle x_{\text{int}}\cdot 2^{-23}-127\approx \log _{2}(x).} For example, 1.0 is represented by a hexadecimal number 0x3F800000, which would represent 1065353216 = 127 ⋅ 2 23 {\displaystyle 1065353216=127\cdot 2^{23}} if taken as an integer. Using the formula above you get 1065353216 ⋅ 2 − 23 − 127 = 0 {\displaystyle 1065353216\cdot 2^{-23}-127=0} , as expected from log 2 ⁡ ( 1.0 ) {\displaystyle \log _{2}(1.0)} . In a similar fashion you get 0.5 from 1.5 (0x3FC00000). To get the square root, divide the logarithm by 2 and convert the value back. The following program demonstrates the idea. The exponent's lowest bit is intentionally allowed to propagate into the mantissa. One way to justify the steps in this program is to assume b {\displaystyle b} is the exponent bias and n {\displaystyle n} is the number of explicitly stored bits in the mantissa and then show that ( ( 1 2 ( x int / 2 n − b ) ) + b ) ⋅ 2 n = 1 2 ( x int − 2 n ) + ( 1 2 ( b + 1 ) ) ⋅ 2 n . 
{\displaystyle \left(\left({\tfrac {1}{2}}\left(x_{\text{int}}/2^{n}-b\right)\right)+b\right)\cdot 2^{n}={\tfrac {1}{2}}\left(x_{\text{int}}-2^{n}\right)+\left({\tfrac {1}{2}}\left(b+1\right)\right)\cdot 2^{n}.} The three mathematical operations forming the core of the above function can be expressed in a single line. An additional adjustment can be added to reduce the maximum relative error. So, the three operations, not including the cast, can be rewritten as a single expression involving a bias a for adjusting the approximation errors. For example, with a = 0 the results are accurate for even powers of 2 (e.g. 1.0), but for other numbers the results will be slightly too big (e.g. 1.5 for 2.0 instead of 1.414... with 6% error). With a = −0x4B0D2, the maximum relative error is minimized to ±3.5%. If the approximation is to be used as an initial guess for Newton's method applied to the equation ( 1 / x 2 ) − S = 0 {\displaystyle (1/x^{2})-S=0} , then the reciprocal form shown in the following section is preferred. === Reciprocal of the square root === A variant of the above routine, which computes the reciprocal of the square root x − 1 / 2 {\displaystyle x^{-1/2}} instead, was written by Greg Walsh. The integer-shift approximation produced a relative error of less than 4%, and the error dropped further to 0.15% with one iteration of Newton's method on the following line. In computer graphics it is a very efficient way to normalize a vector. Some VLSI hardware implements inverse square root using a second degree polynomial estimation followed by a Goldschmidt iteration. == Negative or complex square == If S < 0, then its principal square root is S = | S | i . {\displaystyle {\sqrt {S}}={\sqrt {\vert S\vert }}\,\,i\,.} If S = a+bi where a and b are real and b ≠ 0, then its principal square root is S = | S | + a 2 + sgn ⁡ ( b ) | S | − a 2 i .
{\displaystyle {\sqrt {S}}={\sqrt {\frac {\vert S\vert +a}{2}}}\,+\,\operatorname {sgn}(b){\sqrt {\frac {\vert S\vert -a}{2}}}\,\,i\,.} This can be verified by squaring the root. Here | S | = a 2 + b 2 {\displaystyle \vert S\vert ={\sqrt {a^{2}+b^{2}}}} is the modulus of S. The principal square root of a complex number is defined to be the root with the non-negative real part. == See also == Alpha max plus beta min algorithm nth root algorithm Fast inverse square root == Notes == == References == == Bibliography == == External links == Weisstein, Eric W. "Square root algorithms". MathWorld. Square roots by subtraction Integer Square Root Algorithm by Andrija Radović Personal Calculator Algorithms I : Square Roots (William E. Egbert), Hewlett-Packard Journal (May 1977) : page 22 Calculator to learn the square root
Wikipedia/Methods_of_computing_square_roots
In numerical analysis, Halley's method is a root-finding algorithm used for functions of one real variable with a continuous second derivative. Edmond Halley was an English mathematician and astronomer who introduced the method now called by his name. The algorithm is second in the class of Householder's methods, after Newton's method. Like the latter, it iteratively produces a sequence of approximations to the root; their rate of convergence to the root is cubic. Multidimensional versions of this method exist. Halley's method exactly finds the roots of a linear-over-linear Padé approximation to the function, in contrast to Newton's method or the Secant method which approximate the function linearly, or Muller's method which approximates the function quadratically. There is also Halley's irrational method, described below. == Method == Halley's method is a numerical algorithm for solving the nonlinear equation  f (x) = 0 . In this case, the function f has to be a function of one real variable. The method consists of a sequence of iterations: x n + 1 = x n − f ( x n ) f ′ ( x n ) [ f ′ ( x n ) ] 2 − 1 2 f ( x n ) f ″ ( x n ) {\displaystyle x_{n+1}=x_{n}-{\frac {\ f(x_{n})\ f'(x_{n})\ }{\ \left[\ f'(x_{n})\ \right]^{2}-{\tfrac {1}{2}}\ f(x_{n})\ f''(x_{n})\ }}} beginning with an initial guess x0. If f is a three times continuously differentiable function and a is a zero of f but not of its derivative, then, in a neighborhood of a, the iterates xn satisfy: | x n + 1 − a | ≤ K ⋅ | x n − a | 3 , for some K > 0 . {\displaystyle |x_{n+1}-a|\leq K\cdot {|x_{n}-a|}^{3},\quad {\text{ for some }}\quad K>0~.} This means that the iterates converge to the zero if the initial guess is sufficiently close, and that the convergence is cubic. The following alternative formulation shows the similarity between Halley's method and Newton's method. 
The ratio f ( x n ) / f ′ ( x n ) {\displaystyle \ f(x_{n})/f'(x_{n})\ } only needs to be computed once, and this form is particularly useful when the other ratio, f ″ ( x n ) / f ′ ( x n ) , {\displaystyle \ f''(x_{n})/f'(x_{n})\ ,} can be reduced to a simpler form: x n + 1 = x n − f ( x n ) f ′ ( x n ) − f ( x n ) f ′ ( x n ) f ″ ( x n ) 2 = x n − f ( x n ) f ′ ( x n ) [ 1 − 1 2 ⋅ f ( x n ) f ′ ( x n ) ⋅ f ″ ( x n ) f ′ ( x n ) ] − 1 . {\displaystyle x_{n+1}\ =\ x_{n}-{\frac {f(x_{n})}{\ f'(x_{n})-{\frac {f(x_{n})}{\ f'(x_{n})\ }}{\frac {\ f''(x_{n})\ }{2}}\ }}\ =\ x_{n}-{\frac {f(x_{n})}{\ f'(x_{n})\ }}\left[\ 1\ -\ {\frac {1}{2}}\cdot {\frac {f(x_{n})}{\ f'(x_{n})\ }}\cdot {\frac {\ f''(x_{n})\ }{f'(x_{n})}}\ \right]^{-1}~.} When the second derivative, f ″ ( x n ) , {\displaystyle \ f''(x_{n})\ ,} is very close to zero, the Halley's method iteration is almost the same as the Newton's method iteration. == Motivation == When deriving Newton's method, a proof starts with the approximation 0 = f ( x n + 1 ) ≈ f ( x n ) + f ′ ( x n ) ( x n + 1 − x n ) {\displaystyle 0=f(x_{n+1})\approx f(x_{n})+f'(x_{n})(x_{n+1}-x_{n})} to compute x n + 1 − x n = − f ( x n ) f ′ ( x n ) . {\displaystyle x_{n+1}-x_{n}=-{\frac {f(x_{n})}{f'(x_{n})}}\,.} Similarly for Halley's method, a proof starts with 0 = f ( x n + 1 ) ≈ f ( x n ) + f ′ ( x n ) ( x n + 1 − x n ) + f ″ ( x n ) 2 ( x n + 1 − x n ) 2 . {\displaystyle 0=f(x_{n+1})\approx f(x_{n})+f'(x_{n})(x_{n+1}-x_{n})+{\frac {f''(x_{n})}{2}}(x_{n+1}-x_{n})^{2}\,.} For Halley's rational method, this is rearranged to give x n + 1 − x n = − f ( x n ) f ′ ( x n ) + f ″ ( x n ) 2 ( x n + 1 − x n ) {\displaystyle x_{n+1}-x_{n}=-{\frac {f(x_{n})}{f'(x_{n})+{\frac {f''(x_{n})}{2}}(x_{n+1}-x_{n})}}\,} where xn+1 − xn appears on both sides of the equation. 
Substituting the Newton's method value for xn+1 − xn into the right-hand side of this last formula gives the formula for Halley's method, x n + 1 = x n − f ( x n ) f ′ ( x n ) − f ″ ( x n ) f ( x n ) 2 f ′ ( x n ) . {\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})-{\frac {f''(x_{n})f(x_{n})}{2f'(x_{n})}}}}\,.} Also see the motivation and proofs for the more general class of Householder's methods. == Cubic convergence == Suppose a is a root of f but not of its derivative, and suppose that the third derivative of f exists and is continuous in a neighborhood of a and that xn is in that neighborhood. Then Taylor's theorem implies: 0 = f ( a ) = f ( x n ) + f ′ ( x n ) ( a − x n ) + f ″ ( x n ) 2 ( a − x n ) 2 + f ‴ ( ξ ) 6 ( a − x n ) 3 {\displaystyle 0=f(a)=f(x_{n})+f'(x_{n})(a-x_{n})+{\frac {f''(x_{n})}{2}}(a-x_{n})^{2}+{\frac {f'''(\xi )}{6}}(a-x_{n})^{3}} and also 0 = f ( a ) = f ( x n ) + f ′ ( x n ) ( a − x n ) + f ″ ( η ) 2 ( a − x n ) 2 , {\displaystyle 0=f(a)=f(x_{n})+f'(x_{n})(a-x_{n})+{\frac {f''(\eta )}{2}}(a-x_{n})^{2},} where ξ and η are numbers lying between a and xn. Multiply the first equation by 2 f ′ ( x n ) {\displaystyle 2f'(x_{n})} and subtract from it the second equation times f ″ ( x n ) ( a − x n ) {\displaystyle f''(x_{n})(a-x_{n})} to give: 0 = 2 f ( x n ) f ′ ( x n ) + 2 [ f ′ ( x n ) ] 2 ( a − x n ) + f ′ ( x n ) f ″ ( x n ) ( a − x n ) 2 + f ′ ( x n ) f ‴ ( ξ ) 3 ( a − x n ) 3 − f ( x n ) f ″ ( x n ) ( a − x n ) − f ′ ( x n ) f ″ ( x n ) ( a − x n ) 2 − f ″ ( x n ) f ″ ( η ) 2 ( a − x n ) 3 .
{\displaystyle {\begin{aligned}0&=2f(x_{n})f'(x_{n})+2[f'(x_{n})]^{2}(a-x_{n})+f'(x_{n})f''(x_{n})(a-x_{n})^{2}+{\frac {f'(x_{n})f'''(\xi )}{3}}(a-x_{n})^{3}\\&\qquad -f(x_{n})f''(x_{n})(a-x_{n})-f'(x_{n})f''(x_{n})(a-x_{n})^{2}-{\frac {f''(x_{n})f''(\eta )}{2}}(a-x_{n})^{3}.\end{aligned}}} Canceling f ′ ( x n ) f ″ ( x n ) ( a − x n ) 2 {\displaystyle f'(x_{n})f''(x_{n})(a-x_{n})^{2}} and re-organizing terms yields: 0 = 2 f ( x n ) f ′ ( x n ) + ( 2 [ f ′ ( x n ) ] 2 − f ( x n ) f ″ ( x n ) ) ( a − x n ) + ( f ′ ( x n ) f ‴ ( ξ ) 3 − f ″ ( x n ) f ″ ( η ) 2 ) ( a − x n ) 3 . {\displaystyle 0=2f(x_{n})f'(x_{n})+\left(2[f'(x_{n})]^{2}-f(x_{n})f''(x_{n})\right)(a-x_{n})+\left({\frac {f'(x_{n})f'''(\xi )}{3}}-{\frac {f''(x_{n})f''(\eta )}{2}}\right)(a-x_{n})^{3}.} Put the second term on the left side and divide through by 2 [ f ′ ( x n ) ] 2 − f ( x n ) f ″ ( x n ) {\displaystyle 2[f'(x_{n})]^{2}-f(x_{n})f''(x_{n})} to get: a − x n = − 2 f ( x n ) f ′ ( x n ) 2 [ f ′ ( x n ) ] 2 − f ( x n ) f ″ ( x n ) − 2 f ′ ( x n ) f ‴ ( ξ ) − 3 f ″ ( x n ) f ″ ( η ) 6 ( 2 [ f ′ ( x n ) ] 2 − f ( x n ) f ″ ( x n ) ) ( a − x n ) 3 . {\displaystyle a-x_{n}={\frac {-2f(x_{n})f'(x_{n})}{2[f'(x_{n})]^{2}-f(x_{n})f''(x_{n})}}-{\frac {2f'(x_{n})f'''(\xi )-3f''(x_{n})f''(\eta )}{6(2[f'(x_{n})]^{2}-f(x_{n})f''(x_{n}))}}(a-x_{n})^{3}.} Thus: a − x n + 1 = − 2 f ′ ( x n ) f ‴ ( ξ ) − 3 f ″ ( x n ) f ″ ( η ) 12 [ f ′ ( x n ) ] 2 − 6 f ( x n ) f ″ ( x n ) ( a − x n ) 3 . {\displaystyle a-x_{n+1}=-{\frac {2f'(x_{n})f'''(\xi )-3f''(x_{n})f''(\eta )}{12[f'(x_{n})]^{2}-6f(x_{n})f''(x_{n})}}(a-x_{n})^{3}.} The limit of the coefficient on the right side as xn → a is: − 2 f ′ ( a ) f ‴ ( a ) − 3 f ″ ( a ) f ″ ( a ) 12 [ f ′ ( a ) ] 2 − 6 f ( a ) f ″ ( a ) . 
{\displaystyle -{\frac {2f'(a)f'''(a)-3f''(a)f''(a)}{12[f'(a)]^{2}-6f(a)f''(a)}}.} If we take K to be a little larger than the absolute value of this, we can take absolute values of both sides of the formula and replace the absolute value of the coefficient by its upper bound near a to get: | a − x n + 1 | ≤ K | a − x n | 3 {\displaystyle |a-x_{n+1}|\leq K|a-x_{n}|^{3}} which is what was to be proved. To summarize, Δ x i + 1 = 3 ( f ″ ) 2 − 2 f ′ f ‴ 12 ( f ′ ) 2 ( Δ x i ) 3 + O [ Δ x i ] 4 , Δ x i ≜ x i − a . {\displaystyle \Delta x_{i+1}={\frac {3(f'')^{2}-2f'f'''}{12(f')^{2}}}(\Delta x_{i})^{3}+O[\Delta x_{i}]^{4},\qquad \Delta x_{i}\triangleq x_{i}-a.} == Halley's irrational method == Halley actually developed two third-order root-finding methods. The above, using only a division, is referred to as Halley's rational method. A second, "irrational" method uses a square root as well. It starts with f ( x n + 1 ) ≈ f ( x n ) + f ′ ( x n ) ( x n + 1 − x n ) + f ″ ( x n ) 2 ( x n + 1 − x n ) 2 {\displaystyle f(x_{n+1})\approx f(x_{n})+f'(x_{n})(x_{n+1}-x_{n})+{\frac {f''(x_{n})}{2}}(x_{n+1}-x_{n})^{2}} and solves f ( x n + 1 ) ≈ 0 {\displaystyle f(x_{n+1})\approx 0} for the value ( x n + 1 − x n ) {\displaystyle (x_{n+1}-x_{n})} using one of two forms of the quadratic formula.
The quadratic formula solution either has the radical in the numerator: x n + 1 = x n − f ′ ( x n ) − sign ⁡ ( f ′ ( x n ) ) [ f ′ ( x n ) ] 2 − 2 f ( x n ) f ″ ( x n ) f ″ ( x n ) = x n − f ′ ( x n ) ( 1 − 1 − 2 f ( x n ) f ″ ( x n ) f ′ ( x n ) 2 ) f ″ ( x n ) {\displaystyle x_{n+1}=x_{n}-{\frac {f'(x_{n})-\operatorname {sign} (f'(x_{n})){\sqrt {[f'(x_{n})]^{2}-2f(x_{n})f''(x_{n})}}}{f''(x_{n})}}=x_{n}-{\frac {f'(x_{n})\left(1-{\sqrt {1-{\frac {2f(x_{n})f''(x_{n})}{f'(x_{n})^{2}}}}}\right)}{f''(x_{n})}}} or it has the radical in the denominator, yielding an update step that often gives better results: x n + 1 = x n − 2 f ( x n ) f ′ ( x n ) + sign ⁡ ( f ′ ( x n ) ) [ f ′ ( x n ) ] 2 − 2 f ( x n ) f ″ ( x n ) = x n − 2 f ( x n ) f ′ ( x n ) ( 1 + 1 − 2 f ( x n ) f ″ ( x n ) f ′ ( x n ) 2 ) {\displaystyle x_{n+1}=x_{n}-{\frac {2f(x_{n})}{f'(x_{n})+\operatorname {sign} (f'(x_{n})){\sqrt {[f'(x_{n})]^{2}-2f(x_{n})f''(x_{n})}}}}=x_{n}-{\frac {2f(x_{n})}{f'(x_{n})\left(1+{\sqrt {1-{\frac {2f(x_{n})f''(x_{n})}{f'(x_{n})^{2}}}}}\right)}}} This iteration was "deservedly preferred" to the rational method by Halley on the grounds that the denominator is smaller, making the division easier. A second advantage is that it tends to have about half of the error of the rational method, a benefit which multiplies as it is iterated. On a computer, it would appear to be slower as it has two slow operations (division and square root) instead of one, but on modern computers the reciprocal of the denominator can be computed at the same time as the square root via instruction pipelining, so the latency of each iteration differs very little.: 24  The formulation with the radical in the denominator reduces to Halley's rational method under the approximation that ⁠ 1 − z ≈ 1 − z / 2 {\displaystyle {\sqrt {1-z}}\approx 1-z/2} ⁠. Muller's method can be considered a modification of this method, and it can likewise be used to find complex roots.
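Both of Halley's iterations are easy to state in code. The following sketch (our function and parameter names, plain Python floats; it assumes a simple root, so f′ ≠ 0 nearby, and for the irrational form that the discriminant stays non-negative) implements the rational form and the radical-in-denominator irrational form:

```python
import math

def halley(f, df, d2f, x, tol=1e-12, max_iter=50):
    # Rational form: one division per step, cubic convergence.
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        step = fx / (dfx - fx * d2f(x) / (2.0 * dfx))
        x -= step
        if abs(step) < tol:
            break
    return x

def halley_irrational(f, df, d2f, x, tol=1e-12, max_iter=50):
    # Irrational form with the radical in the denominator;
    # assumes f'^2 - 2 f f'' >= 0 near the root.
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        disc = dfx * dfx - 2.0 * fx * d2f(x)
        step = 2.0 * fx / (dfx + math.copysign(math.sqrt(disc), dfx))
        x -= step
        if abs(step) < tol:
            break
    return x

# Cube root of 2 as the root of f(x) = x^3 - 2, starting from 1.0
f, df, d2f = lambda x: x**3 - 2, lambda x: 3*x**2, lambda x: 6*x
print(halley(f, df, d2f, 1.0), halley_irrational(f, df, d2f, 1.0))
```

For a quadratic f the irrational form is exact in one step (up to rounding), since the quadratic Taylor model is then f itself.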
== References == == External links == Weisstein, Eric W. "Halley's method". MathWorld. Newton's method and high order iterations, Pascal Sebah and Xavier Gourdon, 2001 (the site has a link to a Postscript version for better formula display)
Wikipedia/Halley's_method
In numerical analysis, Bernoulli's method, named after Daniel Bernoulli, is a root-finding algorithm which calculates the root of largest absolute value of a univariate polynomial. The method works under the condition that there is only one root (possibly multiple) of maximal absolute value. The method computes the root of maximal absolute value as the limit of the quotients of two successive terms of a sequence defined by a linear recurrence whose coefficients are those of the polynomial. Since the method converges with a linear order only, it is less efficient than other methods, such as Newton's method. However, it can be useful for finding an initial guess ensuring that these other methods converge to the root of maximal absolute value. Bernoulli's method holds historical significance as an early approach to numerical root-finding and provides an elegant connection between recurrence relations and polynomial roots. == History == Bernoulli's method was first introduced by Swiss-French mathematician and physicist Daniel Bernoulli (1700-1782) in 1728. He noticed that the terms of recurrent series built from a polynomial's coefficients grow by a ratio related to a root of the polynomial, but he did not prove why this works. In 1725, Bernoulli moved to Saint Petersburg with his brother Nicolaus II Bernoulli, who died of fever in 1726. While there, he worked closely with Leonhard Euler, a student of Johann Bernoulli, and made many advancements in harmonics, mathematical economics (see St. Petersburg paradox), and hydrodynamics. Euler called Bernoulli's method "frequently very useful" and gave a justification for why it works in 1748. The mathematician Joseph-Louis Lagrange expanded on this for the case of multiple roots in 1798. Bernoulli's method predates other root-finding algorithms like Graeffe's method (1826, due to Dandelin) and is roughly contemporary with Halley's method (1694).
Since then, it has influenced the development of more modern algorithms such as the QD method. == The method == Given a polynomial p = a 0 z d + a 1 z d − 1 + ⋯ + a d {\displaystyle p=a_{0}z^{d}+a_{1}z^{d-1}+\dots +a_{d}} of degree d with complex coefficients. Choose d starting values x − d + 1 , x − d , … , x − 1 , x 0 , {\displaystyle x_{-d+1},x_{-d},\dots ,x_{-1},x_{0},} that are usually 0 , 0 , 0 , … , 0 , 1 {\displaystyle 0,0,0,\dots ,0,1} . Then, consider the sequence defined by the recurrence relation x n = − a 1 x n − 1 + a 2 x n − 2 + ⋯ + a d x n − d a 0 . {\displaystyle x_{n}=-{\frac {a_{1}x_{n-1}+a_{2}x_{n-2}+\dots +a_{d}x_{n-d}}{a_{0}}}.} Let q n = x n + 1 x n {\textstyle q_{n}={\frac {x_{n+1}}{x_{n}}}} be the ratio of successive terms of the sequence. If there is only one complex root of maximal absolute value, then the sequence of the ⁠ q n {\displaystyle q_{n}} ⁠ has a limit that is this root. If the coefficients of the polynomial are real then, via the complex conjugate root theorem, each of the polynomial's roots must be either a real number or part of a complex conjugate pair. Therefore, if the polynomial contains a single dominant complex root, then the coefficients must include a complex number, and so the sequence generated using the coefficients will contain complex numbers. Bernoulli's method will work to find a single dominant root, regardless of whether it is real or complex. If a root is part of a complex conjugate pair, then each root in the pair has the same maximal absolute value, and a modified form of Bernoulli's method is needed to calculate them. 
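The recurrence translates directly into code. In this sketch (the function name is ours) the coefficients are given from highest degree to lowest, and the quotient of the last two terms estimates the dominant root:

```python
def bernoulli_root(coeffs, iterations=60):
    # coeffs = [a0, a1, ..., ad] for a0*z^d + a1*z^(d-1) + ... + ad.
    a0, rest = coeffs[0], coeffs[1:]
    d = len(rest)
    x = [0.0] * (d - 1) + [1.0]        # starting values 0, ..., 0, 1
    for _ in range(iterations):
        # x_n = -(a1*x_{n-1} + a2*x_{n-2} + ... + ad*x_{n-d}) / a0
        x.append(-sum(a * v for a, v in zip(rest, reversed(x[-d:]))) / a0)
    return x[-1] / x[-2]               # q_n -> dominant root

# z^3 - 6z^2 + 11z - 6 has roots 1, 2, 3; the quotients approach 3
print(bernoulli_root([1, -6, 11, -6]))   # ≈ 3.0
```

Convergence is linear, at a rate governed by the ratio of the second-largest to the largest root modulus (here 2/3 per step).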
== Derivation of the method == The solutions of the d-th order difference equation a 0 x m + a 1 x m − 1 + ⋯ + a d x m − d = 0 {\displaystyle a_{0}x_{m}+a_{1}x_{m-1}+\dots +a_{d}x_{m-d}=0} have the form x m = c 1 ( m ) r 1 m + c 2 ( m ) r 2 m + ⋯ + c k ( m ) r k m , {\displaystyle x_{m}=c_{1}(m)r_{1}^{m}+c_{2}(m)r_{2}^{m}+\dots +c_{k}(m)r_{k}^{m},} where the r i {\displaystyle r_{i}} are the distinct complex roots of p, and c i ( m ) {\displaystyle c_{i}(m)} is a polynomial in m of degree less than the multiplicity of r i {\displaystyle r_{i}} . For simple roots, c i {\displaystyle c_{i}} is a constant. The coefficients c i {\displaystyle c_{i}} can be determined from the first d terms of the sequence x m {\displaystyle x_{m}} by solving a linear system of equations. This system always has a unique solution since its matrix is a Vandermonde matrix if the roots are simple, or a confluent Vandermonde matrix otherwise. The quotient of two successive terms of the sequence is x m + 1 x m = c 1 ( m + 1 ) r 1 m + 1 + c 2 ( m + 1 ) r 2 m + 1 + ⋯ + c k ( m + 1 ) r k m + 1 c 1 ( m ) r 1 m + c 2 ( m ) r 2 m + ⋯ + c k ( m ) r k m . {\displaystyle {\frac {x_{m+1}}{x_{m}}}={\frac {c_{1}(m+1)r_{1}^{m+1}+c_{2}(m+1)r_{2}^{m+1}+\dots +c_{k}(m+1)r_{k}^{m+1}}{c_{1}(m)r_{1}^{m}+c_{2}(m)r_{2}^{m}+\dots +c_{k}(m)r_{k}^{m}}}.} Factoring out r 1 {\displaystyle r_{1}} gives x m + 1 x m = r 1 ⋅ c 1 ( m + 1 ) + c 2 ( m + 1 ) ( r 2 r 1 ) m + 1 + ⋯ + c k ( m + 1 ) ( r k r 1 ) m + 1 c 1 ( m ) + c 2 ( m ) ( r 2 r 1 ) m + ⋯ + c k ( m ) ( r k r 1 ) m .
{\displaystyle {\frac {x_{m+1}}{x_{m}}}=r_{1}\cdot {\frac {c_{1}(m+1)+c_{2}(m+1)\left({\frac {r_{2}}{r_{1}}}\right)^{m+1}+\dots +c_{k}(m+1)\left({\frac {r_{k}}{r_{1}}}\right)^{m+1}}{c_{1}(m)+c_{2}(m)\left({\frac {r_{2}}{r_{1}}}\right)^{m}+\dots +c_{k}(m)\left({\frac {r_{k}}{r_{1}}}\right)^{m}}}.} Assuming r 1 {\displaystyle r_{1}} is the dominant root, such that | r 1 | > | r i | {\displaystyle |r_{1}|>|r_{i}|} for ⁠ i > 1 {\displaystyle i>1} ⁠, each ratio r i r 1 {\textstyle {\frac {r_{i}}{r_{1}}}} has an absolute value less than 1. Thus as m increases, ( r i r 1 ) m {\textstyle \left({\frac {r_{i}}{r_{1}}}\right)^{m}} approaches zero, so lim m → ∞ c i ( m ) ( r i r 1 ) m = 0 , {\textstyle \lim _{m\to \infty }c_{i}(m)\left({\frac {r_{i}}{r_{1}}}\right)^{m}=0,} even for non-constant c i ( m ) {\textstyle c_{i}(m)} . Hence the limit of the fraction is the same as that of c 1 ( m + 1 ) c 1 ( m ) , {\textstyle {\frac {c_{1}(m+1)}{c_{1}(m)}},} which is 1 if ⁠ c 1 {\displaystyle c_{1}} ⁠ is a nonzero constant or a nonzero polynomial in m. Hence lim m → ∞ x m + 1 x m = r 1 {\displaystyle \lim _{m\to \infty }{\frac {x_{m+1}}{x_{m}}}=r_{1}} in all cases where there is only one root of maximal absolute value. This assumes c 1 ≠ 0 , {\displaystyle c_{1}\neq 0,} which is satisfied by using initial values of all zeros followed by a final 1. Indeed, Cramer's rule implies that c 1 {\displaystyle c_{1}} is a signed quotient of the determinants of two nonsingular Vandermonde matrices, if all roots are simple; in the case of multiple roots, the dominant coefficient of c 1 ( m ) {\displaystyle c_{1}(m)} is a signed quotient of the determinants of two nonsingular confluent Vandermonde matrices. == Extensions == Bernoulli's method converges to the root of largest modulus of a polynomial with a linear order of convergence. It does not converge when there are two distinct complex roots of the same largest modulus, but there are extensions of the method that work in this case.
To find the root of smallest absolute value, one can apply the method to the reciprocal polynomial (the polynomial obtained by reversing the order of the coefficients) and then invert the result. When using root deflation with something like Horner's method, deflating from the smallest root is more stable. To speed convergence, Alexander Aitken developed his Aitken delta-squared process as part of an improvement on his extension to Bernoulli's method, which also found all of the roots simultaneously. Another extension of Bernoulli's method is the Quotient-Difference (QD) method, which also finds all roots simultaneously, although it can be unstable. Given the slow convergence of Bernoulli's method and the instability of the QD method, both are often used as reliable ways to find starting values for other root-finding algorithms, rather than being iterated to full tolerance. == Example == The following example illustrates Bernoulli's method applied to a quadratic polynomial. Let p ( z ) = z 2 − z − 1 = 0 {\textstyle p(z)=z^{2}-z-1=0} . Then a 0 = 1 {\displaystyle a_{0}=1} , a 1 = − 1 {\displaystyle a_{1}=-1} , and a 2 = − 1 {\displaystyle a_{2}=-1} , so the recurrence becomes: x n = − ( − 1 ) x n − 1 + ( − 1 ) x n − 2 1 = x n − 1 + x n − 2 {\displaystyle {\begin{aligned}x_{n}&=-{\frac {(-1)x_{n-1}+(-1)x_{n-2}}{1}}\\&=x_{n-1}+x_{n-2}\end{aligned}}} Using the recommended initial values x − 1 = 0 {\displaystyle x_{-1}=0} , x 0 = 1 {\displaystyle x_{0}=1} generates the following table: This eventually converges to 1 + 5 2 ≈ 1.618034 {\textstyle {\frac {1+{\sqrt {5}}}{2}}\approx 1.618034} , also known as the Golden ratio, which is the largest root of the example polynomial. The sequence x n {\displaystyle {x_{n}}} is also the well-known Fibonacci sequence. Bernoulli's method works even if different starting values are used instead of 0 and 1; the limit of the quotient q n {\displaystyle q_{n}} remains the same. 
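The iteration in this example is easy to reproduce. The following short Python sketch (illustrative only, not a listing from the article) runs the recurrence from the recommended initial values and prints each quotient q_n together with its absolute error:

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2              # golden ratio, the dominant root of z^2 - z - 1

x = [0, 1]                           # recommended initial values x_{-1} = 0, x_0 = 1
for n in range(1, 21):
    x.append(x[-1] + x[-2])          # x_n = x_{n-1} + x_{n-2}: the Fibonacci recurrence
    q = x[-1] / x[-2]                # Bernoulli quotient q_n
    print(f"{n:2d}  x_n = {x[-1]:6d}  q_n = {q:.6f}  error = {abs(q - phi):.2e}")
```

The error shrinks by a roughly constant factor at each step, reflecting the linear order of convergence.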
The example also shows the absolute error approaching zero as the sequence continues. It is then possible to calculate the order of convergence using three consecutive errors, which confirms that Bernoulli's method converges linearly as it approaches the dominant root of the polynomial. == Comparison with other methods == Compared to other root-finding algorithms, Bernoulli's method offers distinct advantages and limitations. The following table summarizes several important differences between Bernoulli's method and other methods: === Advantages === No initial guess: Newton's method, the secant method, Halley's method, and other similar approaches all require one or more starting values. Bernoulli's method requires only the polynomial coefficients, eliminating the need for an initial guess. No derivatives: Although derivatives of polynomials are straightforward to compute with the power rule, Bernoulli's method does not require them at all. Naturally finds a dominant root: Normally, finding large roots is considered less stable, but substituting z in p with ( 1 z ) {\textstyle \left({\frac {1}{z}}\right)} , which reverses the order of coefficients, and then inverting the result of Bernoulli's method gives the smallest root of p, which is more stable. === Limitations === Slow convergence: Fröberg writes "As a rule, Bernoulli's method converges slowly, so instead, one ought to use, for example, the Newton-Raphson method." This is in contrast to Jennings, who writes "The approximate zeros obtained by the Bernoulli method can be further improved by applying, say, the Newton-Raphson method". One author argues for replacing Bernoulli's method while the other recommends combining it with a faster method, a difference that stems from its linear order of convergence. In either case, the slow convergence can be mitigated with Aitken's delta-squared process. Finds one root at a time: The standard version of Bernoulli's method finds a single root, requiring deflation to find another. 
When compared to algorithms such as the Durand–Kerner method, the Aberth method, Bairstow's method, and the "RPOLY" version of the Jenkins–Traub algorithm, which find multiple roots by default, the standard Bernoulli's method is at a disadvantage. This limitation can be overcome by applying an extension of Bernoulli's method such as Aitken's method or the QD method. Issues with multiples: Multiplicity and multiple dominant roots, such as conjugate pairs, can exacerbate the slowness of Bernoulli's method, although improvements can be made to counter this. == Modern applications == Bernoulli's method, despite its linear convergence, remains relevant in computational mathematics, both for finding initial values for polynomial root-finding algorithms and through extensions to more general mathematical domains. It can also be used to find complex roots, and the more sophisticated extensions of Bernoulli's method, such as the one by Aitken and the QD method, can find complex roots while solving for all of the roots simultaneously. There are also variations on Bernoulli's method that improve stability and handle multiple roots. A 2025 analysis of the QD method included an implementation in C. In related applications, Bernoulli's method has been shown to be equivalent to the power method applied to a companion matrix for finding eigenvalues. Advancements in systolic arrays have led to a parallelized version of Bernoulli's method. The method has also been generalized to find poles of rational functions, extending to the field of complex analysis. An extension of Bernoulli's method was used for improving linear multistep methods. Another development, a modified Bernoulli's method, builds a supplemental function using Taylor and Laurent series expansions and then solves for roots. An implementation of Bernoulli's method is included with the CodeCogs open source numerical methods library. The method was also programmed on the EDSAC, along with Graeffe's method, but Newton's method was preferred for being faster. 
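As an illustration of the Aitken acceleration mentioned above, the following sketch (an illustration of the general technique, not code from any of the cited works) applies Aitken's delta-squared process to the Bernoulli quotients of z^2 − z − 1 and compares the errors:

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2

# Bernoulli quotients q_n for z^2 - z - 1 (the Fibonacci recurrence).
x = [0, 1]
for _ in range(18):
    x.append(x[-1] + x[-2])
q = [x[i + 1] / x[i] for i in range(1, len(x) - 1)]

def aitken(s, n):
    # Aitken's delta-squared: s_n - (s_{n+1} - s_n)^2 / (s_{n+2} - 2 s_{n+1} + s_n)
    return s[n] - (s[n + 1] - s[n]) ** 2 / (s[n + 2] - 2 * s[n + 1] + s[n])

n = 10
print(abs(q[n] - phi))          # error of the raw, linearly convergent quotient
print(abs(aitken(q, n) - phi))  # the accelerated value is markedly closer to phi
```

For a linearly convergent sequence like this one, the accelerated term gains roughly twice as many correct digits as the raw quotient at the same index.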
== Code == Bernoulli's method is implemented below in the Python programming language. An efficiency improvement would be to normalize the coefficients by the leading coefficient at the beginning of the method (c = [coef / c[0] for coef in c]), eliminating the need for a division operation in the recurrence relation inside the main loop body (x.append(-sum(c[k] * x[-k + i] for k in range(1, n)))). This change does not impact the convergence order of the method. Implementing higher-order convergence would require Aitken's delta-squared process. == See also == Aitken's delta-squared process Graeffe's method Horner's method Lehmer-Schur algorithm List of things named after members of the Bernoulli family Polynomial root-finding == References ==
Wikipedia/Bernoulli's_method
In numerical analysis, Bairstow's method is an efficient algorithm for finding the roots of a real polynomial of arbitrary degree. The algorithm first appeared in the appendix of the 1920 book Applied Aerodynamics by Leonard Bairstow. The algorithm finds the roots in complex conjugate pairs using only real arithmetic. See root-finding algorithm for other algorithms. == Description of the method == Bairstow's approach is to use Newton's method to adjust the coefficients u and v in the quadratic x 2 + u x + v {\displaystyle x^{2}+ux+v} until its roots are also roots of the polynomial being solved. The roots of the quadratic may then be determined, and the polynomial may be divided by the quadratic to eliminate those roots. This process is then iterated until the polynomial becomes quadratic or linear, and all the roots have been determined. Long division of the polynomial to be solved P ( x ) = ∑ i = 0 n a i x i {\displaystyle P(x)=\sum _{i=0}^{n}a_{i}x^{i}} by x 2 + u x + v {\displaystyle x^{2}+ux+v} yields a quotient Q ( x ) = ∑ i = 0 n − 2 b i x i {\displaystyle Q(x)=\sum _{i=0}^{n-2}b_{i}x^{i}} and a remainder c x + d {\displaystyle cx+d} such that P ( x ) = ( x 2 + u x + v ) ( ∑ i = 0 n − 2 b i x i ) + ( c x + d ) . {\displaystyle P(x)=(x^{2}+ux+v)\left(\sum _{i=0}^{n-2}b_{i}x^{i}\right)+(cx+d).} A second division of Q ( x ) {\displaystyle Q(x)} by x 2 + u x + v {\displaystyle x^{2}+ux+v} is performed to yield a quotient R ( x ) = ∑ i = 0 n − 4 f i x i {\displaystyle R(x)=\sum _{i=0}^{n-4}f_{i}x^{i}} and remainder g x + h {\displaystyle gx+h} with Q ( x ) = ( x 2 + u x + v ) ( ∑ i = 0 n − 4 f i x i ) + ( g x + h ) . {\displaystyle Q(x)=(x^{2}+ux+v)\left(\sum _{i=0}^{n-4}f_{i}x^{i}\right)+(gx+h).} The variables c , d , g , h {\displaystyle c,\,d,\,g,\,h} , and the { b i } , { f i } {\displaystyle \{b_{i}\},\;\{f_{i}\}} are functions of u {\displaystyle u} and v {\displaystyle v} . They can be found recursively as follows. 
b n = b n − 1 = 0 , f n = f n − 1 = 0 , b i = a i + 2 − u b i + 1 − v b i + 2 f i = b i + 2 − u f i + 1 − v f i + 2 ( i = n − 2 , … , 0 ) , c = a 1 − u b 0 − v b 1 , g = b 1 − u f 0 − v f 1 , d = a 0 − v b 0 , h = b 0 − v f 0 . {\displaystyle {\begin{aligned}b_{n}&=b_{n-1}=0,&f_{n}&=f_{n-1}=0,\\b_{i}&=a_{i+2}-ub_{i+1}-vb_{i+2}&f_{i}&=b_{i+2}-uf_{i+1}-vf_{i+2}\qquad (i=n-2,\ldots ,0),\\c&=a_{1}-ub_{0}-vb_{1},&g&=b_{1}-uf_{0}-vf_{1},\\d&=a_{0}-vb_{0},&h&=b_{0}-vf_{0}.\end{aligned}}} The quadratic evenly divides the polynomial when c ( u , v ) = d ( u , v ) = 0. {\displaystyle c(u,v)=d(u,v)=0.\,} Values of u {\displaystyle u} and v {\displaystyle v} for which this occurs can be discovered by picking starting values and iterating Newton's method in two dimensions [ u v ] := [ u v ] − [ ∂ c ∂ u ∂ c ∂ v ∂ d ∂ u ∂ d ∂ v ] − 1 [ c d ] := [ u v ] − 1 v g 2 + h ( h − u g ) [ − h g − g v g u − h ] [ c d ] {\displaystyle {\begin{bmatrix}u\\v\end{bmatrix}}:={\begin{bmatrix}u\\v\end{bmatrix}}-{\begin{bmatrix}{\frac {\partial c}{\partial u}}&{\frac {\partial c}{\partial v}}\\[3pt]{\frac {\partial d}{\partial u}}&{\frac {\partial d}{\partial v}}\end{bmatrix}}^{-1}{\begin{bmatrix}c\\d\end{bmatrix}}:={\begin{bmatrix}u\\v\end{bmatrix}}-{\frac {1}{vg^{2}+h(h-ug)}}{\begin{bmatrix}-h&g\\[3pt]-gv&gu-h\end{bmatrix}}{\begin{bmatrix}c\\d\end{bmatrix}}} until convergence occurs. This method to find the zeroes of polynomials can thus be easily implemented with a programming language or even a spreadsheet. == Example == The task is to determine a pair of roots of the polynomial f ( x ) = 6 x 5 + 11 x 4 − 33 x 3 − 33 x 2 + 11 x + 6. {\displaystyle f(x)=6\,x^{5}+11\,x^{4}-33\,x^{3}-33\,x^{2}+11\,x+6.} As first quadratic polynomial one may choose the normalized polynomial formed from the leading three coefficients of f(x), u = a n − 1 a n = 11 6 ; v = a n − 2 a n = − 33 6 . 
{\displaystyle u={\frac {a_{n-1}}{a_{n}}}={\frac {11}{6}};\quad v={\frac {a_{n-2}}{a_{n}}}=-{\frac {33}{6}}.\,} The iteration then produces the table After eight iterations the method produced a quadratic factor that contains the roots −1/3 and −3 within the represented precision. The step length from the fourth iteration on demonstrates the superlinear speed of convergence. == Performance == Bairstow's algorithm inherits the local quadratic convergence of Newton's method, except in the case of quadratic factors of multiplicity higher than 1, when convergence to that factor is linear. A particular kind of instability is observed when the polynomial has odd degree and only one real root. Quadratic factors that have a small value at this real root tend to diverge to infinity. The images represent pairs ( s , t ) ∈ [ − 3 , 3 ] 2 {\displaystyle (s,t)\in [-3,3]^{2}} . Points in the upper half plane t > 0 correspond to a linear factor with roots s ± i t {\displaystyle s\pm it} , that is x 2 + u x + v = ( x − s ) 2 + t 2 {\displaystyle x^{2}+ux+v=(x-s)^{2}+t^{2}} . Points in the lower half plane t < 0 correspond to quadratic factors with roots s ± t {\displaystyle s\pm t} , that is, x 2 + u x + v = ( x − s ) 2 − t 2 {\displaystyle x^{2}+ux+v=(x-s)^{2}-t^{2}} , so in general ( u , v ) = ( − 2 s , s 2 + t | t | ) {\displaystyle (u,\,v)=(-2s,\,s^{2}+t\,|t|)} . Points are colored according to the final point of the Bairstow iteration, black points indicate divergent behavior. The first image is a demonstration of the single real root case. The second indicates that one can remedy the divergent behavior by introducing an additional real root, at the cost of slowing down the speed of convergence. One can also in the case of odd degree polynomials first find a real root using Newton's method and/or an interval shrinking method, so that after deflation a better-behaved even-degree polynomial remains. The third image corresponds to the example above. 
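The iteration described above can be reproduced with a short Python sketch (an illustration built directly from the recursion and Newton step given in this article; the iteration cap and stopping tolerance are choices of this sketch, not the article's):

```python
# Coefficients of P(x) = 6x^5 + 11x^4 - 33x^3 - 33x^2 + 11x + 6; a[i] is the
# coefficient of x^i.
a = [6.0, 11.0, -33.0, -33.0, 11.0, 6.0]
n = len(a) - 1

# Initial quadratic from the three leading coefficients, as in the example.
u = a[n - 1] / a[n]        # 11/6
v = a[n - 2] / a[n]        # -33/6

for _ in range(50):
    # First division: P(x) = (x^2 + ux + v) Q(x) + cx + d, Q(x) = sum b_i x^i.
    b = [0.0] * (n + 1)    # b[n] = b[n-1] = 0
    for i in range(n - 2, -1, -1):
        b[i] = a[i + 2] - u * b[i + 1] - v * b[i + 2]
    c = a[1] - u * b[0] - v * b[1]
    d = a[0] - v * b[0]
    # Second division: Q(x) = (x^2 + ux + v) R(x) + gx + h.
    f = [0.0] * (n + 1)    # f[n] = f[n-1] = 0
    for i in range(n - 2, -1, -1):
        f[i] = b[i + 2] - u * f[i + 1] - v * f[i + 2]
    g = b[1] - u * f[0] - v * f[1]
    h = b[0] - v * f[0]
    # Newton step using the explicit 2x2 Jacobian inverse from the article.
    det = v * g * g + h * (h - u * g)
    du = (-h * c + g * d) / det
    dv = (-g * v * c + (g * u - h) * d) / det
    u, v = u - du, v - dv
    if abs(du) + abs(dv) < 1e-14:
        break

print(u, v)   # approaches u = 10/3, v = 1, i.e. x^2 + (10/3)x + 1 = (x + 3)(x + 1/3)
```

The converged quadratic has the roots −3 and −1/3 found in the example; dividing it out of P(x) and repeating on the quotient yields the remaining roots.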
== References == == External links == Bairstow's Algorithm on Mathworld Numerical Recipes in Fortran 77 Online Example polynomial root solver (deg(P) ≤ 10) using Bairstow's Method LinBairstowSolve, an open-source C++ implementation of the Lin-Bairstow method available as a method of the VTK library Online root finding of a polynomial – Bairstow's method by Farhad Mazlumi
Wikipedia/Bairstow's_method
In mathematics, the bisection method is a root-finding method that applies to any continuous function for which one knows two values with opposite signs. The method consists of repeatedly bisecting the interval defined by these values and then selecting the subinterval in which the function changes sign, and therefore must contain a root. It is a very simple and robust method, but it is also relatively slow. Because of this, it is often used to obtain a rough approximation to a solution which is then used as a starting point for more rapidly converging methods. The method is also called the interval halving method, the binary search method, or the dichotomy method. For polynomials, more elaborate methods exist for testing the existence of a root in an interval (Descartes' rule of signs, Sturm's theorem, Budan's theorem). They allow extending the bisection method into efficient algorithms for finding all real roots of a polynomial; see Real-root isolation. == The method == The method is applicable for numerically solving the equation f ( x ) = 0 {\displaystyle f(x)=0} for the real variable x {\displaystyle x} , where f {\displaystyle f} is a continuous function defined on an interval [ a , b ] {\displaystyle [a,b]} and where f ( a ) {\displaystyle f(a)} and f ( b ) {\displaystyle f(b)} have opposite signs. In this case a {\displaystyle a} and b {\displaystyle b} are said to bracket a root since, by the intermediate value theorem, the continuous function f {\displaystyle f} must have at least one root in the interval ( a , b ) {\displaystyle (a,b)} . At each step the method divides the interval in two parts/halves by computing the midpoint c = ( a + b ) / 2 {\displaystyle c=(a+b)/2} of the interval and the value of the function f ( c ) {\displaystyle f(c)} at that point. If c {\displaystyle c} itself is a root then the process has succeeded and stops. 
Otherwise, there are now only two possibilities: either f ( a ) {\displaystyle f(a)} and f ( c ) {\displaystyle f(c)} have opposite signs and bracket a root, or f ( c ) {\displaystyle f(c)} and f ( b ) {\displaystyle f(b)} have opposite signs and bracket a root. The method selects the subinterval that is guaranteed to be a bracket as the new interval to be used in the next step. In this way an interval that contains a zero of f {\displaystyle f} is reduced in width by 50% at each step. The process is continued until the interval is sufficiently small. Explicitly, if f ( c ) = 0 {\displaystyle f(c)=0} then c {\displaystyle c} may be taken as the solution and the process stops. Otherwise, if f ( a ) {\displaystyle f(a)} and f ( c ) {\displaystyle f(c)} have opposite signs, then the method sets c {\displaystyle c} as the new value for b {\displaystyle b} , and if f ( b ) {\displaystyle f(b)} and f ( c ) {\displaystyle f(c)} have opposite signs then the method sets c {\displaystyle c} as the new a {\displaystyle a} . In both cases, the new f ( a ) {\displaystyle f(a)} and f ( b ) {\displaystyle f(b)} have opposite signs, so the method is applicable to this smaller interval. === Stopping condition === The input for the method is a continuous function f {\displaystyle f} , an interval [ a , b ] {\displaystyle [a,b]} , and the function values f ( a ) {\displaystyle f(a)} and f ( b ) {\displaystyle f(b)} . The function values are of opposite sign (there is at least one zero crossing within the interval). Each iteration performs these steps: Calculate c {\displaystyle c} , the midpoint of the interval, : c = { a + b 2 , if a × b ≤ 0 a + b − a 2 , if a × b > 0 {\displaystyle \qquad c={\begin{cases}{\tfrac {a+b}{2}},&{\text{if }}a\times b\leq 0\\\,a+{\tfrac {b-a}{2}},&{\text{if }}a\times b>0\end{cases}}} Calculate the function value at the midpoint, f ( c ) {\displaystyle f(c)} . If convergence is satisfactory (see below), return c {\displaystyle c} and stop iterating. 
Examine the sign of f ( c ) {\displaystyle f(c)} and replace either ( a , f ( a ) ) {\displaystyle (a,f(a))} or ( b , f ( b ) ) {\displaystyle (b,f(b))} with ( c , f ( c ) ) {\displaystyle (c,f(c))} so that there is a zero crossing within the new interval. In order to determine when the iteration should stop, it is necessary to consider what is meant by the concept of 'tolerance' ( ϵ {\displaystyle \epsilon } ). Burden & Faires state: "we can select a tolerance ϵ > 0 {\displaystyle \epsilon >0} and generate c1, ..., cN until one of the following conditions is met: Unfortunately, difficulties can arise using any of these stopping criteria ... Without additional knowledge about f {\displaystyle f} or c {\displaystyle c} , inequality (2.2) is the best stopping criterion to apply because it comes closest to testing relative error." (Note: c {\displaystyle c} has been used here as it is more common than Burden and Faire's ′ p ′ {\displaystyle 'p'} .) The objective is to find an approximation, within the tolerance, to the root. It can be seen that (2.3) | f ( c N ) | < ϵ {\displaystyle |f(c_{N})|<\epsilon } does not give such an approximation unless the slope of the function at c N {\displaystyle c_{N}} is in the neighborhood of ± 1 {\displaystyle \pm 1} . Suppose, for the purpose of illustration, the tolerance ϵ = 5 × 10 − 7 {\displaystyle \epsilon =5\times 10^{-7}} . Then, for a function such as f ( x ) = 10 − m ∗ ( x − 1 ) {\displaystyle f(x)=10^{-m}*(x-1)} , | f ( c ) | = 10 − m | x − 1 | < 5 × 10 − 7 {\displaystyle |f(c)|=10^{-m}|x-1|<5\times 10^{-7}} so | x − 1 | < 5 × 10 m − 7 {\displaystyle |x-1|<5\times 10^{m-7}} This means that any number x in [ 1 − 5 × 10 m − 7 , 1 + 5 × 10 m − 7 ] {\displaystyle [1-5\times 10^{m-7},1+5\times 10^{m-7}]} would be a 'good' approximation to the root. If m = 10 {\displaystyle m=10} , the approximation to the root 1 would be in [ 1 − 5000 , 1 + 5000 ] = [ − 4999 , 5001 ] {\displaystyle [1-5000,1+5000]=[-4999,5001]} . 
-- a very poor result. As (2.3) does not appear to give acceptable results, (2.1) and (2.2) need to be evaluated. The following Python script compares the behavior for those two stopping conditions.

import numpy as np   # for np.sign

def bisect(f, a, b, tolerance):
    fa = f(a)
    fb = f(b)
    i = 0
    stop_a = []   # [c, i] once the absolute criterion is first met
    stop_r = []   # [c, i] once the relative criterion is first met
    while True:
        i += 1
        c = a + (b - a) / 2
        fc = f(c)
        if c < 10:  # small-root case
            if not stop_a:
                print('{:3d} {:18.16f} {:18.16f} {:18.16e} | {:5.2e} {:5.2e}'
                      .format(i, a, b, c, b - a, (b - a) / c))
            else:   # absolute criterion already met
                print('{:3d} {:18.16f} {:18.16f} {:18.16e} | ----- {:5.2e}'
                      .format(i, a, b, c, b - a))
        else:       # large-root case
            if not stop_r:
                print('{:3d} {:18.7f} {:18.7f} {:18.7e} | {:5.2e} {:5.2e}'
                      .format(i, a, b, c, b - a, (b - a) / c))
            else:   # relative criterion already met
                print('{:3d} {:18.7f} {:18.7f} {:18.7e} | {:5.2e} ----- '
                      .format(i, a, b, c, b - a))
        if fc == 0:
            return [c, i]
        if (b - a <= abs(c) * tolerance) and (stop_r == []):
            stop_r = [c, i]
        if (b - a <= tolerance) and (stop_a == []):
            stop_a = [c, i]
        if np.sign(fa) == np.sign(fc):
            a = c
            fa = fc
        else:
            b = c
            fb = fc
        if (stop_r != []) and (stop_a != []):
            return [stop_a, stop_r]

The first function to be tested is one with a small root i.e. 
f ( x ) = x − 0.00000000123456789 {\displaystyle f(x)=x-0.00000000123456789} print(' i a b c b - a (b - a)/c') f = lambda x: x - 0.00000000123456789 res = bisect(f, 0, 1, 5e-7) print('In {:2d} steps the absolute error case gives {:20.18F}'.format(res[0][1], res[0][0])) print('In {:2d} steps the relative error case gives {:20.18F}'.format(res[1][1], res[1][0])) print(' as the approximation to 0.00000000123456789') i a b c b - a (b - a)/c 1 0.0000000000000000 1.0000000000000000 5.0000000000000000e-01 | 1.00e+00 2.00e+00 2 0.0000000000000000 0.5000000000000000 2.5000000000000000e-01 | 5.00e-01 2.00e+00 3 0.0000000000000000 0.2500000000000000 1.2500000000000000e-01 | 2.50e-01 2.00e+00 4 0.0000000000000000 0.1250000000000000 6.2500000000000000e-02 | 1.25e-01 2.00e+00 5 0.0000000000000000 0.0625000000000000 3.1250000000000000e-02 | 6.25e-02 2.00e+00 6 0.0000000000000000 0.0312500000000000 1.5625000000000000e-02 | 3.12e-02 2.00e+00 7 0.0000000000000000 0.0156250000000000 7.8125000000000000e-03 | 1.56e-02 2.00e+00 8 0.0000000000000000 0.0078125000000000 3.9062500000000000e-03 | 7.81e-03 2.00e+00 9 0.0000000000000000 0.0039062500000000 1.9531250000000000e-03 | 3.91e-03 2.00e+00 10 0.0000000000000000 0.0019531250000000 9.7656250000000000e-04 | 1.95e-03 2.00e+00 11 0.0000000000000000 0.0009765625000000 4.8828125000000000e-04 | 9.77e-04 2.00e+00 12 0.0000000000000000 0.0004882812500000 2.4414062500000000e-04 | 4.88e-04 2.00e+00 13 0.0000000000000000 0.0002441406250000 1.2207031250000000e-04 | 2.44e-04 2.00e+00 14 0.0000000000000000 0.0001220703125000 6.1035156250000000e-05 | 1.22e-04 2.00e+00 15 0.0000000000000000 0.0000610351562500 3.0517578125000000e-05 | 6.10e-05 2.00e+00 16 0.0000000000000000 0.0000305175781250 1.5258789062500000e-05 | 3.05e-05 2.00e+00 17 0.0000000000000000 0.0000152587890625 7.6293945312500000e-06 | 1.53e-05 2.00e+00 18 0.0000000000000000 0.0000076293945312 3.8146972656250000e-06 | 7.63e-06 2.00e+00 19 0.0000000000000000 0.0000038146972656 
1.9073486328125000e-06 | 3.81e-06 2.00e+00 20 0.0000000000000000 0.0000019073486328 9.5367431640625000e-07 | 1.91e-06 2.00e+00 21 0.0000000000000000 0.0000009536743164 4.7683715820312500e-07 | 9.54e-07 2.00e+00 22 0.0000000000000000 0.0000004768371582 2.3841857910156250e-07 | 4.77e-07 2.00e+00 23 0.0000000000000000 0.0000002384185791 1.1920928955078125e-07 | ----- 2.38e-07 24 0.0000000000000000 0.0000001192092896 5.9604644775390625e-08 | ----- 1.19e-07 25 0.0000000000000000 0.0000000596046448 2.9802322387695312e-08 | ----- 5.96e-08 26 0.0000000000000000 0.0000000298023224 1.4901161193847656e-08 | ----- 2.98e-08 27 0.0000000000000000 0.0000000149011612 7.4505805969238281e-09 | ----- 1.49e-08 28 0.0000000000000000 0.0000000074505806 3.7252902984619141e-09 | ----- 7.45e-09 29 0.0000000000000000 0.0000000037252903 1.8626451492309570e-09 | ----- 3.73e-09 30 0.0000000000000000 0.0000000018626451 9.3132257461547852e-10 | ----- 1.86e-09 31 0.0000000009313226 0.0000000018626451 1.3969838619232178e-09 | ----- 9.31e-10 32 0.0000000009313226 0.0000000013969839 1.1641532182693481e-09 | ----- 4.66e-10 33 0.0000000011641532 0.0000000013969839 1.2805685400962830e-09 | ----- 2.33e-10 34 0.0000000011641532 0.0000000012805685 1.2223608791828156e-09 | ----- 1.16e-10 35 0.0000000012223609 0.0000000012805685 1.2514647096395493e-09 | ----- 5.82e-11 36 0.0000000012223609 0.0000000012514647 1.2369127944111824e-09 | ----- 2.91e-11 37 0.0000000012223609 0.0000000012369128 1.2296368367969990e-09 | ----- 1.46e-11 38 0.0000000012296368 0.0000000012369128 1.2332748156040907e-09 | ----- 7.28e-12 39 0.0000000012332748 0.0000000012369128 1.2350938050076365e-09 | ----- 3.64e-12 40 0.0000000012332748 0.0000000012350938 1.2341843103058636e-09 | ----- 1.82e-12 41 0.0000000012341843 0.0000000012350938 1.2346390576567501e-09 | ----- 9.09e-13 42 0.0000000012341843 0.0000000012346391 1.2344116839813069e-09 | ----- 4.55e-13 43 0.0000000012344117 0.0000000012346391 1.2345253708190285e-09 | ----- 2.27e-13 44 
0.0000000012345254 0.0000000012346391 1.2345822142378893e-09 | ----- 1.14e-13 45 0.0000000012345254 0.0000000012345822 1.2345537925284589e-09 | ----- 5.68e-14 46 0.0000000012345538 0.0000000012345822 1.2345680033831741e-09 | ----- 2.84e-14 47 0.0000000012345538 0.0000000012345680 1.2345608979558165e-09 | ----- 1.42e-14 48 0.0000000012345609 0.0000000012345680 1.2345644506694953e-09 | ----- 7.11e-15 49 0.0000000012345645 0.0000000012345680 1.2345662270263347e-09 | ----- 3.55e-15 50 0.0000000012345662 0.0000000012345680 1.2345671152047544e-09 | ----- 1.78e-15 51 0.0000000012345671 0.0000000012345680 1.2345675592939642e-09 | ----- 8.88e-16 52 0.0000000012345676 0.0000000012345680 1.2345677813385691e-09 | ----- 4.44e-16 In 22 steps the absolute error case gives 0.000000238418579102 In 52 steps the relative error case gives 0.000000001234567781 as the approximation to 0.00000000123456789 The reason that the absolute difference method gives such a poor result is that it measures 'decimal places' of accuracy - but those decimal places may contain only 0's so have no useful information. That means that the 6 zeros after the decimal point in 0.000000238418579102 match the first 6 in 0.00000000123456789 so the absolute difference is less than ϵ = 5 × 10 − 7 {\displaystyle \epsilon =5\times 10^{-7}} . On the other hand, the relative difference method measures 'significant digits' and represents a much better approximation to the position of the root. 
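The contrast between decimal-place and significant-digit agreement can be quantified with a short sketch (illustrative only; the significant_digits helper is this sketch's own, and the hard-coded values are the two results printed above):

```python
from math import log10, floor

true_root = 0.00000000123456789
abs_result = 0.000000238418579102    # from the absolute-difference stop (22 steps)
rel_result = 0.000000001234567781    # from the relative-difference stop (52 steps)

def significant_digits(approx, true):
    # Number of agreeing significant digits, measured via the relative error.
    rel_err = abs(approx - true) / abs(true)
    return max(0, floor(-log10(rel_err))) if rel_err else float('inf')

print(significant_digits(abs_result, true_root))  # no significant agreement at all
print(significant_digits(rel_result, true_root))  # several significant digits
```

The absolute-difference result agrees with the root only in leading zeros, while the relative-difference result matches it to about seven significant digits.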
The next example is print(' i a b c b - a (b - a)/c') res = bisect(fun, 1234550, 1234581, 5e-7) print('In %2d steps the absolute error case gives %20.18F' % (res[0][1], res[0][0])) print('In %2d steps the relative error case gives %20.18F' % (res[1][1], res[1][0])) print(' as the approximation to 1234567.89012456789') i a b c b - a (b - a)/c 1 1234550.0000000 1234581.0000000 1.2345655e+06 | 3.10e+01 2.51e-05 2 1234565.5000000 1234581.0000000 1.2345732e+06 | 1.55e+01 1.26e-05 3 1234565.5000000 1234573.2500000 1.2345694e+06 | 7.75e+00 6.28e-06 4 1234565.5000000 1234569.3750000 1.2345674e+06 | 3.88e+00 3.14e-06 5 1234567.4375000 1234569.3750000 1.2345684e+06 | 1.94e+00 1.57e-06 6 1234567.4375000 1234568.4062500 1.2345679e+06 | 9.69e-01 7.85e-07 7 1234567.4375000 1234567.9218750 1.2345677e+06 | 4.84e-01 3.92e-07 8 1234567.6796875 1234567.9218750 1.2345678e+06 | 2.42e-01 ----- 9 1234567.8007812 1234567.9218750 1.2345679e+06 | 1.21e-01 ----- 10 1234567.8613281 1234567.9218750 1.2345679e+06 | 6.05e-02 ----- 11 1234567.8613281 1234567.8916016 1.2345679e+06 | 3.03e-02 ----- 12 1234567.8764648 1234567.8916016 1.2345679e+06 | 1.51e-02 ----- 13 1234567.8840332 1234567.8916016 1.2345679e+06 | 7.57e-03 ----- 14 1234567.8878174 1234567.8916016 1.2345679e+06 | 3.78e-03 ----- 15 1234567.8897095 1234567.8916016 1.2345679e+06 | 1.89e-03 ----- 16 1234567.8897095 1234567.8906555 1.2345679e+06 | 9.46e-04 ----- 17 1234567.8897095 1234567.8901825 1.2345679e+06 | 4.73e-04 ----- 18 1234567.8899460 1234567.8901825 1.2345679e+06 | 2.37e-04 ----- 19 1234567.8900642 1234567.8901825 1.2345679e+06 | 1.18e-04 ----- 20 1234567.8901234 1234567.8901825 1.2345679e+06 | 5.91e-05 ----- 21 1234567.8901234 1234567.8901529 1.2345679e+06 | 2.96e-05 ----- 22 1234567.8901234 1234567.8901381 1.2345679e+06 | 1.48e-05 ----- 23 1234567.8901234 1234567.8901308 1.2345679e+06 | 7.39e-06 ----- 24 1234567.8901234 1234567.8901271 1.2345679e+06 | 3.70e-06 ----- 25 1234567.8901234 1234567.8901252 1.2345679e+06 | 1.85e-06 
----- 26 1234567.8901243 1234567.8901252 1.2345679e+06 | 9.24e-07 ----- 27 1234567.8901243 1234567.8901248 1.2345679e+06 | 4.62e-07 ----- In 27 steps the absolute error case gives 1234567.890124522149562836 In 7 steps the relative error case gives 1234567.679687500000000000 as the approximation to 1234567.89012456789 In this case, the absolute difference tries to get 6 decimal places even though there are 7 digits before the decimal point. The relative difference gives 7 significant digits - all before the decimal point. These two examples show that the relative difference method produces much more satisfactory results than does the absolute difference method. A common idea used in algorithms for the bisection method is to do a computation to predetermine the number of steps required to achieve a desired accuracy. This is done by noting that, after n {\displaystyle n} bisections, the maximum difference between the root and the approximation is | c n − c | ≤ | b − a | 2 n < ϵ . {\displaystyle |c_{n}-c|\leq {\frac {|b-a|}{2^{n}}}<\epsilon .} This formula has been used to determine, in advance, an upper bound on the number of iterations that the bisection method needs to converge to a root within a certain number of decimal places. The number n of iterations needed to achieve such a required tolerance ε is bounded by n ≤ ⌈ log 2 ⁡ ( b − a ϵ ) ⌉ {\displaystyle n\leq \left\lceil \log _{2}\left({\frac {b-a}{\epsilon }}\right)\right\rceil } The problem is that the number of iterations is determined by using the absolute difference method and hence should not be applied. An alternative approach has been suggested by MIT: http://web.mit.edu/10.001/Web/Tips/Converge.htm Convergence Tests, RTOL and ATOL Tolerances are usually specified as either a relative tolerance RTOL or an absolute tolerance ATOL, or both. 
The user typically desires that | True value -- Computed value | < RTOL*|True Value| + ATOL (Eq.1) where the RTOL controls the number of significant figures in the computed value (a float or a double), and a small ATOL is just a "safety net" for the case where True Value is close to zero. (What would happen if ATOL = 0 and True Value = 0? Would the convergence test ever be satisfied?) You should write your programs to take both RTOL and ATOL as inputs." If the 'True Value' is large, then the 'RTOL' term will control the error, so this would help in that case. If the 'True Value' is small, then the error will be controlled by ATOL - this will make things worse. The question "(What would happen if ATOL = 0 and True Value = 0? Would the convergence test ever be satisfied?)" is asked, but no attempt is made to answer it. The answer to this question will follow. == IEEE Standard-754 for Computer Arithmetic == If the algorithm is being used in the real number system, it is possible to continue the bisection until the relative error produces the desired approximation. If the algorithm is used with computer arithmetic, a further problem arises. In order to improve reliability and portability, the Institute of Electrical and Electronics Engineers (IEEE) produced a standard for floating point arithmetic in 1985 and revised it in 2008 and 2019; see IEEE 754. The IEEE Standard 754 representation is the standard used in most micro-computers. It is, for example, the basis of the PC floating point processor. Double-precision numbers occupy 64 bits which are divided into a sign bit (+/-), an exponent of 10 bits, and a fractional part of 53 bits. In order to allow for fractions (negative exponents), the exponent is biased to make the effective number of bits for the exponent 9. The effective values of the exponent with 0 < e ≤ 1023 would be ( 2 − 511 , 2 512 ) {\displaystyle (2^{-511},2^{512})} making the double precision numbers take the form ( − 1 ) s 2 e − 511 0. 
f {\displaystyle (-1)^{s}2^{e-511}0.f} The extreme range for a positive DP number would then be ( 1.492 × 10 − 154 , 1.341 × 10 154 ) {\displaystyle (1.492\times 10^{-154},1.341\times 10^{154})} Because the fraction would normally have a non-zero leading digit (a 1 for binary) that bit does not need to be stored as the processor will supply it. As a result, the 53 bit fraction can be stored in 52 bits so the other bit can be used in the exponent to give an actual range of 0 < e ≤ 2047. The range can be further extended by putting the assumed 1 before the binary point. If both the exponent and fraction are 0, then the number is 0 (with a sign). In order to deal with 3 other extreme situations, an exponent of 2047 is reserved for NaN (Not a Number - such as division by 0) and the infinities. A number is thus stored in the following form: The following are examples of some double precision numbers: The first one (decimal 3) illustrates that 3 (binary 11) has a single one In the fraction part - the other 1 is assumed. The second one Is an example for which the exponent is 2047 ( + ∞ ) {\displaystyle (+\infty )} . The third one gives the largest number which can be represented in double precision arithmetic. Note that 1.7976931348623157e+308 + 0.0000000000000001e+308 = inf The next one, the minimum normal, represents the smallest number that can be used with full double precision. The maximum subnormal and the minimum subnormal represent a range of numbers that have less than full double precision. It is the minimum subnormal, that is crucial for the bisection algorithm. If b − a < 9.8813129168249309 × 10 − 324 {\displaystyle b-a<9.8813129168249309\times 10^{-324}} (2 X the min.subnormal) the interval can not be divided and the process must stop. 
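This floor can be checked directly in Python (a sketch; math.ulp requires Python 3.9 or later):

```python
import math
import sys

min_subnormal = math.ulp(0.0)      # smallest positive double, about 4.94e-324

# Halving the minimum subnormal rounds to zero, so the interval
# [0, min_subnormal] contains no representable point strictly between
# its endpoints: the computed "midpoint" collapses onto an endpoint.
a, b = 0.0, min_subnormal
c = a + (b - a) / 2
assert c == a

# Hence once b - a <= 2 * min_subnormal (the 9.8813129168249309e-324
# bound quoted above), the interval cannot be usefully divided further.
print(2 * min_subnormal)

# For comparison, the smallest normal double:
print(sys.float_info.min)          # 2.2250738585072014e-308
```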
== Algorithm ==

import numpy as np

def bisect(f, a, b, tol, bound=9.8813129168249309e-324):
    # input: function f,
    #        endpoint values a, b,
    #        tolerance tol (if tol = 5e-t and bound = 9.0e-324 the function
    #           returns t significant digits for a root between the
    #           minimum normal and the maximum normal),
    #        bound (if bound = 9.8813129168249309e-324, the algorithm continues
    #           until the interval cannot be further divided; a larger value
    #           may result in termination before t digits are found).
    # conditions: f is a continuous function on the interval [a, b],
    #        a < b, and f(a)*f(b) < 0.
    # output: [root, iterations, convergence, termination condition]
    if b <= a:
        return [float("nan"), 0, "No convergence", "b < a"]
    fa = f(a)
    fb = f(b)
    if np.sign(fa) == np.sign(fb):
        return [float("nan"), 0, "No convergence", "f(a)*f(b) > 0"]
    en = 0
    while en < 2200:
        en += 1
        if np.sign(a) == np.sign(b):  # avoid overflow in a + b
            c = a + (b - a)/2
        else:
            c = (a + b)/2
        fc = f(c)
        if b - a <= bound:
            return [c, en, "No convergence", "Bound reached"]
        if fc == 0:
            return [c, en, "Converged", "f(c) = 0"]
        if b - a <= abs(c)*tol:
            return [c, en, "Converged", "Tolerance"]
        if np.sign(fa) == np.sign(fc):
            a = c
            fa = fc
        else:
            b = c
    return [float("nan"), en, "No convergence", "Bad function"]

The first two examples test for incorrect input values:

1  bisect(lambda x: x - 1, 5, 1, 5.000000e-15, 9.8813129168249309e-324)
   Approx. root = nan
   No convergence after 0 iterations with termination b < a
   Final interval [nan, nan]

2  bisect(lambda x: x - 1, 5, 7, 5.000000e-15, 9.8813129168249309e-324)
   Approx. root = nan
   No convergence after 0 iterations with termination f(a)*f(b) > 0
   Final interval [nan, nan]

Large roots:

3  bisect(lambda x: x - 12345678901.23456, 0, 1.23457e+14, 5.000000e-15, 9.8813129168249309e-324)
   Approx.
root = 12345678901.23454
   Converged after 62 iterations with termination Tolerance
   Final interval [1.2345678901234526e+10, 1.2345678901234552e+10]

4  bisect(lambda x: x - 1.23456789012456e+100, 0, 2e+100, 5.000000e-15, 9.8813129168249309e-324)
   Approx. root = 1.234567890124561e+100
   Converged after 50 iterations with termination Tolerance
   Final interval [1.2345678901245599e+100, 1.2345678901245619e+100]

The final interval is computed as [c − w/2, c + w/2] where w = b − a 2 n {\displaystyle w={\frac {b-a}{2^{n}}}} . This gives a good measure of the accuracy of the approximation.

Root near the maximum:

5  bisect(lambda x: x - 1.234567890123456e+307, 0, 1e+308, 5.000000e-15, 9.8813129168249309e-324)
   Approx. root = 1.234567890123454e+307
   Converged after 52 iterations with termination Tolerance
   Final interval [1.2345678901234535e+307, 1.2345678901234555e+307]

Small roots:

6  bisect(lambda x: x - 1.234567890123456e-05, 0, 1, 5.000000e-15, 9.8813129168249309e-324)
   Approx. root = 1.234567890123455e-05
   Converged after 65 iterations with termination Tolerance
   Final interval [1.2345678901234537e-05, 1.2345678901234564e-05]

7  bisect(lambda x: x - 1.234567890123456e-100, 0, 1, 5.000000e-15, 9.8813129168249309e-324)
   Approx. root = 1.234567890123454e-100
   Converged after 381 iterations with termination Tolerance
   Final interval [1.2345678901234532e-100, 1.2345678901234552e-100]

Example 8 is beyond the minimum normal but gives a fairly good result because the approximation has a small interval. Calculations for values in the subnormal range can produce unexpected results.

8  bisect(lambda x: x - 1.234567890123457e-310, 0, 1, 5.000000e-15, 9.8813129168249309e-324)
   Approx. root = 1.234567890123457e-310
   Converged after 1071 iterations with termination f(c) = 0
   Final interval [1.2345678901232595e-310, 1.2345678901236548e-310]

If the return state is 'f(c) = 0', then the desired tolerance may not have been achieved.
This can be checked by relaxing the tolerance (increasing tol) until a return state of 'Tolerance' is achieved.

8a bisect(lambda x: x - 1.234567890123457e-310, 0, 1, 5.000000e-13)
   Approx. root = 1.234567890123457e-310
   Converged after 1071 iterations with termination f(c) = 0
   Final interval [1.2345678901232595e-310, 1.2345678901236548e-310]

8b bisect(lambda x: x - 1.234567890123457e-310, 0, 1, 5.000000e-12)
   Approx. root = 1.234567890124643e-310
   Converged after 1069 iterations with termination Tolerance
   Final interval [1.2345678901238524e-310, 1.2345678901254334e-310]

8b shows that the result has 12 correct digits. Even though the root is outside the 'normal' range, it may still be possible to achieve results with good tolerance.

9  bisect(lambda x: x - 1.234567891003685e-315, 0, 1, 5.000000e-03, 9.8813129168249309e-324)
   Approx. root = 1.23558592808891e-315
   Converged after 1055 iterations with termination Tolerance
   Final interval [1.2342907646422757e-315, 1.2368810915355439e-315]

Example 10 shows the maximum number of iterations that should be expected:

10 bisect(lambda x: x - 1.234567891003685e-315, -1e+307, 1e+307, 5.000000e-15, 9.8813129168249309e-324)
   Approx. root = 1.234567891003685e-315
   Converged after 2093 iterations with termination f(c) = 0
   Final interval [1.2345678910036845e-315, 1.2345678910036845e-315]

There may be situations in which a 'good' approximation is not required. This can be achieved by raising the bound:

11 bisect(lambda x: x - 1.234567890123457e-100, 0, 1, 5.000000e-15, 4.9999999999999997e-12)
   Approx. root = 5e-12
   No convergence after 39 iterations with termination Bound reached
   Final interval [4.0905052982270715e-12, 5.9094947017729279e-12]

Evaluation of the final interval may assist in determining accuracy.
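The iteration counts in these examples can be anticipated in advance: each step halves the interval, so shrinking an initial width b − a down to a width w takes about log2((b − a)/w) steps. A small sketch of this estimate (added for illustration):

```python
import math

def max_iterations(a, b, width):
    """Halvings needed to shrink [a, b] to the given width.  The logarithms
    are subtracted rather than dividing the widths directly, since the
    ratio itself can overflow for extreme inputs."""
    return math.ceil(math.log2(b - a) - math.log2(width))

# Example 10 above: [-1e+307, 1e+307] bisected until the interval can no
# longer be split (2 x the minimum subnormal):
print(max_iterations(-1e307, 1e307, 9.8813129168249309e-324))
```

The result is a little under 2100, which is why the loop limit of 2200 in the algorithm above is never the binding constraint.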
The following shows the behavior of subnormal numbers and how the significant digits are lost:

print(1.234567890123456e-310)   1.23456789012346e-310
print(1.234567890123456e-312)   1.234567890124e-312
print(1.234567890123456e-315)   1.23456789e-315
print(1.234567890123456e-317)   1.234568e-317
print(1.234567890123456e-319)   1.23457e-319
print(1.234567890123456e-321)   1.235e-321
print(1.234567890123456e-323)   1e-323
print(1.234567890123456e-324)   0.0

These examples show that this method gives 15-digit accuracy for functions of the form f ( x ) = ( x − r ) g ( x ) {\displaystyle f(x)=(x-r)g(x)} for all r {\displaystyle r} in the range of normal numbers.

== Higher order roots ==

Further problems can arise from the use of computer arithmetic for higher-order roots. To help in considering how to detect and correct inaccurate results, consider the following:

bisect(lambda x: (x - 1.23456789012345e-100), 0, 1, 5e-15)
   Approx. root = 1.23456789012345e-100
   Converged after 381 iterations with termination f(c) = 0
   Final interval [1.2345678901234491e-100, 1.2345678901234511e-100]

The final interval [1.2345678901234491e-100, 1.2345678901234511e-100] indicates fairly good accuracy. The bisection method has a distinct advantage over other root-finding techniques in that the final interval can be used to determine the accuracy of the final solution. This information will be useful in assessing the accuracy of some of the following examples. Next consider what happens for a root of order 3:

bisect(lambda x: (x - 1.23456789012345e-100)**3, 0, 1, 5e-15)
   Approx. root = 1.234567898094279e-100
   Converged after 357 iterations with termination f(c) = 0
   Final interval [1.2345678810624394e-100, 1.2345679151261181e-100]

The final interval [1.2345678810624394e-100, 1.2345679151261181e-100] indicates that 15 digits have not been returned.
The relative error (1.234567898094279e-100 - 1.23456789012345e-100)/1.23456789012345e-100 = 6.456371473106003e-09 shows that only 8 digits are correct, and again f ( c ) = 0 {\displaystyle f(c)=0} . This occurs because f ( a p p r o x . r o o t ) = f ( 1.234567898094279 × 10 − 100 ) = ( 1.234567898094279 × 10 − 100 − 1.23456789012345 × 10 − 100 ) 3 = ( 7.970828885817127 × 10 − 109 ) 3 = 5.064195 × 10 2 × 10 − 327 = 5.064195 × 10 − 325 {\displaystyle {\begin{aligned}f(approx.root)&=f(1.234567898094279\times 10^{-100})\\&=(1.234567898094279\times 10^{-100}-1.23456789012345\times 10^{-100})^{3}\\&=(7.970828885817127\times 10^{-109})^{3}\\&=5.064195\times 10^{2}\times 10^{-327}\\&=5.064195\times 10^{-325}\end{aligned}}} Because this is less than the minimum subnormal, the computed value underflows to 0. This can occur in any root-finding technique, not just the bisection method; the problem can be diagnosed only because the returned termination condition records which stopping criterion was met. The use of the relative error as a stopping condition allows us to determine how accurate a solution can be obtained.
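The underflow responsible for the premature 'f(c) = 0' can be reproduced on its own; a short sketch (added for illustration) of the arithmetic above:

```python
residual = 7.970828885817127e-109      # approx. root minus true root
cubed = residual * residual * residual

# The exact cube, ~5.06e-325, is smaller than the minimum subnormal
# 4.9406564584124654e-324, so the product rounds to exactly zero.
print(cubed)        # 0.0
print(5e-324 > 0)   # True: the minimum subnormal itself is representable
```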
Consider what happens on trying to achieve 8 significant figures:

bisect(lambda x: (x - 1.23456789012345e-100)**3, 0, 1, 5e-8)
   [1.2345678980942788e-100, 357, 'Converged', 'f(c) = 0']

The state 'f(c) = 0' indicates that eight digits of accuracy have not been achieved, so try

bisect(lambda x: (x - 1.23456789012345e-100)**3, 0, 1, 5e-4)
   [1.2347947281308757e-100, 344, 'Converged', 'Tolerance']

At least four digits have been achieved, and

bisect(lambda x: (x - 1.23456789012345e-100)**3, 0, 1, 5e-6)
   [1.2345658202098768e-100, 351, 'Converged', 'Tolerance']   -- 6 digit convergence

bisect(lambda x: (x - 1.23456789012345e-100)**3, 0, 1, 5e-7)
   [1.2345677277758852e-100, 354, 'Converged', 'Tolerance']   -- 7 digit convergence

A similar problem can arise if there are two small roots close together:

bisect(lambda x: (x - 1.23456789012345e-23)*x, 1e-300, 1, 5e-15)
   [1.2345678901234481e-23, 125, 'Converged', 'Tolerance']   -- 15 digit convergence

bisect(lambda x: (x - 1.23456789012345e-24)*x, 1e-300, 1e-20, 5e-1)
   [1.5509016039626554e-300, 931, 'Converged', 'f(c) = 0']
   Final interval [1.2754508019813276e-300, 1.8263524059439830e-300]
   relative error = 3.5521376891678086e-1   -- 1 digit convergence

bisect(lambda x: (x - 1.23456789012345e-23)*x, 1e-300, 1, 5e-1)
   [1.1580528575742387e-23, 79, 'Converged', 'Tolerance']
   Final interval [1.0753347963189360e-23, 1.2407709188295415e-23]
   relative error = 1.4285714285714285e-1   -- 1 digit convergence

== Generalization to higher dimensions ==

The bisection method has been generalized to multi-dimensional functions. Such methods are called generalized bisection methods.
=== Methods based on degree computation === Some of these methods are based on computing the topological degree, which for a bounded region Ω ⊆ R n {\displaystyle \Omega \subseteq \mathbb {R} ^{n}} and a differentiable function f : R n → R n {\displaystyle f:\mathbb {R} ^{n}\rightarrow \mathbb {R} ^{n}} is defined as a sum over its roots: deg ⁡ ( f , Ω ) := ∑ y ∈ f − 1 ( 0 ) sgn ⁡ det ( D f ( y ) ) {\displaystyle \deg(f,\Omega ):=\sum _{y\in f^{-1}(\mathbf {0} )}\operatorname {sgn} \det(Df(y))} , where D f ( y ) {\displaystyle Df(y)} is the Jacobian matrix, 0 = ( 0 , 0 , . . . , 0 ) T {\displaystyle \mathbf {0} =(0,0,...,0)^{T}} , and sgn ⁡ ( x ) = { 1 , x > 0 0 , x = 0 − 1 , x < 0 {\displaystyle \operatorname {sgn}(x)={\begin{cases}1,&x>0\\0,&x=0\\-1,&x<0\\\end{cases}}} is the sign function. In order for a root to exist, it is sufficient that deg ⁡ ( f , Ω ) ≠ 0 {\displaystyle \deg(f,\Omega )\neq 0} , and this can be verified using a surface integral over the boundary of Ω {\displaystyle \Omega } . === Characteristic bisection method === The characteristic bisection method uses only the signs of a function at different points. Let f be a function from R^d to R^d, for some integer d ≥ 2. A characteristic polyhedron (also called an admissible polygon) of f is a polytope in R^d, having 2^d vertices, such that in each vertex v, the combination of signs of f(v) is unique and the topological degree of f on its interior is not zero (a necessary criterion to ensure the existence of a root). For example, for d = 2, a characteristic polyhedron of f is a quadrilateral with vertices (say) A, B, C, D, such that:

sgn ⁡ f ( A ) = ( − , − ) {\displaystyle \operatorname {sgn} f(A)=(-,-)} , that is, f1(A)<0, f2(A)<0.
sgn ⁡ f ( B ) = ( − , + ) {\displaystyle \operatorname {sgn} f(B)=(-,+)} , that is, f1(B)<0, f2(B)>0.
sgn ⁡ f ( C ) = ( + , − ) {\displaystyle \operatorname {sgn} f(C)=(+,-)} , that is, f1(C)>0, f2(C)<0.
sgn ⁡ f ( D ) = ( + , + ) {\displaystyle \operatorname {sgn} f(D)=(+,+)} , that is, f1(D)>0, f2(D)>0.

A proper edge of a characteristic polygon is an edge between a pair of vertices such that the sign vectors differ by only a single sign. In the above example, the proper edges of the characteristic quadrilateral are AB, AC, BD and CD. A diagonal is a pair of vertices such that the sign vectors differ by all d signs. In the above example, the diagonals are AD and BC. At each iteration, the algorithm picks a proper edge of the polyhedron (say, A—B), and computes the signs of f in its mid-point (say, M). Then it proceeds as follows: If sgn ⁡ f ( M ) = sgn ⁡ ( A ) {\displaystyle \operatorname {sgn} f(M)=\operatorname {sgn}(A)} , then A is replaced by M, and we get a smaller characteristic polyhedron. If sgn ⁡ f ( M ) = sgn ⁡ ( B ) {\displaystyle \operatorname {sgn} f(M)=\operatorname {sgn}(B)} , then B is replaced by M, and we get a smaller characteristic polyhedron. Else, we pick a new proper edge and try again. Suppose the diameter (= length of the longest proper edge) of the original characteristic polyhedron is D. Then, at least log 2 ⁡ ( D / ε ) {\displaystyle \log _{2}(D/\varepsilon )} bisections of edges are required so that the diameter of the remaining polygon will be at most ε (Lemma 4.7). If the topological degree of the initial polyhedron is not zero, then there is a procedure that can choose an edge such that the next polyhedron also has nonzero degree.

== See also ==

Binary search algorithm
Lehmer–Schur algorithm, generalization of the bisection method in the complex plane
Nested intervals

== References ==

Burden, Richard L.; Faires, J. Douglas (2014). "2.1 The Bisection Algorithm". Numerical Analysis (10th ed.). Cengage Learning. ISBN 978-0-87150-857-7.

== Further reading ==

Corliss, George (1977). "Which root does the bisection algorithm find?". SIAM Review. 19 (2): 325–327. doi:10.1137/1019044. ISSN 1095-7200.
Kaw, Autar; Kalu, Egwu (2008).
Numerical Methods with Applications (1st ed.). Archived from the original on 2009-04-13. == External links == Weisstein, Eric W. "Bisection". MathWorld. Bisection Method Notes, PPT, Mathcad, Maple, Matlab, Mathematica from Holistic Numerical Methods Institute
Wikipedia/Bisection_method
In mathematics, and more specifically in numerical analysis, Householder's methods are a class of root-finding algorithms that are used for functions of one real variable with continuous derivatives up to some order d + 1. Each of these methods is characterized by the number d, which is known as the order of the method. The algorithm is iterative and has an order of convergence of d + 1. These methods are named after the American mathematician Alston Scott Householder. The case of d = 1 corresponds to Newton's method; the case of d = 2 corresponds to Halley's method. == Method == Householder's method is a numerical algorithm for solving the equation f(x) = 0. In this case, the function f has to be a function of one real variable. The method consists of a sequence of iterations x n + 1 = x n + d ( 1 / f ) ( d − 1 ) ( x n ) ( 1 / f ) ( d ) ( x n ) {\displaystyle x_{n+1}=x_{n}+d\;{\frac {\left(1/f\right)^{(d-1)}(x_{n})}{\left(1/f\right)^{(d)}(x_{n})}}} beginning with an initial guess x0. If f is a d + 1 times continuously differentiable function and a is a zero of f but not of its derivative, then, in a neighborhood of a, the iterates xn satisfy: | x n + 1 − a | ≤ K ⋅ | x n − a | d + 1 {\displaystyle |x_{n+1}-a|\leq K\cdot {|x_{n}-a|}^{d+1}} , for some K > 0. {\displaystyle K>0.\!} This means that the iterates converge to the zero if the initial guess is sufficiently close, and that the convergence has order d + 1 or better. Furthermore, when close enough to a, it commonly is the case that x n + 1 − a ≈ C ( x n − a ) d + 1 {\displaystyle x_{n+1}-a\approx C(x_{n}-a)^{d+1}} for some C ≠ 0 {\displaystyle C\neq 0} . In particular, if d + 1 is even and C > 0 then convergence to a will be from values greater than a; if d + 1 is even and C < 0 then convergence to a will be from values less than a; if d + 1 is odd and C > 0 then convergence to a will be from the side where it starts; and if d + 1 is odd and C < 0 then convergence to a will alternate sides. 
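The stated order of convergence can be checked numerically. The sketch below (an added illustration) applies the d = 2 member, Halley's method, to f(x) = x² − 2 and watches the error being roughly cubed at each step:

```python
import math

def halley_step(x):
    """One Halley step (Householder's method with d = 2) for f(x) = x^2 - 2:
    x - 2 f f' / (2 f'^2 - f f'')."""
    f, fp, fpp = x*x - 2.0, 2.0*x, 2.0
    return x - 2.0*f*fp / (2.0*fp*fp - f*fpp)

x, a = 1.0, math.sqrt(2.0)
for _ in range(3):
    x = halley_step(x)
    print(abs(x - a))   # errors fall roughly like e -> C * e**3
```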
Despite their order of convergence, these methods are not widely used because the gain in precision is not commensurate with the rise in effort for large d. The Ostrowski index expresses the error reduction in the number of function evaluations instead of the iteration count. For polynomials, the evaluation of the first d derivatives of f at xn using Horner's method has an effort of d + 1 polynomial evaluations. Since n(d + 1) evaluations over n iterations give an error exponent of (d + 1)n, the exponent for one function evaluation is d + 1 d + 1 {\displaystyle {\sqrt[{d+1}]{d+1}}} , numerically 1.4142, 1.4422, 1.4142, 1.3797 for d = 1, 2, 3, 4, and falling after that. By this criterion, the d=2 case (Halley's method) is the optimal value of d. For general functions the derivative evaluation using the Taylor arithmetic of automatic differentiation requires the equivalent of (d + 1)(d + 2)/2 function evaluations. One function evaluation thus reduces the error by an exponent of d + 1 ( d + 1 ) ( d + 2 ) 2 {\displaystyle {\sqrt[{\frac {(d+1)(d+2)}{2}}]{d+1}}} , which is 2 3 ≈ 1.2599 {\displaystyle {\sqrt[{3}]{2}}\approx 1.2599} for Newton's method, 3 6 ≈ 1.2009 {\displaystyle {\sqrt[{6}]{3}}\approx 1.2009} for Halley's method and falling towards 1 or linear convergence for the higher order methods. == Motivation == === First approach === Suppose f is analytic in a neighborhood of a and f(a) = 0. Then f has a Taylor series at a and its constant term is zero. Because this constant term is zero, the function f(x) / (x − a) will have a Taylor series at a and, when f ′ (a) ≠ 0, its constant term will not be zero. Because that constant term is not zero, it follows that the reciprocal (x − a) / f(x) has a Taylor series at a, which we will write as ∑ k = 0 ∞ c k ( x − a ) k k ! {\displaystyle \sum _{k=0}^{\infty }{\frac {c_{k}(x-a)^{k}}{k!}}} and its constant term c0 will not be zero. 
Using that Taylor series we can write 1 f = c 0 x − a + ∑ k = 1 ∞ c k ( x − a ) k − 1 k ( k − 1 ) ! . {\displaystyle {\frac {1}{f}}={\frac {c_{0}}{x-a}}+\sum _{k=1}^{\infty }{\frac {c_{k}(x-a)^{k-1}}{k~(k-1)!}}\,.} When we compute its d-th derivative, we note that the terms for k = 1, ..., d conveniently vanish: ( 1 f ) ( d ) = ( − 1 ) d d ! c 0 ( x − a ) d + 1 + ∑ k = d + 1 ∞ c k ( x − a ) k − d − 1 k ( k − d − 1 ) ! {\displaystyle \left({\frac {1}{f}}\right)^{(d)}={\frac {(-1)^{d}d!~c_{0}}{(x-a)^{d+1}}}+\sum _{k=d+1}^{\infty }{\frac {c_{k}(x-a)^{k-d-1}}{k~(k-d-1)!}}} = ( − 1 ) d d ! c 0 ( x − a ) d + 1 ( 1 + 1 ( − 1 ) d d ! c 0 ∑ k = d + 1 ∞ c k ( x − a ) k k ( k − d − 1 ) ! ) {\displaystyle ={\frac {(-1)^{d}d!~c_{0}}{(x-a)^{d+1}}}\left(1+{\frac {1}{(-1)^{d}d!~c_{0}}}\sum _{k=d+1}^{\infty }{\frac {c_{k}(x-a)^{k}}{k~(k-d-1)!}}\right)} = ( − 1 ) d d ! c 0 ( x − a ) d + 1 ( 1 + O ( ( x − a ) d + 1 ) ) , {\displaystyle ={\frac {(-1)^{d}d!~c_{0}}{(x-a)^{d+1}}}\left(1+{\mathcal {O}}\left((x-a)^{d+1}\right)\right)\,,} using big O notation. We thus get that the correction term that we add to x = xn to get a value of xn+1 that is closer to a is: d ( 1 / f ) ( d − 1 ) ( 1 / f ) ( d ) = d ( − 1 ) d − 1 ( d − 1 ) ! c 0 ( − 1 ) d d ! c 0 ( x − a ) ( 1 + O ( ( x − a ) d ) 1 + O ( ( x − a ) d + 1 ) ) {\displaystyle d~{\frac {(1/f)^{(d-1)}}{(1/f)^{(d)}}}=d~{\frac {(-1)^{d-1}(d-1)!~c_{0}}{(-1)^{d}d!~c_{0}}}(x-a)\left({\frac {1+{\mathcal {O}}\left((x-a)^{d}\right)}{1+{\mathcal {O}}\left((x-a)^{d+1}\right)}}\right)} = − ( ( x − a ) + O ( ( x − a ) d + 1 ) ) . {\displaystyle =-\left((x-a)+{\mathcal {O}}\left((x-a)^{d+1}\right)\right)\,.} Thus, x + d ( 1 / f ) ( d − 1 ) ( 1 / f ) ( d ) {\displaystyle x+d~{\frac {(1/f)^{(d-1)}}{(1/f)^{(d)}}}} goes to a + O ( ( x − a ) d + 1 ) {\displaystyle a+{\mathcal {O}}\left((x-a)^{d+1}\right)} . === Second approach === Suppose x = a is a simple root. Then near x = a, (1/f)(x) is a meromorphic function.
Suppose we have the Taylor expansion: ( 1 / f ) ( x ) = ∑ d = 0 ∞ ( 1 / f ) ( d ) ( b ) d ! ( x − b ) d {\displaystyle (1/f)(x)=\sum _{d=0}^{\infty }{\frac {(1/f)^{(d)}(b)}{d!}}(x-b)^{d}} around a point b that is closer to a than it is to any other zero of f. By König's theorem, we have: a − b = lim d → ∞ ( 1 / f ) ( d − 1 ) ( b ) ( d − 1 ) ! ( 1 / f ) ( d ) ( b ) d ! = d ( 1 / f ) ( d − 1 ) ( b ) ( 1 / f ) ( d ) ( b ) . {\displaystyle a-b=\lim _{d\rightarrow \infty }{\frac {\frac {(1/f)^{(d-1)}(b)}{(d-1)!}}{\frac {(1/f)^{(d)}(b)}{d!}}}=d{\frac {(1/f)^{(d-1)}(b)}{(1/f)^{(d)}(b)}}.} These suggest that Householder's iteration might be a good convergent iteration. The actual proof of the convergence is also based on these ideas. == The methods of lower order == Householder's method of order 1 is just Newton's method, since: x n + 1 = x n + 1 ( 1 / f ) ( x n ) ( 1 / f ) ( 1 ) ( x n ) = x n + 1 f ( x n ) ⋅ ( − f ′ ( x n ) f ( x n ) 2 ) − 1 = x n − f ( x n ) f ′ ( x n ) . {\displaystyle {\begin{array}{rl}x_{n+1}=&x_{n}+1\,{\frac {\left(1/f\right)(x_{n})}{\left(1/f\right)^{(1)}(x_{n})}}\\[.7em]=&x_{n}+{\frac {1}{f(x_{n})}}\cdot \left({\frac {-f'(x_{n})}{f(x_{n})^{2}}}\right)^{-1}\\[.7em]=&x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}.\end{array}}} For Householder's method of order 2 one gets Halley's method, since the identities ( 1 / f ) ′ ( x ) = − f ′ ( x ) f ( x ) 2 {\displaystyle \textstyle (1/f)'(x)=-{\frac {f'(x)}{f(x)^{2}}}\ } and ( 1 / f ) ″ ( x ) = − f ″ ( x ) f ( x ) 2 + 2 f ′ ( x ) 2 f ( x ) 3 {\displaystyle \textstyle \ (1/f)''(x)=-{\frac {f''(x)}{f(x)^{2}}}+2{\frac {f'(x)^{2}}{f(x)^{3}}}} result in x n + 1 = x n + 2 ( 1 / f ) ′ ( x n ) ( 1 / f ) ″ ( x n ) = x n + − 2 f ( x n ) f ′ ( x n ) − f ( x n ) f ″ ( x n ) + 2 f ′ ( x n ) 2 = x n − f ( x n ) f ′ ( x n ) f ′ ( x n ) 2 − 1 2 f ( x n ) f ″ ( x n ) = x n + h n 1 1 + 1 2 ( f ″ / f ′ ) ( x n ) h n . 
{\displaystyle {\begin{array}{rl}x_{n+1}=&x_{n}+2\,{\frac {\left(1/f\right)'(x_{n})}{\left(1/f\right)''(x_{n})}}\\[1em]=&x_{n}+{\frac {-2f(x_{n})\,f'(x_{n})}{-f(x_{n})f''(x_{n})+2f'(x_{n})^{2}}}\\[1em]=&x_{n}-{\frac {f(x_{n})f'(x_{n})}{f'(x_{n})^{2}-{\tfrac {1}{2}}f(x_{n})f''(x_{n})}}\\[1em]=&x_{n}+h_{n}\;{\frac {1}{1+{\frac {1}{2}}(f''/f')(x_{n})\,h_{n}}}.\end{array}}} In the last line, h n = − f ( x n ) f ′ ( x n ) {\displaystyle h_{n}=-{\tfrac {f(x_{n})}{f'(x_{n})}}} is the update of the Newton iteration at the point x n {\displaystyle x_{n}} . This line was added to demonstrate where the difference to the simple Newton's method lies. The third order method is obtained from the identity of the third order derivative of 1/f ( 1 / f ) ‴ ( x ) = − f ‴ ( x ) f ( x ) 2 + 6 f ′ ( x ) f ″ ( x ) f ( x ) 3 − 6 f ′ ( x ) 3 f ( x ) 4 {\displaystyle \textstyle (1/f)'''(x)=-{\frac {f'''(x)}{f(x)^{2}}}+6{\frac {f'(x)\,f''(x)}{f(x)^{3}}}-6{\frac {f'(x)^{3}}{f(x)^{4}}}} and has the formula x n + 1 = x n + 3 ( 1 / f ) ″ ( x n ) ( 1 / f ) ‴ ( x n ) = x n − 6 f ( x n ) f ′ ( x n ) 2 − 3 f ( x n ) 2 f ″ ( x n ) 6 f ′ ( x n ) 3 − 6 f ( x n ) f ′ ( x n ) f ″ ( x n ) + f ( x n ) 2 f ‴ ( x n ) = x n + h n 1 + 1 2 ( f ″ / f ′ ) ( x n ) h n 1 + ( f ″ / f ′ ) ( x n ) h n + 1 6 ( f ‴ / f ′ ) ( x n ) h n 2 {\displaystyle {\begin{array}{rl}x_{n+1}=&x_{n}+3\,{\frac {\left(1/f\right)''(x_{n})}{\left(1/f\right)'''(x_{n})}}\\[1em]=&x_{n}-{\frac {6f(x_{n})\,f'(x_{n})^{2}-3f(x_{n})^{2}f''(x_{n})}{6f'(x_{n})^{3}-6f(x_{n})f'(x_{n})\,f''(x_{n})+f(x_{n})^{2}\,f'''(x_{n})}}\\[1em]=&x_{n}+h_{n}{\frac {1+{\frac {1}{2}}(f''/f')(x_{n})\,h_{n}}{1+(f''/f')(x_{n})\,h_{n}+{\frac {1}{6}}(f'''/f')(x_{n})\,h_{n}^{2}}}\end{array}}} and so on. == Example == The first problem solved by Newton with the Newton-Raphson-Simpson method was the polynomial equation y 3 − 2 y − 5 = 0 {\displaystyle y^{3}-2y-5=0} . He observed that there should be a solution close to 2. 
Substituting y = x + 2 transforms the equation into 0 = f ( x ) = − 1 + 10 x + 6 x 2 + x 3 {\displaystyle 0=f(x)=-1+10x+6x^{2}+x^{3}} . The Taylor series of the reciprocal function starts with 1 / f ( x ) = − 1 − 10 x − 106 x 2 − 1121 x 3 − 11856 x 4 − 125392 x 5 − 1326177 x 6 − 14025978 x 7 − 148342234 x 8 − 1568904385 x 9 − 16593123232 x 10 + O ( x 11 ) {\displaystyle {\begin{array}{rl}1/f(x)=&-1-10\,x-106\,x^{2}-1121\,x^{3}-11856\,x^{4}-125392\,x^{5}\\&-1326177\,x^{6}-14025978\,x^{7}-148342234\,x^{8}-1568904385\,x^{9}\\&-16593123232\,x^{10}+O(x^{11})\end{array}}} The result of applying Householder's methods of various orders at x = 0 is also obtained by dividing neighboring coefficients of this power series. For the first orders one gets the following values after just one iteration step. For example, in the case of the third order, x 1 = 0.0 + 106 / 1121 = 0.09455842997324 {\displaystyle x_{1}=0.0+106/1121=0.09455842997324} . As one can see, there are slightly more than d correct decimal places for each order d. The first one hundred digits of the correct solution are 0.0945514815423265914823865405793029638573061056282391803041285290453121899834836671462672817771577578.
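These one-step values can be reproduced numerically with the explicit low-order formulas from the previous section; the following sketch (added for illustration) applies orders 1, 2 and 3 to f(x) = x³ + 6x² + 10x − 1 at x0 = 0:

```python
def f(x):  return x**3 + 6*x**2 + 10*x - 1
def f1(x): return 3*x**2 + 12*x + 10     # f'
def f2(x): return 6*x + 12               # f''
def f3(x): return 6.0                    # f'''

def householder_step(x, d):
    """One Householder step of order d = 1, 2 or 3 (Newton, Halley, third order)."""
    F, A, B, C = f(x), f1(x), f2(x), f3(x)
    if d == 1:
        return x - F/A
    if d == 2:
        return x - 2*F*A / (2*A*A - F*B)
    return x - (6*F*A*A - 3*F*F*B) / (6*A**3 - 6*F*A*B + F*F*C)

root = 0.09455148154232659148238654057930296385730610562824
for d in (1, 2, 3):
    x1 = householder_step(0.0, d)
    print(d, x1, abs(x1 - root))   # roughly d+1 correct digits after one step
```

The order-3 step reproduces x1 = 106/1121 from the text.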
Let us calculate the x 2 , x 3 , x 4 {\displaystyle x_{2},x_{3},x_{4}} values for the lowest orders, with f = − 1 + 10 x + 6 x 2 + x 3 {\displaystyle f=-1+10x+6x^{2}+x^{3}} f ′ = 10 + 12 x + 3 x 2 {\displaystyle f^{\prime }=10+12x+3x^{2}} f ′ ′ = 12 + 6 x {\displaystyle f^{\prime \prime }=12+6x} f ′ ′ ′ = 6 {\displaystyle f^{\prime \prime \prime }=6} using the following relations: 1st order; x i + 1 = x i − f ( x i ) / f ′ ( x i ) {\displaystyle x_{i+1}=x_{i}-f(x_{i})/f^{\prime }(x_{i})} 2nd order; x i + 1 = x i − 2 f f ′ / ( 2 f ′ 2 − f f ′ ′ ) {\displaystyle x_{i+1}=x_{i}-2ff^{\prime }/(2{f^{\prime }}^{2}-ff^{\prime \prime })} 3rd order; x i + 1 = x i − ( 6 f f ′ 2 − 3 f 2 f ′ ′ ) / ( 6 f ′ 3 − 6 f f ′ f ′ ′ + f 2 f ′ ′ ′ ) {\displaystyle x_{i+1}=x_{i}-(6f{f^{\prime }}^{2}-3f^{2}f^{\prime \prime })/(6{f^{\prime }}^{3}-6ff^{\prime }f^{\prime \prime }+f^{2}f^{\prime \prime \prime })} == Derivation == An exact derivation of Householder's methods starts from the Padé approximation of order d + 1 of the function, in which the approximant with linear numerator is chosen. Once this has been achieved, the update for the next approximation results from computing the unique zero of the numerator. The Padé approximation has the form f ( x + h ) = a 0 + h b 0 + b 1 h + ⋯ + b d − 1 h d − 1 + O ( h d + 1 ) . {\displaystyle f(x+h)={\frac {a_{0}+h}{b_{0}+b_{1}h+\cdots +b_{d-1}h^{d-1}}}+O(h^{d+1}).} The rational function has a zero at h = − a 0 {\displaystyle h=-a_{0}} . Just as the Taylor polynomial of degree d has d + 1 coefficients that depend on the function f, the Padé approximation also has d + 1 coefficients dependent on f and its derivatives. More precisely, in any Padé approximant, the degrees of the numerator and denominator polynomials have to add up to the order of the approximant. Therefore, b d = 0 {\displaystyle b_{d}=0} has to hold. One could determine the Padé approximant starting from the Taylor polynomial of f using Euclid's algorithm.
However, starting from the Taylor polynomial of 1/f is shorter and leads directly to the given formula. Since ( 1 / f ) ( x + h ) = ( 1 / f ) ( x ) + ( 1 / f ) ′ ( x ) h + ⋯ + ( 1 / f ) ( d − 1 ) ( x ) h d − 1 ( d − 1 ) ! + ( 1 / f ) ( d ) ( x ) h d d ! + O ( h d + 1 ) {\displaystyle (1/f)(x+h)=(1/f)(x)+(1/f)'(x)h+\cdots +(1/f)^{(d-1)}(x){\frac {h^{d-1}}{(d-1)!}}+(1/f)^{(d)}(x){\frac {h^{d}}{d!}}+O(h^{d+1})} has to equal the reciprocal of the desired rational function, multiplying by a 0 + h {\displaystyle a_{0}+h} and comparing the coefficients of h d {\displaystyle h^{d}} gives the equation 0 = b d = a 0 ( 1 / f ) ( d ) ( x ) 1 d ! + ( 1 / f ) ( d − 1 ) ( x ) 1 ( d − 1 ) ! {\displaystyle 0=b_{d}=a_{0}(1/f)^{(d)}(x){\frac {1}{d!}}+(1/f)^{(d-1)}(x){\frac {1}{(d-1)!}}} . Now, solving the last equation for the zero h = − a 0 {\displaystyle h=-a_{0}} of the numerator results in h = − a 0 = 1 ( d − 1 ) ! ( 1 / f ) ( d − 1 ) ( x ) 1 d ! ( 1 / f ) ( d ) ( x ) = d ( 1 / f ) ( d − 1 ) ( x ) ( 1 / f ) ( d ) ( x ) {\displaystyle {\begin{aligned}h&=-a_{0}={\frac {{\frac {1}{(d-1)!}}(1/f)^{(d-1)}(x)}{{\frac {1}{d!}}(1/f)^{(d)}(x)}}\\&=d\,{\frac {(1/f)^{(d-1)}(x)}{(1/f)^{(d)}(x)}}\end{aligned}}} . This implies the iteration formula x n + 1 = x n + d ( 1 / f ) ( d − 1 ) ( x n ) ( 1 / f ) ( d ) ( x n ) {\displaystyle x_{n+1}=x_{n}+d\;{\frac {\left(1/f\right)^{(d-1)}(x_{n})}{\left(1/f\right)^{(d)}(x_{n})}}} . == Relation to Newton's method == Householder's method applied to the real-valued function f(x) is the same as applying Newton's method x n + 1 = x n − g ( x n ) g ′ ( x n ) {\displaystyle x_{n+1}=x_{n}-{\frac {g(x_{n})}{g'(x_{n})}}} to find the zeros of the function: g ( x ) = | ( 1 / f ) ( d − 1 ) | − 1 / d . {\displaystyle g(x)=\left|(1/f)^{(d-1)}\right|^{-1/d}\,.} In particular, d = 1 gives Newton's method unmodified and d = 2 gives Halley's method. == References == == External links == Pascal Sebah and Xavier Gourdon (2001). "Newton's method and high order iteration".
Note: Use the PostScript version of this link; the website version is not compiled correctly.
Wikipedia/Householder's_method
The Jenkins–Traub algorithm for polynomial zeros is a fast globally convergent iterative polynomial root-finding method published in 1970 by Michael A. Jenkins and Joseph F. Traub. They gave two variants, one for general polynomials with complex coefficients, commonly known as the "CPOLY" algorithm, and a more complicated variant for the special case of polynomials with real coefficients, commonly known as the "RPOLY" algorithm. The latter is "practically a standard in black-box polynomial root-finders". This article describes the complex variant. Given a polynomial P, P ( z ) = ∑ i = 0 n a i z n − i , a 0 = 1 , a n ≠ 0 {\displaystyle P(z)=\sum _{i=0}^{n}a_{i}z^{n-i},\quad a_{0}=1,\quad a_{n}\neq 0} with complex coefficients it computes approximations to the n zeros α 1 , α 2 , … , α n {\displaystyle \alpha _{1},\alpha _{2},\dots ,\alpha _{n}} of P(z), one at a time in roughly increasing order of magnitude. After each root is computed, its linear factor is removed from the polynomial. Using this deflation guarantees that each root is computed only once and that all roots are found. The real variant follows the same pattern, but computes two roots at a time, either two real roots or a pair of conjugate complex roots. By avoiding complex arithmetic, the real variant can be faster (by a factor of 4) than the complex variant. The Jenkins–Traub algorithm has stimulated considerable research on theory and software for methods of this type. == Overview == The Jenkins–Traub algorithm calculates all of the roots of a polynomial with complex coefficients. The algorithm starts by checking the polynomial for the occurrence of very large or very small roots. If necessary, the coefficients are rescaled by a rescaling of the variable. In the algorithm, proper roots are found one by one and generally in increasing size. After each root is found, the polynomial is deflated by dividing off the corresponding linear factor. 
Indeed, the factorization of the polynomial into the linear factor and the remaining deflated polynomial is already a result of the root-finding procedure. The root-finding procedure has three stages that correspond to different variants of the inverse power iteration. See Jenkins and Traub. A description can also be found in Ralston and Rabinowitz p. 383. The algorithm is similar in spirit to the two-stage algorithm studied by Traub. === Root-finding procedure === Starting with the current polynomial P(X) of degree n, the aim is to compute the smallest root α {\displaystyle \alpha } of P(x). The polynomial can then be split into a linear factor and the remaining polynomial factor P ( X ) = ( X − α ) H ¯ ( X ) {\displaystyle P(X)=(X-\alpha ){\bar {H}}(X)} Other root-finding methods strive primarily to improve the root and thus the first factor. The main idea of the Jenkins-Traub method is to incrementally improve the second factor. To that end, a sequence of so-called H polynomials is constructed. These polynomials are all of degree n − 1 and are supposed to converge to the factor H ¯ ( X ) {\displaystyle {\bar {H}}(X)} of P(X) containing (the linear factors of) all the remaining roots. The sequence of H polynomials occurs in two variants, an unnormalized variant that allows easy theoretical insights and a normalized variant of H ¯ {\displaystyle {\bar {H}}} polynomials that keeps the coefficients in a numerically sensible range. The construction of the H polynomials ( H ( λ ) ( z ) ) λ = 0 , 1 , 2 , … {\displaystyle \left(H^{(\lambda )}(z)\right)_{\lambda =0,1,2,\dots }} is guided by a sequence of complex numbers ( s λ ) λ = 0 , 1 , 2 , … {\displaystyle (s_{\lambda })_{\lambda =0,1,2,\dots }} called shifts. These shifts themselves depend, at least in the third stage, on the previous H polynomials. 
The H polynomials are defined as the solution to the implicit recursion H ( 0 ) ( z ) = P ′ ( z ) {\displaystyle H^{(0)}(z)=P^{\prime }(z)} and ( X − s λ ) ⋅ H ( λ + 1 ) ( X ) ≡ H ( λ ) ( X ) ( mod P ( X ) ) . {\displaystyle (X-s_{\lambda })\cdot H^{(\lambda +1)}(X)\equiv H^{(\lambda )}(X){\pmod {P(X)}}\ .} A direct solution to this implicit equation is H ( λ + 1 ) ( X ) = 1 X − s λ ⋅ ( H ( λ ) ( X ) − H ( λ ) ( s λ ) P ( s λ ) P ( X ) ) , {\displaystyle H^{(\lambda +1)}(X)={\frac {1}{X-s_{\lambda }}}\cdot \left(H^{(\lambda )}(X)-{\frac {H^{(\lambda )}(s_{\lambda })}{P(s_{\lambda })}}P(X)\right)\,,} where the polynomial division is exact. Algorithmically, one would use long division by the linear factor as in the Horner scheme or Ruffini rule to evaluate the polynomials at s λ {\displaystyle s_{\lambda }} and obtain the quotients at the same time. With the resulting quotients p(X) and h(X) as intermediate results the next H polynomial is obtained as P ( X ) = p ( X ) ⋅ ( X − s λ ) + P ( s λ ) H ( λ ) ( X ) = h ( X ) ⋅ ( X − s λ ) + H ( λ ) ( s λ ) } ⟹ H ( λ + 1 ) ( z ) = h ( z ) − H ( λ ) ( s λ ) P ( s λ ) p ( z ) . {\displaystyle \left.{\begin{aligned}P(X)&=p(X)\cdot (X-s_{\lambda })+P(s_{\lambda })\\H^{(\lambda )}(X)&=h(X)\cdot (X-s_{\lambda })+H^{(\lambda )}(s_{\lambda })\\\end{aligned}}\right\}\implies H^{(\lambda +1)}(z)=h(z)-{\frac {H^{(\lambda )}(s_{\lambda })}{P(s_{\lambda })}}p(z).} Since the highest degree coefficient is obtained from P(X), the leading coefficient of H ( λ + 1 ) ( X ) {\displaystyle H^{(\lambda +1)}(X)} is − H ( λ ) ( s λ ) P ( s λ ) {\displaystyle -{\tfrac {H^{(\lambda )}(s_{\lambda })}{P(s_{\lambda })}}} . If this is divided out the normalized H polynomial is H ¯ ( λ + 1 ) ( X ) = 1 X − s λ ⋅ ( P ( X ) − P ( s λ ) H ( λ ) ( s λ ) H ( λ ) ( X ) ) = 1 X − s λ ⋅ ( P ( X ) − P ( s λ ) H ¯ ( λ ) ( s λ ) H ¯ ( λ ) ( X ) ) . 
{\displaystyle {\begin{aligned}{\bar {H}}^{(\lambda +1)}(X)&={\frac {1}{X-s_{\lambda }}}\cdot \left(P(X)-{\frac {P(s_{\lambda })}{H^{(\lambda )}(s_{\lambda })}}H^{(\lambda )}(X)\right)\\[1em]&={\frac {1}{X-s_{\lambda }}}\cdot \left(P(X)-{\frac {P(s_{\lambda })}{{\bar {H}}^{(\lambda )}(s_{\lambda })}}{\bar {H}}^{(\lambda )}(X)\right)\,.\end{aligned}}} ==== Stage one: no-shift process ==== For λ = 0 , 1 , … , M − 1 {\displaystyle \lambda =0,1,\dots ,M-1} set s λ = 0 {\displaystyle s_{\lambda }=0} . Usually M=5 is chosen for polynomials of moderate degrees up to n = 50. This stage is not necessary from theoretical considerations alone, but is useful in practice. It emphasizes in the H polynomials the cofactor(s) (of the linear factor) of the smallest root(s). ==== Stage two: fixed-shift process ==== The shift for this stage is determined as some point close to the smallest root of the polynomial. It is quasi-randomly located on the circle with the inner root radius, which in turn is estimated as the positive solution of the equation R n + | a 1 | R n − 1 + ⋯ + | a n − 1 | R = | a n | . {\displaystyle R^{n}+|a_{1}|\,R^{n-1}+\dots +|a_{n-1}|\,R=|a_{n}|\,.} (Here a n {\displaystyle a_{n}} is the constant coefficient, in keeping with the convention a 0 = 1 {\displaystyle a_{0}=1} fixed above.) Since the left side is a convex function and increases monotonically from zero to infinity, this equation is easy to solve, for instance by Newton's method. Now choose s = R ⋅ exp ⁡ ( i ϕ random ) {\displaystyle s=R\cdot \exp(i\,\phi _{\text{random}})} on the circle of this radius. The sequence of polynomials H ( λ + 1 ) ( z ) {\displaystyle H^{(\lambda +1)}(z)} , λ = M , M + 1 , … , L − 1 {\displaystyle \lambda =M,M+1,\dots ,L-1} , is generated with the fixed shift value s λ = s {\displaystyle s_{\lambda }=s} . This creates an asymmetry relative to the previous stage which increases the chance that the H polynomial moves towards the cofactor of a single root.
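Solving for the inner root radius with Newton's method might look like the following sketch. It uses its own local names: the monic polynomial is written z^n + c_1 z^(n−1) + ⋯ + c_n, and R is the positive solution of R^n + |c_1|R^(n−1) + ⋯ + |c_(n−1)|R = |c_n|, a lower bound on the modulus of every root:

```python
def inner_root_radius(coeffs, tol=1e-12):
    """Positive solution R of R^n + |c_1| R^(n-1) + ... + |c_(n-1)| R = |c_n|
    for a monic polynomial coeffs = [1, c_1, ..., c_n] (decreasing degree).
    R is a lower bound on the modulus of every root."""
    g = [abs(c) for c in coeffs]      # g(R) = R^n + sum |c_i| R^(n-i) - |c_n|
    g[-1] = -abs(coeffs[-1])
    R = abs(coeffs[-1]) ** (1.0 / (len(coeffs) - 1))   # crude positive start
    for _ in range(100):
        val = der = 0.0
        for c in g:                   # Horner for g(R) and g'(R) in one pass
            der = der * R + val
            val = val * R + c
        step = val / der
        R -= step                     # g is increasing and convex on R > 0,
        if abs(step) < tol * abs(R):  # so plain Newton converges
            break
    return R

# (z - 0.5)(z - 2) = z^2 - 2.5 z + 1:
print(inner_root_radius([1, -2.5, 1]))  # ≈ 0.3508, a lower bound on 0.5
```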
During this iteration, the current approximation for the root t λ = s − P ( s ) H ¯ ( λ ) ( s ) {\displaystyle t_{\lambda }=s-{\frac {P(s)}{{\bar {H}}^{(\lambda )}(s)}}} is traced. The second stage is terminated as successful if the conditions | t λ + 1 − t λ | < 1 2 | t λ | {\displaystyle |t_{\lambda +1}-t_{\lambda }|<{\tfrac {1}{2}}\,|t_{\lambda }|} and | t λ − t λ − 1 | < 1 2 | t λ − 1 | {\displaystyle |t_{\lambda }-t_{\lambda -1}|<{\tfrac {1}{2}}\,|t_{\lambda -1}|} are simultaneously met. This limits the relative step size of the iteration, ensuring that the approximation sequence stays in the range of the smaller roots. If there is no success after some fixed number of iterations, a different random point on the circle is tried. Typically, 9 iterations are used for polynomials of moderate degree, with a doubling strategy for the case of multiple failures. ==== Stage three: variable-shift process ==== The H ( λ + 1 ) ( X ) {\displaystyle H^{(\lambda +1)}(X)} polynomials are now generated using the variable shifts s λ , λ = L , L + 1 , … {\displaystyle s_{\lambda },\quad \lambda =L,L+1,\dots } which are generated by s L = t L = s − P ( s ) H ¯ ( L ) ( s ) {\displaystyle s_{L}=t_{L}=s-{\frac {P(s)}{{\bar {H}}^{(L)}(s)}}} being the last root estimate of the second stage and s λ + 1 = s λ − P ( s λ ) H ¯ ( λ + 1 ) ( s λ ) , λ = L , L + 1 , … , {\displaystyle s_{\lambda +1}=s_{\lambda }-{\frac {P(s_{\lambda })}{{\bar {H}}^{(\lambda +1)}(s_{\lambda })}},\quad \lambda =L,L+1,\dots ,} where H ¯ ( λ + 1 ) ( z ) {\displaystyle {\bar {H}}^{(\lambda +1)}(z)} is the normalized H polynomial, that is, H ( λ + 1 ) ( z ) {\displaystyle H^{(\lambda +1)}(z)} divided by its leading coefficient. If the step size in stage three does not fall fast enough to zero, then stage two is restarted using a different random point. If this does not succeed after a small number of restarts, the number of steps in stage two is doubled.
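The core of the variable-shift stage — one Horner pass each for P and H, the H-polynomial update H^(λ+1)(z) = h(z) − (H^(λ)(s_λ)/P(s_λ)) p(z), and the Newton-like shift step — can be sketched in a few lines. This is an illustration only: it skips stages one and two, rescaling, and all safeguards of the published algorithm, and the cubic, the seed H = P′, and the starting shift are chosen by hand:

```python
def horner(coeffs, s):
    """One Horner pass: the polynomial's value at s and the quotient of
    division by (X - s); `coeffs` in decreasing-degree order."""
    q = [coeffs[0]]
    for c in coeffs[1:]:
        q.append(c + s * q[-1])
    return q[-1], q[:-1]

def stage3(P, H, s, steps=20, tol=1e-14):
    """Variable-shift iteration: update H and the shift s together
    until s settles on a root of P (P monic, H of degree n - 1)."""
    for _ in range(steps):
        Ps, p = horner(P, s)
        if abs(Ps) < tol:                  # s is numerically a root
            break
        Hs, h = horner(H, s)
        c = Hs / Ps
        # H_next(z) = h(z) - c * p(z); h is one degree lower than p.
        H = [-c * p[0]] + [hc - c * pc for hc, pc in zip(h, p[1:])]
        Hbar_at_s, _ = horner([hc / H[0] for hc in H], s)
        s = s - Ps / Hbar_at_s             # Newton-like shift update
    return s

# Hypothetical example: P(z) = z^3 - 6z^2 + 11z - 6 with roots 1, 2, 3;
# H is seeded with P'(z) and the shift starts near the smallest root.
print(stage3([1, -6, 11, -6], [3, -12, 11], 0.4))  # converges to 1
```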
==== Convergence ==== It can be shown that, provided L is chosen sufficiently large, sλ always converges to a root of P. The algorithm converges for any distribution of roots, but may fail to find all roots of the polynomial. Furthermore, the convergence is slightly faster than the quadratic convergence of the Newton–Raphson method; however, it uses one and a half times as many operations per step: three polynomial evaluations in the third stage versus two for Newton. == What gives the algorithm its power? == Compare with the Newton–Raphson iteration z i + 1 = z i − P ( z i ) P ′ ( z i ) . {\displaystyle z_{i+1}=z_{i}-{\frac {P(z_{i})}{P^{\prime }(z_{i})}}.} The iteration uses the given P and P ′ {\displaystyle \scriptstyle P^{\prime }} . In contrast, the third stage of Jenkins–Traub s λ + 1 = s λ − P ( s λ ) H ¯ λ + 1 ( s λ ) = s λ − W λ ( s λ ) ( W λ ) ′ ( s λ ) {\displaystyle s_{\lambda +1}=s_{\lambda }-{\frac {P(s_{\lambda })}{{\bar {H}}^{\lambda +1}(s_{\lambda })}}=s_{\lambda }-{\frac {W^{\lambda }(s_{\lambda })}{(W^{\lambda })'(s_{\lambda })}}} is precisely a Newton–Raphson iteration performed on certain rational functions. More precisely, Newton–Raphson is being performed on a sequence of rational functions W λ ( z ) = P ( z ) H λ ( z ) . {\displaystyle W^{\lambda }(z)={\frac {P(z)}{H^{\lambda }(z)}}.} For λ {\displaystyle \lambda } sufficiently large, P ( z ) H ¯ λ ( z ) = W λ ( z ) L C ( H λ ) {\displaystyle {\frac {P(z)}{{\bar {H}}^{\lambda }(z)}}=W^{\lambda }(z)\,LC(H^{\lambda })} is as close as desired to a first degree polynomial z − α 1 , {\displaystyle z-\alpha _{1},\,} where α 1 {\displaystyle \alpha _{1}} is one of the zeros of P {\displaystyle P} . Even though Stage 3 is precisely a Newton–Raphson iteration, differentiation is not performed. === Analysis of the H polynomials === Let α 1 , … , α n {\displaystyle \alpha _{1},\dots ,\alpha _{n}} be the roots of P(X).
The so-called Lagrange factors of P(X) are the cofactors of these roots, P m ( X ) = P ( X ) − P ( α m ) X − α m . {\displaystyle P_{m}(X)={\frac {P(X)-P(\alpha _{m})}{X-\alpha _{m}}}.} If all roots are different, then the Lagrange factors form a basis of the space of polynomials of degree at most n − 1. By analysis of the recursion procedure one finds that the H polynomials have the coordinate representation H ( λ ) ( X ) = ∑ m = 1 n [ ∏ κ = 0 λ − 1 ( α m − s κ ) ] − 1 P m ( X ) . {\displaystyle H^{(\lambda )}(X)=\sum _{m=1}^{n}\left[\prod _{\kappa =0}^{\lambda -1}(\alpha _{m}-s_{\kappa })\right]^{-1}\,P_{m}(X)\ .} Each Lagrange factor has leading coefficient 1, so that the leading coefficient of the H polynomials is the sum of the coefficients. The normalized H polynomials are thus H ¯ ( λ ) ( X ) = ∑ m = 1 n [ ∏ κ = 0 λ − 1 ( α m − s κ ) ] − 1 P m ( X ) ∑ m = 1 n [ ∏ κ = 0 λ − 1 ( α m − s κ ) ] − 1 = P 1 ( X ) + ∑ m = 2 n [ ∏ κ = 0 λ − 1 α 1 − s κ α m − s κ ] P m ( X ) 1 + ∑ m = 2 n [ ∏ κ = 0 λ − 1 α 1 − s κ α m − s κ ] . {\displaystyle {\bar {H}}^{(\lambda )}(X)={\frac {\sum _{m=1}^{n}\left[\prod _{\kappa =0}^{\lambda -1}(\alpha _{m}-s_{\kappa })\right]^{-1}\,P_{m}(X)}{\sum _{m=1}^{n}\left[\prod _{\kappa =0}^{\lambda -1}(\alpha _{m}-s_{\kappa })\right]^{-1}}}={\frac {P_{1}(X)+\sum _{m=2}^{n}\left[\prod _{\kappa =0}^{\lambda -1}{\frac {\alpha _{1}-s_{\kappa }}{\alpha _{m}-s_{\kappa }}}\right]\,P_{m}(X)}{1+\sum _{m=2}^{n}\left[\prod _{\kappa =0}^{\lambda -1}{\frac {\alpha _{1}-s_{\kappa }}{\alpha _{m}-s_{\kappa }}}\right]}}\ .} === Convergence orders === If the condition | α 1 − s κ | < min m = 2 , 3 , … , n | α m − s κ | {\displaystyle |\alpha _{1}-s_{\kappa }|<\min {}_{m=2,3,\dots ,n}|\alpha _{m}-s_{\kappa }|} holds for almost all iterates, the normalized H polynomials will converge at least geometrically towards P 1 ( X ) {\displaystyle P_{1}(X)} .
Under the condition that | α 1 | < | α 2 | = min m = 2 , 3 , … , n | α m | {\displaystyle |\alpha _{1}|<|\alpha _{2}|=\min {}_{m=2,3,\dots ,n}|\alpha _{m}|} one gets the asymptotic estimates for stage 1: H ( λ ) ( X ) = P 1 ( X ) + O ( | α 1 α 2 | λ ) . {\displaystyle H^{(\lambda )}(X)=P_{1}(X)+O\left(\left|{\frac {\alpha _{1}}{\alpha _{2}}}\right|^{\lambda }\right).} for stage 2, if s is close enough to α 1 {\displaystyle \alpha _{1}} : H ( λ ) ( X ) = P 1 ( X ) + O ( | α 1 α 2 | M ⋅ | α 1 − s α 2 − s | λ − M ) {\displaystyle H^{(\lambda )}(X)=P_{1}(X)+O\left(\left|{\frac {\alpha _{1}}{\alpha _{2}}}\right|^{M}\cdot \left|{\frac {\alpha _{1}-s}{\alpha _{2}-s}}\right|^{\lambda -M}\right)} and s − P ( s ) H ¯ ( λ ) ( s ) = α 1 + O ( … ⋅ | α 1 − s | ) . {\displaystyle s-{\frac {P(s)}{{\bar {H}}^{(\lambda )}(s)}}=\alpha _{1}+O\left(\ldots \cdot |\alpha _{1}-s|\right).} and for stage 3: H ( λ ) ( X ) = P 1 ( X ) + O ( ∏ κ = 0 λ − 1 | α 1 − s κ α 2 − s κ | ) {\displaystyle H^{(\lambda )}(X)=P_{1}(X)+O\left(\prod _{\kappa =0}^{\lambda -1}\left|{\frac {\alpha _{1}-s_{\kappa }}{\alpha _{2}-s_{\kappa }}}\right|\right)} and s λ + 1 = s λ − P ( s ) H ¯ ( λ + 1 ) ( s λ ) = α 1 + O ( ∏ κ = 0 λ − 1 | α 1 − s κ α 2 − s κ | ⋅ | α 1 − s λ | 2 | α 2 − s λ | ) {\displaystyle s_{\lambda +1}=s_{\lambda }-{\frac {P(s)}{{\bar {H}}^{(\lambda +1)}(s_{\lambda })}}=\alpha _{1}+O\left(\prod _{\kappa =0}^{\lambda -1}\left|{\frac {\alpha _{1}-s_{\kappa }}{\alpha _{2}-s_{\kappa }}}\right|\cdot {\frac {|\alpha _{1}-s_{\lambda }|^{2}}{|\alpha _{2}-s_{\lambda }|}}\right)} giving rise to a higher than quadratic convergence order of ϕ 2 = 1 + ϕ ≈ 2.61 {\displaystyle \phi ^{2}=1+\phi \approx 2.61} , where ϕ = 1 2 ( 1 + 5 ) {\displaystyle \phi ={\tfrac {1}{2}}(1+{\sqrt {5}})} is the golden ratio. === Interpretation as inverse power iteration === All stages of the Jenkins–Traub complex algorithm may be represented as the linear algebra problem of determining the eigenvalues of a special matrix. 
This matrix is the coordinate representation of a linear map in the n-dimensional space of polynomials of degree n − 1 or less. The principal idea of this map is to interpret the factorization P ( X ) = ( X − α 1 ) ⋅ P 1 ( X ) {\displaystyle P(X)=(X-\alpha _{1})\cdot P_{1}(X)} with a root α 1 ∈ C {\displaystyle \alpha _{1}\in \mathbb {C} } and P 1 ( X ) = P ( X ) / ( X − α 1 ) {\displaystyle P_{1}(X)=P(X)/(X-\alpha _{1})} the remaining factor of degree n − 1 as the eigenvector equation for the multiplication with the variable X, followed by remainder computation with divisor P(X), M X ( H ) = ( X ⋅ H ( X ) ) mod P ( X ) . {\displaystyle M_{X}(H)=(X\cdot H(X)){\bmod {P}}(X)\,.} This maps polynomials of degree at most n − 1 to polynomials of degree at most n − 1. The eigenvalues of this map are the roots of P(X), since the eigenvector equation reads 0 = ( M X − α ⋅ i d ) ( H ) = ( ( X − α ) ⋅ H ) mod P , {\displaystyle 0=(M_{X}-\alpha \cdot id)(H)=((X-\alpha )\cdot H){\bmod {P}}\,,} which implies that ( X − α ) ⋅ H = C ⋅ P ( X ) {\displaystyle (X-\alpha )\cdot H=C\cdot P(X)} , that is, ( X − α ) {\displaystyle (X-\alpha )} is a linear factor of P(X). In the monomial basis the linear map M X {\displaystyle M_{X}} is represented by a companion matrix of the polynomial P, as M X ( H ) = ∑ m = 0 n − 1 H m X m + 1 − H n − 1 ( X n + ∑ m = 0 n − 1 a m X m ) = ∑ m = 1 n − 1 ( H m − 1 − a m H n − 1 ) X m − a 0 H n − 1 , {\displaystyle M_{X}(H)=\sum _{m=0}^{n-1}H_{m}X^{m+1}-H_{n-1}\left(X^{n}+\sum _{m=0}^{n-1}a_{m}X^{m}\right)=\sum _{m=1}^{n-1}(H_{m-1}-a_{m}H_{n-1})X^{m}-a_{0}H_{n-1}\,,} the resulting transformation matrix is A = ( 0 0 … 0 − a 0 1 0 … 0 − a 1 0 1 … 0 − a 2 ⋮ ⋮ ⋱ ⋮ ⋮ 0 0 … 1 − a n − 1 ) . 
{\displaystyle A={\begin{pmatrix}0&0&\dots &0&-a_{0}\\1&0&\dots &0&-a_{1}\\0&1&\dots &0&-a_{2}\\\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&\dots &1&-a_{n-1}\end{pmatrix}}\,.} To this matrix the inverse power iteration is applied in the three variants of no shift, constant shift and generalized Rayleigh shift in the three stages of the algorithm. It is more efficient to perform the linear algebra operations in polynomial arithmetic and not by matrix operations; however, the properties of the inverse power iteration remain the same. == Real coefficients == The Jenkins–Traub algorithm described earlier works for polynomials with complex coefficients. The same authors also created a three-stage algorithm for polynomials with real coefficients. See Jenkins and Traub A Three-Stage Algorithm for Real Polynomials Using Quadratic Iteration. The algorithm finds either a linear or quadratic factor working completely in real arithmetic. If the complex and real algorithms are applied to the same real polynomial, the real algorithm is about four times as fast. The real algorithm always converges and the rate of convergence is greater than second order. == A connection with the shifted QR algorithm == There is a surprising connection with the shifted QR algorithm for computing matrix eigenvalues. See Dekker and Traub The shifted QR algorithm for Hermitian matrices. Again the shifts may be viewed as Newton–Raphson iteration on a sequence of rational functions converging to a first degree polynomial. == Software and testing == The software for the Jenkins–Traub algorithm was published as Jenkins and Traub Algorithm 419: Zeros of a Complex Polynomial. The software for the real algorithm was published as Jenkins Algorithm 493: Zeros of a Real Polynomial. The methods have been extensively tested by many people. As predicted, they enjoy faster than quadratic convergence for all distributions of zeros.
However, there are polynomials which can cause loss of precision as illustrated by the following example. The polynomial has all its zeros lying on two half-circles of different radii. Wilkinson recommends computing the smaller zeros first for stable deflation. The second-stage shifts are chosen so that the zeros on the smaller half-circle are found first. After deflation, the polynomial with the zeros on the remaining half-circle is known to be ill-conditioned if the degree is large; see Wilkinson, p. 64. The original polynomial was of degree 60 and suffered severe deflation instability. == References == == External links == A free downloadable Windows application using the Jenkins–Traub Method for polynomials with real and complex coefficients RPoly++ An SSE-Optimized C++ implementation of the RPOLY algorithm.
Wikipedia/Jenkins–Traub_algorithm
In numerical analysis, Ridders' method is a root-finding algorithm based on the false position method and the use of an exponential function to successively approximate a root of a continuous function f ( x ) {\displaystyle f(x)} . The method is due to C. Ridders. Ridders' method is simpler than Muller's method or Brent's method but with similar performance. The formula below converges quadratically when the function is well-behaved, which implies that the number of additional significant digits found at each step approximately doubles; but the function has to be evaluated twice for each step, so the overall order of convergence of the method with respect to function evaluations rather than with respect to number of iterates is 2 {\displaystyle {\sqrt {2}}} . If the function is not well-behaved, the root remains bracketed and the length of the bracketing interval at least halves on each iteration, so convergence is guaranteed. == Method == Given two values of the independent variable, x 0 {\displaystyle x_{0}} and x 2 {\displaystyle x_{2}} , which are on two different sides of the root being sought so that f ( x 0 ) f ( x 2 ) < 0 {\displaystyle f(x_{0})f(x_{2})<0} , the method begins by evaluating the function at the midpoint x 1 = ( x 0 + x 2 ) / 2 {\displaystyle x_{1}=(x_{0}+x_{2})/2} . One then finds the unique exponential function e a x {\displaystyle e^{ax}} such that function h ( x ) = f ( x ) e a x {\displaystyle h(x)=f(x)e^{ax}} satisfies h ( x 1 ) = ( h ( x 0 ) + h ( x 2 ) ) / 2 {\displaystyle h(x_{1})=(h(x_{0})+h(x_{2}))/2} . Specifically, parameter a {\displaystyle a} is determined by e a ( x 1 − x 0 ) = f ( x 1 ) − sign ⁡ [ f ( x 0 ) ] f ( x 1 ) 2 − f ( x 0 ) f ( x 2 ) f ( x 2 ) . 
{\displaystyle e^{a(x_{1}-x_{0})}={\frac {f(x_{1})-\operatorname {sign} [f(x_{0})]{\sqrt {f(x_{1})^{2}-f(x_{0})f(x_{2})}}}{f(x_{2})}}.} The false position method is then applied to the points ( x 0 , h ( x 0 ) ) {\displaystyle (x_{0},h(x_{0}))} and ( x 2 , h ( x 2 ) ) {\displaystyle (x_{2},h(x_{2}))} , leading to a new value x 3 {\displaystyle x_{3}} between x 0 {\displaystyle x_{0}} and x 2 {\displaystyle x_{2}} , x 3 = x 1 + ( x 1 − x 0 ) sign ⁡ [ f ( x 0 ) ] f ( x 1 ) f ( x 1 ) 2 − f ( x 0 ) f ( x 2 ) , {\displaystyle x_{3}=x_{1}+(x_{1}-x_{0}){\frac {\operatorname {sign} [f(x_{0})]f(x_{1})}{\sqrt {f(x_{1})^{2}-f(x_{0})f(x_{2})}}},} which will be used as one of the two bracketing values in the next step of the iteration. The other bracketing value is taken to be x 1 {\displaystyle x_{1}} if f ( x 1 ) f ( x 3 ) < 0 {\displaystyle f(x_{1})f(x_{3})<0} (which will be true in the well-behaved case), or otherwise whichever of x 0 {\displaystyle x_{0}} and x 2 {\displaystyle x_{2}} has a function value of opposite sign to f ( x 3 ) . {\displaystyle f(x_{3}).} The iterative procedure can be terminated when a target accuracy is obtained. == References ==
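The two formulas above translate directly into a short routine. This is a sketch with a simplified stopping rule, not Ridders' published code:

```python
import math

def ridders(f, x0, x2, tol=1e-12, max_iter=60):
    """Ridders' method: f(x0) and f(x2) must bracket a root of f."""
    f0, f2 = f(x0), f(x2)
    if f0 * f2 > 0:
        raise ValueError("root must be bracketed")
    for _ in range(max_iter):
        x1 = 0.5 * (x0 + x2)                 # midpoint evaluation
        f1 = f(x1)
        d = math.sqrt(f1 * f1 - f0 * f2)     # > 0 while the root is bracketed
        if d == 0.0:
            return x1
        # False position applied to the exponentially "straightened" values:
        x3 = x1 + (x1 - x0) * math.copysign(1.0, f0) * f1 / d
        f3 = f(x3)
        if abs(x2 - x0) < tol or f3 == 0.0:
            return x3
        # Re-bracket: prefer the (x1, x3) pair, else keep an old endpoint.
        if f1 * f3 < 0:
            x0, f0, x2, f2 = x1, f1, x3, f3
        elif f0 * f3 < 0:
            x2, f2 = x3, f3
        else:
            x0, f0 = x3, f3
    return x3

# cos(x) = x has its root at the Dottie number:
print(ridders(lambda x: math.cos(x) - x, 0.0, 1.0))  # ≈ 0.7390851332
```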
Wikipedia/Ridders'_method
In mathematics, the splitting circle method is a numerical algorithm for the numerical factorization of a polynomial and, ultimately, for finding its complex roots. It was introduced by Arnold Schönhage in his 1982 paper The fundamental theorem of algebra in terms of computational complexity (Technical report, Mathematisches Institut der Universität Tübingen). A revised algorithm was presented by Victor Pan in 1998. An implementation was provided by Xavier Gourdon in 1996 for the Magma and PARI/GP computer algebra systems. == General description == The fundamental idea of the splitting circle method is to use methods of complex analysis, more precisely the residue theorem, to construct factors of polynomials. With those methods it is possible to construct a factor of a given polynomial p ( x ) = x n + p n − 1 x n − 1 + ⋯ + p 0 {\displaystyle p(x)=x^{n}+p_{n-1}x^{n-1}+\cdots +p_{0}} for any region of the complex plane with a piecewise smooth boundary. Most of those factors will be trivial, that is constant polynomials. Only regions that contain roots of p(x) result in nontrivial factors that have exactly those roots of p(x) as their own roots, preserving multiplicity. In the numerical realization of this method one uses disks D(c,r) (center c, radius r) in the complex plane as regions. The boundary circle of a disk splits the set of roots of p(x) in two parts, hence the name of the method. To a given disk one computes approximate factors following the analytical theory and refines them using Newton's method. To avoid numerical instability one has to demand that all roots are well separated from the boundary circle of the disk. So to obtain a good splitting circle it should be embedded in a root free annulus A(c,r,R) (center c, inner radius r, outer radius R) with a large relative width R/r. Repeating this process for the factors found, one finally arrives at an approximative factorization of the polynomial at a required precision. 
The factors are either linear polynomials representing well isolated zeros or higher order polynomials representing clusters of zeros. == Details of the analytical construction == Newton's identities are a bijective relation between the elementary symmetric polynomials of a tuple of complex numbers and its sums of powers. Therefore, it is possible to compute the coefficients of a polynomial p ( x ) = x n + p n − 1 x n − 1 + ⋯ + p 0 = ( x − z 1 ) ⋯ ( x − z n ) {\displaystyle p(x)=x^{n}+p_{n-1}x^{n-1}+\cdots +p_{0}=(x-z_{1})\cdots (x-z_{n})} (or of a factor of it) from the sums of powers of its zeros t m = z 1 m + ⋯ + z n m , m = 0 , 1 , … , n {\displaystyle t_{m}=z_{1}^{m}+\cdots +z_{n}^{m}\,,\quad m=0,1,\dots ,n} by solving the triangular system that is obtained by comparing the powers of u in the following identity of formal power series p n − 1 + 2 p n − 2 u + ⋯ + ( n − 1 ) p 1 u n − 2 + n p 0 u n − 1 = − ( 1 + p n − 1 u + ⋯ + p 1 u n − 1 + p 0 u n ) ⋅ ( t 1 + t 2 u + t 3 u 2 + ⋯ + t n u n − 1 + ⋯ ) . {\displaystyle {\begin{aligned}&p_{n-1}+2\,p_{n-2}\,u+\cdots +(n-1)\,p_{1}\,u^{n-2}+n\,p_{0}\,u^{n-1}\\[1ex]&\quad =-(1+p_{n-1}\,u+\cdots +p_{1}\,u^{n-1}+p_{0}\,u^{n})\cdot (t_{1}+t_{2}\,u+t_{3}\,u^{2}+\dots +t_{n}\,u^{n-1}+\cdots ).\end{aligned}}} If G ⊂ C {\displaystyle G\subset \mathbb {C} } is a domain with piecewise smooth boundary C and if the zeros of p(x) are pairwise distinct and not on the boundary C, then from the residue theorem one gets 1 2 π i ∮ C p ′ ( z ) p ( z ) z m d z = ∑ z ∈ G : p ( z ) = 0 p ′ ( z ) z m p ′ ( z ) = ∑ z ∈ G : p ( z ) = 0 z m . {\displaystyle {\frac {1}{2\pi \,i}}\oint _{C}{\frac {p'(z)}{p(z)}}z^{m}\,dz=\sum _{z\in G:\,p(z)=0}{\frac {p'(z)z^{m}}{p'(z)}}=\sum _{z\in G:\,p(z)=0}z^{m}.} The identity between the leftmost and rightmost sides of this equation also holds for zeros with multiplicities.
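The contour integral above admits a simple numerical sketch: on the unit circle, the trapezoidal rule with N nodes z_k = e^(2πik/N) approximates (1/2πi)∮ p'(z)/p(z) z^m dz by (1/N) Σ_k p'(z_k)/p(z_k) z_k^(m+1). The function names and the example polynomial below are this sketch's own; the splitting circle is assumed to be the unit circle, with all roots well separated from it:

```python
import cmath

def power_sums_inside_unit_circle(p, dp, m_max, N=256):
    """Approximate t_m = sum of z**m over the roots of p inside |z| = 1,
    for m = 0..m_max, by an N-point trapezoidal rule for the contour
    integral of p'(z)/p(z) * z**m on the unit circle (p, dp callables)."""
    nodes = [cmath.exp(2j * cmath.pi * k / N) for k in range(N)]
    logder = [dp(z) / p(z) for z in nodes]
    # dz = i z dtheta turns z**m into z**(m+1) under the parametrization.
    return [sum(l * z ** (m + 1) for l, z in zip(logder, nodes)) / N
            for m in range(m_max + 1)]

# p(z) = (z - 0.3)(z - 0.4)(z - 5): two roots inside the unit circle.
p  = lambda z: (z - 0.3) * (z - 0.4) * (z - 5)
dp = lambda z: (z - 0.4) * (z - 5) + (z - 0.3) * (z - 5) + (z - 0.3) * (z - 0.4)
t = power_sums_inside_unit_circle(p, dp, 2)
print([round(x.real, 6) for x in t])  # [2.0, 0.7, 0.25]
```

Note that t_0 recovers the number of roots inside the circle (here 2); feeding t_1, t_2 into Newton's identities yields the factor (x − 0.3)(x − 0.4).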
By using the Newton identities one is able to compute from those sums of powers the factor f ( x ) := ∏ z ∈ G : p ( z ) = 0 ( x − z ) {\displaystyle f(x):=\prod _{z\in G:\,p(z)=0}(x-z)} of p(x) corresponding to the zeros of p(x) inside G. By polynomial division one also obtains the second factor g(x) in p(x) = f(x)g(x). The commonly used regions are circles in the complex plane. Each circle gives rise to a split of the polynomial p(x) into factors f(x) and g(x). Repeating this procedure on the factors using different circles yields finer and finer factorizations. This recursion stops after a finite number of proper splits with all factors being nontrivial powers of linear polynomials. The challenge now consists in the conversion of this analytical procedure into a numerical algorithm with good running time. The integration is approximated by a finite sum of a numerical integration method, making use of the fast Fourier transform for the evaluation of the polynomials p(x) and p'(x). The polynomial f(x) that results will only be an approximate factor. To ensure that its zeros are close to the zeros of p inside G and only to those, one must demand that all zeros of p are far away from the boundary C of the region G. == Basic numerical observation == (Schönhage 1982) Let p ∈ C [ X ] {\displaystyle p\in \mathbb {C} [X]} be a polynomial of degree n which has k zeros inside the circle of radius 1/2 and the remaining n-k zeros outside the circle of radius 2. With N=O(k) large enough, the approximation of the contour integrals using N points results in an approximation f 0 {\displaystyle f_{0}} of the factor f with error ‖ f − f 0 ‖ ≤ 2 2 k − N n k 100 / 98 , {\displaystyle \|f-f_{0}\|\leq 2^{2k-N}\,nk\,100/98,} where the norm of a polynomial is the sum of the moduli of its coefficients. Since the zeros of a polynomial are continuous in its coefficients, one can make the zeros of f 0 {\displaystyle f_{0}} as close as wanted to the zeros of f by choosing N large enough.
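The triangular solve that converts power sums into coefficients follows directly from Newton's identities. A minimal sketch (the function name is this sketch's own; the result is monic, in decreasing-degree order):

```python
def coeffs_from_power_sums(t):
    """Newton's identities: given power sums t[m-1] = sum(z_i**m) for
    m = 1..n, return the monic polynomial with roots z_i as coefficients
    in decreasing-degree order."""
    n = len(t)
    e = [1.0]                      # elementary symmetric polynomials e_0..e_n
    for m in range(1, n + 1):
        # m * e_m = sum_{i=1..m} (-1)^(i-1) * e_{m-i} * t_i
        s = sum((-1) ** (i - 1) * e[m - i] * t[i - 1] for i in range(1, m + 1))
        e.append(s / m)
    # prod (x - z_i) = x^n - e_1 x^(n-1) + e_2 x^(n-2) - ...
    return [(-1) ** k * e[k] for k in range(n + 1)]

# Roots 1, 2, 3 have power sums t_1 = 6, t_2 = 14, t_3 = 36:
print(coeffs_from_power_sums([6, 14, 36]))  # [1.0, -6.0, 11.0, -6.0]
```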
However, one can improve this approximation faster using a Newton method. Division of p with remainder yields an approximation g 0 {\displaystyle g_{0}} of the remaining factor g. Now p − f 0 g 0 = ( f − f 0 ) g 0 + ( g − g 0 ) f 0 + ( f − f 0 ) ( g − g 0 ) , {\displaystyle p-f_{0}g_{0}=(f-f_{0})g_{0}+(g-g_{0})f_{0}+(f-f_{0})(g-g_{0}),} so discarding the last second order term one has to solve p − f 0 g 0 = f 0 Δ g + g 0 Δ f {\displaystyle p-f_{0}g_{0}=f_{0}\Delta g+g_{0}\Delta f} using any variant of the extended Euclidean algorithm to obtain the incremented approximations f 1 = f 0 + Δ f {\displaystyle f_{1}=f_{0}+\Delta f} and g 1 = g 0 + Δ g {\displaystyle g_{1}=g_{0}+\Delta g} . This is repeated until the increments are zero relative to the chosen precision. == Graeffe iteration == The crucial step in this method is to find an annulus of relative width 4 in the complex plane that contains no zeros of p and contains approximately as many zeros of p inside as outside of it. Any annulus of this characteristic can be transformed, by translation and scaling of the polynomial, into the annulus between the radii 1/2 and 2 around the origin. But, not every polynomial admits such a splitting annulus. To remedy this situation, the Graeffe iteration is applied. It computes a sequence of polynomials p 0 = p , p j + 1 ( x ) = ( − 1 ) deg ⁡ p p j ( x ) p j ( − x ) , {\displaystyle p_{0}=p,\qquad p_{j+1}(x)=(-1)^{\deg p}p_{j}({\sqrt {x}})\,p_{j}(-{\sqrt {x}}),} where the roots of p j ( x ) {\displaystyle p_{j}(x)} are the 2 j {\displaystyle 2^{j}} -th dyadic powers of the roots of the initial polynomial p. By splitting p j ( x ) = e ( x 2 ) + x o ( x 2 ) {\displaystyle p_{j}(x)=e(x^{2})+x\,o(x^{2})} into even and odd parts, the succeeding polynomial is obtained by purely arithmetic operations as p j + 1 ( x ) = ( − 1 ) deg ⁡ p ( e ( x ) 2 − x o ( x ) 2 ) {\displaystyle p_{j+1}(x)=(-1)^{\deg p}(e(x)^{2}-x\,o(x)^{2})} . 
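One Graeffe step can be sketched as follows. Coefficients here are stored in increasing-degree order so that the even/odd split p(x) = e(x²) + x·o(x²) is a simple slice; the function names are this sketch's own:

```python
def graeffe_step(coeffs):
    """One Graeffe (root-squaring) step.  `coeffs` are in increasing-degree
    order: p(x) = c[0] + c[1] x + ...; the result has the squared roots."""
    n = len(coeffs) - 1                      # degree of p
    e = coeffs[0::2]                         # even part:  p(x) = e(x^2) + x o(x^2)
    o = coeffs[1::2]                         # odd part

    def mul(a, b):                           # naive polynomial product
        out = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                out[i + j] += ai * bj
        return out

    e2, o2 = mul(e, e), mul(o, o)
    res = [0] * (n + 1)
    for i, c in enumerate(e2):               # e(x)^2
        res[i] += c
    for i, c in enumerate(o2):               # minus x * o(x)^2
        res[i + 1] -= c
    sign = (-1) ** n                         # the (-1)^deg p factor
    return [sign * c for c in res]

# p(x) = (x - 1)(x - 2) = 2 - 3x + x^2; the roots square to 1 and 4:
print(graeffe_step([2, -3, 1]))  # [4, -5, 1], i.e. (x - 1)(x - 4)
```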
The ratios of the absolute moduli of the roots increase by the same power 2 j {\displaystyle 2^{j}} and thus tend to infinity. Choosing j large enough one finally finds a splitting annulus of relative width 4 around the origin. The approximate factorization of p j ( x ) ≈ f j ( x ) g j ( x ) {\displaystyle p_{j}(x)\approx f_{j}(x)\,g_{j}(x)} is now to be lifted back to the original polynomial. To this end an alternation of Newton steps and Padé approximations is used. It is easy to check that p j − 1 ( x ) g j ( x 2 ) ≈ f j − 1 ( x ) g j − 1 ( − x ) {\displaystyle {\frac {p_{j-1}(x)}{g_{j}(x^{2})}}\approx {\frac {f_{j-1}(x)}{g_{j-1}(-x)}}} holds. The polynomials on the left side are known in step j, the polynomials on the right side can be obtained as Padé approximants of the corresponding degrees for the power series expansion of the fraction on the left side. == Finding a good circle == Making use of the Graeffe iteration and any known estimate for the absolute value of the largest root one can find estimates R of this absolute value of any precision. Now one computes estimates for the largest and smallest distances R j > r j > 0 {\displaystyle R_{j}>r_{j}>0} of any root of p(x) to any of the five center points 0, 2R, −2R, 2Ri, −2Ri and selects the one with the largest ratio R j / r j {\displaystyle R_{j}/r_{j}} between the two. By this construction it can be guaranteed that R j / r j > e 0 . 3 ≈ 1.35 {\displaystyle R_{j}/r_{j}>e^{0{.}3}\approx 1.35} for at least one center. For such a center there has to be a root-free annulus of relative width e 0 . 3 / n ≈ 1 + 0 . 3 n {\textstyle e^{0{.}3/n}\approx 1+{\frac {0{.}3}{n}}} . After 3 + log 2 ⁡ ( n ) {\textstyle 3+\log _{2}(n)} Graeffe iterations, the corresponding annulus of the iterated polynomial has a relative width greater than 11 > 4, as required for the initial splitting described above (Schönhage 1982). 
After 4 + log 2 ⁡ ( n ) + log 2 ⁡ ( 2 + log 2 ⁡ ( n ) ) {\textstyle 4+\log _{2}(n)+\log _{2}(2+\log _{2}(n))} Graeffe iterations, the corresponding annulus has a relative width greater than 2 13 . 8 ⋅ n 6 . 9 > ( 64 ⋅ n 3 ) 2 {\textstyle 2^{13{.}8}\cdot n^{6{.}9}>(64\cdot n^{3})^{2}} , allowing a much simplified initial splitting (Malajovich & Zubelli 1997). To locate the best root-free annulus one uses a consequence of the Rouché theorem: For k = 1, ..., n − 1 the polynomial equation 0 = ∑ j ≠ k | p j | u j − | p k | u k , {\displaystyle \,0=\sum _{j\neq k}|p_{j}|u^{j}-|p_{k}|u^{k},} u > 0, has, by Descartes' rule of signs, zero or two positive roots u k < v k {\displaystyle u_{k}<v_{k}} . In the latter case, there are exactly k roots inside the (closed) disk D ( 0 , u k ) {\displaystyle D(0,u_{k})} and A ( 0 , u k , v k ) {\displaystyle A(0,u_{k},v_{k})} is a root-free (open) annulus. == References ==
Wikipedia/Splitting_circle_method
Scoring algorithm, also known as Fisher's scoring, is a form of Newton's method used in statistics to solve maximum likelihood equations numerically, named after Ronald Fisher. == Sketch of derivation == Let Y 1 , … , Y n {\displaystyle Y_{1},\ldots ,Y_{n}} be random variables, independent and identically distributed with twice differentiable p.d.f. f ( y ; θ ) {\displaystyle f(y;\theta )} , and we wish to calculate the maximum likelihood estimator (M.L.E.) θ ∗ {\displaystyle \theta ^{*}} of θ {\displaystyle \theta } . First, suppose we have a starting point for our algorithm θ 0 {\displaystyle \theta _{0}} , and consider a Taylor expansion of the score function, V ( θ ) {\displaystyle V(\theta )} , about θ 0 {\displaystyle \theta _{0}} : V ( θ ) ≈ V ( θ 0 ) − J ( θ 0 ) ( θ − θ 0 ) , {\displaystyle V(\theta )\approx V(\theta _{0})-{\mathcal {J}}(\theta _{0})(\theta -\theta _{0}),\,} where J ( θ 0 ) = − ∑ i = 1 n ∇ ∇ ⊤ | θ = θ 0 log ⁡ f ( Y i ; θ ) {\displaystyle {\mathcal {J}}(\theta _{0})=-\sum _{i=1}^{n}\left.\nabla \nabla ^{\top }\right|_{\theta =\theta _{0}}\log f(Y_{i};\theta )} is the observed information matrix at θ 0 {\displaystyle \theta _{0}} . Now, setting θ = θ ∗ {\displaystyle \theta =\theta ^{*}} , using that V ( θ ∗ ) = 0 {\displaystyle V(\theta ^{*})=0} and rearranging gives us: θ ∗ ≈ θ 0 + J − 1 ( θ 0 ) V ( θ 0 ) . {\displaystyle \theta ^{*}\approx \theta _{0}+{\mathcal {J}}^{-1}(\theta _{0})V(\theta _{0}).\,} We therefore use the algorithm θ m + 1 = θ m + J − 1 ( θ m ) V ( θ m ) , {\displaystyle \theta _{m+1}=\theta _{m}+{\mathcal {J}}^{-1}(\theta _{m})V(\theta _{m}),\,} and under certain regularity conditions, it can be shown that θ m → θ ∗ {\displaystyle \theta _{m}\rightarrow \theta ^{*}} . 
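As a toy illustration of the update θ_{m+1} = θ_m + J⁻¹(θ_m)V(θ_m), consider estimating the rate of an exponential sample, where the score and information have closed forms. The data and function name below are hypothetical; for this model the observed information happens not to depend on the data, so it equals the Fisher information and the same loop is also a Fisher scoring iteration:

```python
def exponential_rate_mle(data, theta0=1.0, steps=25):
    """Scoring iteration for the rate t of i.i.d. Exponential(t) data:
      score        V(t) = n/t - sum(data)
      information  J(t) = n/t**2   (observed = expected here)
    Update: t <- t + V(t)/J(t).  Converges to the MLE n/sum(data)."""
    n, s = len(data), sum(data)
    t = theta0
    for _ in range(steps):
        t = t + (n / t - s) * t * t / n
    return t

data = [0.3, 1.1, 0.2, 0.9, 0.5]     # hypothetical sample, sum = 3.0
print(exponential_rate_mle(data))    # n / sum(data) = 5/3 ≈ 1.6667
```

Algebraically the update simplifies to t ← t(2 − t·s/n), Newton's classical iteration for the reciprocal of s/n, so it converges quadratically from any start in (0, 2n/s).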
== Fisher scoring == In practice, J ( θ ) {\displaystyle {\mathcal {J}}(\theta )} is usually replaced by I ( θ ) = E [ J ( θ ) ] {\displaystyle {\mathcal {I}}(\theta )=\mathrm {E} [{\mathcal {J}}(\theta )]} , the Fisher information, thus giving us the Fisher Scoring Algorithm: θ m + 1 = θ m + I − 1 ( θ m ) V ( θ m ) {\displaystyle \theta _{m+1}=\theta _{m}+{\mathcal {I}}^{-1}(\theta _{m})V(\theta _{m})} . Under some regularity conditions, if θ m {\displaystyle \theta _{m}} is a consistent estimator, then θ m + 1 {\displaystyle \theta _{m+1}} (the correction after a single step) is 'optimal' in the sense that its error distribution is asymptotically identical to that of the true maximum-likelihood estimate. == See also == Score (statistics) Score test Fisher information == References == == Further reading == Jennrich, R. I. & Sampson, P. F. (1976). "Newton-Raphson and Related Algorithms for Maximum Likelihood Variance Component Estimation". Technometrics. 18 (1): 11–17. doi:10.1080/00401706.1976.10489395. JSTOR 1267911.
Wikipedia/Scoring_algorithm
In quantum chemistry and molecular physics, the Born–Oppenheimer (BO) approximation is the assumption that the wave functions of atomic nuclei and electrons in a molecule can be treated separately, based on the fact that the nuclei are much heavier than the electrons. Due to the larger relative mass of a nucleus compared to an electron, the coordinates of the nuclei in a system are approximated as fixed, while the coordinates of the electrons are dynamic. The approach is named after Max Born and his 23-year-old graduate student J. Robert Oppenheimer, the latter of whom proposed it in 1927 during a period of intense ferment in the development of quantum mechanics. The approximation is widely used in quantum chemistry to speed up the computation of molecular wavefunctions and other properties for large molecules. There are cases where the assumption of separable motion no longer holds, which make the approximation lose validity (it is said to "break down"), but even then the approximation is usually used as a starting point for more refined methods. In molecular spectroscopy, using the BO approximation means considering molecular energy as a sum of independent terms, e.g.: E total = E electronic + E vibrational + E rotational + E nuclear spin . {\displaystyle E_{\text{total}}=E_{\text{electronic}}+E_{\text{vibrational}}+E_{\text{rotational}}+E_{\text{nuclear spin}}.} These terms are of different orders of magnitude and the nuclear spin energy is so small that it is often omitted. The electronic energies E electronic {\displaystyle E_{\text{electronic}}} consist of kinetic energies, interelectronic repulsions, internuclear repulsions, and electron–nuclear attractions, which are the terms typically included when computing the electronic structure of molecules. == Example == The benzene molecule consists of 12 nuclei and 42 electrons. 
The Schrödinger equation, which must be solved to obtain the energy levels and wavefunction of this molecule, is a partial differential eigenvalue equation in the three-dimensional coordinates of the nuclei and electrons, giving 3 × 12 = 36 nuclear plus 3 × 42 = 126 electronic, totalling 162 variables for the wave function. The computational complexity, i.e., the computational power required to solve an eigenvalue equation, increases faster than the square of the number of coordinates. When applying the BO approximation, two smaller, consecutive steps can be used: For a given position of the nuclei, the electronic Schrödinger equation is solved, while treating the nuclei as stationary (not "coupled" with the dynamics of the electrons). This corresponding eigenvalue problem then consists only of the 126 electronic coordinates. This electronic computation is then repeated for other possible positions of the nuclei, i.e. deformations of the molecule. For benzene, this could be done using a grid of 36 possible nuclear position coordinates. The electronic energies on this grid are then connected to give a potential energy surface for the nuclei. This potential is then used for a second Schrödinger equation containing only the 36 coordinates of the nuclei. So, taking the most optimistic estimate for the complexity, instead of a large equation requiring at least 162 2 = 26 244 {\displaystyle 162^{2}=26\,244} hypothetical calculation steps, a series of smaller calculations requiring 126 2 N = 15 876 N {\displaystyle 126^{2}N=15\,876\,N} (with N being the number of grid points for the potential) and a very small calculation requiring 36 2 = 1296 {\displaystyle 36^{2}=1296} steps can be performed. In practice, the scaling of the problem is larger than n 2 {\displaystyle n^{2}} , and more approximations are applied in computational chemistry to further reduce the number of variables and dimensions. 
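The operation counts quoted above are plain arithmetic and can be reproduced directly (a sketch mirroring the numbers in the text):

```python
# Optimistic cost estimate for benzene, assuming cost ~ (number of coordinates)^2.
n_nuclear = 3 * 12                   # 36 nuclear coordinates
n_electronic = 3 * 42                # 126 electronic coordinates
n_total = n_nuclear + n_electronic   # 162 in the full, coupled problem

full_problem = n_total ** 2          # 26 244 steps for the coupled equation
electronic_step = n_electronic ** 2  # 15 876 steps per nuclear grid point
nuclear_step = n_nuclear ** 2        # 1 296 steps for the nuclear equation
```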
The slope of the potential energy surface can be used to simulate molecular dynamics, using it to express the mean force on the nuclei caused by the electrons and thereby skipping the calculation of the nuclear Schrödinger equation. == Detailed description == The BO approximation recognizes the large difference between the electron mass and the masses of atomic nuclei, and correspondingly the time scales of their motion. Given the same amount of momentum, the nuclei move much more slowly than the electrons. In mathematical terms, the BO approximation consists of expressing the wavefunction ( Ψ t o t a l {\displaystyle \Psi _{\mathrm {total} }} ) of a molecule as the product of an electronic wavefunction and a nuclear (vibrational, rotational) wavefunction. Ψ t o t a l = ψ e l e c t r o n i c ψ n u c l e a r {\displaystyle \Psi _{\mathrm {total} }=\psi _{\mathrm {electronic} }\psi _{\mathrm {nuclear} }} . This enables a separation of the Hamiltonian operator into electronic and nuclear terms, where cross-terms between electrons and nuclei are neglected, so that the two smaller and decoupled systems can be solved more efficiently. In the first step, the nuclear kinetic energy is neglected, that is, the corresponding operator Tn is subtracted from the total molecular Hamiltonian. In the remaining electronic Hamiltonian He the nuclear positions are no longer variable, but are constant parameters (they enter the equation "parametrically"). The electron–nucleus interactions are not removed, i.e., the electrons still "feel" the Coulomb potential of the nuclei clamped at certain positions in space. (This first step of the BO approximation is therefore often referred to as the clamped-nuclei approximation.) 
The electronic Schrödinger equation H e ( r , R ) χ ( r , R ) = E e χ ( r , R ) {\displaystyle H_{\text{e}}(\mathbf {r} ,\mathbf {R} )\chi (\mathbf {r} ,\mathbf {R} )=E_{\text{e}}\chi (\mathbf {r} ,\mathbf {R} )} where χ ( r , R ) {\displaystyle \chi (\mathbf {r} ,\mathbf {R} )} , the electronic wavefunction for given positions of nuclei (fixed R), is solved approximately. The quantity r stands for all electronic coordinates and R for all nuclear coordinates. The electronic energy eigenvalue Ee depends on the chosen positions R of the nuclei. Varying these positions R in small steps and repeatedly solving the electronic Schrödinger equation, one obtains Ee as a function of R. This is the potential energy surface (PES): E e ( R ) {\displaystyle E_{e}(\mathbf {R} )} . Because this procedure of recomputing the electronic wave functions as a function of an infinitesimally changing nuclear geometry is reminiscent of the conditions for the adiabatic theorem, this manner of obtaining a PES is often referred to as the adiabatic approximation and the PES itself is called an adiabatic surface. In the second step of the BO approximation, the nuclear kinetic energy Tn (containing partial derivatives with respect to the components of R) is reintroduced, and the Schrödinger equation for the nuclear motion [ T n + E e ( R ) ] ϕ ( R ) = E ϕ ( R ) {\displaystyle [T_{\text{n}}+E_{\text{e}}(\mathbf {R} )]\phi (\mathbf {R} )=E\phi (\mathbf {R} )} is solved. This second step of the BO approximation involves separation of vibrational, translational, and rotational motions. This can be achieved by application of the Eckart conditions. The eigenvalue E is the total energy of the molecule, including contributions from electrons, nuclear vibrations, and overall rotation and translation of the molecule. 
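The second step can be illustrated numerically. The sketch below is a toy model, not a real molecular calculation: the reduced mass, force constant, and grid are assumed for illustration (with ħ = 1), and the harmonic curve stands in for a precomputed surface E_e(R). It diagonalizes a finite-difference discretization of T_n + E_e(R):

```python
import numpy as np

# Model parameters (assumed): reduced nuclear mass M and harmonic PES E_e(R).
M = 100.0
k = 1.0
R = np.linspace(-2.0, 2.0, 1000)
dR = R[1] - R[0]
E_e = 0.5 * k * R**2          # stand-in for the adiabatic surface from step one

# T_n = -(1/2M) d^2/dR^2 via the three-point finite-difference stencil,
# so H = T_n + E_e(R) is a symmetric tridiagonal matrix.
diag = 1.0 / (M * dR**2) + E_e
off = np.full(R.size - 1, -0.5 / (M * dR**2))
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E_levels = np.linalg.eigvalsh(H)
omega = np.sqrt(k / M)
# The lowest vibrational eigenvalues approach the harmonic-oscillator
# ladder (n + 1/2)*omega, confirming the discretization.
```

Recomputing E_e(R) for other nuclear geometries and re-diagonalizing is the one-dimensional analogue of solving the nuclear equation on the full potential energy surface.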
In accord with the Hellmann–Feynman theorem, the nuclear potential is taken to be an average over electron configurations of the sum of the electron–nuclear and internuclear electric potentials. == Derivation == It will be discussed how the BO approximation may be derived and under which conditions it is applicable. At the same time we will show how the BO approximation may be improved by including vibronic coupling. To that end the second step of the BO approximation is generalized to a set of coupled eigenvalue equations depending on nuclear coordinates only. Off-diagonal elements in these equations are shown to be nuclear kinetic energy terms. It will be shown that the BO approximation can be trusted whenever the PESs, obtained from the solution of the electronic Schrödinger equation, are well separated: E 0 ( R ) ≪ E 1 ( R ) ≪ E 2 ( R ) ≪ ⋯ for all R {\displaystyle E_{0}(\mathbf {R} )\ll E_{1}(\mathbf {R} )\ll E_{2}(\mathbf {R} )\ll \cdots {\text{ for all }}\mathbf {R} } . We start from the exact non-relativistic, time-independent molecular Hamiltonian: H = H e + T n {\displaystyle H=H_{\text{e}}+T_{\text{n}}} with H e = − ∑ i 1 2 ∇ i 2 − ∑ i , A Z A r i A + ∑ i > j 1 r i j + ∑ B > A Z A Z B R A B and T n = − ∑ A 1 2 M A ∇ A 2 . {\displaystyle H_{\text{e}}=-\sum _{i}{{\frac {1}{2}}\nabla _{i}^{2}}-\sum _{i,A}{\frac {Z_{A}}{r_{iA}}}+\sum _{i>j}{\frac {1}{r_{ij}}}+\sum _{B>A}{\frac {Z_{A}Z_{B}}{R_{AB}}}\quad {\text{and}}\quad T_{\text{n}}=-\sum _{A}{{\frac {1}{2M_{A}}}\nabla _{A}^{2}}.} The position vectors r ≡ { r i } {\displaystyle \mathbf {r} \equiv \{\mathbf {r} _{i}\}} of the electrons and the position vectors R ≡ { R A = ( R A x , R A y , R A z ) } {\displaystyle \mathbf {R} \equiv \{\mathbf {R} _{A}=(R_{Ax},R_{Ay},R_{Az})\}} of the nuclei are with respect to a Cartesian inertial frame. 
Distances between particles are written as r i A ≡ | r i − R A | {\displaystyle r_{iA}\equiv |\mathbf {r} _{i}-\mathbf {R} _{A}|} (distance between electron i and nucleus A) and similar definitions hold for r i j {\displaystyle r_{ij}} and R A B {\displaystyle R_{AB}} . We assume that the molecule is in a homogeneous (no external force) and isotropic (no external torque) space. The only interactions are the two-body Coulomb interactions among the electrons and nuclei. The Hamiltonian is expressed in atomic units, so that we do not see the Planck constant, the dielectric constant of the vacuum, electronic charge, or electronic mass in this formula. The only constants explicitly entering the formula are ZA and MA – the atomic number and mass of nucleus A. It is useful to introduce the total nuclear momentum and to rewrite the nuclear kinetic energy operator as follows: T n = ∑ A ∑ α = x , y , z P A α P A α 2 M A with P A α = − i ∂ ∂ R A α . {\displaystyle T_{\text{n}}=\sum _{A}\sum _{\alpha =x,y,z}{\frac {P_{A\alpha }P_{A\alpha }}{2M_{A}}}\quad {\text{with}}\quad P_{A\alpha }=-i{\frac {\partial }{\partial R_{A\alpha }}}.} Suppose we have K electronic eigenfunctions χ k ( r ; R ) {\displaystyle \chi _{k}(\mathbf {r} ;\mathbf {R} )} of H e {\displaystyle H_{\text{e}}} ; that is, we have solved H e χ k ( r ; R ) = E k ( R ) χ k ( r ; R ) for k = 1 , … , K . {\displaystyle H_{\text{e}}\chi _{k}(\mathbf {r} ;\mathbf {R} )=E_{k}(\mathbf {R} )\chi _{k}(\mathbf {r} ;\mathbf {R} )\quad {\text{for}}\quad k=1,\ldots ,K.} The electronic wave functions χ k {\displaystyle \chi _{k}} will be taken to be real, which is possible when there are no magnetic or spin interactions. The parametric dependence of the functions χ k {\displaystyle \chi _{k}} on the nuclear coordinates is indicated by the symbol after the semicolon. 
This indicates that, although χ k {\displaystyle \chi _{k}} is a real-valued function of r {\displaystyle \mathbf {r} } , its functional form depends on R {\displaystyle \mathbf {R} } . For example, in the molecular-orbital-linear-combination-of-atomic-orbitals (LCAO-MO) approximation, χ k {\displaystyle \chi _{k}} is a molecular orbital (MO) given as a linear expansion of atomic orbitals (AOs). An AO depends visibly on the coordinates of an electron, but the nuclear coordinates are not explicit in the MO. However, upon change of geometry, i.e., change of R {\displaystyle \mathbf {R} } , the LCAO coefficients obtain different values and we see corresponding changes in the functional form of the MO χ k {\displaystyle \chi _{k}} . We will assume that the parametric dependence is continuous and differentiable, so that it is meaningful to consider P A α χ k ( r ; R ) = − i ∂ χ k ( r ; R ) ∂ R A α for α = x , y , z , {\displaystyle P_{A\alpha }\chi _{k}(\mathbf {r} ;\mathbf {R} )=-i{\frac {\partial \chi _{k}(\mathbf {r} ;\mathbf {R} )}{\partial R_{A\alpha }}}\quad {\text{for}}\quad \alpha =x,y,z,} which in general will not be zero. The total wave function Ψ ( R , r ) {\displaystyle \Psi (\mathbf {R} ,\mathbf {r} )} is expanded in terms of χ k ( r ; R ) {\displaystyle \chi _{k}(\mathbf {r} ;\mathbf {R} )} : Ψ ( R , r ) = ∑ k = 1 K χ k ( r ; R ) ϕ k ( R ) , {\displaystyle \Psi (\mathbf {R} ,\mathbf {r} )=\sum _{k=1}^{K}\chi _{k}(\mathbf {r} ;\mathbf {R} )\phi _{k}(\mathbf {R} ),} with ⟨ χ k ′ ( r ; R ) | χ k ( r ; R ) ⟩ ( r ) = δ k ′ k , {\displaystyle \langle \chi _{k'}(\mathbf {r} ;\mathbf {R} )|\chi _{k}(\mathbf {r} ;\mathbf {R} )\rangle _{(\mathbf {r} )}=\delta _{k'k},} and where the subscript ( r ) {\displaystyle (\mathbf {r} )} indicates that the integration, implied by the bra–ket notation, is over electronic coordinates only. 
By definition, the matrix with general element ( H e ( R ) ) k ′ k ≡ ⟨ χ k ′ ( r ; R ) | H e | χ k ( r ; R ) ⟩ ( r ) = δ k ′ k E k ( R ) {\displaystyle {\big (}\mathbb {H} _{\text{e}}(\mathbf {R} ){\big )}_{k'k}\equiv \langle \chi _{k'}(\mathbf {r} ;\mathbf {R} )|H_{\text{e}}|\chi _{k}(\mathbf {r} ;\mathbf {R} )\rangle _{(\mathbf {r} )}=\delta _{k'k}E_{k}(\mathbf {R} )} is diagonal. After multiplication by the real function χ k ′ ( r ; R ) {\displaystyle \chi _{k'}(\mathbf {r} ;\mathbf {R} )} from the left and integration over the electronic coordinates r {\displaystyle \mathbf {r} } the total Schrödinger equation H Ψ ( R , r ) = E Ψ ( R , r ) {\displaystyle H\Psi (\mathbf {R} ,\mathbf {r} )=E\Psi (\mathbf {R} ,\mathbf {r} )} is turned into a set of K coupled eigenvalue equations depending on nuclear coordinates only [ H n ( R ) + H e ( R ) ] ϕ ( R ) = E ϕ ( R ) . {\displaystyle [\mathbb {H} _{\text{n}}(\mathbf {R} )+\mathbb {H} _{\text{e}}(\mathbf {R} )]{\boldsymbol {\phi }}(\mathbf {R} )=E{\boldsymbol {\phi }}(\mathbf {R} ).} The column vector ϕ ( R ) {\displaystyle {\boldsymbol {\phi }}(\mathbf {R} )} has elements ϕ k ( R ) , k = 1 , … , K {\displaystyle \phi _{k}(\mathbf {R} ),\ k=1,\ldots ,K} . The matrix H e ( R ) {\displaystyle \mathbb {H} _{\text{e}}(\mathbf {R} )} is diagonal, and the nuclear Hamilton matrix is non-diagonal; its off-diagonal (vibronic coupling) terms ( H n ( R ) ) k ′ k {\displaystyle {\big (}\mathbb {H} _{\text{n}}(\mathbf {R} ){\big )}_{k'k}} are further discussed below. The vibronic coupling in this approach is through nuclear kinetic energy terms. Solution of these coupled equations gives an approximation for energy and wavefunction that goes beyond the Born–Oppenheimer approximation. Unfortunately, the off-diagonal kinetic energy terms are usually difficult to handle. 
This is why a diabatic transformation is often applied, which retains part of the nuclear kinetic energy terms on the diagonal, removes the kinetic energy terms from the off-diagonal and creates coupling terms between the adiabatic PESs on the off-diagonal. If we can neglect the off-diagonal elements, the equations will uncouple and simplify drastically. In order to show when this neglect is justified, we suppress the coordinates in the notation and write, by applying the Leibniz rule for differentiation, the matrix elements of T n {\displaystyle T_{\text{n}}} as T n ( R ) k ′ k ≡ ( H n ( R ) ) k ′ k = δ k ′ k T n + ∑ A , α 1 M A ⟨ χ k ′ | P A α | χ k ⟩ ( r ) P A α + ⟨ χ k ′ | T n | χ k ⟩ ( r ) . {\displaystyle T_{\text{n}}(\mathbf {R} )_{k'k}\equiv {\big (}\mathbb {H} _{\text{n}}(\mathbf {R} ){\big )}_{k'k}=\delta _{k'k}T_{\text{n}}+\sum _{A,\alpha }{\frac {1}{M_{A}}}\langle \chi _{k'}|P_{A\alpha }|\chi _{k}\rangle _{(\mathbf {r} )}P_{A\alpha }+\langle \chi _{k'}|T_{\text{n}}|\chi _{k}\rangle _{(\mathbf {r} )}.} The diagonal ( k ′ = k {\displaystyle k'=k} ) matrix elements ⟨ χ k | P A α | χ k ⟩ ( r ) {\displaystyle \langle \chi _{k}|P_{A\alpha }|\chi _{k}\rangle _{(\mathbf {r} )}} of the operator P A α {\displaystyle P_{A\alpha }} vanish, because we assume time-reversal invariance, so that χ k {\displaystyle \chi _{k}} can be chosen to be always real. The off-diagonal matrix elements satisfy ⟨ χ k ′ | P A α | χ k ⟩ ( r ) = ⟨ χ k ′ | [ P A α , H e ] | χ k ⟩ ( r ) E k ( R ) − E k ′ ( R ) . {\displaystyle \langle \chi _{k'}|P_{A\alpha }|\chi _{k}\rangle _{(\mathbf {r} )}={\frac {\langle \chi _{k'}|[P_{A\alpha },H_{\text{e}}]|\chi _{k}\rangle _{(\mathbf {r} )}}{E_{k}(\mathbf {R} )-E_{k'}(\mathbf {R} )}}.} The matrix element in the numerator is ⟨ χ k ′ | [ P A α , H e ] | χ k ⟩ ( r ) = i Z A ∑ i ⟨ χ k ′ | ( r i A ) α r i A 3 | χ k ⟩ ( r ) with r i A ≡ r i − R A . {\displaystyle \langle \chi _{k'}|[P_{A\alpha },H_{\mathrm {e} }]|\chi _{k}\rangle _{(\mathbf {r} )}=iZ_{A}\sum _{i}\left\langle \chi _{k'}\left|{\frac {(\mathbf {r} _{iA})_{\alpha }}{r_{iA}^{3}}}\right|\chi _{k}\right\rangle _{(\mathbf {r} )}\quad {\text{with}}\quad \mathbf {r} _{iA}\equiv \mathbf {r} _{i}-\mathbf {R} _{A}.} The matrix element of the one-electron operator appearing on the right side is finite. When the two surfaces come close, E k ( R ) ≈ E k ′ ( R ) {\displaystyle E_{k}(\mathbf {R} )\approx E_{k'}(\mathbf {R} )} , the nuclear momentum coupling term becomes large and is no longer negligible. This is the case where the BO approximation breaks down, and a coupled set of nuclear motion equations must be considered instead of the one equation appearing in the second step of the BO approximation. Conversely, if all surfaces are well separated, all off-diagonal terms can be neglected, and hence the whole matrix of P α A {\displaystyle P_{\alpha }^{A}} is effectively zero. The third term on the right side of the expression for the matrix element of Tn (the Born–Oppenheimer diagonal correction) can approximately be written as the matrix of P α A {\displaystyle P_{\alpha }^{A}} squared and, accordingly, is then also negligible. Only the first (diagonal) kinetic energy term in this equation survives in the case of well separated surfaces, and a diagonal, uncoupled, set of nuclear motion equations results: [ T n + E k ( R ) ] ϕ k ( R ) = E ϕ k ( R ) for k = 1 , … , K , {\displaystyle [T_{\text{n}}+E_{k}(\mathbf {R} )]\phi _{k}(\mathbf {R} )=E\phi _{k}(\mathbf {R} )\quad {\text{for}}\quad k=1,\ldots ,K,} which are the normal second step of the BO equations discussed above. We reiterate that when two or more potential energy surfaces approach each other, or even cross, the Born–Oppenheimer approximation breaks down, and one must fall back on the coupled equations. One then usually invokes the diabatic approximation.
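The off-diagonal identity relating the matrix elements of P Aα to those of the commutator is the standard step of sandwiching [P Aα, He] between two electronic eigenfunctions and letting He act on each side:

```latex
\begin{aligned}
\langle \chi_{k'} | [P_{A\alpha}, H_{\mathrm{e}}] | \chi_k \rangle_{(\mathbf{r})}
&= \langle \chi_{k'} | P_{A\alpha} H_{\mathrm{e}} | \chi_k \rangle_{(\mathbf{r})}
 - \langle \chi_{k'} | H_{\mathrm{e}} P_{A\alpha} | \chi_k \rangle_{(\mathbf{r})} \\
&= E_k(\mathbf{R})\, \langle \chi_{k'} | P_{A\alpha} | \chi_k \rangle_{(\mathbf{r})}
 - E_{k'}(\mathbf{R})\, \langle \chi_{k'} | P_{A\alpha} | \chi_k \rangle_{(\mathbf{r})} \\
&= \bigl(E_k(\mathbf{R}) - E_{k'}(\mathbf{R})\bigr)\,
   \langle \chi_{k'} | P_{A\alpha} | \chi_k \rangle_{(\mathbf{r})},
\end{aligned}
```

where He acts to the left with eigenvalue Ek′ because the χ k are real and He is Hermitian; dividing by Ek(R) − Ek′(R), which is nonzero for k′ ≠ k as long as the surfaces do not cross, recovers the expression used for the off-diagonal matrix elements.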
== Born–Oppenheimer approximation with correct symmetry == To include the correct symmetry within the Born–Oppenheimer (BO) approximation, a molecular system presented in terms of (mass-dependent) nuclear coordinates q {\displaystyle \mathbf {q} } and formed by the two lowest BO adiabatic potential energy surfaces (PES) u 1 ( q ) {\displaystyle u_{1}(\mathbf {q} )} and u 2 ( q ) {\displaystyle u_{2}(\mathbf {q} )} is considered. To ensure the validity of the BO approximation, the energy E of the system is assumed to be low enough so that u 2 ( q ) {\displaystyle u_{2}(\mathbf {q} )} becomes a closed PES in the region of interest, with the exception of sporadic infinitesimal sites surrounding degeneracy points formed by u 1 ( q ) {\displaystyle u_{1}(\mathbf {q} )} and u 2 ( q ) {\displaystyle u_{2}(\mathbf {q} )} (designated as (1, 2) degeneracy points). The starting point is the nuclear adiabatic BO (matrix) equation written in the form − ℏ 2 2 m ( ∇ + τ ) 2 Ψ + ( u − E ) Ψ = 0 , {\displaystyle -{\frac {\hbar ^{2}}{2m}}(\nabla +\tau )^{2}\Psi +(\mathbf {u} -E)\Psi =0,} where Ψ ( q ) {\displaystyle \Psi (\mathbf {q} )} is a column vector containing the unknown nuclear wave functions ψ k ( q ) {\displaystyle \psi _{k}(\mathbf {q} )} , u ( q ) {\displaystyle \mathbf {u} (\mathbf {q} )} is a diagonal matrix containing the corresponding adiabatic potential energy surfaces u k ( q ) {\displaystyle u_{k}(\mathbf {q} )} , m is the reduced mass of the nuclei, E is the total energy of the system, ∇ {\displaystyle \nabla } is the gradient operator with respect to the nuclear coordinates q {\displaystyle \mathbf {q} } , and τ ( q ) {\displaystyle \mathbf {\tau } (\mathbf {q} )} is a matrix containing the vectorial non-adiabatic coupling terms (NACT): τ j k = ⟨ ζ j | ∇ ζ k ⟩ . 
{\displaystyle \mathbf {\tau } _{jk}=\langle \zeta _{j}|\nabla \zeta _{k}\rangle .} Here | ζ n ⟩ {\displaystyle |\zeta _{n}\rangle } are eigenfunctions of the electronic Hamiltonian assumed to form a complete Hilbert space in the given region in configuration space. To study the scattering process taking place on the two lowest surfaces, one extracts from the above BO equation the two corresponding equations: − ℏ 2 2 m ∇ 2 ψ 1 + ( u ~ 1 − E ) ψ 1 − ℏ 2 2 m [ 2 τ 12 ∇ + ∇ τ 12 ] ψ 2 = 0 , {\displaystyle -{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\psi _{1}+({\tilde {u}}_{1}-E)\psi _{1}-{\frac {\hbar ^{2}}{2m}}[2\mathbf {\tau } _{12}\nabla +\nabla \mathbf {\tau } _{12}]\psi _{2}=0,} − ℏ 2 2 m ∇ 2 ψ 2 + ( u ~ 2 − E ) ψ 2 + ℏ 2 2 m [ 2 τ 12 ∇ + ∇ τ 12 ] ψ 1 = 0 , {\displaystyle -{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\psi _{2}+({\tilde {u}}_{2}-E)\psi _{2}+{\frac {\hbar ^{2}}{2m}}[2\mathbf {\tau } _{12}\nabla +\nabla \mathbf {\tau } _{12}]\psi _{1}=0,} where u ~ k ( q ) = u k ( q ) + ( ℏ 2 / 2 m ) τ 12 2 {\displaystyle {\tilde {u}}_{k}(\mathbf {q} )=u_{k}(\mathbf {q} )+(\hbar ^{2}/2m)\tau _{12}^{2}} (k = 1, 2), and τ 12 = τ 12 ( q ) {\displaystyle \mathbf {\tau } _{12}=\mathbf {\tau } _{12}(\mathbf {q} )} is the (vectorial) NACT responsible for the coupling between u 1 ( q ) {\displaystyle u_{1}(\mathbf {q} )} and u 2 ( q ) {\displaystyle u_{2}(\mathbf {q} )} . Next a new function is introduced: χ = ψ 1 + i ψ 2 , {\displaystyle \chi =\psi _{1}+i\psi _{2},} and the corresponding rearrangements are made: Multiplying the second equation by i and combining it with the first equation yields the (complex) equation − ℏ 2 2 m ∇ 2 χ + ( u ~ 1 − E ) χ + i ℏ 2 2 m [ 2 τ 12 ∇ + ∇ τ 12 ] χ + i ( u 1 − u 2 ) ψ 2 = 0. 
{\displaystyle -{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\chi +({\tilde {u}}_{1}-E)\chi +i{\frac {\hbar ^{2}}{2m}}[2\mathbf {\tau } _{12}\nabla +\nabla \mathbf {\tau } _{12}]\chi +i(u_{1}-u_{2})\psi _{2}=0.} The last term in this equation can be deleted for the following reasons: At those points where u 2 ( q ) {\displaystyle u_{2}(\mathbf {q} )} is classically closed, ψ 2 ( q ) ∼ 0 {\displaystyle \psi _{2}(\mathbf {q} )\sim 0} by definition, and at those points where u 2 ( q ) {\displaystyle u_{2}(\mathbf {q} )} becomes classically allowed (which happens at the vicinity of the (1, 2) degeneracy points) this implies that: u 1 ( q ) ∼ u 2 ( q ) {\displaystyle u_{1}(\mathbf {q} )\sim u_{2}(\mathbf {q} )} , or u 1 ( q ) − u 2 ( q ) ∼ 0 {\displaystyle u_{1}(\mathbf {q} )-u_{2}(\mathbf {q} )\sim 0} . Consequently, the last term is, indeed, negligibly small at every point in the region of interest, and the equation simplifies to become − ℏ 2 2 m ∇ 2 χ + ( u ~ 1 − E ) χ + i ℏ 2 2 m [ 2 τ 12 ∇ + ∇ τ 12 ] χ = 0. {\displaystyle -{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\chi +({\tilde {u}}_{1}-E)\chi +i{\frac {\hbar ^{2}}{2m}}[2\mathbf {\tau } _{12}\nabla +\nabla \mathbf {\tau } _{12}]\chi =0.} In order for this equation to yield a solution with the correct symmetry, it is suggested to apply a perturbation approach based on an elastic potential u 0 ( q ) {\displaystyle u_{0}(\mathbf {q} )} , which coincides with u 1 ( q ) {\displaystyle u_{1}(\mathbf {q} )} at the asymptotic region. The equation with an elastic potential can be solved, in a straightforward manner, by substitution. 
Thus, if χ 0 {\displaystyle \chi _{0}} is the solution of this equation, it is presented as χ 0 ( q | Γ ) = ξ 0 ( q ) exp ⁡ [ − i ∫ Γ d q ′ ⋅ τ ( q ′ | Γ ) ] , {\displaystyle \chi _{0}(\mathbf {q} |\Gamma )=\xi _{0}(\mathbf {q} )\exp \left[-i\int _{\Gamma }d\mathbf {q} '\cdot \mathbf {\tau } (\mathbf {q} '|\Gamma )\right],} where Γ {\displaystyle \Gamma } is an arbitrary contour, and the exponential function contains the relevant symmetry as created while moving along Γ {\displaystyle \Gamma } . The function ξ 0 ( q ) {\displaystyle \xi _{0}(\mathbf {q} )} can be shown to be a solution of the (unperturbed/elastic) equation − ℏ 2 2 m ∇ 2 ξ 0 + ( u 0 − E ) ξ 0 = 0. {\displaystyle -{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\xi _{0}+(u_{0}-E)\xi _{0}=0.} Having χ 0 ( q | Γ ) {\displaystyle \chi _{0}(\mathbf {q} |\Gamma )} , the full solution of the above decoupled equation takes the form χ ( q | Γ ) = χ 0 ( q | Γ ) + η ( q | Γ ) , {\displaystyle \chi (\mathbf {q} |\Gamma )=\chi _{0}(\mathbf {q} |\Gamma )+\eta (\mathbf {q} |\Gamma ),} where η ( q | Γ ) {\displaystyle \eta (\mathbf {q} |\Gamma )} satisfies the resulting inhomogeneous equation: − ℏ 2 2 m ∇ 2 η + ( u ~ 1 − E ) η + i ℏ 2 2 m [ 2 τ 12 ∇ + ∇ τ 12 ] η = ( u 1 − u 0 ) χ 0 . {\displaystyle -{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\eta +({\tilde {u}}_{1}-E)\eta +i{\frac {\hbar ^{2}}{2m}}[2\mathbf {\tau } _{12}\nabla +\nabla \mathbf {\tau } _{12}]\eta =(u_{1}-u_{0})\chi _{0}.} In this equation the inhomogeneity ensures the symmetry for the perturbed part of the solution along any contour and therefore for the solution in the required region in configuration space. The relevance of the present approach was demonstrated while studying a two-arrangement-channel model (containing one inelastic channel and one reactive channel) for which the two adiabatic states were coupled by a Jahn–Teller conical intersection. 
A nice fit between the symmetry-preserved single-state treatment and the corresponding two-state treatment was obtained. This applies in particular to the reactive state-to-state probabilities (see Table III in Ref. 5a and Table III in Ref. 5b), for which the ordinary BO approximation led to erroneous results, whereas the symmetry-preserving BO approximation produced the accurate results, as they followed from solving the two coupled equations. == See also == Adiabatic ionization Adiabatic process (quantum mechanics) Avoided crossing Born–Huang approximation Franck–Condon principle Kohn anomaly == Notes == == References == == External links == Resources related to the Born–Oppenheimer approximation: The original article (in German) Translation by S. M. Blinder Another version of the same translation by S. M. Blinder The Born–Oppenheimer approximation, a section from Peter Haynes' doctoral thesis
Wikipedia/Born–Oppenheimer_approximation