In theoretical physics, the (one-dimensional) nonlinear Schrödinger equation (NLSE) is a nonlinear variation of the Schrödinger equation. It is a classical field equation whose principal applications are to the propagation of light in nonlinear optical fibers, planar waveguides and hot rubidium vapors, and to Bose–Einstein condensates confined to highly anisotropic, cigar-shaped traps, in the mean-field regime. Additionally, the equation appears in the studies of small-amplitude gravity waves on the surface of deep inviscid (zero-viscosity) water; Langmuir waves in hot plasmas; the propagation of plane-diffracted wave beams in the focusing regions of the ionosphere; the propagation of Davydov's alpha-helix solitons, which are responsible for energy transport along molecular chains; and many others. More generally, the NLSE appears as one of the universal equations that describe the evolution of slowly varying packets of quasi-monochromatic waves in weakly nonlinear media that have dispersion. Unlike the linear Schrödinger equation, the NLSE never describes the time evolution of a quantum state. The 1D NLSE is an example of an integrable model. In quantum mechanics, the 1D NLSE is a special case of the classical nonlinear Schrödinger field, which in turn is a classical limit of a quantum Schrödinger field. Conversely, when the classical Schrödinger field is canonically quantized, it becomes a quantum field theory (which is linear, despite the fact that it is called the "quantum nonlinear Schrödinger equation") that describes bosonic point particles with delta-function interactions: the particles either repel or attract when they are at the same point. In fact, when the number of particles is finite, this quantum field theory is equivalent to the Lieb–Liniger model. Both the quantum and the classical 1D nonlinear Schrödinger equations are integrable. Of special interest is the limit of infinite strength repulsion, in which case the Lieb–Liniger model becomes the Tonks–Girardeau gas (also called the hard-core Bose gas, or impenetrable Bose gas). In this limit, the bosons may, by a change of variables that is a continuum generalization of the Jordan–Wigner transformation, be transformed into a system of one-dimensional noninteracting spinless fermions. The nonlinear Schrödinger equation is a simplified 1+1-dimensional form of the Ginzburg–Landau equation, introduced by Vitaly Ginzburg and Lev Landau in 1950 in their work on superconductivity, and was written down explicitly by R. Y. Chiao, E. Garmire, and C. H. Townes (1964, equation (5)) in their study of optical beams. The multi-dimensional version replaces the second spatial derivative by the Laplacian. In more than one dimension, the equation is not integrable; it allows for collapse and wave turbulence. == Definition == The nonlinear Schrödinger equation is a nonlinear partial differential equation, applicable to classical and quantum mechanics. === Classical equation === The classical field equation (in dimensionless form) is i ∂ t ψ = − 1 2 ∂ x 2 ψ + κ | ψ | 2 ψ {\displaystyle i\,\partial _{t}\psi =-{1 \over 2}\partial _{x}^{2}\psi +\kappa |\psi |^{2}\psi } for the complex field ψ(x,t). This equation arises from the Hamiltonian H = ∫ d x [ 1 2 | ∂ x ψ | 2 + κ 2 | ψ | 4 ] {\displaystyle H=\int \mathrm {d} x\left[{1 \over 2}|\partial _{x}\psi |^{2}+{\kappa \over 2}|\psi |^{4}\right]} with the Poisson brackets { ψ ( x ) , ψ ( y ) } = { ψ ∗ ( x ) , ψ ∗ ( y ) } = 0 {\displaystyle \{\psi (x),\psi (y)\}=\{\psi ^{*}(x),\psi ^{*}(y)\}=0\,} { ψ ∗ ( x ) , ψ ( y ) } = i δ ( x − y ) . 
{\displaystyle \{\psi ^{*}(x),\psi (y)\}=i\delta (x-y).\,} Unlike its linear counterpart, it never describes the time evolution of a quantum state. The case with negative κ is called focusing and allows for bright soliton solutions (localized in space, and having spatial attenuation towards infinity) as well as breather solutions. It can be solved exactly by use of the inverse scattering transform, as shown by Zakharov & Shabat (1972) (see below). The other case, with κ positive, is the defocusing NLS which has dark soliton solutions (having constant amplitude at infinity, and a local spatial dip in amplitude). === Quantum mechanics === To get the quantized version, simply replace the Poisson brackets by commutators [ ψ ( x ) , ψ ( y ) ] = [ ψ ∗ ( x ) , ψ ∗ ( y ) ] = 0 [ ψ ∗ ( x ) , ψ ( y ) ] = − δ ( x − y ) {\displaystyle {\begin{aligned}{}[\psi (x),\psi (y)]&=[\psi ^{*}(x),\psi ^{*}(y)]=0\\{}[\psi ^{*}(x),\psi (y)]&=-\delta (x-y)\end{aligned}}} and normal order the Hamiltonian H = ∫ d x [ 1 2 ∂ x ψ † ∂ x ψ + κ 2 ψ † ψ † ψ ψ ] . {\displaystyle H=\int dx\left[{1 \over 2}\partial _{x}\psi ^{\dagger }\partial _{x}\psi +{\kappa \over 2}\psi ^{\dagger }\psi ^{\dagger }\psi \psi \right].} The quantum version was solved by Bethe ansatz by Lieb and Liniger. Thermodynamics was described by Chen-Ning Yang. Quantum correlation functions also were evaluated by Korepin in 1993. The model has higher conservation laws - Davies and Korepin in 1989 expressed them in terms of local fields. == Solution == The nonlinear Schrödinger equation is integrable in 1d: Zakharov and Shabat (1972) solved it with the inverse scattering transform. The corresponding linear system of equations is known as the Zakharov–Shabat system: ϕ x = J ϕ Λ + U ϕ ϕ t = 2 J ϕ Λ 2 + 2 U ϕ Λ + ( J U 2 − J U x ) ϕ , {\displaystyle {\begin{aligned}\phi _{x}&=J\phi \Lambda +U\phi \\\phi _{t}&=2J\phi \Lambda ^{2}+2U\phi \Lambda +\left(JU^{2}-JU_{x}\right)\phi ,\end{aligned}}} where Λ = ( λ 1 0 0 λ 2 ) , J = i σ z = ( i 0 0 − i ) , U = i ( 0 q r 0 ) . {\displaystyle \Lambda ={\begin{pmatrix}\lambda _{1}&0\\0&\lambda _{2}\end{pmatrix}},\quad J=i\sigma _{z}={\begin{pmatrix}i&0\\0&-i\end{pmatrix}},\quad U=i{\begin{pmatrix}0&q\\r&0\end{pmatrix}}.} The nonlinear Schrödinger equation arises as compatibility condition of the Zakharov–Shabat system: ϕ x t = ϕ t x ⇒ U t = − J U x x + 2 J U 2 U ⇔ { i q t = q x x + 2 q r q i r t = − r x x − 2 q r r . {\displaystyle \phi _{xt}=\phi _{tx}\quad \Rightarrow \quad U_{t}=-JU_{xx}+2JU^{2}U\quad \Leftrightarrow \quad {\begin{cases}iq_{t}=q_{xx}+2qrq\\ir_{t}=-r_{xx}-2qrr.\end{cases}}} By setting q = r* or q = − r* the nonlinear Schrödinger equation with attractive or repulsive interaction is obtained. An alternative approach uses the Zakharov–Shabat system directly and employs the following Darboux transformation: ϕ → ϕ [ 1 ] = ϕ Λ − σ ϕ U → U [ 1 ] = U + [ J , σ ] σ = φ Ω φ − 1 {\displaystyle {\begin{aligned}\phi \to \phi [1]&=\phi \Lambda -\sigma \phi \\U\to U[1]&=U+[J,\sigma ]\\\sigma &=\varphi \Omega \varphi ^{-1}\end{aligned}}} which leaves the system invariant. Here, φ is another invertible matrix solution (different from ϕ) of the Zakharov–Shabat system with spectral parameter Ω: φ x = J φ Ω + U φ φ t = 2 J φ Ω 2 + 2 U φ Ω + ( J U 2 − J U x ) φ . 
{\displaystyle {\begin{aligned}\varphi _{x}&=J\varphi \Omega +U\varphi \\\varphi _{t}&=2J\varphi \Omega ^{2}+2U\varphi \Omega +\left(JU^{2}-JU_{x}\right)\varphi .\end{aligned}}} Starting from the trivial solution U = 0 and iterating, one obtains the solutions with n solitons. Alternatively, the equation can be solved numerically by direct simulation, for example with the split-step (Fourier) method. This method has been implemented on both CPU and GPU. == Applications == === Fiber optics === In optics, the nonlinear Schrödinger equation occurs in the Manakov system, a model of wave propagation in fiber optics. The function ψ represents a wave and the nonlinear Schrödinger equation describes the propagation of the wave through a nonlinear medium. The second-order derivative represents the dispersion, while the κ term represents the nonlinearity. The equation models many nonlinear effects in a fiber, including self-phase modulation, four-wave mixing, second-harmonic generation and stimulated Raman scattering, as well as the propagation of optical solitons and ultrashort pulses. === Water waves === For water waves, the nonlinear Schrödinger equation describes the evolution of the envelope of modulated wave groups. In a 1968 paper, Vladimir E. Zakharov described the Hamiltonian structure of water waves and showed that, for slowly modulated wave groups, the wave amplitude approximately satisfies the nonlinear Schrödinger equation. The value of the nonlinearity parameter κ depends on the relative water depth. For deep water, with the water depth large compared to the wavelength of the water waves, κ is negative and envelope solitons may occur. Additionally, the group velocity of these envelope solitons could be increased by an acceleration induced by an external time-dependent water flow. For shallow water, with wavelengths longer than 4.6 times the water depth, the nonlinearity parameter κ is positive and wave groups with envelope solitons do not exist. In shallow water, surface-elevation solitons or waves of translation do exist, but they are not governed by the nonlinear Schrödinger equation. The nonlinear Schrödinger equation is thought to be important for explaining the formation of rogue waves. The complex field ψ, as appearing in the nonlinear Schrödinger equation, is related to the amplitude and phase of the water waves. Consider a slowly modulated carrier wave with water surface elevation η of the form: η = a ( x 0 , t 0 ) cos ⁡ [ k 0 x 0 − ω 0 t 0 − θ ( x 0 , t 0 ) ] , {\displaystyle \eta =a(x_{0},t_{0})\;\cos \left[k_{0}\,x_{0}-\omega _{0}\,t_{0}-\theta (x_{0},t_{0})\right],} where a(x0, t0) and θ(x0, t0) are the slowly modulated amplitude and phase. Further, ω0 and k0 are the (constant) angular frequency and wavenumber of the carrier waves, which have to satisfy the dispersion relation ω0 = Ω(k0). Then ψ = a exp ⁡ ( i θ ) . {\displaystyle \psi =a\;\exp \left(i\theta \right).} So its modulus |ψ| is the wave amplitude a, and its argument arg(ψ) is the phase θ. 
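Envelope evolution of this kind can also be computed directly. The following is a minimal numerical sketch (not taken from the article) of the split-step method mentioned in the Solution section, propagating a bright soliton of the dimensionless focusing equation i ψ_t = −½ ψ_xx − |ψ|²ψ (κ = −1); the grid size, time step and sech initial profile are illustrative choices.

```python
import numpy as np

# Minimal split-step Fourier sketch for the dimensionless NLSE
#     i psi_t = -(1/2) psi_xx + kappa |psi|^2 psi,   kappa = -1 (focusing).
# Grid size, step count and the sech initial condition are illustrative.

L, N = 40.0, 512                                 # periodic box and resolution
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)     # spectral wavenumbers

kappa, dt, steps = -1.0, 1e-3, 5000

psi = 1.0 / np.cosh(x)                           # bright-soliton profile sech(x)
drift = np.exp(-0.5j * k ** 2 * dt)              # exact linear (dispersive) step

def half_kick(f):
    """Half step of the nonlinear phase rotation exp(-i*kappa*|psi|^2*dt/2)."""
    return f * np.exp(-1j * kappa * np.abs(f) ** 2 * dt / 2)

for _ in range(steps):
    psi = half_kick(psi)                         # half nonlinear step
    psi = np.fft.ifft(drift * np.fft.fft(psi))   # full linear step in Fourier space
    psi = half_kick(psi)                         # half nonlinear step

# For kappa = -1 the exact solution is sech(x) * exp(i t / 2), so |psi|
# should still match sech(x) up to discretization error.
print(np.max(np.abs(np.abs(psi) - 1.0 / np.cosh(x))))
```

The Strang (half-kick, drift, half-kick) ordering makes the splitting second-order accurate in the time step.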
The relation between the physical coordinates (x0, t0) and the (x, t) coordinates, as used in the nonlinear Schrödinger equation given above, is given by: x = k 0 [ x 0 − Ω ′ ( k 0 ) t 0 ] , t = k 0 2 [ − Ω ″ ( k 0 ) ] t 0 {\displaystyle x=k_{0}\left[x_{0}-\Omega '(k_{0})\;t_{0}\right],\quad t=k_{0}^{2}\left[-\Omega ''(k_{0})\right]\;t_{0}} Thus (x, t) is a transformed coordinate system moving with the group velocity Ω'(k0) of the carrier waves, The dispersion-relation curvature Ω"(k0) – representing group velocity dispersion – is always negative for water waves under the action of gravity, for any water depth. For waves on the water surface of deep water, the coefficients of importance for the nonlinear Schrödinger equation are: κ = − 2 k 0 2 , Ω ( k 0 ) = g k 0 = ω 0 {\displaystyle \kappa =-2k_{0}^{2},\quad \Omega (k_{0})={\sqrt {gk_{0}}}=\omega _{0}\,\!} so Ω ′ ( k 0 ) = 1 2 ω 0 k 0 , Ω ″ ( k 0 ) = − 1 4 ω 0 k 0 2 , {\displaystyle \Omega '(k_{0})={\frac {1}{2}}{\frac {\omega _{0}}{k_{0}}},\quad \Omega ''(k_{0})=-{\frac {1}{4}}{\frac {\omega _{0}}{k_{0}^{2}}},\,\!} where g is the acceleration due to gravity at the Earth's surface. In the original (x0, t0) coordinates the nonlinear Schrödinger equation for water waves reads: i ∂ t 0 A + i Ω ′ ( k 0 ) ∂ x 0 A + 1 2 Ω ″ ( k 0 ) ∂ x 0 x 0 A − ν | A | 2 A = 0 , {\displaystyle i\,\partial _{t_{0}}A+i\,\Omega '(k_{0})\,\partial _{x_{0}}A+{\tfrac {1}{2}}\Omega ''(k_{0})\,\partial _{x_{0}x_{0}}A-\nu \,|A|^{2}\,A=0,} with A = ψ ∗ {\displaystyle A=\psi ^{*}} (i.e. the complex conjugate of ψ {\displaystyle \psi } ) and ν = κ k 0 2 Ω ″ ( k 0 ) . {\displaystyle \nu =\kappa \,k_{0}^{2}\,\Omega ''(k_{0}).} So ν = 1 2 ω 0 k 0 2 {\displaystyle \nu ={\tfrac {1}{2}}\omega _{0}k_{0}^{2}} for deep water waves. === Vortices === Hasimoto (1972) showed that the work of da Rios (1906) on vortex filaments is closely related to the nonlinear Schrödinger equation. Subsequently, Salman (2013) used this correspondence to show that breather solutions can also arise for a vortex filament. == Galilean invariance == The nonlinear Schrödinger equation is Galilean invariant in the following sense: Given a solution ψ(x, t) a new solution can be obtained by replacing x with x + vt everywhere in ψ(x, t) and by appending a phase factor of e − i v ( x + v t / 2 ) {\displaystyle e^{-iv(x+vt/2)}\,} : ψ ( x , t ) ↦ ψ [ v ] ( x , t ) = ψ ( x + v t , t ) e − i v ( x + v t / 2 ) . {\displaystyle \psi (x,t)\mapsto \psi _{[v]}(x,t)=\psi (x+vt,t)\;e^{-iv(x+vt/2)}.} == Gauge equivalent counterpart == NLSE (1) is gauge equivalent to the following isotropic Landau-Lifshitz equation (LLE) or Heisenberg ferromagnet equation S → t = S → ∧ S → x x . {\displaystyle {\vec {S}}_{t}={\vec {S}}\wedge {\vec {S}}_{xx}.\qquad } Note that this equation admits several integrable and non-integrable generalizations in 2 + 1 dimensions like the Ishimori equation and so on. == Zero-curvature formulation == The NLSE is equivalent to the curvature of a particular s u ( 2 ) {\displaystyle {\mathfrak {su}}(2)} -connection on R 2 {\displaystyle \mathbb {R} ^{2}} being equal to zero. 
Explicitly, with coordinates ( x , t ) {\displaystyle (x,t)} on R 2 {\displaystyle \mathbb {R} ^{2}} , the connection components A μ {\displaystyle A_{\mu }} are given by A x = ( i λ i φ ∗ i φ − i λ ) {\displaystyle A_{x}={\begin{pmatrix}i\lambda &i\varphi ^{*}\\i\varphi &-i\lambda \end{pmatrix}}} A t = ( 2 i λ 2 − i | φ | 2 2 i λ φ ∗ + φ x ∗ 2 i λ φ − φ x − 2 i λ 2 + i | φ | 2 ) {\displaystyle A_{t}={\begin{pmatrix}2i\lambda ^{2}-i|\varphi |^{2}&2i\lambda \varphi ^{*}+\varphi _{x}^{*}\\2i\lambda \varphi -\varphi _{x}&-2i\lambda ^{2}+i|\varphi |^{2}\end{pmatrix}}} where the σ i {\displaystyle \sigma _{i}} are the Pauli matrices. Then the zero-curvature equation ∂ t A x − ∂ x A t + [ A x , A t ] = 0 {\displaystyle \partial _{t}A_{x}-\partial _{x}A_{t}+[A_{x},A_{t}]=0} is equivalent to the NLSE i φ t + φ x x + 2 | φ | 2 φ = 0 {\displaystyle i\varphi _{t}+\varphi _{xx}+2|\varphi |^{2}\varphi =0} . The zero-curvature equation is so named as it corresponds to the curvature being equal to zero if it is defined F μ ν = [ ∂ μ − A μ , ∂ ν − A ν ] {\displaystyle F_{\mu \nu }=[\partial _{\mu }-A_{\mu },\partial _{\nu }-A_{\nu }]} . The pair of matrices A x {\displaystyle A_{x}} and A t {\displaystyle A_{t}} are also known as a Lax pair for the NLSE, in the sense that the zero-curvature equation recovers the PDE rather than them satisfying Lax's equation. == See also == AKNS system Eckhaus equation Gross–Pitaevskii equation Quartic interaction for a related model in quantum field theory Soliton (optics) Logarithmic Schrödinger equation == References == === Notes === === Other === == External links == "Nonlinear Schrodinger systems". Scholarpedia. Tutorial lecture on Nonlinear Schrodinger Equation (video). Nonlinear Schrodinger Equation with a Cubic Nonlinearity at EqWorld: The World of Mathematical Equations. Nonlinear Schrodinger Equation with a Power-Law Nonlinearity at EqWorld: The World of Mathematical Equations. Nonlinear Schrodinger Equation of General Form at EqWorld: The World of Mathematical Equations. Mathematical aspects of the nonlinear Schrödinger equation at Dispersive Wiki
Wikipedia/Nonlinear_Schrödinger_equation
In plasma physics, the Vlasov equation is a differential equation describing the time evolution of the distribution function of a collisionless plasma consisting of charged particles with long-range interaction, such as the Coulomb interaction. The equation was first suggested for the description of plasma by Anatoly Vlasov in 1938 and later discussed by him in detail in a monograph. The Vlasov equation, combined with the Landau kinetic equation, describes a collisional plasma. == Difficulties of the standard kinetic approach == First, Vlasov argues that the standard kinetic approach based on the Boltzmann equation has difficulties when applied to a description of the plasma with long-range Coulomb interaction. He mentions the following problems arising when applying the kinetic theory based on pair collisions to plasma dynamics: the theory of pair collisions disagrees with the discovery by Rayleigh, Irving Langmuir and Lewi Tonks of natural vibrations in electron plasma; the theory of pair collisions is formally not applicable to the Coulomb interaction due to the divergence of the kinetic terms; and the theory of pair collisions cannot explain experiments by Harrison Merrill and Harold Webb on anomalous electron scattering in gaseous plasma. Vlasov suggests that these difficulties originate from the long-range character of the Coulomb interaction. He starts with the collisionless Boltzmann equation (sometimes called the Vlasov equation, anachronistically in this context), in generalized coordinates: d d t f ( r , p , t ) = 0 , {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}f(\mathbf {r} ,\mathbf {p} ,t)=0,} explicitly a PDE: ∂ f ∂ t + d r d t ⋅ ∂ f ∂ r + d p d t ⋅ ∂ f ∂ p = 0 , {\displaystyle {\frac {\partial f}{\partial t}}+{\frac {\mathrm {d} \mathbf {r} }{\mathrm {d} t}}\cdot {\frac {\partial f}{\partial \mathbf {r} }}+{\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}}\cdot {\frac {\partial f}{\partial \mathbf {p} }}=0,} and adapts it to the case of a plasma, leading to the systems of equations shown below. Here f is a general distribution function of particles with momentum p at coordinates r and time t. Note that the term d p d t {\displaystyle {\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}}} is the force F acting on the particle. == The Vlasov–Maxwell system of equations (Gaussian units) == Instead of a collision-based kinetic description of the interaction of charged particles in plasma, Vlasov uses a self-consistent collective field created by the charged plasma particles. Such a description uses distribution functions f e ( r , p , t ) {\displaystyle f_{e}(\mathbf {r} ,\mathbf {p} ,t)} and f i ( r , p , t ) {\displaystyle f_{i}(\mathbf {r} ,\mathbf {p} ,t)} for electrons and (positive) plasma ions. The distribution function f α ( r , p , t ) {\displaystyle f_{\alpha }(\mathbf {r} ,\mathbf {p} ,t)} for species α describes the number of particles of species α having approximately the momentum p near the position r at time t. 
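The statement df/dt = 0 above means that f is constant along the particle trajectories (characteristics) defined by dr/dt = v and dp/dt = F. The following is a minimal sketch of that idea; the harmonic force law and the drifting-Maxwellian initial distribution are illustrative assumptions, not taken from the article.

```python
import numpy as np

# df/dt = 0: f is constant along characteristics dx/dt = v, dv/dt = F/m.
# Evaluate f(x, v, t) for an assumed harmonic force by tracing each
# phase-space point backwards in time and sampling the initial distribution.

m, omega0 = 1.0, 1.0

def force(x):
    return -m * omega0**2 * x          # assumed external restoring force

def f0(x, v):
    """Initial distribution: a drifting Maxwellian (illustrative)."""
    return np.exp(-x**2 - (v - 1.0)**2) / np.pi

def f(x, v, t, n_steps=1000):
    """Evaluate f at time t by integrating the characteristics backwards."""
    dt = -t / n_steps
    for _ in range(n_steps):           # kick-drift-kick (velocity Verlet)
        v = v + 0.5 * dt * force(x) / m
        x = x + dt * v
        v = v + 0.5 * dt * force(x) / m
    return f0(x, v)

xs = np.linspace(-3.0, 3.0, 64)
X, V = np.meshgrid(xs, xs)
dxv = xs[1] - xs[0]
ft = f(X, V, 0.7)

# Phase-space volume is preserved along the flow (Liouville), so the total
# particle number over this window stays approximately 1.
print(ft.sum() * dxv**2)
```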
Instead of the Boltzmann equation, the following system of equations was proposed for description of charged components of plasma (electrons and positive ions): ∂ f e ∂ t + v e ⋅ ∇ f e − e ( E + v e c × B ) ⋅ ∂ f e ∂ p = 0 ∂ f i ∂ t + v i ⋅ ∇ f i + Z i e ( E + v i c × B ) ⋅ ∂ f i ∂ p = 0 {\displaystyle {\begin{aligned}{\frac {\partial f_{e}}{\partial t}}+\mathbf {v} _{e}\cdot \nabla f_{e}-\;\;e\left(\mathbf {E} +{\frac {\mathbf {v} _{e}}{c}}\times \mathbf {B} \right)\cdot {\frac {\partial f_{e}}{\partial \mathbf {p} }}&=0\\{\frac {\partial f_{i}}{\partial t}}+\mathbf {v} _{i}\cdot \nabla f_{i}+Z_{i}e\left(\mathbf {E} +{\frac {\mathbf {v} _{i}}{c}}\times \mathbf {B} \right)\cdot {\frac {\partial f_{i}}{\partial \mathbf {p} }}&=0\end{aligned}}} ∇ × B = 4 π c j + 1 c ∂ E ∂ t , ∇ ⋅ B = 0 , ∇ × E = − 1 c ∂ B ∂ t , ∇ ⋅ E = 4 π ρ , {\displaystyle {\begin{aligned}\nabla \times \mathbf {B} &={\frac {4\pi }{c}}\mathbf {j} +{\frac {1}{c}}{\frac {\partial \mathbf {E} }{\partial t}},&\nabla \cdot \mathbf {B} &=0,\\\nabla \times \mathbf {E} &=-{\frac {1}{c}}{\frac {\partial \mathbf {B} }{\partial t}},&\nabla \cdot \mathbf {E} &=4\pi \rho ,\end{aligned}}} ρ = e ∫ ( Z i f i − f e ) d 3 p , j = e ∫ ( Z i f i v i − f e v e ) d 3 p , v α = p / m α 1 + p 2 / ( m α c ) 2 {\displaystyle {\begin{aligned}\rho &=e\int \left(Z_{i}f_{i}-f_{e}\right)\mathrm {d} ^{3}\mathbf {p} ,\\\mathbf {j} &=e\int \left(Z_{i}f_{i}\mathbf {v} _{i}-f_{e}\mathbf {v} _{e}\right)\mathrm {d} ^{3}\mathbf {p} ,\\\mathbf {v} _{\alpha }&={\frac {\mathbf {p} /m_{\alpha }}{\sqrt {1+p^{2}/\left(m_{\alpha }c\right)^{2}}}}\end{aligned}}} Here e is the elementary charge ( e > 0 {\displaystyle e>0} ), c is the speed of light, Zi e is the charge of the ions, mi is the mass of the ion, E ( r , t ) {\displaystyle \mathbf {E} (\mathbf {r} ,t)} and B ( r , t ) {\displaystyle \mathbf {B} (\mathbf {r} ,t)} represent collective self-consistent electromagnetic field created in the point r {\displaystyle \mathbf {r} } at time moment t by all plasma particles. The essential difference of this system of equations from equations for particles in an external electromagnetic field is that the self-consistent electromagnetic field depends in a complex way on the distribution functions of electrons and ions f e ( r , p , t ) {\displaystyle f_{e}(\mathbf {r} ,\mathbf {p} ,t)} and f i ( r , p , t ) {\displaystyle f_{i}(\mathbf {r} ,\mathbf {p} ,t)} . == The Vlasov–Poisson equation == The Vlasov–Poisson equations are an approximation of the Vlasov–Maxwell equations in the non-relativistic zero-magnetic field limit: ∂ f α ∂ t + v α ⋅ ∂ f α ∂ x + q α E m α ⋅ ∂ f α ∂ v = 0 , {\displaystyle {\frac {\partial f_{\alpha }}{\partial t}}+\mathbf {v} _{\alpha }\cdot {\frac {\partial f_{\alpha }}{\partial \mathbf {x} }}+{\frac {q_{\alpha }\mathbf {E} }{m_{\alpha }}}\cdot {\frac {\partial f_{\alpha }}{\partial \mathbf {v} }}=0,} and Poisson's equation for self-consistent electric field: ∇ 2 ϕ + ρ ε = 0. {\displaystyle \nabla ^{2}\phi +{\frac {\rho }{\varepsilon }}=0.} Here qα is the particle's electric charge, mα is the particle's mass, E ( x , t ) {\displaystyle \mathbf {E} (\mathbf {x} ,t)} is the self-consistent electric field, ϕ ( x , t ) {\displaystyle \phi (\mathbf {x} ,t)} the self-consistent electric potential, ρ is the electric charge density, and ε {\displaystyle \varepsilon } is the electric permitivity. 
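As an illustration of how this system can be treated numerically, the following is a minimal 1D-1V Vlasov–Poisson sketch in normalized units (electron charge and mass set to one, fixed neutralizing ion background, ε = 1), using Strang splitting with spectral advection in x and v. The Landau-damping initial condition, the treatment of v as periodic on a truncated interval, and all grid parameters are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Normalized electron Vlasov-Poisson system:
#     f_t + v f_x - E f_v = 0,     dE/dx = 1 - (integral of f over v).

Nx, Nv, L, vmax = 64, 128, 4 * np.pi, 6.0
x = np.linspace(0, L, Nx, endpoint=False)
v = np.linspace(-vmax, vmax, Nv, endpoint=False)
kx = 2 * np.pi * np.fft.fftfreq(Nx, d=L / Nx)
kv = 2 * np.pi * np.fft.fftfreq(Nv, d=2 * vmax / Nv)
X, V = np.meshgrid(x, v, indexing="ij")

# Weak Landau-damping initial condition (illustrative test case).
f = np.exp(-V**2 / 2) / np.sqrt(2 * np.pi) * (1 + 0.01 * np.cos(0.5 * X))
dv, dt = 2 * vmax / Nv, 0.1

def efield(f):
    """Solve dE/dx = rho, rho = 1 - integral of f dv, periodic in x."""
    rho = 1.0 - f.sum(axis=1) * dv
    rho_hat = np.fft.fft(rho)
    E_hat = np.zeros_like(rho_hat)
    E_hat[1:] = rho_hat[1:] / (1j * kx[1:])
    return np.real(np.fft.ifft(E_hat))

def advect_x(f, dt):
    """Exact shift f(x, v) -> f(x - v*dt, v), mode by mode in x."""
    return np.real(np.fft.ifft(np.fft.fft(f, axis=0)
                               * np.exp(-1j * np.outer(kx, v) * dt), axis=0))

def advect_v(f, E, dt):
    """Shift f(x, v) -> f(x, v + E(x)*dt): electron acceleration is -E."""
    return np.real(np.fft.ifft(np.fft.fft(f, axis=1)
                               * np.exp(1j * np.outer(E, kv) * dt), axis=1))

for _ in range(200):                    # Strang splitting: x half, v full, x half
    f = advect_x(f, dt / 2)
    f = advect_v(f, efield(f), dt)
    f = advect_x(f, dt / 2)

# For this standard setup the electric-field energy should decay in time,
# i.e. exhibit the (linear) Landau damping discussed in the next paragraph.
print(0.5 * np.sum(efield(f)**2) * (L / Nx))
```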
Vlasov–Poisson equations are used to describe various phenomena in plasma, in particular Landau damping and the distributions in a double layer plasma, where they are necessarily strongly non-Maxwellian, and therefore inaccessible to fluid models. == Moment equations == In fluid descriptions of plasmas (see plasma modeling and magnetohydrodynamics (MHD)) one does not consider the velocity distribution. This is achieved by replacing f ( r , v , t ) {\displaystyle f(\mathbf {r} ,\mathbf {v} ,t)} with plasma moments such as number density n, flow velocity u and pressure p. They are named plasma moments because the n-th moment of f {\displaystyle f} can be found by integrating v n f {\displaystyle v^{n}f} over velocity. These variables are only functions of position and time, which means that some information is lost. In multifluid theory, the different particle species are treated as different fluids with different pressures, densities and flow velocities. The equations governing the plasma moments are called the moment or fluid equations. Below the two most used moment equations are presented (in SI units). Deriving the moment equations from the Vlasov equation requires no assumptions about the distribution function. === Continuity equation === The continuity equation describes how the density changes with time. It can be found by integration of the Vlasov equation over the entire velocity space. ∫ d f d t d 3 v = ∫ ( ∂ f ∂ t + ( v ⋅ ∇ r ) f + ( a ⋅ ∇ v ) f ) d 3 v = 0 {\displaystyle \int {\frac {\mathrm {d} f}{\mathrm {d} t}}\mathrm {d} ^{3}v=\int \left({\frac {\partial f}{\partial t}}+(\mathbf {v} \cdot \nabla _{r})f+(\mathbf {a} \cdot \nabla _{v})f\right)\mathrm {d} ^{3}v=0} After some calculations, one ends up with ∂ n ∂ t + ∇ ⋅ ( n u ) = 0. {\displaystyle {\frac {\partial n}{\partial t}}+\nabla \cdot (n\mathbf {u} )=0.} The number density n, and the momentum density nu, are zeroth and first order moments: n = ∫ f d 3 v {\displaystyle n=\int f\,\mathrm {d^{3}} v} n u = ∫ v f d 3 v {\displaystyle n\mathbf {u} =\int \mathbf {v} f\,\mathrm {d} ^{3}v} === Momentum equation === The rate of change of momentum of a particle is given by the Lorentz equation: m d v d t = q ( E + v × B ) {\displaystyle m{\frac {\mathrm {d} \mathbf {v} }{\mathrm {d} t}}=q(\mathbf {E} +\mathbf {v} \times \mathbf {B} )} By using this equation and the Vlasov Equation, the momentum equation for each fluid becomes m n D D t u = − ∇ ⋅ P + q n E + q n u × B , {\displaystyle mn{\frac {\mathrm {D} }{\mathrm {D} t}}\mathbf {u} =-\nabla \cdot {\mathcal {P}}+qn\mathbf {E} +qn\mathbf {u} \times \mathbf {B} ,} where P {\displaystyle {\mathcal {P}}} is the pressure tensor. The material derivative is D D t = ∂ ∂ t + u ⋅ ∇ . {\displaystyle {\frac {\mathrm {D} }{\mathrm {D} t}}={\frac {\partial }{\partial t}}+\mathbf {u} \cdot \nabla .} The pressure tensor is defined as the particle mass times the covariance matrix of the velocity: p i j = m ∫ ( v i − u i ) ( v j − u j ) f d 3 v . {\displaystyle p_{ij}=m\int (v_{i}-u_{i})(v_{j}-u_{j})f\mathrm {d} ^{3}v.} == The frozen-in approximation == As for ideal MHD, the plasma can be considered as tied to the magnetic field lines when certain conditions are fulfilled. One often says that the magnetic field lines are frozen into the plasma. The frozen-in conditions can be derived from Vlasov equation. We introduce the scales T, L, and V for time, distance and speed respectively. They represent magnitudes of the different parameters which give large changes in f {\displaystyle f} . 
By large we mean that ∂ f ∂ t T ∼ f | ∂ f ∂ r | L ∼ f | ∂ f ∂ v | V ∼ f . {\displaystyle {\frac {\partial f}{\partial t}}T\sim f\quad \left|{\frac {\partial f}{\partial \mathbf {r} }}\right|L\sim f\quad \left|{\frac {\partial f}{\partial \mathbf {v} }}\right|V\sim f.} We then write t ′ = t T , r ′ = r L , v ′ = v V . {\displaystyle t'={\frac {t}{T}},\quad \mathbf {r} '={\frac {\mathbf {r} }{L}},\quad \mathbf {v} '={\frac {\mathbf {v} }{V}}.} Vlasov equation can now be written 1 T ∂ f ∂ t ′ + V L v ′ ⋅ ∂ f ∂ r ′ + q m V ( E + V v ′ × B ) ⋅ ∂ f ∂ v ′ = 0. {\displaystyle {\frac {1}{T}}{\frac {\partial f}{\partial t'}}+{\frac {V}{L}}\mathbf {v} '\cdot {\frac {\partial f}{\partial \mathbf {r} '}}+{\frac {q}{mV}}(\mathbf {E} +V\mathbf {v} '\times \mathbf {B} )\cdot {\frac {\partial f}{\partial \mathbf {v} '}}=0.} So far no approximations have been done. To be able to proceed we set V = R ω g {\displaystyle V=R\omega _{g}} , where ω g = q B / m {\displaystyle \omega _{g}=qB/m} is the gyro frequency and R is the gyroradius. By dividing by ωg, we get 1 ω g T ∂ f ∂ t ′ + R L v ′ ⋅ ∂ f ∂ r ′ + ( E V B + v ′ × B B ) ⋅ ∂ f ∂ v ′ = 0 {\displaystyle {\frac {1}{\omega _{g}T}}{\frac {\partial f}{\partial t'}}+{\frac {R}{L}}\mathbf {v} '\cdot {\frac {\partial f}{\partial \mathbf {r} '}}+\left({\frac {\mathbf {E} }{VB}}+\mathbf {v} '\times {\frac {\mathbf {B} }{B}}\right)\cdot {\frac {\partial f}{\partial \mathbf {v} '}}=0} If 1 / ω g ≪ T {\displaystyle 1/\omega _{g}\ll T} and R ≪ L {\displaystyle R\ll L} , the two first terms will be much less than f {\displaystyle f} since ∂ f / ∂ t ′ ∼ f , v ′ ≲ 1 {\displaystyle \partial f/\partial t'\sim f,v'\lesssim 1} and ∂ f / ∂ r ′ ∼ f {\displaystyle \partial f/\partial \mathbf {r} '\sim f} due to the definitions of T, L, and V above. Since the last term is of the order of f {\displaystyle f} , we can neglect the two first terms and write ( E V B + v ′ × B B ) ⋅ ∂ f ∂ v ′ ≈ 0 ⇒ ( E + v × B ) ⋅ ∂ f ∂ v ≈ 0 {\displaystyle \left({\frac {\mathbf {E} }{VB}}+\mathbf {v} '\times {\frac {\mathbf {B} }{B}}\right)\cdot {\frac {\partial f}{\partial \mathbf {v} '}}\approx 0\Rightarrow (\mathbf {E} +\mathbf {v} \times \mathbf {B} )\cdot {\frac {\partial f}{\partial \mathbf {v} }}\approx 0} This equation can be decomposed into a field aligned and a perpendicular part: E ∥ ∂ f ∂ v ∥ + ( E ⊥ + v × B ) ⋅ ∂ f ∂ v ⊥ ≈ 0 {\displaystyle \mathbf {E} _{\parallel }{\frac {\partial f}{\partial \mathbf {v} _{\parallel }}}+(\mathbf {E} _{\perp }+\mathbf {v} \times \mathbf {B} )\cdot {\frac {\partial f}{\partial \mathbf {v} _{\perp }}}\approx 0} The next step is to write v = v 0 + Δ v {\displaystyle \mathbf {v} =\mathbf {v} _{0}+\Delta \mathbf {v} } , where v 0 × B = − E ⊥ {\displaystyle \mathbf {v} _{0}\times \mathbf {B} =-\mathbf {E} _{\perp }} It will soon be clear why this is done. With this substitution, we get E ∥ ∂ f ∂ v ∥ + ( Δ v ⊥ × B ) ⋅ ∂ f ∂ v ⊥ ≈ 0 {\displaystyle \mathbf {E} _{\parallel }{\frac {\partial f}{\partial \mathbf {v} _{\parallel }}}+(\Delta \mathbf {v} _{\perp }\times \mathbf {B} )\cdot {\frac {\partial f}{\partial \mathbf {v} _{\perp }}}\approx 0} If the parallel electric field is small, ( Δ v ⊥ × B ) ⋅ ∂ f ∂ v ⊥ ≈ 0 {\displaystyle (\Delta \mathbf {v} _{\perp }\times \mathbf {B} )\cdot {\frac {\partial f}{\partial \mathbf {v} _{\perp }}}\approx 0} This equation means that the distribution is gyrotropic. The mean velocity of a gyrotropic distribution is zero. 
Hence, v 0 {\displaystyle \mathbf {v} _{0}} is identical with the mean velocity, u, and we have E + u × B ≈ 0 {\displaystyle \mathbf {E} +\mathbf {u} \times \mathbf {B} \approx 0} To summarize, the gyro period and the gyro radius must be much smaller than the typical times and lengths which give large changes in the distribution function. The gyro radius is often estimated by replacing V with the thermal velocity or the Alfvén velocity. In the latter case R is often called the inertial length. The frozen-in conditions must be evaluated for each particle species separately. Because electrons have much smaller gyro period and gyro radius than ions, the frozen-in conditions will more often be satisfied. == See also == Fokker–Planck equation == References == == Further reading == Vlasov, A. A. (1961). "Many-Particle Theory and Its Application to Plasma". New York. Bibcode:1961temc.book.....V.
Wikipedia/Vlasov_equation
In 1927, a year after the publication of the Schrödinger equation, Hartree formulated what are now known as the Hartree equations for atoms, using the concept of self-consistency that Lindsay had introduced in his study of many electron systems in the context of Bohr theory. Hartree assumed that the nucleus together with the electrons formed a spherically symmetric field. The charge distribution of each electron was the solution of the Schrödinger equation for an electron in a potential v ( r ) {\displaystyle v(r)} , derived from the field. Self-consistency required that the final field, computed from the solutions, was self-consistent with the initial field, and he thus called his method the self-consistent field method. == History == In order to solve the equation of an electron in a spherical potential, Hartree first introduced atomic units to eliminate physical constants. Then he converted the Laplacian from Cartesian to spherical coordinates to show that the solution was a product of a radial function P ( r ) / r {\displaystyle P(r)/r} and a spherical harmonic with an angular quantum number ℓ {\displaystyle \ell } , namely ψ = ( 1 / r ) P ( r ) S ℓ ( θ , ϕ ) {\displaystyle \psi =(1/r)P(r)S_{\ell }(\theta ,\phi )} . The equation for the radial function was d 2 P ( r ) d r 2 + { 2 [ E − v ( r ) ] − ℓ ( ℓ + 1 ) r 2 } P ( r ) = 0. {\displaystyle {\frac {\mathrm {d} ^{2}P(r)}{\mathrm {d} r^{2}}}+\left\{2[E-v(r)]-{\frac {\ell (\ell +1)}{r^{2}}}\right\}P(r)=0.} == Hartree equation in mathematics == In mathematics, the Hartree equation, named after Douglas Hartree, is i ∂ t u + ∇ 2 u = V ( u ) u {\displaystyle i\,\partial _{t}u+\nabla ^{2}u=V(u)u} in R d + 1 {\displaystyle \mathbb {R} ^{d+1}} where V ( u ) = ± | x | − n ∗ | u | 2 {\displaystyle V(u)=\pm |x|^{-n}*|u|^{2}} and 0 < n < d {\displaystyle 0<n<d} The non-linear Schrödinger equation is in some sense a limiting case. == Hartree product == The wavefunction which describes all of the electrons, Ψ {\displaystyle \Psi } , is almost always too complex to calculate directly. Hartree's original method was to first calculate the solutions to Schrödinger's equation for individual electrons 1, 2, 3, . . . {\displaystyle ...} , p, in the states α , β , γ , . . . , π {\displaystyle \alpha ,\beta ,\gamma ,...,\pi } , which yields individual solutions: ψ α ( x 1 ) , ψ β ( x 2 ) , ψ γ ( x 3 ) , . . . , ψ π ( x p ) {\displaystyle \psi _{\alpha }(\mathbf {x} _{1}),\psi _{\beta }(\mathbf {x} _{2}),\psi _{\gamma }(\mathbf {x} _{3}),...,\psi _{\pi }(\mathbf {x} _{p})} . Since each ψ {\displaystyle \psi } is a solution to the Schrödinger equation by itself, their product should at least approximate a solution. This simple method of combining the wavefunctions of the individual electrons is known as the Hartree product: Ψ ( x 1 , x 2 , x 3 , . . . , x p ) = ψ α ( x 1 ) ψ β ( x 2 ) ψ γ ( x 3 ) . . . ψ π ( x p ) {\displaystyle \Psi (\mathbf {x} _{1},\mathbf {x} _{2},\mathbf {x} _{3},...,\mathbf {x} _{p})=\psi _{\alpha }(\mathbf {x} _{1})\psi _{\beta }(\mathbf {x} _{2})\psi _{\gamma }(\mathbf {x} _{3})...\psi _{\pi }(\mathbf {x} _{p})} This Hartree product gives us the wavefunction of a system (many-particle) as a combination of wavefunctions of the individual particles. It is inherently mean-field (assumes the particles are independent) and is the unsymmetrized version of the Slater determinant ansatz in the Hartree–Fock method. 
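To make the product ansatz concrete, here is a small two-particle sketch on a one-dimensional grid; the harmonic-oscillator orbitals are illustrative choices, not taken from the article. It also checks the exchange behaviour that is discussed next.

```python
import numpy as np

# Two-particle illustration of the Hartree product on a 1D grid.
# The orbitals (harmonic-oscillator ground and first excited states)
# are illustrative choices.

x = np.linspace(-5, 5, 201)

def phi_a(x):                       # ground state, normalized
    return np.pi**-0.25 * np.exp(-x**2 / 2)

def phi_b(x):                       # first excited state, normalized
    return np.pi**-0.25 * np.sqrt(2) * x * np.exp(-x**2 / 2)

# Hartree product: Psi(x1, x2) = phi_a(x1) * phi_b(x2)
X1, X2 = np.meshgrid(x, x, indexing="ij")
psi_hartree = phi_a(X1) * phi_b(X2)

# Exchanging the two coordinates (transposing the grid) does not simply
# flip the sign, so the Hartree product is not antisymmetric.
print(np.allclose(psi_hartree, -psi_hartree.T))      # False

# The antisymmetrized (Slater-determinant) combination, by contrast, is.
psi_slater = (phi_a(X1) * phi_b(X2) - phi_b(X1) * phi_a(X2)) / np.sqrt(2)
print(np.allclose(psi_slater, -psi_slater.T))         # True
```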
Although it has the advantage of simplicity, the Hartree product is not satisfactory for fermions, such as electrons, because the resulting wave function is not antisymmetric. An antisymmetric wave function can be mathematically described using the Slater determinant. == Derivation == We start from the Hamiltonian of a single atom with Z electrons. The same method, with some modifications, can be extended to a monatomic crystal using the Born–von Karman boundary condition and to a crystal with a basis. H ^ = − ℏ 2 2 m ∑ i ∇ r i 2 − ∑ i Z e 2 4 π ϵ 0 | r i | + 1 2 ∑ i ≠ j e 2 4 π ϵ 0 | r i − r j | {\displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2m}}\sum _{i}\nabla _{\mathbf {r} _{i}}^{2}-\sum _{i}{\frac {Ze^{2}}{4\pi \epsilon _{0}|\mathbf {r} _{i}|}}+{\frac {1}{2}}\sum _{i\neq j}{\frac {e^{2}}{4\pi \epsilon _{0}|\mathbf {r} _{i}-\mathbf {r} _{j}|}}} The expectation value is given by ⟨ ψ | H ^ | ψ ⟩ = ∫ ψ ∗ ( r 1 , s 1 , . . . , r Z , s Z ) H ^ ψ ( r 1 , s 1 , . . . , r Z , s Z ) ∏ i d r i {\displaystyle \langle \psi |{\hat {H}}|\psi \rangle =\int \psi ^{*}(\mathbf {r} _{1},s_{1},...,\mathbf {r} _{Z},s_{Z}){\hat {H}}\psi (\mathbf {r} _{1},s_{1},...,\mathbf {r} _{Z},s_{Z})\prod _{i}d\mathbf {r} _{i}} where s i {\displaystyle s_{i}} are the spins of the different particles. In general, the electron–electron interaction is approximated by a mean field, which is itself unknown and must be found together with the eigenfunctions of the problem. We also neglect all relativistic effects such as spin-orbit and spin-spin interactions. == Hartree derivation == At the time of Hartree, the full Pauli exclusion principle had not yet been formulated: the exclusion principle was understood only in terms of quantum numbers, and it was not yet clear that the wave function of electrons has to be antisymmetric. If we assume that the wave functions of the individual electrons are independent, the total wave function is the product of the single-electron wave functions, and the total charge density at position r {\displaystyle \mathbf {r} } due to all electrons except electron i is ρ ( r ) = − e ∑ i ≠ j | ϕ n j ( r ) | 2 {\displaystyle \rho (\mathbf {r} )=-e\sum _{i\neq j}|\phi _{n_{j}}(\mathbf {r} )|^{2}} where spin has been neglected for simplicity. 
This charge density creates an extra mean potential: ∇ 2 V ( r ) = − ρ ( r ) ϵ 0 {\displaystyle \nabla ^{2}V(\mathbf {r} )=-{\frac {\rho (\mathbf {r} )}{\epsilon _{0}}}} The solution can be written as the Coulomb integral V ( r ) = 1 4 π ϵ 0 ∫ ρ ( r ′ ) | r − r ′ | d r ′ = − e 4 π ϵ 0 ∑ i ≠ j ∫ | ϕ n j ( r ′ ) | 2 | r − r ′ | d r ′ {\displaystyle V(\mathbf {r} )={\frac {1}{4\pi \epsilon _{0}}}\int {\frac {\rho (\mathbf {r'} )}{|\mathbf {r} -\mathbf {r'} |}}d\mathbf {r'} =-{\frac {e}{4\pi \epsilon _{0}}}\sum _{i\neq j}\int {\frac {|\phi _{n_{j}}(\mathbf {r'} )|^{2}}{|\mathbf {r} -\mathbf {r'} |}}d\mathbf {r'} } If we now consider electron i, it will also satisfy the time-independent Schrödinger equation [ − ℏ 2 ∇ 2 2 m − Z e 2 4 π ϵ 0 | r | − e V ( r ) ] ϕ n i = E i ϕ n i {\displaystyle \left[-{\frac {\hbar ^{2}\nabla ^{2}}{2m}}-{\frac {Ze^{2}}{4\pi \epsilon _{0}|\mathbf {r} |}}-eV(\mathbf {r} )\right]\phi _{n_{i}}=\mathrm {E} _{i}\phi _{n_{i}}} This is interesting on its own because it can be compared with a single-particle problem in a continuous medium where the dielectric constant is given by: ε ( r ) = ϵ 0 1 + 4 π ϵ 0 Z e | r | V ( r ) {\displaystyle \varepsilon (\mathbf {r} )={\frac {\epsilon _{0}}{1+{\frac {4\pi \epsilon _{0}}{Ze}}|\mathbf {r} |V(\mathbf {r} )}}} where V ( r ) < 0 {\displaystyle V(\mathbf {r} )<0} and ε ( r ) > ϵ 0 {\displaystyle \varepsilon (\mathbf {r} )>\epsilon _{0}} . Finally, we have the system of Hartree equations [ − ℏ 2 ∇ 2 2 m − Z e 2 4 π ϵ 0 | r | + e 2 4 π ϵ 0 ∑ i ≠ j ∫ | ϕ n j ( r ′ ) | 2 | r − r ′ | d r ′ ] ϕ n i = E i ϕ n i {\displaystyle \left[-{\frac {\hbar ^{2}\nabla ^{2}}{2m}}-{\frac {Ze^{2}}{4\pi \epsilon _{0}|\mathbf {r} |}}+{\frac {e^{2}}{4\pi \epsilon _{0}}}\sum _{i\neq j}\int {\frac {|\phi _{n_{j}}(\mathbf {r'} )|^{2}}{|\mathbf {r} -\mathbf {r'} |}}d\mathbf {r'} \right]\phi _{n_{i}}=\mathrm {E} _{i}\phi _{n_{i}}} This is a nonlinear system of integro-differential equations, but it is interesting in a computational setting because it can be solved iteratively. Namely, we start from a set of known eigenfunctions (which in this simplified monatomic example can be those of the hydrogen atom) and from the initial potential V ( r ) = 0 {\displaystyle V(\mathbf {r} )=0} , and at each iteration we compute a new version of the potential from the charge density above and then a new version of the eigenfunctions; ideally, these iterations converge. From the convergence of the potential we can say that we have a "self-consistent" mean field, i.e. a continuous variation from a known potential with known solutions to an averaged mean-field potential. In that sense the potential is consistent with, and not so different from, the one originally used as an ansatz. == Slater–Gaunt derivation == In 1928 J. C. Slater and J. A. Gaunt independently showed that given the Hartree product approximation: ψ ( r 1 , s 1 , . . . 
, r Z , s Z ) = ∏ i Z ϕ n i ( r i , s i ) {\displaystyle \psi (\mathbf {r} _{1},s_{1},...,\mathbf {r} _{Z},s_{Z})=\prod _{i}^{Z}\phi _{n_{i}}(\mathbf {r} _{i},s_{i})} They started from the following variational condition δ ( ⟨ ∏ i ϕ n i ( r i , s i ) | H ^ | ∏ i ϕ n i ( r i , s i ) ⟩ − ∑ i ϵ i ⟨ ϕ n i ( r i , s i ) | ϕ n i ( r i , s i ) ⟩ ) = 0 {\displaystyle \delta \left(\langle \prod _{i}\phi _{n_{i}}(\mathbf {r} _{i},s_{i})|{\hat {H}}|\prod _{i}\phi _{n_{i}}(\mathbf {r} _{i},s_{i})\rangle -\sum _{i}\epsilon _{i}\langle \phi _{n_{i}}(\mathbf {r} _{i},s_{i})|\phi _{n_{i}}(\mathbf {r} _{i},s_{i})\rangle \right)=0} where the ϵ i {\displaystyle \epsilon _{i}} are the Lagrange multipliers needed in order to minimize the functional of the mean energy ⟨ ψ | H ^ | ψ ⟩ {\displaystyle \langle \psi |{\hat {H}}|\psi \rangle } . The orthogonal conditions acts as constraints in the scope of the lagrange multipliers. From here they managed to derive the Hartree equations. == Fock and Slater determinant approach == In 1930 Fock and Slater independently then used the Slater determinant instead of the Hartree product for the wave function ψ ( r 1 , s 1 , . . . , r Z , s Z ) = 1 Z ! det [ ϕ n 1 ( r 1 , s 1 ) ϕ n 1 ( r 2 , s 2 ) . . . ϕ n 1 ( r Z , s Z ) ϕ n 2 ( r 1 , s 1 ) ϕ n 2 ( r 2 , s 2 ) . . . ϕ n 2 ( r Z , s Z ) . . . . . . . . . . . . ϕ n Z ( r 1 , s 1 ) ϕ n Z ( r 2 , s 2 ) . . . ϕ n Z ( r Z , s Z ) ] {\displaystyle \psi (\mathbf {r} _{1},s_{1},...,\mathbf {r} _{Z},s_{Z})={\frac {1}{\sqrt {Z!}}}\det {\begin{bmatrix}\phi _{n_{1}}(\mathbf {r} _{1},s_{1})&\phi _{n_{1}}(\mathbf {r} _{2},s_{2})&...&\phi _{n_{1}}(\mathbf {r} _{Z},s_{Z})\\\phi _{n_{2}}(\mathbf {r} _{1},s_{1})&\phi _{n_{2}}(\mathbf {r} _{2},s_{2})&...&\phi _{n_{2}}(\mathbf {r} _{Z},s_{Z})\\...&...&...&...\\\phi _{n_{Z}}(\mathbf {r} _{1},s_{1})&\phi _{n_{Z}}(\mathbf {r} _{2},s_{2})&...&\phi _{n_{Z}}(\mathbf {r} _{Z},s_{Z})\end{bmatrix}}} This determinant guarantees the exchange symmetry (i.e. if the two columns are swapped the determinant change sign) and the Pauli principle if two electronic states are identical there are two identical rows and therefore the determinant is zero. They then applied the same variational condition as above δ ( ⟨ ψ ( r i , s i ) | H ^ | ψ ( r i , s i ) ⟩ − ∑ i ϵ i ⟨ ϕ n i ( r i , s i ) | ϕ n i ( r i , s i ) ⟩ ) = 0 {\displaystyle \delta \left(\langle \psi (\mathbf {r} _{i},s_{i})|{\hat {H}}|\psi (\mathbf {r} _{i},s_{i})\rangle -\sum _{i}\epsilon _{i}\langle \phi _{n_{i}}(\mathbf {r} _{i},s_{i})|\phi _{n_{i}}(\mathbf {r} _{i},s_{i})\rangle \right)=0} Where now the ϕ n i {\displaystyle \phi _{n_{i}}} are a generic orthogonal set of eigen-functions ⟨ ϕ n i ( r , s i ) | ϕ n j ( r , s j ) ⟩ = δ i j {\displaystyle \langle \phi _{n_{i}}(\mathbf {r} ,s_{i})|\phi _{n_{j}}(\mathbf {r} ,s_{j})\rangle =\delta _{ij}} from which the wave function is built. The orthogonal conditions acts as constraints in the scope of the lagrange multipliers. From this they derived the Hartree–Fock method. == References ==
Wikipedia/Hartree_equation
In physics, force is what, when unopposed, changes the motion of an object. Force is also a dialectal term for a "waterfall". Force or forces may also refer to: == Places == Force, Marche, a municipality in Ascoli Piceno, Italy Forcé, Mayenne, France; a commune Force, Pennsylvania, an unincorporated community in Pennsylvania Fundy Ocean Research Centre for Energy, tidal power test site in Nova Scotia == People == Anna Laura Force (1868-1952), American educator The Force family of American drag racing: John Force (born 1949), family patriarch; father of four daughters, three of whom are or have been racers themselves: Ashley Force Hood (born 1982) Brittany Force (born 1986) Courtney Force (born 1988) == Arts, entertainment, and media == === Fictional entities === Force (comics), a character in the Marvel Comics Iron Man titles Major Force, a fictional character in the DC Comics universe === Films === Force (film series), a series of Indian Hindi-language action-thriller films Force (2011 film), first installment of the series Force 2 (2016), second installment of the series Force (2014 film), an Indian Bengali-language film === Music === Force, the early name of the Swedish band Europe Force (A Certain Ratio album), 1986 Force (Superfly album), 2012 "Force" (Superfly song) "Force" (Alan Walker song), 2015 === Other uses in arts, entertainment, and media === Forcing (magic), a magician's technique sometimes called a "force" Sonic Forces, a video game == Law, policing, and military == Force (law), unlawful violence or lawful compulsion Forces, the armed forces collectively of a nation's military Security forces, the name of the armed forces in some nations The force, the police force of a particular jurisdiction == Mathematics and science == Brute force method, proof by exhaustion in mathematics Fundamental force, an interaction between particles that cannot be explained by other interactions == Sports == === Teams === Cleveland Force (disambiguation), defunct indoor soccer teams based in Northeast Ohio Georgia Force, an Arena Football League team Ipswich Force, an Australian basketball team Kansas City Force, an American women's gridiron football team San Antonio Force, an Arena Football League team that played during the 1992 season Western Force, an Australian rugby union team in the Super 14 === Other === FORCE (Formula One Race Car Engineering), a design and construction company within the Haas Lola Formula One team Force play, a situation in baseball where the runner is compelled to advance to the next base == Other uses == Force (cereal), a wheat flake cereal Coming into force Force Motors, an Indian automotive company Force-feeding, the practice of feeding a human or other animal against their will Fuerza (political party) (English translation: Force), a political party in Guatemala The Force, a power in the Star Wars franchise == See also == Armed forces (disambiguation) Brute force (disambiguation) Elite Force (disambiguation) Energy (disambiguation) Force 1 (disambiguation) Force 10 (disambiguation) Force B (disambiguation) Force Commander (disambiguation) Force field (disambiguation) Force majeure (disambiguation) Force of Nature (disambiguation) Force One (disambiguation) Force XXI (disambiguation) Forcing (disambiguation) Physical force (disambiguation) Special forces (disambiguation) Taskforce (disambiguation) X force (disambiguation) All pages with titles containing Force
Wikipedia/Force_(disambiguation)
Centrifugal force is a fictitious force in Newtonian mechanics (also called an "inertial" or "pseudo" force) that appears to act on all objects when viewed in a rotating frame of reference. It appears to be directed radially away from the axis of rotation of the frame. The magnitude of the centrifugal force F on an object of mass m at the perpendicular distance ρ from the axis of a rotating frame of reference with angular velocity ω is F = m ω 2 ρ {\textstyle F=m\omega ^{2}\rho } . This fictitious force is often applied to rotating devices, such as centrifuges, centrifugal pumps, centrifugal governors, and centrifugal clutches, and in centrifugal railways, planetary orbits and banked curves, when they are analyzed in a non–inertial reference frame such as a rotating coordinate system. The term has sometimes also been used for the reactive centrifugal force, a real frame-independent Newtonian force that exists as a reaction to a centripetal force in some scenarios. == History == From 1659, the Neo-Latin term vi centrifuga ("centrifugal force") is attested in Christiaan Huygens' notes and letters. Note, that in Latin centrum means "center" and ‑fugus (from fugiō) means "fleeing, avoiding". Thus, centrifugus means "fleeing from the center" in a literal translation. In 1673, in Horologium Oscillatorium, Huygens writes (as translated by Richard J. Blackwell): There is another kind of oscillation in addition to the one we have examined up to this point; namely, a motion in which a suspended weight is moved around through the circumference of a circle. From this we were led to the construction of another clock at about the same time we invented the first one. [...] I originally intended to publish here a lengthy description of these clocks, along with matters pertaining to circular motion and centrifugal force, as it might be called, a subject about which I have more to say than I am able to do at present. But, in order that those interested in these things can sooner enjoy these new and not useless speculations, and in order that their publication not be prevented by some accident, I have decided, contrary to my plan, to add this fifth part [...]. The same year, Isaac Newton received Huygens work via Henry Oldenburg and replied "I pray you return [Mr. Huygens] my humble thanks [...] I am glad we can expect another discourse of the vis centrifuga, which speculation may prove of good use in natural philosophy and astronomy, as well as mechanics". In 1687, in Principia, Newton further develops vis centrifuga ("centrifugal force"). Around this time, the concept is also further evolved by Newton, Gottfried Wilhelm Leibniz, and Robert Hooke. In the late 18th century, the modern conception of the centrifugal force evolved as a "fictitious force" arising in a rotating reference. Centrifugal force has also played a role in debates in classical mechanics about detection of absolute motion. Newton suggested two arguments to answer the question of whether absolute rotation can be detected: the rotating bucket argument, and the rotating spheres argument. According to Newton, in each scenario the centrifugal force would be observed in the object's local frame (the frame where the object is stationary) only if the frame were rotating with respect to absolute space. 
Around 1883, Mach's principle was proposed where, instead of absolute rotation, the motion of the distant stars relative to the local inertial frame gives rise through some (hypothetical) physical law to the centrifugal force and other inertia effects. Today's view is based upon the idea of an inertial frame of reference, which privileges observers for which the laws of physics take on their simplest form, and in particular, frames that do not use centrifugal forces in their equations of motion in order to describe motions correctly. Around 1914, the analogy between centrifugal force (sometimes used to create artificial gravity) and gravitational forces led to the equivalence principle of general relativity. == Introduction == Centrifugal force is an outward force apparent in a rotating reference frame. It does not exist when a system is described relative to an inertial frame of reference. All measurements of position and velocity must be made relative to some frame of reference. For example, an analysis of the motion of an object in an airliner in flight could be made relative to the airliner, to the surface of the Earth, or even to the Sun. A reference frame that is at rest (or one that moves with no rotation and at constant velocity) relative to the "fixed stars" is generally taken to be an inertial frame. Any system can be analyzed in an inertial frame (and so with no centrifugal force). However, it is often more convenient to describe a rotating system by using a rotating frame—the calculations are simpler, and descriptions more intuitive. When this choice is made, fictitious forces, including the centrifugal force, arise. In a reference frame rotating about an axis through its origin, all objects, regardless of their state of motion, appear to be under the influence of a radially (from the axis of rotation) outward force that is proportional to their mass, to the distance from the axis of rotation of the frame, and to the square of the angular velocity of the frame. This is the centrifugal force. As humans usually experience centrifugal force from within the rotating reference frame, e.g. on a merry-go-round or vehicle, this is much more well-known than centripetal force. Motion relative to a rotating frame results in another fictitious force: the Coriolis force. If the rate of rotation of the frame changes, a third fictitious force (the Euler force) is required. These fictitious forces are necessary for the formulation of correct equations of motion in a rotating reference frame and allow Newton's laws to be used in their normal form in such a frame (with one exception: the fictitious forces do not obey Newton's third law: they have no equal and opposite counterparts). Newton's third law requires the counterparts to exist within the same frame of reference, hence centrifugal and centripetal force, which do not, are not action and reaction (as is sometimes erroneously contended). == Examples == === Vehicle driving round a curve === A common experience that gives rise to the idea of a centrifugal force is encountered by passengers riding in a vehicle, such as a car, that is changing direction. If a car is traveling at a constant speed along a straight road, then a passenger inside is not accelerating and, according to Newton's second law of motion, the net force acting on them is therefore zero (all forces acting on them cancel each other out). 
If the car enters a curve that bends to the left, the passenger experiences an apparent force that seems to be pulling them towards the right. This is the fictitious centrifugal force. It is needed within the passengers' local frame of reference to explain their sudden tendency to start accelerating to the right relative to the car—a tendency which they must resist by applying a rightward force to the car (for instance, a frictional force against the seat) in order to remain in a fixed position inside. Since they push the seat toward the right, Newton's third law says that the seat pushes them towards the left. The centrifugal force must be included in the passenger's reference frame (in which the passenger remains at rest): it counteracts the leftward force applied to the passenger by the seat, and explains why this otherwise unbalanced force does not cause them to accelerate. However, it would be apparent to a stationary observer watching from an overpass above that the frictional force exerted on the passenger by the seat is not being balanced; it constitutes a net force to the left, causing the passenger to accelerate toward the inside of the curve, as they must in order to keep moving with the car rather than proceeding in a straight line as they otherwise would. Thus the "centrifugal force" they feel is the result of a "centrifugal tendency" caused by inertia. Similar effects are encountered in aeroplanes and roller coasters where the magnitude of the apparent force is often reported in "G's". === Stone on a string === If a stone is whirled round on a string, in a horizontal plane, the only real force acting on the stone in the horizontal plane is applied by the string (gravity acts vertically). There is a net force on the stone in the horizontal plane which acts toward the center. In an inertial frame of reference, were it not for this net force acting on the stone, the stone would travel in a straight line, according to Newton's first law of motion. In order to keep the stone moving in a circular path, a centripetal force, in this case provided by the string, must be continuously applied to the stone. As soon as it is removed (for example if the string breaks) the stone moves in a straight line, as viewed from above. In this inertial frame, the concept of centrifugal force is not required as all motion can be properly described using only real forces and Newton's laws of motion. In a frame of reference rotating with the stone around the same axis as the stone, the stone is stationary. However, the force applied by the string is still acting on the stone. If one were to apply Newton's laws in their usual (inertial frame) form, one would conclude that the stone should accelerate in the direction of the net applied force—towards the axis of rotation—which it does not do. The centrifugal force and other fictitious forces must be included along with the real forces in order to apply Newton's laws of motion in the rotating frame. === Earth === The Earth constitutes a rotating reference frame because it rotates once every 23 hours and 56 minutes around its axis. Because the rotation is slow, the fictitious forces it produces are often small, and in everyday situations can generally be neglected. 
Even in calculations requiring high precision, the centrifugal force is generally not explicitly included, but rather lumped in with the gravitational force: the strength and direction of the local "gravity" at any point on the Earth's surface is actually a combination of gravitational and centrifugal forces. However, the fictitious forces can be of arbitrary size. For example, in an Earth-bound reference system (where the earth is represented as stationary), the fictitious force (the net of Coriolis and centrifugal forces) is enormous and is responsible for the Sun orbiting around the Earth. This is due to the large mass and velocity of the Sun (relative to the Earth). ==== Weight of an object at the poles and on the equator ==== If an object is weighed with a simple spring balance at one of the Earth's poles, there are two forces acting on the object: the Earth's gravity, which acts in a downward direction, and the equal and opposite restoring force in the spring, acting upward. Since the object is stationary and not accelerating, there is no net force acting on the object and the force from the spring is equal in magnitude to the force of gravity on the object. In this case, the balance shows the value of the force of gravity on the object. When the same object is weighed on the equator, the same two real forces act upon the object. However, the object is moving in a circular path as the Earth rotates and therefore experiencing a centripetal acceleration. When considered in an inertial frame (that is to say, one that is not rotating with the Earth), the non-zero acceleration means that force of gravity will not balance with the force from the spring. In order to have a net centripetal force, the magnitude of the restoring force of the spring must be less than the magnitude of force of gravity. This reduced restoring force in the spring is reflected on the scale as less weight — about 0.3% less at the equator than at the poles. In the Earth reference frame (in which the object being weighed is at rest), the object does not appear to be accelerating; however, the two real forces, gravity and the force from the spring, are the same magnitude and do not balance. The centrifugal force must be included to make the sum of the forces be zero to match the apparent lack of acceleration. Note: In fact, the observed weight difference is more — about 0.53%. Earth's gravity is a bit stronger at the poles than at the equator, because the Earth is not a perfect sphere, so an object at the poles is slightly closer to the center of the Earth than one at the equator; this effect combines with the centrifugal force to produce the observed weight difference. == Formulation == For the following formalism, the rotating frame of reference is regarded as a special case of a non-inertial reference frame that is rotating relative to an inertial reference frame denoted the stationary frame. === Time derivatives in a rotating frame === In a rotating frame of reference, the time derivatives of any vector function P of time—such as the velocity and acceleration vectors of an object—will differ from its time derivatives in the stationary frame. If P1, P2, P3 are the components of P with respect to unit vectors i, j, k directed along the axes of the rotating frame (i.e. P = P1 i + P2 j +P3 k), then the first time derivative [⁠dP/dt⁠] of P with respect to the rotating frame is, by definition, ⁠dP1/dt⁠ i + ⁠dP2/dt⁠ j + ⁠dP3/dt⁠ k. 
If the absolute angular velocity of the rotating frame is ω then the derivative ⁠dP/dt⁠ of P with respect to the stationary frame is related to [⁠dP/dt⁠] by the equation: d P d t = [ d P d t ] + ω × P , {\displaystyle {\frac {\mathrm {d} {\boldsymbol {P}}}{\mathrm {d} t}}=\left[{\frac {\mathrm {d} {\boldsymbol {P}}}{\mathrm {d} t}}\right]+{\boldsymbol {\omega }}\times {\boldsymbol {P}}\ ,} where × denotes the vector cross product. In other words, the rate of change of P in the stationary frame is the sum of its apparent rate of change in the rotating frame and a rate of rotation ω × P attributable to the motion of the rotating frame. The vector ω has magnitude ω equal to the rate of rotation and is directed along the axis of rotation according to the right-hand rule. === Acceleration === Newton's law of motion for a particle of mass m written in vector form is: F = m a , {\displaystyle {\boldsymbol {F}}=m{\boldsymbol {a}}\ ,} where F is the vector sum of the physical forces applied to the particle and a is the absolute acceleration (that is, acceleration in an inertial frame) of the particle, given by: a = d 2 r d t 2 , {\displaystyle {\boldsymbol {a}}={\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}\ ,} where r is the position vector of the particle (not to be confused with radius, as used above.) By applying the transformation above from the stationary to the rotating frame three times (twice to d r d t {\textstyle {\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}} and once to d d t [ d r d t ] {\textstyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]} ), the absolute acceleration of the particle can be written as: a = d 2 r d t 2 = d d t d r d t = d d t ( [ d r d t ] + ω × r ) = [ d 2 r d t 2 ] + ω × [ d r d t ] + d ω d t × r + ω × d r d t = [ d 2 r d t 2 ] + ω × [ d r d t ] + d ω d t × r + ω × ( [ d r d t ] + ω × r ) = [ d 2 r d t 2 ] + d ω d t × r + 2 ω × [ d r d t ] + ω × ( ω × r ) . 
{\displaystyle {\begin{aligned}{\boldsymbol {a}}&={\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}={\frac {\mathrm {d} }{\mathrm {d} t}}\left(\left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]+{\boldsymbol {\omega }}\times {\boldsymbol {r}}\ \right)\\&=\left[{\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}\right]+{\boldsymbol {\omega }}\times \left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]+{\frac {\mathrm {d} {\boldsymbol {\omega }}}{\mathrm {d} t}}\times {\boldsymbol {r}}+{\boldsymbol {\omega }}\times {\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\\&=\left[{\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}\right]+{\boldsymbol {\omega }}\times \left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]+{\frac {\mathrm {d} {\boldsymbol {\omega }}}{\mathrm {d} t}}\times {\boldsymbol {r}}+{\boldsymbol {\omega }}\times \left(\left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]+{\boldsymbol {\omega }}\times {\boldsymbol {r}}\ \right)\\&=\left[{\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}\right]+{\frac {\mathrm {d} {\boldsymbol {\omega }}}{\mathrm {d} t}}\times {\boldsymbol {r}}+2{\boldsymbol {\omega }}\times \left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]+{\boldsymbol {\omega }}\times ({\boldsymbol {\omega }}\times {\boldsymbol {r}})\ .\end{aligned}}} === Force === The apparent acceleration in the rotating frame is [ d 2 r d t 2 ] {\displaystyle \left[{\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}\right]} . An observer unaware of the rotation would expect this to be zero in the absence of outside forces. However, Newton's laws of motion apply only in the inertial frame and describe dynamics in terms of the absolute acceleration d 2 r d t 2 {\displaystyle {\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}} . Therefore, the observer perceives the extra terms as contributions due to fictitious forces. These terms in the apparent acceleration are independent of mass; so it appears that each of these fictitious forces, like gravity, pulls on an object in proportion to its mass. When these forces are added, the equation of motion has the form: F + ( − m d ω d t × r ) ⏟ Euler + ( − 2 m ω × [ d r d t ] ) ⏟ Coriolis + ( − m ω × ( ω × r ) ) ⏟ centrifugal = m [ d 2 r d t 2 ] . {\displaystyle {\boldsymbol {F}}+\underbrace {\left(-m{\frac {\mathrm {d} {\boldsymbol {\omega }}}{\mathrm {d} t}}\times {\boldsymbol {r}}\right)} _{\text{Euler}}+\underbrace {\left(-2m{\boldsymbol {\omega }}\times \left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]\right)} _{\text{Coriolis}}+\underbrace {\left(-m{\boldsymbol {\omega }}\times ({\boldsymbol {\omega }}\times {\boldsymbol {r}})\right)} _{\text{centrifugal}}=m\left[{\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}\right]\ .} From the perspective of the rotating frame, the additional force terms are experienced just like the real external forces and contribute to the apparent acceleration. 
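The three fictitious-force terms can be evaluated directly once ω, dω/dt, and the position and velocity measured in the rotating frame are known. A minimal sketch (the vectors and numbers below are illustrative, not taken from the article):

```python
import numpy as np

def fictitious_forces(m, omega, domega_dt, r, v_rot):
    """Euler, Coriolis and centrifugal forces on a particle of mass m.

    omega, domega_dt : angular velocity of the frame and its time derivative
    r                : position of the particle in the rotating frame
    v_rot            : velocity [dr/dt] measured in the rotating frame
    """
    euler       = -m * np.cross(domega_dt, r)
    coriolis    = -2 * m * np.cross(omega, v_rot)
    centrifugal = -m * np.cross(omega, np.cross(omega, r))
    return euler, coriolis, centrifugal

# Illustrative values: frame spinning at 1 rad/s about z, particle 2 m out on x.
m = 1.0
omega = np.array([0.0, 0.0, 1.0])
domega_dt = np.zeros(3)
r = np.array([2.0, 0.0, 0.0])
v_rot = np.array([0.0, 0.5, 0.0])

for name, f in zip(("Euler", "Coriolis", "centrifugal"),
                   fictitious_forces(m, omega, domega_dt, r, v_rot)):
    print(name, f)
```

For these numbers the centrifugal term points radially outward along +x with magnitude mω²r⊥ (here 2 N), matching the identification in the next paragraph.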
The additional terms on the force side of the equation can be recognized as, reading from left to right, the Euler force − m d ω / d t × r {\displaystyle -m\mathrm {d} {\boldsymbol {\omega }}/\mathrm {d} t\times {\boldsymbol {r}}} , the Coriolis force − 2 m ω × [ d r / d t ] {\displaystyle -2m{\boldsymbol {\omega }}\times \left[\mathrm {d} {\boldsymbol {r}}/\mathrm {d} t\right]} , and the centrifugal force − m ω × ( ω × r ) {\displaystyle -m{\boldsymbol {\omega }}\times ({\boldsymbol {\omega }}\times {\boldsymbol {r}})} , respectively. Unlike the other two fictitious forces, the centrifugal force always points radially outward from the axis of rotation of the rotating frame, with magnitude m ω 2 r ⊥ {\displaystyle m\omega ^{2}r_{\perp }} , where r ⊥ {\displaystyle r_{\perp }} is the component of the position vector perpendicular to ω {\displaystyle {\boldsymbol {\omega }}} , and unlike the Coriolis force in particular, it is independent of the motion of the particle in the rotating frame. As expected, for a non-rotating inertial frame of reference ( ω = 0 ) {\displaystyle ({\boldsymbol {\omega }}=0)} the centrifugal force and all other fictitious forces disappear. Similarly, as the centrifugal force is proportional to the distance from object to the axis of rotation of the frame, the centrifugal force vanishes for objects that lie upon the axis. === Potential === The centrifugal force per unit mass can also be derived as the gradient of a centrifugal potential. For example, the centrifugal potential at the perpendicular distance ρ from the axis of a rotating frame of reference with angular velocity ω is 0.5 ω 2 ρ 2 {\textstyle 0.5\omega ^{2}\rho ^{2}} (see also: Geopotential#Centrifugal potential.) == Absolute rotation == Three scenarios were suggested by Newton to answer the question of whether the absolute rotation of a local frame can be detected; that is, if an observer can decide whether an observed object is rotating or if the observer is rotating. The shape of the surface of water rotating in a bucket. The shape of the surface becomes concave to balance the centrifugal force against the other forces upon the liquid. The tension in a string joining two spheres rotating about their center of mass. The tension in the string will be proportional to the centrifugal force on each sphere as it rotates around the common center of mass. In these scenarios, the effects attributed to centrifugal force are only observed in the local frame (the frame in which the object is stationary) if the object is undergoing absolute rotation relative to an inertial frame. By contrast, in an inertial frame, the observed effects arise as a consequence of the inertia and the known forces without the need to introduce a centrifugal force. Based on this argument, the privileged frame, wherein the laws of physics take on the simplest form, is a stationary frame in which no fictitious forces need to be invoked. Within this view of physics, any other phenomenon that is usually attributed to centrifugal force can be used to identify absolute rotation. For example, the oblateness of a sphere of freely flowing material is often explained in terms of centrifugal force. The oblate spheroid shape reflects, following Clairaut's theorem, the balance between containment by gravitational attraction and dispersal by centrifugal force. 
That the Earth is itself an oblate spheroid, bulging at the equator where the radial distance and hence the centrifugal force is larger, is taken as one of the evidences for its absolute rotation. == Applications == The operations of numerous common rotating mechanical systems are most easily conceptualized in terms of centrifugal force. For example: A centrifugal governor regulates the speed of an engine by using spinning masses that move radially, adjusting the throttle, as the engine changes speed. In the reference frame of the spinning masses, centrifugal force causes the radial movement. A centrifugal clutch is used in small engine-powered devices such as chain saws, go-karts and model helicopters. It allows the engine to start and idle without driving the device but automatically and smoothly engages the drive as the engine speed rises. Inertial drum brake ascenders used in rock climbing and the inertia reels used in many automobile seat belts operate on the same principle. Centrifugal forces can be used to generate artificial gravity, as in proposed designs for rotating space stations. The Mars Gravity Biosatellite would have studied the effects of Mars-level gravity on mice with gravity simulated in this way. Spin casting and centrifugal casting are production methods that use centrifugal force to disperse liquid metal or plastic throughout the negative space of a mold. Centrifuges are used in science and industry to separate substances. In the reference frame spinning with the centrifuge, the centrifugal force induces a hydrostatic pressure gradient in fluid-filled tubes oriented perpendicular to the axis of rotation, giving rise to large buoyant forces which push low-density particles inward. Elements or particles denser than the fluid move outward under the influence of the centrifugal force. This is effectively Archimedes' principle as generated by centrifugal force as opposed to being generated by gravity. Some amusement rides make use of centrifugal forces. For instance, a Gravitron's spin forces riders against a wall and allows riders to be elevated above the machine's floor in defiance of Earth's gravity. Nevertheless, all of these systems can also be described without requiring the concept of centrifugal force, in terms of motions and forces in a stationary frame, at the cost of taking somewhat more care in the consideration of forces and motions within the system. == Other uses of the term == While the majority of the scientific literature uses the term centrifugal force to refer to the particular fictitious force that arises in rotating frames, there are a few limited instances in the literature of the term applied to other distinct physical concepts. === In Lagrangian mechanics === One of these instances occurs in Lagrangian mechanics. Lagrangian mechanics formulates mechanics in terms of generalized coordinates {qk}, which can be as simple as the usual polar coordinates ( r , θ ) {\displaystyle (r,\ \theta )} or a much more extensive list of variables. Within this formulation the motion is described in terms of generalized forces, using in place of Newton's laws the Euler–Lagrange equations. Among the generalized forces, those involving the square of the time derivatives {(dqk  ⁄ dt )2} are sometimes called centrifugal forces. In the case of motion in a central potential the Lagrangian centrifugal force has the same form as the fictitious centrifugal force derived in a co-rotating frame. 
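That claim can be checked numerically for the central-potential case. A minimal sketch (the mass, radius and angular rate are illustrative; L here denotes the angular momentum m r² dθ/dt, an identification assumed for the check):

```python
m = 2.0          # mass (illustrative)
r = 1.5          # radial distance
theta_dot = 0.8  # angular rate d(theta)/dt

# Lagrangian form: the generalized force m*r*theta_dot**2 appearing in the
# radial Euler-Lagrange equation, rewritten via L = m*r**2*theta_dot.
L = m * r**2 * theta_dot
lagrangian_term = L**2 / (m * r**3)

# Co-rotating-frame form: m*omega**2*r with omega = theta_dot.
rotating_frame_term = m * theta_dot**2 * r

print(lagrangian_term, rotating_frame_term)  # both 1.92 here: same form, same value
```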
However, the Lagrangian use of "centrifugal force" in other, more general cases has only a limited connection to the Newtonian definition. === As a reactive force === In another instance the term refers to the reaction force to a centripetal force, or reactive centrifugal force. A body undergoing curved motion, such as circular motion, is accelerating toward a center at any particular point in time. This centripetal acceleration is provided by a centripetal force, which is exerted on the body in curved motion by some other body. In accordance with Newton's third law of motion, the body in curved motion exerts an equal and opposite force on the other body. This reactive force is exerted by the body in curved motion on the other body that provides the centripetal force and its direction is from that other body toward the body in curved motion. This reaction force is sometimes described as a centrifugal inertial reaction, that is, a force that is centrifugally directed, which is a reactive force equal and opposite to the centripetal force that is curving the path of the mass. The concept of the reactive centrifugal force is sometimes used in mechanics and engineering. It is sometimes referred to as just centrifugal force rather than as reactive centrifugal force although this usage is deprecated in elementary mechanics. == See also == Balancing of rotating masses Centrifugal mechanism of acceleration Equivalence principle Folk physics Lagrangian point Lamm equation == Notes == == References == == External links == Media related to Centrifugal force at Wikimedia Commons
Wikipedia/Centrifugal_force_(rotating_reference_frame)
In physics, dynamics or classical dynamics is the study of forces and their effect on motion. It is a branch of classical mechanics, along with statics and kinematics. The fundamental principle of dynamics is linked to Newton's second law. == Subdivisions == === Rigid bodies === === Fluids === == Applications == Classical dynamics finds many applications: Aerodynamics, the study of the motion of air Brownian dynamics, the occurrence of Langevin dynamics in the motion of particles in solution File dynamics, the stochastic motion of particles in a channel Flight dynamics, the science of aircraft and spacecraft design Molecular dynamics, the study of motion on the molecular level Langevin dynamics, a mathematical model for stochastic dynamics Orbital dynamics, the study of the motion of rockets and spacecraft Stellar dynamics, a description of the collective motion of stars Vehicle dynamics, the study of vehicles in motion == Generalizations == Non-classical dynamics include: System dynamics, the study of the behavior of complex systems Quantum dynamics, the analogue of classical dynamics in a quantum physics context Quantum chromodynamics, a theory of the strong interaction (color force) Quantum electrodynamics, a description of how matter and light interact Relativistic dynamics, a combination of relativistic and quantum concepts Thermodynamics, the study of the relationships between heat and mechanical energy == See also == Analytical dynamics Ballistics Contact dynamics Dynamical simulation Kinetics (physics) Multibody dynamics n-body problem == References ==
Wikipedia/Dynamics_(physics)
A contact force is any force that occurs because of two objects making contact with each other. Contact forces are very common and are responsible for most visible interactions between macroscopic collections of matter. Pushing a car or kicking a ball are everyday examples where contact forces are at work. In the first case the force is continuously applied to the car by a person, while in the second case the force is delivered in a short impulse. Contact forces are often decomposed into orthogonal components, one perpendicular to the surface(s) in contact called the normal force, and one parallel to the surface(s) in contact, called the friction force. Not all forces are contact forces; for example, the weight of an object is the force between the object and the Earth, even though the two do not need to make contact. Gravitational forces, electrical forces and magnetic forces are body forces and can exist without contact occurring. == Origin of contact forces == The microscopic origin of contact forces is diverse. Normal force is directly a result of Pauli exclusion principle and not a true force per se: Everyday objects do not actually touch each other; rather, contact forces are the result of the interactions of the electrons at or near the surfaces of the objects. The atoms in the two surfaces cannot penetrate one another without a large investment of energy because there is no low energy state for which the electron wavefunctions from the two surfaces overlap; thus no microscopic force is needed to prevent this penetration. On the more macroscopic level, such surfaces can be treated as a single object, and two bodies do not penetrate each other due to the stability of matter, which is again a consequence of Pauli exclusion principle, but also of the fundamental forces of nature: Cracks in the bodies do not widen due to electromagnetic forces that create the chemical bonds between the atoms; the atoms themselves do not disintegrate because of the electromagnetic forces between the electrons and the nuclei; and the nuclei do not disintegrate due to the nuclear forces. As for friction, it is a result of both microscopic adhesion and chemical bond formation due to the electromagnetic force, and of microscopic structures stressing into each other; in the latter phenomena, in order to allow motion, the microscopic structures must either slide one above the other, or must acquire enough energy to break one another. Thus the force acting against motion is a combination of the normal force and of the force required to widen microscopic cracks within matter; the latter force is again due to electromagnetic interaction. Additionally, strain is created inside matter, and this strain is due to a combination of electromagnetic interactions (as electrons are attracted to nuclei and repelled from each other) and of Pauli exclusion principle, the latter working similarly to the case of normal force. == See also == Non-contact force Body force Surface force Action at a distance (physics) Spring force == References ==
Wikipedia/Contact_force
The pound of force or pound-force (symbol: lbf, sometimes lbf,) is a unit of force used in some systems of measurement, including English Engineering units and the foot–pound–second system. Pound-force should not be confused with pound-mass (lb), often simply called "pound", which is a unit of mass; nor should these be confused with foot-pound (ft⋅lbf), a unit of energy, or pound-foot (lbf⋅ft), a unit of torque. == Definitions == The pound-force is equal to the gravitational force exerted on a mass of one avoirdupois pound on the surface of Earth. Since the 18th century, the unit has been used in low-precision measurements, for which small changes in Earth's gravity (which varies from equator to pole by up to half a percent) can safely be neglected. The 20th century, however, brought the need for a more precise definition, requiring a standardized value for acceleration due to gravity. === Product of avoirdupois pound and standard gravity === The pound-force is the product of one avoirdupois pound (exactly 0.45359237 kg) and the standard acceleration due to gravity, approximately 32.174049 ft/s2 (9.80665 m/s2). The standard values of acceleration of the standard gravitational field (gn) and the international avoirdupois pound (lb) result in a pound-force equal to 32.174049 ⁠ft⋅lb/s2⁠ (4.4482216152605 N). 1 lbf = 1 lb × g n = 1 lb × 9.80665 m s 2 / 0.3048 m ft ≈ 1 lb × 32.174049 f t s 2 ≈ 32.174049 f t ⋅ l b s 2 1 lbf = 1 lb × 0.45359237 kg lb × g n = 0.45359237 kg × 9.80665 m s 2 = 4.4482216152605 N {\displaystyle {\begin{aligned}1\,{\text{lbf}}&=1\,{\text{lb}}\times g_{\text{n}}\\&=1\,{\text{lb}}\times 9.80665\,{\tfrac {\text{m}}{{\text{s}}^{2}}}/0.3048\,{\tfrac {\text{m}}{\text{ft}}}\\&\approx 1\,{\text{lb}}\times 32.174049\,\mathrm {\tfrac {ft}{s^{2}}} \\&\approx 32.174049\,\mathrm {\tfrac {ft{\cdot }lb}{s^{2}}} \\1\,{\text{lbf}}&=1\,{\text{lb}}\times 0.45359237\,{\tfrac {\text{kg}}{\text{lb}}}\times g_{\text{n}}\\&=0.45359237\,{\text{kg}}\times 9.80665\,{\tfrac {\text{m}}{{\text{s}}^{2}}}\\&=4.4482216152605\,{\text{N}}\end{aligned}}} This definition can be rephrased in terms of the slug. A slug has a mass of 32.174049 lb. A pound-force is the amount of force required to accelerate a slug at a rate of 1 ft/s2, so: 1 lbf = 1 slug × 1 ft s 2 = 1 slug ⋅ ft s 2 {\displaystyle {\begin{aligned}1\,{\text{lbf}}&=1\,{\text{slug}}\times 1\,{\tfrac {\text{ft}}{{\text{s}}^{2}}}\\&=1\,{\tfrac {{\text{slug}}\cdot {\text{ft}}}{{\text{s}}^{2}}}\end{aligned}}} == Conversion to other units == == Foot–pound–second (FPS) systems of units == In some contexts, the term "pound" is used almost exclusively to refer to the unit of force and not the unit of mass. In those applications, the preferred unit of mass is the slug, i.e. lbf⋅s2/ft. In other contexts, the unit "pound" refers to a unit of mass. The international standard symbol for the pound as a unit of mass is lb. In the "engineering" systems (middle column), the weight of the mass unit (pound-mass) on Earth's surface is approximately equal to the force unit (pound-force). This is convenient because one pound mass exerts one pound force due to gravity. Note, however, unlike the other systems the force unit is not equal to the mass unit multiplied by the acceleration unit—the use of Newton's second law, F = m ⋅ a, requires another factor, gc, usually taken to be 32.174049 (lb⋅ft)/(lbf⋅s2). 
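The definitions above reduce to a few exact multiplications; a short sketch (the constants are the exact values quoted in this section):

```python
LB_TO_KG = 0.45359237          # avoirdupois pound, exact, in kilograms
G_N = 9.80665                  # standard gravity, exact, in m/s^2
FT = 0.3048                    # foot, exact, in metres

lbf_in_newtons = LB_TO_KG * G_N          # 4.4482216152605 N
g_n_in_fps = G_N / FT                    # ~32.174049 ft/s^2
slug_in_lb = g_n_in_fps                  # a slug is ~32.174049 lb

# Engineering-units form of Newton's second law, F = m*a/gc,
# with m in lb, a in ft/s^2 and F in lbf:
gc = g_n_in_fps                          # (lb*ft)/(lbf*s^2)
def force_lbf(mass_lb, accel_ft_s2):
    return mass_lb * accel_ft_s2 / gc

print(lbf_in_newtons)                    # 4.4482216152605
print(force_lbf(1.0, g_n_in_fps))        # 1 lb under standard gravity weighs 1 lbf
```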
"Absolute" systems are coherent systems of units: by using the slug as the unit of mass, the "gravitational" FPS system (left column) avoids the need for such a constant. The SI is an "absolute" metric system with kilogram and meter as base units. == Pound of thrust == The term pound of thrust is an alternative name for pound-force in specific contexts. It is frequently seen in US sources on jet engines and rocketry, some of which continue to use the FPS notation. For example, the thrust produced by each of the Space Shuttle's two Solid Rocket Boosters was 3,300,000 pounds-force (14.7 MN), together 6,600,000 pounds-force (29.4 MN). == See also == == Notes and references == == General sources == Obert, Edward F. (1948). Thermodynamics. New York: D. J. Leggett Book Company. Chapter I "Survey of Dimensions and Units", pp. 1-24.
Wikipedia/Pound_(force)
A force gauge (also called a force meter) is a measuring instrument used to measure forces. Applications exist in research and development, laboratory, quality, production and field environments. There are two kinds of force gauges today: mechanical and digital force gauges. Force gauges typically report the measured force in units such as newtons or pounds-force. == Mechanical force gauges == A common mechanical force gauge, known as the spring scale, features a hook and a spring that attach to an object and measure the amount of force required to extend the spring. == Electrical gauge == An example of an electrical force gauge is an "electronic scale". One or more electrical load cells (commonly referred to as "weigh bars") are used to support a vertical or horizontal "live load". They act as solid-state potentiometers whose internal resistance varies in proportion to the load they are subjected to and deflected by. As the load and deflection increase, the internal current path that the "supply voltage" from the "scale head" control/display unit must travel becomes longer and more resistive. At no load, the resistance and the resulting voltage drop are near zero, and the "signal voltage" returning from the cell to the scale head is at or near the supply voltage sent to it. As load is added and deflection increases, the internal conductor is stretched into a longer, thinner current path with greater internal resistance. The signal voltage is reduced as a result, and the scale head, whether analog or digital, shows the increase in load as an increase in weight. Multiple weigh bars are always required. They can be placed between a load and a cart beneath it to make a portable scale, or used as the axles supporting a wheeled or tracked cargo wagon, trailer or cart; they can also measure tongue weight, so that even if part of the load is supported by a tow vehicle, an accurate measurement and record of the cargo loaded onto or off of the mobile scale is possible. The same type of weigh bar can be used to measure horizontal loads, such as the drawbar pull of wheeled or tracked vehicles, the bollard pull of boats, or the thrust of jet engines, provided a proper test rig is designed and constructed to allow frictionless fore-and-aft movement of the load relative to the weigh bars. So-called "strain gauges" are also electrical load cells, but they have internal mechanical components and/or combine the scale head and power supply into a single unit. They allow relatively common, inexpensive and easily serviced vertical weigh bars to be used in horizontal load situations as a compact and cheap alternative to the frictionless, multi-cell, custom-made test rig, and they are also used on modern crane "lift computers". Such devices are often used as, and referred to as, "load cells", although in every case the actual load cell is by itself useless without a scale head and a properly engineered, designed and constructed test rig that allows it to convert live loads and a supply or "reference" voltage into varying output signal voltages as it is strained. 
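The behaviour described above can be captured in a toy model: a cell whose resistance grows with load, read out as a falling signal voltage through a voltage divider. This is only a sketch of the qualitative behaviour; the supply voltage, resistances and load coefficient are invented for illustration and are not the specification of any real load cell.

```python
def signal_voltage(load_kg, supply_v=10.0, r_fixed=350.0, k=0.5):
    """Toy voltage-divider model of a 'weigh bar' style load cell.

    The cell's internal resistance grows with load (k * load_kg ohms), so the
    returned signal voltage starts near the supply voltage at no load and
    falls as more weight is applied. All numbers are illustrative only.
    """
    r_cell = k * load_kg
    return supply_v * r_fixed / (r_fixed + r_cell)

for load in (0, 100, 500, 1000):
    print(load, "kg ->", round(signal_voltage(load), 3), "V")
```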
Load cells do not internally generate or otherwise create electrical signals, and they are not piezo-electric devices. They do nothing but deflect and produce varying voltage "signals" from the electrical current supplied to them, whether by a display or scale head in actual operation, or by an analog volt-ohmmeter or digital multimeter when bench tested or otherwise demonstrated "operating" but not "in operation". == See also == Drawbar force gauge Dynamometer Stretch sensor Weighing scale == References ==
Wikipedia/Force_gauge
In physics, a force is an influence that can cause an object to change its velocity unless counterbalanced by other forces. In mechanics, force makes ideas like 'pushing' or 'pulling' mathematically precise. Because the magnitude and direction of a force are both important, force is a vector quantity. The SI unit of force is the newton (N), and force is often represented by the symbol F. Force plays an important role in classical mechanics. The concept of force is central to all three of Newton's laws of motion. Types of forces often encountered in classical mechanics include elastic, frictional, contact or "normal" forces, and gravitational. The rotational version of force is torque, which produces changes in the rotational speed of an object. In an extended body, each part applies forces on the adjacent parts; the distribution of such forces through the body is the internal mechanical stress. In the case of multiple forces, if the net force on an extended body is zero the body is in equilibrium. In modern physics, which includes relativity and quantum mechanics, the laws governing motion are revised to rely on fundamental interactions as the ultimate origin of force. However, the understanding of force provided by classical mechanics is useful for practical purposes. == Development of the concept == Philosophers in antiquity used the concept of force in the study of stationary and moving objects and simple machines, but thinkers such as Aristotle and Archimedes retained fundamental errors in understanding force. In part, this was due to an incomplete understanding of the sometimes non-obvious force of friction and a consequently inadequate view of the nature of natural motion. A fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about motion and force were eventually corrected by Galileo Galilei and Sir Isaac Newton. With his mathematical insight, Newton formulated laws of motion that were not improved for over two hundred years. By the early 20th century, Einstein developed a theory of relativity that correctly predicted the action of forces on objects with increasing momenta near the speed of light and also provided insight into the forces produced by gravitation and inertia. With modern insights into quantum mechanics and technology that can accelerate particles close to the speed of light, particle physics has devised a Standard Model to describe forces between particles smaller than atoms. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known: in order of decreasing strength, they are: strong, electromagnetic, weak, and gravitational.: 2–10 : 79  High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction. == Pre-Newtonian concepts == Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines. The mechanical advantage given by a simple machine allowed for less force to be used in exchange for that force acting over a greater distance for the same amount of work. Analysis of the characteristics of forces ultimately culminated in the work of Archimedes who was especially famous for formulating a treatment of buoyant forces inherent in fluids. 
Aristotle provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the terrestrial sphere contained four elements that come to rest at different "natural places" therein. Aristotle believed that motionless objects on Earth, those composed mostly of the elements earth and water, were in their natural place when on the ground, and that they stay that way if left alone. He distinguished between the innate tendency of objects to find their "natural place" (e.g., for heavy bodies to fall), which led to "natural motion", and unnatural or forced motion, which required continued application of a force. This theory, based on the everyday experience of how objects move, such as the constant application of a force needed to keep a cart moving, had conceptual trouble accounting for the behavior of projectiles, such as the flight of arrows. An archer causes the arrow to move at the start of the flight, and it then sails through the air even though no discernible efficient cause acts upon it. Aristotle was aware of this problem and proposed that the air displaced through the projectile's path carries the projectile to its target. This explanation requires a continuous medium such as air to sustain the motion. Though Aristotelian physics was criticized as early as the 6th century, its shortcomings would not be corrected until the 17th century work of Galileo Galilei, who was influenced by the late medieval idea that objects in forced motion carried an innate force of impetus. Galileo constructed an experiment in which stones and cannonballs were both rolled down an incline to disprove the Aristotelian theory of motion. He showed that the bodies were accelerated by gravity to an extent that was independent of their mass and argued that objects retain their velocity unless acted on by a force, for example friction. Galileo's idea that force is needed to change motion rather than to sustain it, further improved upon by Isaac Beeckman, René Descartes, and Pierre Gassendi, became a key principle of Newtonian physics. In the early 17th century, before Newton's Principia, the term "force" (Latin: vis) was applied to many physical and non-physical phenomena, e.g., for an acceleration of a point. The product of a point mass and the square of its velocity was named vis viva (live force) by Leibniz. The modern concept of force corresponds to Newton's vis motrix (accelerating force). == Newtonian mechanics == Sir Isaac Newton described the motion of all objects using the concepts of inertia and force. In 1687, Newton published his magnum opus, Philosophiæ Naturalis Principia Mathematica. In this work Newton set out three laws of motion that have dominated the way forces are described in physics to this day. The precise ways in which Newton's laws are expressed have evolved in step with new mathematical approaches. === First law === Newton's first law of motion states that the natural behavior of an object at rest is to continue being at rest, and the natural behavior of an object moving at constant speed in a straight line is to continue moving at that constant speed along that straight line. The latter follows from the former because of the principle that the laws of physics are the same for all inertial observers, i.e., all observers who do not feel themselves to be in motion. An observer moving in tandem with an object will see it as being at rest. 
So, its natural behavior will be to remain at rest with respect to that observer, which means that an observer who sees it moving at constant speed in a straight line will see it continuing to do so.: 1–7  === Second law === According to the first law, motion at constant speed in a straight line does not need a cause. It is change in motion that requires a cause, and Newton's second law gives the quantitative relationship between force and change of motion. Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. If the mass of the object is constant, this law implies that the acceleration of an object is directly proportional to the net force acting on the object, is in the direction of the net force, and is inversely proportional to the mass of the object.: 204–207  A modern statement of Newton's second law is a vector equation: F = d p d t , {\displaystyle \mathbf {F} ={\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}},} where p {\displaystyle \mathbf {p} } is the momentum of the system, and F {\displaystyle \mathbf {F} } is the net (vector sum) force.: 399  If a body is in equilibrium, there is zero net force by definition (balanced forces may be present nevertheless). In contrast, the second law states that if there is an unbalanced force acting on an object it will result in the object's momentum changing over time. In common engineering applications the mass in a system remains constant, allowing a simple algebraic form for the second law. By the definition of momentum, F = d p d t = d ( m v ) d t , {\displaystyle \mathbf {F} ={\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}}={\frac {\mathrm {d} \left(m\mathbf {v} \right)}{\mathrm {d} t}},} where m is the mass and v {\displaystyle \mathbf {v} } is the velocity.: 9-1,9-2  If Newton's second law is applied to a system of constant mass, m may be moved outside the derivative operator. The equation then becomes F = m d v d t . {\displaystyle \mathbf {F} =m{\frac {\mathrm {d} \mathbf {v} }{\mathrm {d} t}}.} By substituting the definition of acceleration, the algebraic version of Newton's second law is derived: F = m a . {\displaystyle \mathbf {F} =m\mathbf {a} .} === Third law === Whenever one body exerts a force on another, the latter simultaneously exerts an equal and opposite force on the first. In vector form, if F 1 , 2 {\displaystyle \mathbf {F} _{1,2}} is the force of body 1 on body 2 and F 2 , 1 {\displaystyle \mathbf {F} _{2,1}} that of body 2 on body 1, then F 1 , 2 = − F 2 , 1 . {\displaystyle \mathbf {F} _{1,2}=-\mathbf {F} _{2,1}.} This law is sometimes referred to as the action-reaction law, with F 1 , 2 {\displaystyle \mathbf {F} _{1,2}} called the action and F 2 , 1 {\displaystyle \mathbf {F} _{2,1}} the reaction. Newton's third law is a result of applying symmetry to situations where forces can be attributed to the presence of different objects. The third law means that all forces are interactions between different bodies, and thus that there is no such thing as a unidirectional force or a force that acts on only one body. In a system composed of object 1 and object 2, the net force on the system due to their mutual interactions is zero: F 1 , 2 + F 2 , 1 = 0. {\displaystyle \mathbf {F} _{1,2}+\mathbf {F} _{2,1}=0.} More generally, in a closed system of particles, all internal forces are balanced. The particles may accelerate with respect to each other but the center of mass of the system will not accelerate. 
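This is easy to check numerically. A minimal sketch with made-up masses, initial velocities and an illustrative internal force law (none of the numbers come from the article):

```python
import numpy as np

# Two particles interacting only with each other through equal and
# opposite internal forces (Newton's third law).
m1, m2 = 1.0, 3.0
x1, x2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
v1, v2 = np.array([0.0, 1.0]), np.array([0.0, -0.5])

dt, steps = 1e-3, 5000
for _ in range(steps):
    r = x2 - x1
    f12 = 2.0 * r / np.linalg.norm(r)**3   # force of body 1 on body 2 (illustrative)
    f21 = -f12                              # reaction: force of body 2 on body 1
    v1 += f21 / m1 * dt
    v2 += f12 / m2 * dt
    x1 += v1 * dt
    x2 += v2 * dt

print("total momentum:", m1 * v1 + m2 * v2)   # stays at its initial value, (0, -0.5)
```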
If an external force acts on the system, it will make the center of mass accelerate in proportion to the magnitude of the external force divided by the mass of the system.: 19-1  Combining Newton's second and third laws, it is possible to show that the linear momentum of a system is conserved in any closed system. In a system of two particles, if p 1 {\displaystyle \mathbf {p} _{1}} is the momentum of object 1 and p 2 {\displaystyle \mathbf {p} _{2}} the momentum of object 2, then d p 1 d t + d p 2 d t = F 1 , 2 + F 2 , 1 = 0. {\displaystyle {\frac {\mathrm {d} \mathbf {p} _{1}}{\mathrm {d} t}}+{\frac {\mathrm {d} \mathbf {p} _{2}}{\mathrm {d} t}}=\mathbf {F} _{1,2}+\mathbf {F} _{2,1}=0.} Using similar arguments, this can be generalized to a system with an arbitrary number of particles. In general, as long as all forces are due to the interaction of objects with mass, it is possible to define a system such that net momentum is never lost nor gained.: ch.12  === Defining "force" === Some textbooks use Newton's second law as a definition of force. However, for the equation F = m a {\displaystyle \mathbf {F} =m\mathbf {a} } for a constant mass m {\displaystyle m} to then have any predictive content, it must be combined with further information.: 12-1  Moreover, inferring that a force is present because a body is accelerating is only valid in an inertial frame of reference.: 59  The question of which aspects of Newton's laws to take as definitions and which to regard as holding physical content has been answered in various ways,: vii  which ultimately do not affect how the theory is used in practice. Notable physicists, philosophers and mathematicians who have sought a more explicit definition of the concept of force include Ernst Mach and Walter Noll. == Combining forces == Forces act in a particular direction and have sizes dependent upon how strong the push or pull is. Because of these characteristics, forces are classified as "vector quantities". This means that forces follow a different set of mathematical rules than physical quantities that do not have direction (denoted scalar quantities). For example, when determining what happens when two forces act on the same object, it is necessary to know both the magnitude and the direction of both forces to calculate the result. If both of these pieces of information are not known for each force, the situation is ambiguous.: 197  Historically, forces were first quantitatively investigated in conditions of static equilibrium where several forces canceled each other out. Such experiments demonstrate the crucial properties that forces are additive vector quantities: they have magnitude and direction. When two forces act on a point particle, the resulting force, the resultant (also called the net force), can be determined by following the parallelogram rule of vector addition: the addition of two vectors represented by sides of a parallelogram, gives an equivalent resultant vector that is equal in magnitude and direction to the transversal of the parallelogram. The magnitude of the resultant varies from the difference of the magnitudes of the two forces to their sum, depending on the angle between their lines of action.: ch.12  Free-body diagrams can be used as a convenient way to keep track of forces acting on a system. Ideally, these diagrams are drawn with the angles and relative magnitudes of the force vectors preserved so that graphical vector addition can be done to determine the net force. 
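The statement that the resultant ranges between the difference and the sum of the two magnitudes can be made concrete; a small sketch with arbitrary 3 N and 4 N forces:

```python
import math

F1, F2 = 3.0, 4.0   # magnitudes of the two forces, in newtons (illustrative)

for angle_deg in (0, 60, 90, 120, 180):
    theta = math.radians(angle_deg)
    # Law-of-cosines form of the parallelogram rule for the resultant.
    resultant = math.sqrt(F1**2 + F2**2 + 2 * F1 * F2 * math.cos(theta))
    print(f"{angle_deg:3d} deg -> resultant {resultant:.2f} N")

# The result ranges from |F1 - F2| = 1 N (opposite directions)
# up to F1 + F2 = 7 N (same direction), and is 5 N at right angles.
```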
As well as being added, forces can also be resolved into independent components at right angles to each other. A horizontal force pointing northeast can therefore be split into two forces, one pointing north, and one pointing east. Summing these component forces using vector addition yields the original force. Resolving force vectors into components of a set of basis vectors is often a more mathematically clean way to describe forces than using magnitudes and directions. This is because, for orthogonal components, the components of the vector sum are uniquely determined by the scalar addition of the components of the individual vectors. Orthogonal components are independent of each other because forces acting at ninety degrees to each other have no effect on the magnitude or direction of the other. Choosing a set of orthogonal basis vectors is often done by considering what set of basis vectors will make the mathematics most convenient. Choosing a basis vector that is in the same direction as one of the forces is desirable, since that force would then have only one non-zero component. Orthogonal force vectors can be three-dimensional with the third component being at right angles to the other two.: ch.12  === Equilibrium === When all the forces that act upon an object are balanced, then the object is said to be in a state of equilibrium.: 566  Hence, equilibrium occurs when the resultant force acting on a point particle is zero (that is, the vector sum of all forces is zero). When dealing with an extended body, it is also necessary that the net torque be zero. A body is in static equilibrium with respect to a frame of reference if it at rest and not accelerating, whereas a body in dynamic equilibrium is moving at a constant speed in a straight line, i.e., moving but not accelerating. What one observer sees as static equilibrium, another can see as dynamic equilibrium and vice versa.: 566  ==== Static ==== Static equilibrium was understood well before the invention of classical mechanics. Objects that are not accelerating have zero net force acting on them. The simplest case of static equilibrium occurs when two forces are equal in magnitude but opposite in direction. For example, an object on a level surface is pulled (attracted) downward toward the center of the Earth by the force of gravity. At the same time, a force is applied by the surface that resists the downward force with equal upward force (called a normal force). The situation produces zero net force and hence no acceleration. Pushing against an object that rests on a frictional surface can result in a situation where the object does not move because the applied force is opposed by static friction, generated between the object and the table surface. For a situation with no movement, the static friction force exactly balances the applied force resulting in no acceleration. The static friction increases or decreases in response to the applied force up to an upper limit determined by the characteristics of the contact between the surface and the object. A static equilibrium between two forces is the most usual way of measuring forces, using simple devices such as weighing scales and spring balances. For example, an object suspended on a vertical spring scale experiences the force of gravity acting on the object balanced by a force applied by the "spring reaction force", which equals the object's weight. 
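A standard worked example combines both ideas, resolving forces into components and requiring zero net force: a weight hung symmetrically from two cords. A minimal sketch with illustrative numbers:

```python
import math

m, g = 5.0, 9.81                      # illustrative mass (kg) and gravity (m/s^2)
theta = math.radians(30)              # each cord makes 30 degrees with the vertical

# Static equilibrium: the vertical components of the two cord tensions
# balance the weight, and the horizontal components cancel.
T = m * g / (2 * math.cos(theta))
print(round(T, 2), "N in each cord")  # ~28.32 N

# Check that the vector sum of all three forces is zero.
weight = (0.0, -m * g)
left = (-T * math.sin(theta), T * math.cos(theta))
right = (T * math.sin(theta), T * math.cos(theta))
net = tuple(a + b + c for a, b, c in zip(weight, left, right))
print(net)                            # (0.0, 0.0) up to rounding
```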
Using such tools, some quantitative force laws were discovered: that the force of gravity is proportional to volume for objects of constant density (widely exploited for millennia to define standard weights); Archimedes' principle for buoyancy; Archimedes' analysis of the lever; Boyle's law for gas pressure; and Hooke's law for springs. These were all formulated and experimentally verified before Isaac Newton expounded his three laws of motion.: ch.12  ==== Dynamic ==== Dynamic equilibrium was first described by Galileo, who noticed that certain assumptions of Aristotelian physics were contradicted by observations and logic. Galileo realized that simple velocity addition demands that the concept of an "absolute rest frame" cannot exist. Galileo concluded that motion at a constant velocity was completely equivalent to rest. This was contrary to Aristotle's notion of a "natural state" of rest that objects with mass naturally approached. Simple experiments showed that Galileo's understanding of the equivalence of constant velocity and rest was correct. For example, if a mariner dropped a cannonball from the crow's nest of a ship moving at a constant velocity, Aristotelian physics would have the cannonball fall straight down while the ship moved beneath it. Thus, in an Aristotelian universe, the falling cannonball would land behind the foot of the mast of a moving ship. When this experiment is actually conducted, the cannonball always falls at the foot of the mast, as if the cannonball knows to travel with the ship despite being separated from it. Since there is no forward horizontal force being applied on the cannonball as it falls, the only conclusion left is that the cannonball continues to move with the same velocity as the boat as it falls. Thus, no force is required to keep the cannonball moving at the constant forward velocity. Moreover, any object traveling at a constant velocity must be subject to zero net force (resultant force). This is the definition of dynamic equilibrium: when all the forces on an object balance but it still moves at a constant velocity. A simple case of dynamic equilibrium occurs in constant velocity motion across a surface with kinetic friction. In such a situation, a force is applied in the direction of motion while the kinetic friction force exactly opposes the applied force. This results in zero net force, but since the object started with a non-zero velocity, it continues to move with a non-zero velocity. Aristotle misinterpreted this motion as being caused by the applied force. When kinetic friction is taken into consideration it is clear that there is no net force causing constant velocity motion.: ch.12  == Examples of forces in classical mechanics == Some forces are consequences of the fundamental ones. In such situations, idealized models can be used to gain physical insight. For example, each solid object is considered a rigid body. === Gravitational force or Gravity === What we now call gravity was not identified as a universal force until the work of Isaac Newton. Before Newton, the tendency for objects to fall towards the Earth was not understood to be related to the motions of celestial objects. Galileo was instrumental in describing the characteristics of falling objects by determining that the acceleration of every object in free-fall was constant and independent of the mass of the object. 
Today, this acceleration due to gravity towards the surface of the Earth is usually designated as g {\displaystyle \mathbf {g} } and has a magnitude of about 9.81 meters per second squared (this measurement is taken from sea level and may vary depending on location), and points toward the center of the Earth. This observation means that the force of gravity on an object at the Earth's surface is directly proportional to the object's mass. Thus an object that has a mass of m {\displaystyle m} will experience a force: F = m g . {\displaystyle \mathbf {F} =m\mathbf {g} .} For an object in free-fall, this force is unopposed and the net force on the object is its weight. For objects not in free-fall, the force of gravity is opposed by the reaction forces applied by their supports. For example, a person standing on the ground experiences zero net force, since a normal force (a reaction force) is exerted by the ground upward on the person that counterbalances his weight that is directed downward.: ch.12  Newton's contribution to gravitational theory was to unify the motions of heavenly bodies, which Aristotle had assumed were in a natural state of constant motion, with falling motion observed on the Earth. He proposed a law of gravity that could account for the celestial motions that had been described earlier using Kepler's laws of planetary motion. Newton came to realize that the effects of gravity might be observed in different ways at larger distances. In particular, Newton determined that the acceleration of the Moon around the Earth could be ascribed to the same force of gravity if the acceleration due to gravity decreased as an inverse square law. Further, Newton realized that the acceleration of a body due to gravity is proportional to the mass of the other attracting body. Combining these ideas gives a formula that relates the mass ( m ⊕ {\displaystyle m_{\oplus }} ) and the radius ( R ⊕ {\displaystyle R_{\oplus }} ) of the Earth to the gravitational acceleration: g = − G m ⊕ R ⊕ 2 r ^ , {\displaystyle \mathbf {g} =-{\frac {Gm_{\oplus }}{{R_{\oplus }}^{2}}}{\hat {\mathbf {r} }},} where the vector direction is given by r ^ {\displaystyle {\hat {\mathbf {r} }}} , is the unit vector directed outward from the center of the Earth. In this equation, a dimensional constant G {\displaystyle G} is used to describe the relative strength of gravity. This constant has come to be known as the Newtonian constant of gravitation, though its value was unknown in Newton's lifetime. Not until 1798 was Henry Cavendish able to make the first measurement of G {\displaystyle G} using a torsion balance; this was widely reported in the press as a measurement of the mass of the Earth since knowing G {\displaystyle G} could allow one to solve for the Earth's mass given the above equation. Newton realized that since all celestial bodies followed the same laws of motion, his law of gravity had to be universal. Succinctly stated, Newton's law of gravitation states that the force on a spherical object of mass m 1 {\displaystyle m_{1}} due to the gravitational pull of mass m 2 {\displaystyle m_{2}} is F = − G m 1 m 2 r 2 r ^ , {\displaystyle \mathbf {F} =-{\frac {Gm_{1}m_{2}}{r^{2}}}{\hat {\mathbf {r} }},} where r {\displaystyle r} is the distance between the two objects' centers of mass and r ^ {\displaystyle {\hat {\mathbf {r} }}} is the unit vector pointed in the direction away from the center of the first object toward the center of the second object. 
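Combining the quoted relation g = G m⊕ / R⊕² with present-day reference values for G, the Earth's mass and its mean radius (standard constants, not figures taken from the article) reproduces the familiar 9.8 m/s². A minimal sketch:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # kg
R_earth = 6.371e6    # mean radius, m

g = G * M_earth / R_earth**2
print(round(g, 2))   # ~9.82 m/s^2

# The same law gives the force between two spherical masses:
def gravitational_force(m1, m2, r):
    return G * m1 * m2 / r**2

print(gravitational_force(70.0, M_earth, R_earth))  # weight of a 70 kg person, ~687 N
```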
This formula was powerful enough to stand as the basis for all subsequent descriptions of motion within the Solar System until the 20th century. During that time, sophisticated methods of perturbation analysis were invented to calculate the deviations of orbits due to the influence of multiple bodies on a planet, moon, comet, or asteroid. The formalism was exact enough to allow mathematicians to predict the existence of the planet Neptune before it was observed. === Electromagnetic === The electrostatic force was first described in 1784 by Coulomb as a force that existed intrinsically between two charges.: 519  The properties of the electrostatic force were that it varied as an inverse square law directed in the radial direction, was both attractive and repulsive (there was intrinsic polarity), was independent of the mass of the charged objects, and followed the superposition principle. Coulomb's law unifies all these observations into one succinct statement. Subsequent mathematicians and physicists found the construct of the electric field to be useful for determining the electrostatic force on an electric charge at any point in space. The electric field was based on using a hypothetical "test charge" anywhere in space and then using Coulomb's Law to determine the electrostatic force.: 4-6–4-8  Thus the electric field anywhere in space is defined as E = F q , {\displaystyle \mathbf {E} ={\mathbf {F} \over {q}},} where q {\displaystyle q} is the magnitude of the hypothetical test charge. Similarly, the idea of the magnetic field was introduced to express how magnets can influence one another at a distance. The Lorentz force law gives the force upon a body with charge q {\displaystyle q} due to electric and magnetic fields: F = q ( E + v × B ) , {\displaystyle \mathbf {F} =q\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right),} where F {\displaystyle \mathbf {F} } is the electromagnetic force, E {\displaystyle \mathbf {E} } is the electric field at the body's location, B {\displaystyle \mathbf {B} } is the magnetic field, and v {\displaystyle \mathbf {v} } is the velocity of the particle. The magnetic contribution to the Lorentz force is the cross product of the velocity vector with the magnetic field.: 482  The origin of electric and magnetic fields would not be fully explained until 1864 when James Clerk Maxwell unified a number of earlier theories into a set of 20 scalar equations, which were later reformulated into 4 vector equations by Oliver Heaviside and Josiah Willard Gibbs. These "Maxwell's equations" fully described the sources of the fields as being stationary and moving charges, and the interactions of the fields themselves. This led Maxwell to discover that electric and magnetic fields could be "self-generating" through a wave that traveled at a speed that he calculated to be the speed of light. This insight united the nascent fields of electromagnetic theory with optics and led directly to a complete description of the electromagnetic spectrum. === Normal === When objects are in contact, the force directly between them is called the normal force, the component of the total force in the system exerted normal to the interface between the objects.: 264  The normal force is closely related to Newton's third law. The normal force, for example, is responsible for the structural integrity of tables and floors as well as being the force that responds whenever an external force pushes on a solid object. 
An example of the normal force in action is the impact force on an object crashing into an immobile surface.: ch.12  === Friction === Friction is a force that opposes relative motion of two bodies. At the macroscopic scale, the frictional force is directly related to the normal force at the point of contact. There are two broad classifications of frictional forces: static friction and kinetic friction.: 267  The static friction force ( F s f {\displaystyle \mathbf {F} _{\mathrm {sf} }} ) will exactly oppose forces applied to an object parallel to a surface up to the limit specified by the coefficient of static friction ( μ s f {\displaystyle \mu _{\mathrm {sf} }} ) multiplied by the normal force ( F N {\displaystyle \mathbf {F} _{\text{N}}} ). In other words, the magnitude of the static friction force satisfies the inequality: 0 ≤ F s f ≤ μ s f F N . {\displaystyle 0\leq \mathbf {F} _{\mathrm {sf} }\leq \mu _{\mathrm {sf} }\mathbf {F} _{\mathrm {N} }.} The kinetic friction force ( F k f {\displaystyle F_{\mathrm {kf} }} ) is typically independent of both the forces applied and the movement of the object. Thus, the magnitude of the force equals: F k f = μ k f F N , {\displaystyle \mathbf {F} _{\mathrm {kf} }=\mu _{\mathrm {kf} }\mathbf {F} _{\mathrm {N} },} where μ k f {\displaystyle \mu _{\mathrm {kf} }} is the coefficient of kinetic friction. The coefficient of kinetic friction is normally less than the coefficient of static friction.: 267–271  === Tension === Tension forces can be modeled using ideal strings that are massless, frictionless, unbreakable, and do not stretch. They can be combined with ideal pulleys, which allow ideal strings to switch physical direction. Ideal strings transmit tension forces instantaneously in action–reaction pairs so that if two objects are connected by an ideal string, any force directed along the string by the first object is accompanied by a force directed along the string in the opposite direction by the second object. By connecting the same string multiple times to the same object through the use of a configuration that uses movable pulleys, the tension force on a load can be multiplied. For every string that acts on a load, another factor of the tension force in the string acts on the load. Such machines allow a mechanical advantage for a corresponding increase in the length of displaced string needed to move the load. These tandem effects result ultimately in the conservation of mechanical energy since the work done on the load is the same no matter how complicated the machine.: ch.12  === Spring === A simple elastic force acts to return a spring to its natural length. An ideal spring is taken to be massless, frictionless, unbreakable, and infinitely stretchable. Such springs exert forces that push when contracted, or pull when extended, in proportion to the displacement of the spring from its equilibrium position. This linear relationship was described by Robert Hooke in 1676, for whom Hooke's law is named. If Δ x {\displaystyle \Delta x} is the displacement, the force exerted by an ideal spring equals: F = − k Δ x , {\displaystyle \mathbf {F} =-k\Delta \mathbf {x} ,} where k {\displaystyle k} is the spring constant (or force constant), which is particular to the spring. 
The minus sign accounts for the tendency of the force to act in opposition to the applied load.: ch.12  === Centripetal === For an object in uniform circular motion, the net force acting on the object equals: F = − m v 2 r r ^ , {\displaystyle \mathbf {F} =-{\frac {mv^{2}}{r}}{\hat {\mathbf {r} }},} where m {\displaystyle m} is the mass of the object, v {\displaystyle v} is the velocity of the object and r {\displaystyle r} is the distance to the center of the circular path and r ^ {\displaystyle {\hat {\mathbf {r} }}} is the unit vector pointing in the radial direction outwards from the center. This means that the net force felt by the object is always directed toward the center of the curving path. Such forces act perpendicular to the velocity vector associated with the motion of an object, and therefore do not change the speed of the object (magnitude of the velocity), but only the direction of the velocity vector. More generally, the net force that accelerates an object can be resolved into a component that is perpendicular to the path, and one that is tangential to the path. This yields both the tangential force, which accelerates the object by either slowing it down or speeding it up, and the radial (centripetal) force, which changes its direction.: ch.12  === Continuum mechanics === Newton's laws and Newtonian mechanics in general were first developed to describe how forces affect idealized point particles rather than three-dimensional objects. In real life, matter has extended structure and forces that act on one part of an object might affect other parts of an object. For situations where lattice holding together the atoms in an object is able to flow, contract, expand, or otherwise change shape, the theories of continuum mechanics describe the way forces affect the material. For example, in extended fluids, differences in pressure result in forces being directed along the pressure gradients as follows: F V = − ∇ P , {\displaystyle {\frac {\mathbf {F} }{V}}=-\mathbf {\nabla } P,} where V {\displaystyle V} is the volume of the object in the fluid and P {\displaystyle P} is the scalar function that describes the pressure at all locations in space. Pressure gradients and differentials result in the buoyant force for fluids suspended in gravitational fields, winds in atmospheric science, and the lift associated with aerodynamics and flight.: ch.12  A specific instance of such a force that is associated with dynamic pressure is fluid resistance: a body force that resists the motion of an object through a fluid due to viscosity. For so-called "Stokes' drag" the force is approximately proportional to the velocity, but opposite in direction: F d = − b v , {\displaystyle \mathbf {F} _{\mathrm {d} }=-b\mathbf {v} ,} where: b {\displaystyle b} is a constant that depends on the properties of the fluid and the dimensions of the object (usually the cross-sectional area), and v {\displaystyle \mathbf {v} } is the velocity of the object.: ch.12  More formally, forces in continuum mechanics are fully described by a stress tensor with terms that are roughly defined as σ = F A , {\displaystyle \sigma ={\frac {F}{A}},} where A {\displaystyle A} is the relevant cross-sectional area for the volume for which the stress tensor is being calculated. 
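A brief numerical sketch of the continuum relations just quoted: the pressure-gradient force per unit volume, the linear "Stokes' drag" model, and the simple stress estimate σ = F/A. The pressure profile, drag constant and loads below are assumptions made only for illustration.

```python
# Pressure-gradient force density, linear drag, and a simple stress estimate,
# following the relations above. All inputs are assumed illustrative values.
import numpy as np

# F/V = -dP/dx for an assumed 1-D pressure field (pressure drops 50 Pa over 1 m)
x = np.linspace(0.0, 1.0, 101)            # m
P = 101325.0 - 50.0 * x                   # Pa
f_per_volume = -np.gradient(P, x)         # N/m^3; about +50 everywhere, toward low pressure

# Linear (Stokes-type) drag F_d = -b v
b, v = 0.02, 0.5                          # kg/s and m/s (assumed)
F_drag = -b * v                           # N, opposing the motion

# Normal stress sigma = F / A for a rod in tension
F_load, A = 2000.0, 1.0e-4                # N and m^2 (assumed)
sigma = F_load / A                        # 2e7 Pa = 20 MPa
print(f_per_volume[0], F_drag, sigma)
```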
This formalism includes pressure terms associated with forces that act normal to the cross-sectional area (the matrix diagonals of the tensor) as well as shear terms associated with forces that act parallel to the cross-sectional area (the off-diagonal elements). The stress tensor accounts for forces that cause all strains (deformations) including also tensile stresses and compressions.: 133–134 : 38-1–38-11  === Fictitious === There are forces that are frame dependent, meaning that they appear due to the adoption of non-Newtonian (that is, non-inertial) reference frames. Such forces include the centrifugal force and the Coriolis force. These forces are considered fictitious because they do not exist in frames of reference that are not accelerating.: ch.12  Because these forces are not genuine they are also referred to as "pseudo forces".: 12-11  In general relativity, gravity becomes a fictitious force that arises in situations where spacetime deviates from a flat geometry. == Concepts derived from force == === Rotation and torque === Forces that cause extended objects to rotate are associated with torques. Mathematically, the torque of a force F {\displaystyle \mathbf {F} } is defined relative to an arbitrary reference point as the cross product: τ = r × F , {\displaystyle {\boldsymbol {\tau }}=\mathbf {r} \times \mathbf {F} ,} where r {\displaystyle \mathbf {r} } is the position vector of the force application point relative to the reference point.: 497  Torque is the rotation equivalent of force in the same way that angle is the rotational equivalent for position, angular velocity for velocity, and angular momentum for momentum. As a consequence of Newton's first law of motion, there exists rotational inertia that ensures that all bodies maintain their angular momentum unless acted upon by an unbalanced torque. Likewise, Newton's second law of motion can be used to derive an analogous equation for the instantaneous angular acceleration of the rigid body: τ = I α , {\displaystyle {\boldsymbol {\tau }}=I{\boldsymbol {\alpha }},} where I {\displaystyle I} is the moment of inertia of the body α {\displaystyle {\boldsymbol {\alpha }}} is the angular acceleration of the body.: 502  This provides a definition for the moment of inertia, which is the rotational equivalent for mass. In more advanced treatments of mechanics, where the rotation over a time interval is described, the moment of inertia must be substituted by the tensor that, when properly analyzed, fully determines the characteristics of rotations including precession and nutation.: 96–113  Equivalently, the differential form of Newton's second law provides an alternative definition of torque: τ = d L d t , {\displaystyle {\boldsymbol {\tau }}={\frac {\mathrm {d} \mathbf {L} }{\mathrm {dt} }},} where L {\displaystyle \mathbf {L} } is the angular momentum of the particle. Newton's third law of motion requires that all objects exerting torques themselves experience equal and opposite torques, and therefore also directly implies the conservation of angular momentum for closed systems that experience rotations and revolutions through the action of internal torques. === Yank === The yank is defined as the rate of change of force: 131  Y = d F d t {\displaystyle \mathbf {Y} ={\frac {\mathrm {d} \mathbf {F} }{\mathrm {d} t}}} The term is used in biomechanical analysis, athletic assessment and robotic control. The second ("tug"), third ("snatch"), fourth ("shake"), and higher derivatives are rarely used. 
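To make the torque relations above concrete, the sketch below evaluates τ = r × F for a force applied at the end of a lever arm and then uses τ = Iα to obtain the angular acceleration. The lever arm, force and moment of inertia are assumed values chosen only for illustration.

```python
# Torque as a cross product, tau = r x F, and the rigid-body relation
# tau = I * alpha. All numbers are assumed illustrative values.
import numpy as np

r = np.array([0.30, 0.0, 0.0])     # lever arm from the pivot, m
F = np.array([0.0, 50.0, 0.0])     # applied force, N
tau = np.cross(r, F)               # torque about the pivot, N*m -> [0, 0, 15]

I = 2.5                            # moment of inertia about the z-axis, kg*m^2
alpha = tau / I                    # angular acceleration, rad/s^2 -> [0, 0, 6]
print(tau, alpha)
```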
=== Kinematic integrals === Forces can be used to define a number of physical concepts by integrating with respect to kinematic variables. For example, integrating with respect to time gives the definition of impulse: J = ∫ t 1 t 2 F d t , {\displaystyle \mathbf {J} =\int _{t_{1}}^{t_{2}}{\mathbf {F} \,\mathrm {d} t},} which by Newton's second law must be equivalent to the change in momentum (yielding the Impulse momentum theorem). Similarly, integrating with respect to position gives a definition for the work done by a force:: 13-3  W = ∫ x 1 x 2 F ⋅ d x , {\displaystyle W=\int _{\mathbf {x} _{1}}^{\mathbf {x} _{2}}{\mathbf {F} \cdot {\mathrm {d} \mathbf {x} }},} which is equivalent to changes in kinetic energy (yielding the work energy theorem).: 13-3  Power P is the rate of change dW/dt of the work W, as the trajectory is extended by a position change d x {\displaystyle d\mathbf {x} } in a time interval dt:: 13-2  d W = d W d x ⋅ d x = F ⋅ d x , {\displaystyle \mathrm {d} W={\frac {\mathrm {d} W}{\mathrm {d} \mathbf {x} }}\cdot \mathrm {d} \mathbf {x} =\mathbf {F} \cdot \mathrm {d} \mathbf {x} ,} so P = d W d t = d W d x ⋅ d x d t = F ⋅ v , {\displaystyle P={\frac {\mathrm {d} W}{\mathrm {d} t}}={\frac {\mathrm {d} W}{\mathrm {d} \mathbf {x} }}\cdot {\frac {\mathrm {d} \mathbf {x} }{\mathrm {d} t}}=\mathbf {F} \cdot \mathbf {v} ,} with v = d x / d t {\displaystyle \mathbf {v} =\mathrm {d} \mathbf {x} /\mathrm {d} t} the velocity. === Potential energy === Instead of a force, often the mathematically related concept of a potential energy field is used. For instance, the gravitational force acting upon an object can be seen as the action of the gravitational field that is present at the object's location. Restating mathematically the definition of energy (via the definition of work), a potential scalar field U ( r ) {\displaystyle U(\mathbf {r} )} is defined as that field whose gradient is equal and opposite to the force produced at every point: F = − ∇ U . {\displaystyle \mathbf {F} =-\mathbf {\nabla } U.} Forces can be classified as conservative or nonconservative. Conservative forces are equivalent to the gradient of a potential while nonconservative forces are not.: ch.12  === Conservation === A conservative force that acts on a closed system has an associated mechanical work that allows energy to convert only between kinetic or potential forms. This means that for a closed system, the net mechanical energy is conserved whenever a conservative force acts on the system. The force, therefore, is related directly to the difference in potential energy between two different locations in space, and can be considered to be an artifact of the potential field in the same way that the direction and amount of a flow of water can be considered to be an artifact of the contour map of the elevation of an area.: ch.12  Conservative forces include gravity, the electromagnetic force, and the spring force. Each of these forces has models that are dependent on a position often given as a radial vector r {\displaystyle \mathbf {r} } emanating from spherically symmetric potentials. Examples of this follow: For gravity: F g = − G m 1 m 2 r 2 r ^ , {\displaystyle \mathbf {F} _{\text{g}}=-{\frac {Gm_{1}m_{2}}{r^{2}}}{\hat {\mathbf {r} }},} where G {\displaystyle G} is the gravitational constant, and m n {\displaystyle m_{n}} is the mass of object n. 
For electrostatic forces: F e = q 1 q 2 4 π ε 0 r 2 r ^ , {\displaystyle \mathbf {F} _{\text{e}}={\frac {q_{1}q_{2}}{4\pi \varepsilon _{0}r^{2}}}{\hat {\mathbf {r} }},} where ε 0 {\displaystyle \varepsilon _{0}} is electric permittivity of free space, and q n {\displaystyle q_{n}} is the electric charge of object n. For spring forces: F s = − k r r ^ , {\displaystyle \mathbf {F} _{\text{s}}=-kr{\hat {\mathbf {r} }},} where k {\displaystyle k} is the spring constant.: ch.12  For certain physical scenarios, it is impossible to model forces as being due to a simple gradient of potentials. This is often due a macroscopic statistical average of microstates. For example, static friction is caused by the gradients of numerous electrostatic potentials between the atoms, but manifests as a force model that is independent of any macroscale position vector. Nonconservative forces other than friction include other contact forces, tension, compression, and drag. For any sufficiently detailed description, all these forces are the results of conservative ones since each of these macroscopic forces are the net results of the gradients of microscopic potentials.: ch.12  The connection between macroscopic nonconservative forces and microscopic conservative forces is described by detailed treatment with statistical mechanics. In macroscopic closed systems, nonconservative forces act to change the internal energies of the system, and are often associated with the transfer of heat. According to the Second law of thermodynamics, nonconservative forces necessarily result in energy transformations within closed systems from ordered to more random conditions as entropy increases.: ch.12  == Units == The SI unit of force is the newton (symbol N), which is the force required to accelerate a one kilogram mass at a rate of one meter per second squared, or kg·m·s−2.The corresponding CGS unit is the dyne, the force required to accelerate a one gram mass by one centimeter per second squared, or g·cm·s−2. A newton is thus equal to 100,000 dynes. The gravitational foot-pound-second English unit of force is the pound-force (lbf), defined as the force exerted by gravity on a pound-mass in the standard gravitational field of 9.80665 m·s−2. The pound-force provides an alternative unit of mass: one slug is the mass that will accelerate by one foot per second squared when acted on by one pound-force. An alternative unit of force in a different foot–pound–second system, the absolute fps system, is the poundal, defined as the force required to accelerate a one-pound mass at a rate of one foot per second squared. The pound-force has a metric counterpart, less commonly used than the newton: the kilogram-force (kgf) (sometimes kilopond), is the force exerted by standard gravity on one kilogram of mass. The kilogram-force leads to an alternate, but rarely used unit of mass: the metric slug (sometimes mug or hyl) is that mass that accelerates at 1 m·s−2 when subjected to a force of 1 kgf. The kilogram-force is not a part of the modern SI system, and is generally deprecated, sometimes used for expressing aircraft weight, jet thrust, bicycle spoke tension, torque wrench settings and engine output torque. See also Ton-force. == Revisions of the force concept == At the beginning of the 20th century, new physical ideas emerged to explain experimental results in astronomical and submicroscopic realms. 
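Before turning to those twentieth-century revisions, the three conservative force laws listed above can be checked numerically. The sketch below uses standard physical constants; the masses, charges, separations and spring constant are assumed purely for illustration.

```python
# Evaluating the inverse-square and spring force laws quoted above.
# Masses, charges, separations and the spring constant are assumed values.
import math

G = 6.674e-11        # gravitational constant, N*m^2/kg^2
eps0 = 8.854e-12     # vacuum permittivity, F/m

def gravity(m1, m2, r):
    return -G * m1 * m2 / r**2                       # negative: attractive, along -r_hat

def electrostatic(q1, q2, r):
    return q1 * q2 / (4.0 * math.pi * eps0 * r**2)   # positive for like charges: repulsive

def spring(k, r):
    return -k * r                                    # restoring force toward equilibrium

print(gravity(5.97e24, 1000.0, 7.0e6))     # ~ -8.1e3 N on a 1000 kg satellite at r = 7000 km
print(electrostatic(1e-6, 1e-6, 0.01))     # ~ 90 N between two 1 uC charges 1 cm apart
print(spring(200.0, 0.05))                 # -10 N for a 5 cm stretch of a 200 N/m spring
```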
As discussed below, relativity alters the definition of momentum and quantum mechanics reuses the concept of "force" in microscopic contexts where Newton's laws do not apply directly. === Special theory of relativity === In the special theory of relativity, mass and energy are equivalent (as can be seen by calculating the work required to accelerate an object). When an object's velocity increases, so does its energy and hence its mass equivalent (inertia). It thus requires more force to accelerate it the same amount than it did at a lower velocity. Newton's second law, F = d p d t , {\displaystyle \mathbf {F} ={\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}},} remains valid because it is a mathematical definition.: 855–876  But for momentum to be conserved at relativistic relative velocity, v {\displaystyle v} , momentum must be redefined as: p = m 0 v 1 − v 2 / c 2 , {\displaystyle \mathbf {p} ={\frac {m_{0}\mathbf {v} }{\sqrt {1-v^{2}/c^{2}}}},} where m 0 {\displaystyle m_{0}} is the rest mass and c {\displaystyle c} the speed of light. The expression relating force and acceleration for a particle with constant non-zero rest mass m {\displaystyle m} moving in the x {\displaystyle x} direction at velocity v {\displaystyle v} is:: 216  F = ( γ 3 m a x , γ m a y , γ m a z ) , {\displaystyle \mathbf {F} =\left(\gamma ^{3}ma_{x},\gamma ma_{y},\gamma ma_{z}\right),} where γ = 1 1 − v 2 / c 2 . {\displaystyle \gamma ={\frac {1}{\sqrt {1-v^{2}/c^{2}}}}.} is called the Lorentz factor. The Lorentz factor increases steeply as the relative velocity approaches the speed of light. Consequently, the greater and greater force must be applied to produce the same acceleration at extreme velocity. The relative velocity cannot reach c {\displaystyle c} .: 26 : §15–8  If v {\displaystyle v} is very small compared to c {\displaystyle c} , then γ {\displaystyle \gamma } is very close to 1 and F = m a {\displaystyle \mathbf {F} =m\mathbf {a} } is a close approximation. Even for use in relativity, one can restore the form of F μ = m A μ {\displaystyle F^{\mu }=mA^{\mu }} through the use of four-vectors. This relation is correct in relativity when F μ {\displaystyle F^{\mu }} is the four-force, m {\displaystyle m} is the invariant mass, and A μ {\displaystyle A^{\mu }} is the four-acceleration. The general theory of relativity incorporates a more radical departure from the Newtonian way of thinking about force, specifically gravitational force. This reimagining of the nature of gravity is described more fully below. === Quantum mechanics === Quantum mechanics is a theory of physics originally developed in order to understand microscopic phenomena: behavior at the scale of molecules, atoms or subatomic particles. Generally and loosely speaking, the smaller a system is, the more an adequate mathematical model will require understanding quantum effects. The conceptual underpinning of quantum physics is different from that of classical physics. Instead of thinking about quantities like position, momentum, and energy as properties that an object has, one considers what result might appear when a measurement of a chosen type is performed. Quantum mechanics allows the physicist to calculate the probability that a chosen measurement will elicit a particular result. The expectation value for a measurement is the average of the possible results it might yield, weighted by their probabilities of occurrence. In quantum mechanics, interactions are typically described in terms of energy rather than force. 
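Returning briefly to the special-relativity formulas above, the following sketch computes the Lorentz factor, the relativistic momentum, and the longitudinal force needed for a given acceleration. The rest mass, speed and acceleration are assumed, illustrative values.

```python
# Lorentz factor, relativistic momentum p = gamma * m0 * v, and the
# longitudinal force F_x = gamma^3 * m * a_x quoted above.
# Rest mass, speed and acceleration are assumed illustrative values.
import math

c = 2.998e8                      # speed of light, m/s
m0 = 9.109e-31                   # electron rest mass, kg
v = 0.9 * c                      # speed (assumed)
a_x = 1.0e15                     # longitudinal acceleration, m/s^2 (assumed)

gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)   # ~ 2.29 at 0.9c
p = gamma * m0 * v                           # relativistic momentum, kg*m/s
F_x = gamma**3 * m0 * a_x                    # force for that longitudinal acceleration, N
print(gamma, p, F_x)
```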
The Ehrenfest theorem provides a connection between quantum expectation values and the classical concept of force, a connection that is necessarily inexact, as quantum physics is fundamentally different from classical. In quantum physics, the Born rule is used to calculate the expectation values of a position measurement or a momentum measurement. These expectation values will generally change over time; that is, depending on the time at which (for example) a position measurement is performed, the probabilities for its different possible outcomes will vary. The Ehrenfest theorem says, roughly speaking, that the equations describing how these expectation values change over time have a form reminiscent of Newton's second law, with a force defined as the negative derivative of the potential energy. However, the more pronounced quantum effects are in a given situation, the more difficult it is to derive meaningful conclusions from this resemblance. Quantum mechanics also introduces two new constraints that interact with forces at the submicroscopic scale and which are especially important for atoms. Despite the strong attraction of the nucleus, the uncertainty principle limits the minimum extent of an electron probability distribution and the Pauli exclusion principle prevents electrons from sharing the same probability distribution. This gives rise to an emergent pressure known as degeneracy pressure. The dynamic equilibrium between the degeneracy pressure and the attractive electromagnetic force give atoms, molecules, liquids, and solids stability. === Quantum field theory === In modern particle physics, forces and the acceleration of particles are explained as a mathematical by-product of exchange of momentum-carrying gauge bosons. With the development of quantum field theory and general relativity, it was realized that force is a redundant concept arising from conservation of momentum (4-momentum in relativity and momentum of virtual particles in quantum electrodynamics). The conservation of momentum can be directly derived from the homogeneity or symmetry of space and so is usually considered more fundamental than the concept of a force. Thus the currently known fundamental forces are considered more accurately to be "fundamental interactions".: 199–128  While sophisticated mathematical descriptions are needed to predict, in full detail, the result of such interactions, there is a conceptually simple way to describe them through the use of Feynman diagrams. In a Feynman diagram, each matter particle is represented as a straight line (see world line) traveling through time, which normally increases up or to the right in the diagram. Matter and anti-matter particles are identical except for their direction of propagation through the Feynman diagram. World lines of particles intersect at interaction vertices, and the Feynman diagram represents any force arising from an interaction as occurring at the vertex with an associated instantaneous change in the direction of the particle world lines. Gauge bosons are emitted away from the vertex as wavy lines and, in the case of virtual particle exchange, are absorbed at an adjacent vertex. The utility of Feynman diagrams is that other types of physical phenomena that are part of the general picture of fundamental interactions but are conceptually separate from forces can also be described using the same rules. 
For example, a Feynman diagram can describe in succinct detail how a neutron decays into an electron, proton, and antineutrino, an interaction mediated by the same gauge boson that is responsible for the weak nuclear force. == Fundamental interactions == All of the known forces of the universe are classified into four fundamental interactions. The strong and the weak forces act only at very short distances, and are responsible for the interactions between subatomic particles, including nucleons and compound nuclei. The electromagnetic force acts between electric charges, and the gravitational force acts between masses. All other forces in nature derive from these four fundamental interactions operating within quantum mechanics, including the constraints introduced by the Schrödinger equation and the Pauli exclusion principle. For example, friction is a manifestation of the electromagnetic force acting between atoms of two surfaces. The forces in springs, modeled by Hooke's law, are also the result of electromagnetic forces. Centrifugal forces are acceleration forces that arise simply from the acceleration of rotating frames of reference.: 12-11 : 359  The fundamental theories for forces developed from the unification of different ideas. For example, Newton's universal theory of gravitation showed that the force responsible for objects falling near the surface of the Earth is also the force responsible for the falling of celestial bodies about the Earth (the Moon) and around the Sun (the planets). Michael Faraday and James Clerk Maxwell demonstrated that electric and magnetic forces were unified through a theory of electromagnetism. In the 20th century, the development of quantum mechanics led to a modern understanding that the first three fundamental forces (all except gravity) are manifestations of matter (fermions) interacting by exchanging virtual particles called gauge bosons. This Standard Model of particle physics assumes a similarity between the forces and led scientists to predict the unification of the weak and electromagnetic forces in electroweak theory, which was subsequently confirmed by observation. === Gravitational === Newton's law of gravitation is an example of action at a distance: one body, like the Sun, exerts an influence upon any other body, like the Earth, no matter how far apart they are. Moreover, this action at a distance is instantaneous. According to Newton's theory, the one body shifting position changes the gravitational pulls felt by all other bodies, all at the same instant of time. Albert Einstein recognized that this was inconsistent with special relativity and its prediction that influences cannot travel faster than the speed of light. So, he sought a new theory of gravitation that would be relativistically consistent. Mercury's orbit did not match that predicted by Newton's law of gravitation. Some astrophysicists predicted the existence of an undiscovered planet (Vulcan) that could explain the discrepancies. When Einstein formulated his theory of general relativity (GR) he focused on Mercury's problematic orbit and found that his theory added a correction, which could account for the discrepancy. This was the first time that Newton's theory of gravity had been shown to be inexact. Since then, general relativity has been acknowledged as the theory that best explains gravity. 
In GR, gravitation is not viewed as a force, but rather, objects moving freely in gravitational fields travel under their own inertia in straight lines through curved spacetime – defined as the shortest spacetime path between two spacetime events. From the perspective of the object, all motion occurs as if there were no gravitation whatsoever. It is only when observing the motion in a global sense that the curvature of spacetime can be observed and the force is inferred from the object's curved path. Thus, the straight line path in spacetime is seen as a curved line in space, and it is called the ballistic trajectory of the object. For example, a basketball thrown from the ground moves in a parabola, as it is in a uniform gravitational field. Its spacetime trajectory is almost a straight line, slightly curved (with a radius of curvature of the order of a few light-years). The time derivative of the changing momentum of the object is what we label as "gravitational force". === Electromagnetic === Maxwell's equations and the set of techniques built around them adequately describe a wide range of physics involving force in electricity and magnetism. This classical theory already includes relativity effects. Understanding quantized electromagnetic interactions between elementary particles requires quantum electrodynamics (QED). In QED, photons are fundamental exchange particles, describing all interactions relating to electromagnetism including the electromagnetic force. === Strong nuclear === There are two "nuclear forces", which today are usually described as interactions that take place in quantum theories of particle physics. The strong nuclear force is the force responsible for the structural integrity of atomic nuclei, and gains its name from its ability to overpower the electromagnetic repulsion between protons.: 940  The strong force is today understood to represent the interactions between quarks and gluons as detailed by the theory of quantum chromodynamics (QCD). The strong force is the fundamental force mediated by gluons, acting upon quarks, antiquarks, and the gluons themselves. The strong force only acts directly upon elementary particles. A residual of this force is observed between hadrons (notably, the nucleons in atomic nuclei), known as the nuclear force. Here the strong force acts indirectly, transmitted as gluons that form part of the virtual pi and rho mesons, the classical transmitters of the nuclear force. The failure of many searches for free quarks has shown that the elementary particles affected are not directly observable. This phenomenon is called color confinement.: 232  === Weak nuclear === Unique among the fundamental interactions, the weak nuclear force creates no bound states. The weak force is due to the exchange of the heavy W and Z bosons. Since the weak force is mediated by two types of bosons, it can be divided into two types of interaction or "vertices" — charged current, involving the electrically charged W+ and W− bosons, and neutral current, involving electrically neutral Z0 bosons. The most familiar effect of weak interaction is beta decay (of neutrons in atomic nuclei) and the associated radioactivity.: 951  This is a type of charged-current interaction. The word "weak" derives from the fact that the field strength is some 10^13 times less than that of the strong force. Still, it is stronger than gravity over short distances. 
A consistent electroweak theory has also been developed, which shows that electromagnetic forces and the weak force are indistinguishable at temperatures in excess of approximately 10^15 K. Such temperatures occurred in the plasma collisions in the early moments of the Big Bang.: 201  == See also == Contact force – Force between two objects that are in physical contact Force control – Force control is given by the machine Force gauge – Instrument for measuring force Orders of magnitude (force) – Comparison of a wide range of physical forces Parallel force system – Situation in mechanical engineering Rigid body – Physical object which does not deform when forces or moments are exerted on it Specific force – Concept in physics == References == == External links == "Classical Mechanics, Week 2: Newton's Laws". MIT OpenCourseWare. Retrieved 2023-08-09. "Fundamentals of Physics I, Lecture 3: Newton's Laws of Motion". Open Yale Courses. Retrieved 2023-08-09.
Wikipedia/Physical_force
In fluid dynamics, drag, sometimes referred to as fluid resistance, is a force acting opposite to the direction of motion of any object moving with respect to a surrounding fluid. This can exist between two fluid layers, two solid surfaces, or between a fluid and a solid surface. Drag forces tend to decrease fluid velocity relative to the solid object in the fluid's path. Unlike other resistive forces, drag force depends on velocity. Drag force is proportional to the relative velocity for low-speed flow and is proportional to the velocity squared for high-speed flow. This distinction between low and high-speed flow is measured by the Reynolds number. Drag is instantaneously related to vorticity dynamics through the Josephson-Anderson relation. == Examples == Examples of drag include: Net aerodynamic or hydrodynamic force: Drag acting opposite to the direction of movement of a solid object such as cars, aircraft, and boat hulls. Viscous drag of fluid in a pipe: Drag force on the immobile pipe restricts the velocity of the fluid through the pipe. In the physics of sports, drag force is necessary to explain the motion of balls, javelins, arrows, and frisbees and the performance of runners and swimmers. For a top sprinter, overcoming drag can require 5% of their energy output. == Types == There are many distinct types of drag caused by different physical interactions between the object and fluid. Two types of drag are relevant for all objects: Form drag, which is caused by the pressure exerted on the object as the fluid flow goes around the object. Form drag is determined by the cross-sectional shape and area of the body. Skin friction drag (or viscous drag), which is caused by friction between the fluid and the surface of the object. The surface may be the outside of an object, such as a boat hull, or the inside of an object, such as the bore of a pipe. There are two types of which are primarily relevant for aircraft: Lift-induced drag appears with wings or a lifting body in aviation and with semi-planing or planing hulls for watercraft Wave drag (aerodynamics) is caused by the presence of shockwaves and first appears at subsonic aircraft speeds when local flow velocities become supersonic. The wave drag of the supersonic Concorde prototype aircraft was reduced at Mach 2 by 1.8% by applying the area rule which extended the rear fuselage 3.73 m (12.2 ft) on the production aircraft. Wave resistance affects watercraft: Wave resistance (ship hydrodynamics) occurs when a solid object is moving along a fluid boundary and making surface waves. Last, in aerodynamics the term "parasitic drag" is often used. Parasitic drag is the sum of form drag and skin friction drag and is entirely negative to an aircraft, in contrast with lift-induced drag which is a consequence of generating lift. === Comparison of form drag and skin friction === The effect of streamlining on the relative proportions of skin friction and form drag is shown in the table at right for an airfoil, which is a streamlined body, and a cylinder, which is a bluff body. Also shown is a flat plate in two different orientations, illustrating the effect of orientation on the relative proportions of skin friction and form drag, and showing the pressure difference between front and back. A body is known as bluff or blunt when the source of drag is dominated by pressure forces, and streamlined if the drag is dominated by viscous forces. For example, road vehicles are bluff bodies. 
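The Reynolds-number criterion mentioned above, which separates the linear (low-speed) drag regime from the quadratic (high-speed) regime, is easy to evaluate. The sketch below estimates Re for two assumed cases, a pollen-sized grain settling in air and a car on a highway, and reports which regime each falls into; the sizes, speeds and fluid properties are illustrative assumptions.

```python
# Estimating the Reynolds number Re = rho * v * D / mu to judge whether the
# linear or the quadratic drag regime applies. All inputs are assumed values.
rho_air = 1.225          # kg/m^3
mu_air = 1.81e-5         # Pa*s

def reynolds(v, D, rho=rho_air, mu=mu_air):
    return rho * v * D / mu

for name, v, D in [("pollen grain", 0.03, 30e-6),   # 30 um grain settling at 3 cm/s
                   ("car",          30.0, 1.5)]:    # ~1.5 m frontal height at 30 m/s
    Re = reynolds(v, D)
    regime = "linear (Stokes-like) drag" if Re < 1 else "quadratic drag"
    print(f"{name}: Re ~ {Re:.3g} -> {regime}")
```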
For aircraft, pressure and friction drag are included in the definition of parasitic drag. Parasite drag is often expressed in terms of a hypothetical "equivalent parasite drag area", described below. === Lift-induced drag === Lift-induced drag (also called induced drag) is drag which occurs as the result of the creation of lift on a three-dimensional lifting body, such as the wing or propeller of an airplane. Induced drag consists primarily of two components: drag due to the creation of trailing vortices (vortex drag); and the presence of additional viscous drag (lift-induced viscous drag) that is not present when lift is zero. The trailing vortices in the flow-field, present in the wake of a lifting body, derive from the turbulent mixing of air from above and below the body which flows in slightly different directions as a consequence of creation of lift. With other parameters remaining the same, as the lift generated by a body increases, so does the lift-induced drag. This means that as the wing's angle of attack increases (up to a maximum called the stalling angle), the lift coefficient also increases, and so too does the lift-induced drag. At the onset of stall, lift is abruptly decreased, as is lift-induced drag, but viscous pressure drag, a component of parasite drag, increases due to the formation of turbulent unattached flow in the wake behind the body. === Parasitic drag === Parasitic drag, or profile drag, is the sum of viscous pressure drag (form drag) and drag due to surface roughness (skin friction drag). Additionally, the presence of multiple bodies in relative proximity may incur so-called interference drag, which is sometimes described as a component of parasitic drag. In aeronautics the parasitic drag and lift-induced drag are often given separately. For an aircraft at low speed, induced drag tends to be relatively greater than parasitic drag because a high angle of attack is required to maintain lift, increasing induced drag. As speed increases, the angle of attack is reduced and the induced drag decreases. Parasitic drag, however, increases because the fluid is flowing more quickly around protruding objects increasing friction or drag. At even higher speeds (transonic), wave drag enters the picture. Each of these forms of drag changes in proportion to the others based on speed. The combined overall drag curve therefore shows a minimum at some airspeed - an aircraft flying at this speed will be at or close to its optimal efficiency. Pilots will use this speed to maximize endurance (minimum fuel consumption), or maximize gliding range in the event of an engine failure. The equivalent parasite area is the area of a flat plate, held perpendicular to the flow, that would produce the same parasite drag as the aircraft. It is a measure used when comparing the drag of different aircraft. For example, the Douglas DC-3 has an equivalent parasite area of 2.20 m2 (23.7 sq ft) and the McDonnell Douglas DC-9, with 30 years of advancement in aircraft design, has an area of 1.91 m2 (20.6 sq ft), although it carried five times as many passengers. == The drag equation == Drag depends on the properties of the fluid and on the size, shape, and speed of the object. 
One way to express this is by means of the drag equation: F D = 1 2 ρ v 2 C D A {\displaystyle F_{\mathrm {D} }\,=\,{\tfrac {1}{2}}\,\rho \,v^{2}\,C_{\mathrm {D} }\,A} where F D {\displaystyle F_{\rm {D}}} is the drag force, ρ {\displaystyle \rho } is the density of the fluid, v {\displaystyle v} is the speed of the object relative to the fluid, A {\displaystyle A} is the cross sectional area, and C D {\displaystyle C_{\rm {D}}} is the drag coefficient – a dimensionless number. The drag coefficient depends on the shape of the object and on the Reynolds number R e = v D ν = ρ v D μ , {\displaystyle \mathrm {Re} ={\frac {vD}{\nu }}={\frac {\rho vD}{\mu }},} where D {\displaystyle D} is some characteristic diameter or linear dimension. Actually, D {\displaystyle D} is the equivalent diameter D e {\displaystyle D_{e}} of the object. For a sphere, D e {\displaystyle D_{e}} is the D of the sphere itself. For a rectangular shape cross-section in the motion direction, D e = 1.30 ⋅ ( a ⋅ b ) 0.625 ( a + b ) 0.25 {\displaystyle D_{e}=1.30\cdot {\frac {(a\cdot b)^{0.625}}{(a+b)^{0.25}}}} , where a and b are the rectangle edges. ν {\displaystyle {\nu }} is the kinematic viscosity of the fluid (equal to the dynamic viscosity μ {\displaystyle {\mu }} divided by the density ρ {\displaystyle {\rho }} ). At low R e {\displaystyle \mathrm {Re} } , C D {\displaystyle C_{\rm {D}}} is asymptotically proportional to R e − 1 {\displaystyle \mathrm {Re} ^{-1}} , which means that the drag is linearly proportional to the speed, i.e. the drag force on a small sphere moving through a viscous fluid is given by the Stokes Law: F d = 3 π μ D v {\displaystyle F_{\rm {d}}=3\pi \mu Dv} At high R e {\displaystyle \mathrm {Re} } , C D {\displaystyle C_{\rm {D}}} is more or less constant, but drag will vary as the square of the speed varies. The graph to the right shows how C D {\displaystyle C_{\rm {D}}} varies with R e {\displaystyle \mathrm {Re} } for the case of a sphere. Since the power needed to overcome the drag force is the product of the force times speed, the power needed to overcome drag will vary as the square of the speed at low Reynolds numbers, and as the cube of the speed at high numbers. It can be demonstrated that drag force can be expressed as a function of a dimensionless number, which is dimensionally identical to the Bejan number. Consequently, drag force and drag coefficient can be a function of Bejan number. In fact, from the expression of drag force it has been obtained: F d = Δ p A w = 1 2 C D A f ν μ l 2 R e L 2 {\displaystyle F_{\rm {d}}=\Delta _{\rm {p}}A_{\rm {w}}={\frac {1}{2}}C_{\rm {D}}A_{\rm {f}}{\frac {\nu \mu }{l^{2}}}\mathrm {Re} _{L}^{2}} and consequently allows expressing the drag coefficient C D {\displaystyle C_{\rm {D}}} as a function of Bejan number and the ratio between wet area A w {\displaystyle A_{\rm {w}}} and front area A f {\displaystyle A_{\rm {f}}} : C D = 2 A w A f B e R e L 2 {\displaystyle C_{\rm {D}}=2{\frac {A_{\rm {w}}}{A_{\rm {f}}}}{\frac {\mathrm {Be} }{\mathrm {Re} _{L}^{2}}}} where R e L {\displaystyle \mathrm {Re} _{L}} is the Reynolds number related to fluid path length L. == At high velocity == As mentioned, the drag equation with a constant drag coefficient gives the force moving through fluid a relatively large velocity, i.e. high Reynolds number, Re > ~1000. This is also called quadratic drag. F D = 1 2 ρ v 2 C D A , {\displaystyle F_{D}\,=\,{\tfrac {1}{2}}\,\rho \,v^{2}\,C_{D}\,A,} The derivation of this equation is presented at Drag equation § Derivation. 
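The drag equation above can be applied directly. The sketch below evaluates the drag force on a car-like body at two speeds, confirming the quadratic scaling with speed; the frontal area and drag coefficient are assumed, typical-looking values rather than data from the text.

```python
# Quadratic drag F_D = 0.5 * rho * v^2 * C_D * A for an assumed car-like body.
rho = 1.225          # air density, kg/m^3
C_D = 0.30           # drag coefficient (assumed)
A = 2.2              # frontal area, m^2 (assumed)

def drag_force(v):
    return 0.5 * rho * v**2 * C_D * A          # N

F_low = drag_force(22.4)                       # ~ 50 mph
F_high = drag_force(44.8)                      # ~ 100 mph
print(F_low, F_high, F_high / F_low)           # the ratio is 4: force grows as v^2
```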
The reference area A is often the orthographic projection of the object, or the frontal area, on a plane perpendicular to the direction of motion. For objects with a simple shape, such as a sphere, this is the cross sectional area. Sometimes a body is a composite of different parts, each with a different reference area (the drag coefficient corresponding to each of those different areas must be determined). In the case of a wing, the reference areas are the same, and the drag force is in the same ratio to the lift force as the drag coefficient is to the lift coefficient. Therefore, the reference for a wing is often the lifting area, sometimes referred to as "wing area" rather than the frontal area. For an object with a smooth surface, and non-fixed separation points (like a sphere or circular cylinder), the drag coefficient may vary with Reynolds number Re, up to extremely high values (Re of the order of 10^7). For an object with well-defined fixed separation points, like a circular disk with its plane normal to the flow direction, the drag coefficient is constant for Re > 3,500. Further, the drag coefficient Cd is, in general, a function of the orientation of the flow with respect to the object (apart from symmetrical objects like a sphere). === Power === Under the assumption that the fluid is not moving relative to the currently used reference system, the power required to overcome the aerodynamic drag is given by: P D = F D ⋅ v = 1 2 ρ v 3 A C D {\displaystyle P_{D}=\mathbf {F} _{D}\cdot \mathbf {v} ={\tfrac {1}{2}}\rho v^{3}AC_{D}} The power needed to push an object through a fluid increases as the cube of the velocity increases. For example, a car cruising on a highway at 50 mph (80 km/h) may require only 10 horsepower (7.5 kW) to overcome aerodynamic drag, but that same car at 100 mph (160 km/h) requires 80 hp (60 kW). With a doubling of speed, the drag force quadruples per the formula. Exerting 4 times the force over a fixed distance produces 4 times as much work. At twice the speed, the work (resulting in displacement over a fixed distance) is done twice as fast. Since power is the rate of doing work, 4 times the work done in half the time requires 8 times the power. When the fluid is moving relative to the reference system, for example, a car driving into a headwind, the power required to overcome the aerodynamic drag is given by the following formula: P D = F D ⋅ v o = 1 2 C D A ρ ( v w + v o ) 2 v o {\displaystyle P_{D}=\mathbf {F} _{D}\cdot \mathbf {v_{o}} ={\tfrac {1}{2}}C_{D}A\rho (v_{w}+v_{o})^{2}v_{o}} where v w {\displaystyle v_{w}} is the wind speed and v o {\displaystyle v_{o}} is the object speed (both relative to ground). === Velocity of a falling object === Velocity as a function of time for an object falling through a non-dense medium, and released at zero relative-velocity v = 0 at time t = 0, is roughly given by a function involving a hyperbolic tangent (tanh): v ( t ) = 2 m g ρ A C D tanh ( t g ρ C D A 2 m ) . {\displaystyle v(t)={\sqrt {\frac {2mg}{\rho AC_{D}}}}\tanh \left(t{\sqrt {\frac {g\rho C_{D}A}{2m}}}\right).\,} The hyperbolic tangent has a limit value of one, for large time t. In other words, velocity asymptotically approaches a maximum value called the terminal velocity vt: v t = 2 m g ρ A C D . {\displaystyle v_{t}={\sqrt {\frac {2mg}{\rho AC_{D}}}}.\,} For an object falling and released at relative velocity v = vi at time t = 0, with vi < vt, the velocity is also given in terms of the hyperbolic tangent function: v ( t ) = v t tanh ( t g v t + arctanh ( v i v t ) ) . 
{\displaystyle v(t)=v_{t}\tanh \left(t{\frac {g}{v_{t}}}+\operatorname {arctanh} \left({\frac {v_{i}}{v_{t}}}\right)\right).\,} For vi > vt, the velocity function is defined in terms of the hyperbolic cotangent function: v ( t ) = v t coth ⁡ ( t g v t + coth − 1 ⁡ ( v i v t ) ) . {\displaystyle v(t)=v_{t}\coth \left(t{\frac {g}{v_{t}}}+\coth ^{-1}\left({\frac {v_{i}}{v_{t}}}\right)\right).\,} The hyperbolic cotangent also has a limit value of one, for large time t. Velocity asymptotically tends to the terminal velocity vt, strictly from above vt. For vi = vt, the velocity is constant: v ( t ) = v t . {\displaystyle v(t)=v_{t}.} These functions are defined by the solution of the following differential equation: g − ρ A C D 2 m v 2 = d v d t . {\displaystyle g-{\frac {\rho AC_{D}}{2m}}v^{2}={\frac {dv}{dt}}.\,} Or, more generically (where F(v) are the forces acting on the object beyond drag): 1 m ∑ F ( v ) − ρ A C D 2 m v 2 = d v d t . {\displaystyle {\frac {1}{m}}\sum F(v)-{\frac {\rho AC_{D}}{2m}}v^{2}={\frac {dv}{dt}}.\,} For a potato-shaped object of average diameter d and of density ρobj, terminal velocity is about v t = g d ρ o b j ρ . {\displaystyle v_{t}={\sqrt {gd{\frac {\rho _{obj}}{\rho }}}}.\,} For objects of water-like density (raindrops, hail, live objects—mammals, birds, insects, etc.) falling in air near Earth's surface at sea level, the terminal velocity is roughly equal to v t = 90 d , {\displaystyle v_{t}=90{\sqrt {d}},\,} with d in metres and vt in m/s. For example, for a human body ( d {\displaystyle d} ≈0.6 m) v t {\displaystyle v_{t}} ≈70 m/s, for a small animal like a cat ( d {\displaystyle d} ≈0.2 m) v t {\displaystyle v_{t}} ≈40 m/s, for a small bird ( d {\displaystyle d} ≈0.05 m) v t {\displaystyle v_{t}} ≈20 m/s, for an insect ( d {\displaystyle d} ≈0.01 m) v t {\displaystyle v_{t}} ≈9 m/s, and so on. Terminal velocity for very small objects (pollen, etc.) at low Reynolds numbers is determined by Stokes law. In short, terminal velocity is higher for larger creatures, and thus potentially more deadly. A creature such as a mouse falling at its terminal velocity is much more likely to survive impact with the ground than a human falling at its terminal velocity. == Low Reynolds numbers: Stokes' drag == The equation for viscous resistance or linear drag is appropriate for objects or particles moving through a fluid at relatively slow speeds (assuming there is no turbulence). Purely laminar flow only exists up to Re = 0.1 under this definition. In this case, the force of drag is approximately proportional to velocity. The equation for viscous resistance is: F D = − b v {\displaystyle \mathbf {F} _{D}=-b\mathbf {v} \,} where: b {\displaystyle b} is a constant that depends on both the material properties of the object and fluid, as well as the geometry of the object; and v {\displaystyle \mathbf {v} } is the velocity of the object. When an object falls from rest, its velocity will be v ( t ) = ( ρ − ρ 0 ) V g b ( 1 − e − b t / m ) {\displaystyle v(t)={\frac {(\rho -\rho _{0})\,V\,g}{b}}\left(1-e^{-b\,t/m}\right)} where: ρ {\displaystyle \rho } is the density of the object, ρ 0 {\displaystyle \rho _{0}} is density of the fluid, V {\displaystyle V} is the volume of the object, g {\displaystyle g} is the acceleration due to gravity (i.e., 9.8 m/s 2 {\displaystyle ^{2}} ), and m {\displaystyle m} is mass of the object. The velocity asymptotically approaches the terminal velocity v t = ( ρ − ρ 0 ) V g b {\displaystyle v_{t}={\frac {(\rho -\rho _{0})Vg}{b}}} . 
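The exponential approach to terminal velocity in this linear-drag case can be checked numerically. The sketch below evaluates v(t) = (ρ − ρ0)Vg/b · (1 − e^(−bt/m)) for an assumed small, dense sphere falling in a viscous fluid; the densities, volume and drag constant are illustrative assumptions.

```python
# Velocity of an object falling from rest under linear (viscous) drag:
# v(t) = (rho - rho0) * V * g / b * (1 - exp(-b t / m)), as given above.
# Densities, volume and drag constant are assumed illustrative values.
import math

rho, rho0 = 8000.0, 900.0     # object and fluid densities, kg/m^3 (assumed)
V = 1.0e-7                    # object volume, m^3 (assumed)
b = 0.05                      # linear drag constant, kg/s (assumed)
g = 9.81
m = rho * V                   # mass of the object, kg

v_terminal = (rho - rho0) * V * g / b
for t in (0.01, 0.05, 0.2):   # seconds
    v = v_terminal * (1.0 - math.exp(-b * t / m))
    print(f"t = {t:5.2f} s: v = {v:.4f} m/s (terminal {v_terminal:.4f} m/s)")
```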
For a given b {\displaystyle b} , denser objects fall more quickly. For the special case of small spherical objects moving slowly through a viscous fluid (and thus at small Reynolds number), George Gabriel Stokes derived an expression for the drag constant: b = 6 π η r {\displaystyle b=6\pi \eta r\,} where r {\displaystyle r} is the Stokes radius of the particle, and η {\displaystyle \eta } is the fluid viscosity. The resulting expression for the drag is known as Stokes' drag: F D = − 6 π η r v . {\displaystyle \mathbf {F} _{D}=-6\pi \eta r\,\mathbf {v} .} For example, consider a small sphere with radius r {\displaystyle r} = 0.5 micrometre (diameter = 1.0 μm) moving through water at a velocity v {\displaystyle v} of 10 μm/s. Using 10−3 Pa·s as the dynamic viscosity of water in SI units, we find a drag force of 0.09 pN. This is about the drag force that a bacterium experiences as it swims through water. The drag coefficient of a sphere can be determined for the general case of a laminar flow with Reynolds numbers less than 2 ⋅ 10 5 {\displaystyle 2\cdot 10^{5}} using the following formula: C D = 24 R e + 4 R e + 0.4 ; R e < 2 ⋅ 10 5 {\displaystyle C_{D}={\frac {24}{Re}}+{\frac {4}{\sqrt {Re}}}+0.4~{\text{;}}~~~~~Re<2\cdot 10^{5}} For Reynolds numbers less than 1, Stokes' law applies and the drag coefficient approaches 24 R e {\displaystyle {\frac {24}{Re}}} ! == Aerodynamics == In aerodynamics, aerodynamic drag, also known as air resistance, is the fluid drag force that acts on any moving solid body in the direction of the air's freestream flow. From the body's perspective (near-field approach), the drag results from forces due to pressure distributions over the body surface, symbolized D p r {\displaystyle D_{pr}} . Forces due to skin friction, which is a result of viscosity, denoted D f {\displaystyle D_{f}} . Alternatively, calculated from the flow field perspective (far-field approach), the drag force results from three natural phenomena: shock waves, vortex sheet, and viscosity. === Overview of aerodynamics === When the airplane produces lift, another drag component results. Induced drag, symbolized D i {\displaystyle D_{i}} , is due to a modification of the pressure distribution due to the trailing vortex system that accompanies the lift production. An alternative perspective on lift and drag is gained from considering the change of momentum of the airflow. The wing intercepts the airflow and forces the flow to move downward. This results in an equal and opposite force acting upward on the wing which is the lift force. The change of momentum of the airflow downward results in a reduction of the rearward momentum of the flow which is the result of a force acting forward on the airflow and applied by the wing to the air flow; an equal but opposite force acts on the wing rearward which is the induced drag. Another drag component, namely wave drag, D w {\displaystyle D_{w}} , results from shock waves in transonic and supersonic flight speeds. The shock waves induce changes in the boundary layer and pressure distribution over the body surface. Therefore, there are three ways of categorizing drag.: 19  Pressure drag and friction drag Profile drag and induced drag Vortex drag, wave drag and wake drag The pressure distribution acting on a body's surface exerts normal forces on the body. Those forces can be added together and the component of that force that acts downstream represents the drag force, D p r {\displaystyle D_{pr}} . 
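The "near-field" description above, adding up the surface pressure forces and keeping the downstream component, can be illustrated with a short numerical integration around a circular cylinder. The fore–aft symmetric pressure distribution used here, Cp = 1 − 4 sin²θ, is the classical inviscid potential-flow result and is an assumption of the sketch; because it is symmetric, the integrated pressure drag comes out essentially zero, anticipating the discussion of pressure recovery and d'Alembert's paradox that follows.

```python
# Pressure drag per unit span on a circular cylinder, found by integrating the
# surface pressure and keeping the downstream (x) component. The assumed
# pressure coefficient Cp = 1 - 4 sin^2(theta) is fore-aft symmetric, so the
# integral is ~ 0: no pressure drag for this idealized, inviscid distribution.
import numpy as np

rho, U, R = 1.225, 10.0, 0.1                  # fluid density, free stream, radius (assumed)
q = 0.5 * rho * U**2                          # dynamic pressure, Pa
theta = np.linspace(0.0, 2.0 * np.pi, 20001)
dtheta = theta[1] - theta[0]

Cp = 1.0 - 4.0 * np.sin(theta)**2             # assumed surface pressure coefficient
p = q * Cp                                    # gauge pressure on the surface, Pa
# Outward normal is (cos(theta), sin(theta)); the x-component of the pressure
# force per unit span on each surface element R*dtheta is -p*cos(theta)*R*dtheta.
integrand = -p * np.cos(theta) * R
D_pressure = np.sum(0.5 * (integrand[:-1] + integrand[1:])) * dtheta   # trapezoidal sum
print(D_pressure)    # ~ 0 N per metre of span, to numerical precision
```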
The nature of these normal forces combines shock wave effects, vortex system generation effects, and wake viscous mechanisms. Viscosity of the fluid has a major effect on drag. In the absence of viscosity, the pressure forces acting to hinder the vehicle are canceled by a pressure force further aft that acts to push the vehicle forward; this is called pressure recovery and the result is that the drag is zero. That is to say, the work the body does on the airflow is reversible and is recovered as there are no frictional effects to convert the flow energy into heat. Pressure recovery acts even in the case of viscous flow. Viscosity, however, results in pressure drag, and it is the dominant component of drag in the case of vehicles with regions of separated flow, in which the pressure recovery is ineffective. The friction drag force, which is a tangential force on the aircraft surface, depends substantially on boundary layer configuration and viscosity. The net friction drag, D f {\displaystyle D_{f}} , is calculated as the downstream projection of the viscous forces evaluated over the body's surface. The sum of friction drag and pressure (form) drag is called viscous drag. This drag component is due to viscosity. === History === The idea that a moving body passing through air or another fluid encounters resistance had been known since the time of Aristotle. According to Mervyn O'Gorman, this was named "drag" by Archibald Reith Low. Louis Charles Breguet's paper of 1922 began efforts to reduce drag by streamlining. Breguet went on to put his ideas into practice by designing several record-breaking aircraft in the 1920s and 1930s. Ludwig Prandtl's boundary layer theory in the 1920s provided the impetus to minimise skin friction. A further major call for streamlining was made by Sir Melvill Jones who provided the theoretical concepts to demonstrate emphatically the importance of streamlining in aircraft design. In 1929 his paper 'The Streamline Airplane' presented to the Royal Aeronautical Society was seminal. He proposed an ideal aircraft that would have minimal drag which led to the concepts of a 'clean' monoplane and retractable undercarriage. The aspect of Jones's paper that most shocked the designers of the time was his plot of the horsepower required versus velocity, for an actual and an ideal plane. By looking at a data point for a given aircraft and extrapolating it horizontally to the ideal curve, the velocity gain for the same power can be seen. When Jones finished his presentation, a member of the audience described the results as being of the same level of importance as the Carnot cycle in thermodynamics. === Power curve in aviation === The interaction of parasitic and induced drag vs. airspeed can be plotted as a characteristic curve. In aviation, this is often referred to as the power curve, and is important to pilots because it shows that, below a certain airspeed, maintaining airspeed counterintuitively requires more thrust as speed decreases, rather than less. The consequences of being "behind the curve" in flight are important and are taught as part of pilot training. At the subsonic airspeeds where the "U" shape of this curve is significant, wave drag has not yet become a factor, and so it is not shown in the curve. === Wave drag in transonic and supersonic flow === Wave drag, sometimes referred to as compressibility drag, is drag that is created when a body moves in a compressible fluid at speeds close to the speed of sound in that fluid. 
In aerodynamics, wave drag consists of multiple components depending on the speed regime of the flight. In transonic flight, wave drag is the result of the formation of shockwaves in the fluid, formed when local areas of supersonic (Mach number greater than 1.0) flow are created. In practice, supersonic flow occurs on bodies traveling well below the speed of sound, as the local speed of air increases as it accelerates over the body to speeds above Mach 1.0. However, full supersonic flow over the vehicle will not develop until well past Mach 1.0. Aircraft flying at transonic speed often incur wave drag through the normal course of operation. In transonic flight, wave drag is commonly referred to as transonic compressibility drag. Transonic compressibility drag increases significantly as the speed of flight increases towards Mach 1.0, dominating other forms of drag at those speeds. In supersonic flight (Mach numbers greater than 1.0), wave drag is the result of shockwaves present in the fluid and attached to the body, typically oblique shockwaves formed at the leading and trailing edges of the body. In highly supersonic flows, or in bodies with turning angles sufficiently large, unattached shockwaves, or bow waves will instead form. Additionally, local areas of transonic flow behind the initial shockwave may occur at lower supersonic speeds, and can lead to the development of additional, smaller shockwaves present on the surfaces of other lifting bodies, similar to those found in transonic flows. In supersonic flow regimes, wave drag is commonly separated into two components, supersonic lift-dependent wave drag and supersonic volume-dependent wave drag. The closed form solution for the minimum wave drag of a body of revolution with a fixed length was found by Sears and Haack, and is known as the Sears-Haack Distribution. Similarly, for a fixed volume, the shape for minimum wave drag is the Von Karman Ogive. The Busemann biplane theoretical concept is not subject to wave drag when operated at its design speed, but is incapable of generating lift in this condition. == d'Alembert's paradox == In 1752 d'Alembert proved that potential flow, the 18th century state-of-the-art inviscid flow theory amenable to mathematical solutions, resulted in the prediction of zero drag. This was in contradiction with experimental evidence, and became known as d'Alembert's paradox. In the 19th century the Navier–Stokes equations for the description of viscous flow were developed by Saint-Venant, Navier and Stokes. Stokes derived the drag around a sphere at very low Reynolds numbers, the result of which is called Stokes' law. In the limit of high Reynolds numbers, the Navier–Stokes equations approach the inviscid Euler equations, of which the potential-flow solutions considered by d'Alembert are solutions. However, all experiments at high Reynolds numbers showed there is drag. Attempts to construct inviscid steady flow solutions to the Euler equations, other than the potential flow solutions, did not result in realistic results. The notion of boundary layers—introduced by Prandtl in 1904, founded on both theory and experiments—explained the causes of drag at high Reynolds numbers. The boundary layer is the thin layer of fluid close to the object's boundary, where viscous effects remain important even when the viscosity is very small (or equivalently the Reynolds number is very large). 
== See also == == References == 'Improved Empirical Model for Base Drag Prediction on Missile Configurations, based on New Wind Tunnel Data', Frank G Moore et al. NASA Langley Center 'Computational Investigation of Base Drag Reduction for a Projectile at Different Flight Regimes', M A Suliman et al. Proceedings of 13th International Conference on Aerospace Sciences & Aviation Technology, ASAT- 13, May 26 – 28, 2009 'Base Drag and Thick Trailing Edges', Sighard F. Hoerner, Air Materiel Command, in: Journal of the Aeronautical Sciences, Oct 1950, pp 622–628 == Bibliography == French, A. P. (1970). Newtonian Mechanics (The M.I.T. Introductory Physics Series) (1st ed.). W. W. White & Company Inc., New York. ISBN 978-0-393-09958-4. G. Falkovich (2011). Fluid Mechanics (A short course for physicists). Cambridge University Press. ISBN 978-1-107-00575-4. Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 978-0-534-40842-8. Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 978-0-7167-0809-4. Huntley, H. E. (1967). Dimensional Analysis. LOC 67-17978. Batchelor, George (2000). An introduction to fluid dynamics. Cambridge Mathematical Library (2nd ed.). Cambridge University Press. ISBN 978-0-521-66396-0. MR 1744638. L. J. Clancy (1975), Aerodynamics, Pitman Publishing Limited, London. ISBN 978-0-273-01120-0 Anderson, John D. Jr. (2000); Introduction to Flight, Fourth Edition, McGraw Hill Higher Education, Boston, Massachusetts, USA. 8th ed. 2015, ISBN 978-0078027673. == External links == Educational materials on air resistance Aerodynamic Drag and its effect on the acceleration and top speed of a vehicle. Vehicle Aerodynamic Drag calculator based on drag coefficient, frontal area and speed. Smithsonian National Air and Space Museum's How Things Fly website Effect of dimples on a golf ball and a car
Wikipedia/Drag_(physics)
The pound of force or pound-force (symbol: lbf, sometimes lbf) is a unit of force used in some systems of measurement, including English Engineering units and the foot–pound–second system. Pound-force should not be confused with pound-mass (lb), often simply called "pound", which is a unit of mass; nor should these be confused with foot-pound (ft⋅lbf), a unit of energy, or pound-foot (lbf⋅ft), a unit of torque. == Definitions == The pound-force is equal to the gravitational force exerted on a mass of one avoirdupois pound on the surface of Earth. Since the 18th century, the unit has been used in low-precision measurements, for which small changes in Earth's gravity (which varies from equator to pole by up to half a percent) can safely be neglected. The 20th century, however, brought the need for a more precise definition, requiring a standardized value for acceleration due to gravity. === Product of avoirdupois pound and standard gravity === The pound-force is the product of one avoirdupois pound (exactly 0.45359237 kg) and the standard acceleration due to gravity, approximately 32.174049 ft/s2 (9.80665 m/s2). The standard values of acceleration of the standard gravitational field (gn) and the international avoirdupois pound (lb) result in a pound-force equal to 32.174049 ft⋅lb/s2 (4.4482216152605 N). 1 lbf = 1 lb × g n = 1 lb × 9.80665 m s 2 / 0.3048 m ft ≈ 1 lb × 32.174049 f t s 2 ≈ 32.174049 f t ⋅ l b s 2 1 lbf = 1 lb × 0.45359237 kg lb × g n = 0.45359237 kg × 9.80665 m s 2 = 4.4482216152605 N {\displaystyle {\begin{aligned}1\,{\text{lbf}}&=1\,{\text{lb}}\times g_{\text{n}}\\&=1\,{\text{lb}}\times 9.80665\,{\tfrac {\text{m}}{{\text{s}}^{2}}}/0.3048\,{\tfrac {\text{m}}{\text{ft}}}\\&\approx 1\,{\text{lb}}\times 32.174049\,\mathrm {\tfrac {ft}{s^{2}}} \\&\approx 32.174049\,\mathrm {\tfrac {ft{\cdot }lb}{s^{2}}} \\1\,{\text{lbf}}&=1\,{\text{lb}}\times 0.45359237\,{\tfrac {\text{kg}}{\text{lb}}}\times g_{\text{n}}\\&=0.45359237\,{\text{kg}}\times 9.80665\,{\tfrac {\text{m}}{{\text{s}}^{2}}}\\&=4.4482216152605\,{\text{N}}\end{aligned}}} This definition can be rephrased in terms of the slug. A slug has a mass of 32.174049 lb. A pound-force is the amount of force required to accelerate a slug at a rate of 1 ft/s2, so: 1 lbf = 1 slug × 1 ft s 2 = 1 slug ⋅ ft s 2 {\displaystyle {\begin{aligned}1\,{\text{lbf}}&=1\,{\text{slug}}\times 1\,{\tfrac {\text{ft}}{{\text{s}}^{2}}}\\&=1\,{\tfrac {{\text{slug}}\cdot {\text{ft}}}{{\text{s}}^{2}}}\end{aligned}}} == Conversion to other units == == Foot–pound–second (FPS) systems of units == In some contexts, the term "pound" is used almost exclusively to refer to the unit of force and not the unit of mass. In those applications, the preferred unit of mass is the slug, i.e. lbf⋅s2/ft. In other contexts, the unit "pound" refers to a unit of mass. The international standard symbol for the pound as a unit of mass is lb. In the "engineering" systems, the weight of the mass unit (pound-mass) on Earth's surface is approximately equal to the force unit (pound-force). This is convenient because one pound-mass exerts one pound-force of weight under standard gravity. Note, however, that unlike the other systems, the force unit is not equal to the mass unit multiplied by the acceleration unit—the use of Newton's second law, F = m ⋅ a, requires another factor, gc, usually taken to be 32.174049 (lb⋅ft)/(lbf⋅s2). 
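As a numerical illustration of the definitions above, the following sketch (not part of the original article) converts the pound-force to newtons and applies the engineering-system form of Newton's second law with the factor gc; the 10 lb mass and 5 ft/s2 acceleration are assumed example values.

```python
LB_TO_KG = 0.45359237   # kg per avoirdupois pound (exact, by definition)
G_N = 9.80665           # m/s^2, standard gravity
G_C = 32.174049         # (lb*ft)/(lbf*s^2), the conversion factor g_c

# One pound-force expressed in SI units
print(LB_TO_KG * G_N)   # -> 4.4482216152605 N

# Engineering-system form of Newton's second law: F = m*a / g_c
mass_lb = 10.0          # lb (assumed example value)
accel_ft_s2 = 5.0       # ft/s^2 (assumed example value)
print(mass_lb * accel_ft_s2 / G_C)   # -> about 1.554 lbf
```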
"Absolute" systems are coherent systems of units: by using the slug as the unit of mass, the "gravitational" FPS system (left column) avoids the need for such a constant. The SI is an "absolute" metric system with kilogram and meter as base units. == Pound of thrust == The term pound of thrust is an alternative name for pound-force in specific contexts. It is frequently seen in US sources on jet engines and rocketry, some of which continue to use the FPS notation. For example, the thrust produced by each of the Space Shuttle's two Solid Rocket Boosters was 3,300,000 pounds-force (14.7 MN), together 6,600,000 pounds-force (29.4 MN). == See also == == Notes and references == == General sources == Obert, Edward F. (1948). Thermodynamics. New York: D. J. Leggett Book Company. Chapter I "Survey of Dimensions and Units", pp. 1-24.
Wikipedia/Pound-force
Reviews of Modern Physics (often abbreviated RMP) is a quarterly peer-reviewed scientific journal published by the American Physical Society. It was established in 1929 and the current editor-in-chief is Michael Thoennessen. The journal publishes review articles, usually by established researchers, on all aspects of physics and related fields. The reviews are usually accessible to non-specialists and serve as introductory material for graduate students: they survey recent work, discuss key problems to be solved, and provide perspectives toward the end. The journal has published several historically significant papers on quantum foundations, as well as on the development of the Standard Model of particle physics. == References == == External links == Official website
Wikipedia/Reviews_of_Modern_Physics
The kilogram-force (kgf or kgF), or kilopond (kp, from Latin: pondus, lit. 'weight'), is a non-standard gravitational metric unit of force. It is not accepted for use with the International System of Units (SI) and is deprecated for most uses. The kilogram-force is equal to the magnitude of the force exerted on one kilogram of mass in a 9.80665 m/s2 gravitational field (standard gravity, a conventional value approximating the average magnitude of gravity on Earth). That is, it is the weight of a kilogram under standard gravity. One kilogram-force is defined as 9.80665 N. Similarly, a gram-force is 9.80665 mN, and a milligram-force is 9.80665 μN. == History == The gram-force and kilogram-force were never well-defined units until the CGPM adopted a standard acceleration of gravity of 9.80665 m/s2 for this purpose in 1901, though they had been used in low-precision measurements of force before that time. Even then, the proposal to define the kilogram-force as a standard unit of force was explicitly rejected. Instead, the newton was proposed in 1913 and accepted in 1948. The kilogram-force has never been a part of the International System of Units (SI), which was introduced in 1960. The SI unit of force is the newton. Prior to this, the units were widely used in much of the world. They are still in use for some purposes; for example, they are used to specify the tension of bicycle spokes, the draw weight of bows in archery, and the tensile strength of electronics bond wire, for informal references to pressure (as the technically incorrect kilogram per square centimetre, omitting -force, the kilogram-force per square centimetre being the technical atmosphere, the value of which is very near those of both the bar and the standard atmosphere), and to define the "metric horsepower" (PS) as 75 metre-kiloponds per second. In addition, the kilogram-force was the standard unit used for Vickers hardness testing. In 1940s Germany, the thrust of a rocket engine was measured in kilograms-force, and in the Soviet Union it remained the primary unit for thrust in the Russian space program until at least the late 1980s. Dividing the thrust in kilograms-force by the mass of an engine or a rocket in kilograms conveniently gives the thrust-to-weight ratio, and dividing the thrust by the propellant consumption rate (mass flow rate) in kilograms per second gives the specific impulse in seconds. The term "kilopond" has been declared obsolete. == Related units == The tonne-force, metric ton-force, megagram-force, and megapond (Mp) are each 1000 kilograms-force. The decanewton or dekanewton (daN), exactly 10 N, is used in some fields as an approximation to the kilogram-force, because it is close to the 9.80665 N of 1 kgf. The gram-force is 1⁄1000 of a kilogram-force. == See also == Metrology Avoirdupois == References ==
Wikipedia/Kilogram-force
Quantum Field Theory in a Nutshell is a textbook by Anthony Zee covering quantum field theory. The book has been adopted by many universities, including Harvard University, Princeton University, the University of California, Berkeley, the California Institute of Technology, Columbia University, Stanford University, and Brown University, among others. == Response == Stephen Barr said about the book, "Like the famous Feynman Lectures on Physics, this book has the flavor of a good blackboard lecture". Michael Peskin's review in Classical and Quantum Gravity said, "This is quantum field theory taught at the knee of an eccentric uncle; one who loves the grandeur of his subject, has a keen eye for a slick argument, and is eager to share his repertoire of anecdotes about Feynman, Fermi, and all of his heroes [...] This [book] can help [students] love the subject and race to its frontier". David Tong called it a "charming book, where emphasis is placed on physical understanding and the author isn’t afraid to hide the ugly truth when necessary. It contains many gems". Zvi Bern wrote, "Zee has an infectious enthusiasm and a remarkable talent for slicing through technical mumbo jumbo to arrive at the heart of a problem". == References ==
Wikipedia/Quantum_Field_Theory_in_a_Nutshell
In mathematics and physics, a scalar field is a function associating a single number to each point in a region of space – possibly physical space. The scalar may either be a pure mathematical number (dimensionless) or a scalar physical quantity (with units). In a physical context, scalar fields are required to be independent of the choice of reference frame. That is, any two observers using the same units will agree on the value of the scalar field at the same absolute point in space (or spacetime) regardless of their respective points of origin. Examples used in physics include the temperature distribution throughout space, the pressure distribution in a fluid, and spin-zero quantum fields, such as the Higgs field. These fields are the subject of scalar field theory. == Definition == Mathematically, a scalar field on a region U is a real or complex-valued function or distribution on U. The region U may be a set in some Euclidean space, Minkowski space, or more generally a subset of a manifold, and it is typical in mathematics to impose further conditions on the field, such that it be continuous or often continuously differentiable to some order. A scalar field is a tensor field of order zero, and the term "scalar field" may be used to distinguish a function of this kind from a more general tensor field, density, or differential form. Physically, a scalar field is additionally distinguished by having units of measurement associated with it. In this context, a scalar field should also be independent of the coordinate system used to describe the physical system—that is, any two observers using the same units must agree on the numerical value of a scalar field at any given point of physical space. Scalar fields are contrasted with other physical quantities such as vector fields, which associate a vector to every point of a region, as well as tensor fields and spinor fields. More subtly, scalar fields are often contrasted with pseudoscalar fields. == Uses in physics == In physics, scalar fields often describe the potential energy associated with a particular force. The force is a vector field, which can be obtained as the negative of the gradient of the potential energy scalar field. Examples include: Potential fields, such as the Newtonian gravitational potential, or the electric potential in electrostatics, are scalar fields which describe the more familiar forces. A temperature, humidity, or pressure field, such as those used in meteorology. === Examples in quantum theory and relativity === In quantum field theory, a scalar field is associated with spin-0 particles. The scalar field may be real or complex valued. Complex scalar fields represent charged particles. These include the Higgs field of the Standard Model, as well as the charged pions mediating the strong nuclear interaction. In the Standard Model of elementary particles, a scalar Higgs field is used to give the leptons and massive vector bosons their mass, via a combination of the Yukawa interaction and the spontaneous symmetry breaking. This mechanism is known as the Higgs mechanism. A candidate for the Higgs boson was first detected at CERN in 2012. In scalar theories of gravitation, scalar fields are used to describe the gravitational field. Scalar–tensor theories represent the gravitational interaction through both a tensor and a scalar. Examples of such attempts are the Jordan theory as a generalization of the Kaluza–Klein theory and the Brans–Dicke theory. 
Scalar fields like the Higgs field can be found within scalar–tensor theories, using the Higgs field of the Standard Model as the scalar field. This field interacts gravitationally and Yukawa-like (short-ranged) with the particles that get mass through it. Scalar fields are found within superstring theories as dilaton fields, breaking the conformal symmetry of the string, though balancing the quantum anomalies of this tensor. Scalar fields are hypothesized to have caused the highly accelerated expansion of the early universe (inflation), helping to solve the horizon problem and giving a hypothetical reason for the non-vanishing cosmological constant of cosmology. Massless (i.e. long-ranged) scalar fields in this context are known as inflatons. Massive (i.e. short-ranged) scalar fields have also been proposed, using for example Higgs-like fields. == Other kinds of fields == Vector fields, which associate a vector to every point in space. Some examples of vector fields include the air flow (wind) in meteorology. Tensor fields, which associate a tensor to every point in space. For example, in general relativity gravitation is associated with the tensor field called Einstein tensor. In Kaluza–Klein theory, spacetime is extended to five dimensions and its Riemann curvature tensor can be separated out into ordinary four-dimensional gravitation plus an extra set, which is equivalent to Maxwell's equations for the electromagnetic field, plus an extra scalar field known as the "dilaton". (The dilaton scalar is also found among the massless bosonic fields in string theory.) == See also == Scalar field theory Vector boson Vector-valued function == References ==
Wikipedia/Scalar_function
In physics, a force is an influence that can cause an object to change its velocity unless counterbalanced by other forces. In mechanics, force makes ideas like 'pushing' or 'pulling' mathematically precise. Because the magnitude and direction of a force are both important, force is a vector quantity. The SI unit of force is the newton (N), and force is often represented by the symbol F. Force plays an important role in classical mechanics. The concept of force is central to all three of Newton's laws of motion. Types of forces often encountered in classical mechanics include elastic, frictional, contact or "normal" forces, and gravitational. The rotational version of force is torque, which produces changes in the rotational speed of an object. In an extended body, each part applies forces on the adjacent parts; the distribution of such forces through the body is the internal mechanical stress. In the case of multiple forces, if the net force on an extended body is zero the body is in equilibrium. In modern physics, which includes relativity and quantum mechanics, the laws governing motion are revised to rely on fundamental interactions as the ultimate origin of force. However, the understanding of force provided by classical mechanics is useful for practical purposes. == Development of the concept == Philosophers in antiquity used the concept of force in the study of stationary and moving objects and simple machines, but thinkers such as Aristotle and Archimedes retained fundamental errors in understanding force. In part, this was due to an incomplete understanding of the sometimes non-obvious force of friction and a consequently inadequate view of the nature of natural motion. A fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about motion and force were eventually corrected by Galileo Galilei and Sir Isaac Newton. With his mathematical insight, Newton formulated laws of motion that were not improved for over two hundred years. By the early 20th century, Einstein developed a theory of relativity that correctly predicted the action of forces on objects with increasing momenta near the speed of light and also provided insight into the forces produced by gravitation and inertia. With modern insights into quantum mechanics and technology that can accelerate particles close to the speed of light, particle physics has devised a Standard Model to describe forces between particles smaller than atoms. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known: in order of decreasing strength, they are: strong, electromagnetic, weak, and gravitational.: 2–10 : 79  High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction. == Pre-Newtonian concepts == Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines. The mechanical advantage given by a simple machine allowed for less force to be used in exchange for that force acting over a greater distance for the same amount of work. Analysis of the characteristics of forces ultimately culminated in the work of Archimedes who was especially famous for formulating a treatment of buoyant forces inherent in fluids. 
Aristotle provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the terrestrial sphere contained four elements that come to rest at different "natural places" therein. Aristotle believed that motionless objects on Earth, those composed mostly of the elements earth and water, were in their natural place when on the ground, and that they stay that way if left alone. He distinguished between the innate tendency of objects to find their "natural place" (e.g., for heavy bodies to fall), which led to "natural motion", and unnatural or forced motion, which required continued application of a force. This theory, based on the everyday experience of how objects move, such as the constant application of a force needed to keep a cart moving, had conceptual trouble accounting for the behavior of projectiles, such as the flight of arrows. An archer causes the arrow to move at the start of the flight, and it then sails through the air even though no discernible efficient cause acts upon it. Aristotle was aware of this problem and proposed that the air displaced through the projectile's path carries the projectile to its target. This explanation requires a continuous medium such as air to sustain the motion. Though Aristotelian physics was criticized as early as the 6th century, its shortcomings would not be corrected until the 17th century work of Galileo Galilei, who was influenced by the late medieval idea that objects in forced motion carried an innate force of impetus. Galileo constructed an experiment in which stones and cannonballs were both rolled down an incline to disprove the Aristotelian theory of motion. He showed that the bodies were accelerated by gravity to an extent that was independent of their mass and argued that objects retain their velocity unless acted on by a force, for example friction. Galileo's idea that force is needed to change motion rather than to sustain it, further improved upon by Isaac Beeckman, René Descartes, and Pierre Gassendi, became a key principle of Newtonian physics. In the early 17th century, before Newton's Principia, the term "force" (Latin: vis) was applied to many physical and non-physical phenomena, e.g., for an acceleration of a point. The product of a point mass and the square of its velocity was named vis viva (live force) by Leibniz. The modern concept of force corresponds to Newton's vis motrix (accelerating force). == Newtonian mechanics == Sir Isaac Newton described the motion of all objects using the concepts of inertia and force. In 1687, Newton published his magnum opus, Philosophiæ Naturalis Principia Mathematica. In this work Newton set out three laws of motion that have dominated the way forces are described in physics to this day. The precise ways in which Newton's laws are expressed have evolved in step with new mathematical approaches. === First law === Newton's first law of motion states that the natural behavior of an object at rest is to continue being at rest, and the natural behavior of an object moving at constant speed in a straight line is to continue moving at that constant speed along that straight line. The latter follows from the former because of the principle that the laws of physics are the same for all inertial observers, i.e., all observers who do not feel themselves to be in motion. An observer moving in tandem with an object will see it as being at rest. 
So, its natural behavior will be to remain at rest with respect to that observer, which means that an observer who sees it moving at constant speed in a straight line will see it continuing to do so.: 1–7  === Second law === According to the first law, motion at constant speed in a straight line does not need a cause. It is change in motion that requires a cause, and Newton's second law gives the quantitative relationship between force and change of motion. Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. If the mass of the object is constant, this law implies that the acceleration of an object is directly proportional to the net force acting on the object, is in the direction of the net force, and is inversely proportional to the mass of the object.: 204–207  A modern statement of Newton's second law is a vector equation: F = d p d t , {\displaystyle \mathbf {F} ={\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}},} where p {\displaystyle \mathbf {p} } is the momentum of the system, and F {\displaystyle \mathbf {F} } is the net (vector sum) force.: 399  If a body is in equilibrium, there is zero net force by definition (balanced forces may be present nevertheless). In contrast, the second law states that if there is an unbalanced force acting on an object it will result in the object's momentum changing over time. In common engineering applications the mass in a system remains constant, allowing a simple algebraic form for the second law. By the definition of momentum, F = d p d t = d ( m v ) d t , {\displaystyle \mathbf {F} ={\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}}={\frac {\mathrm {d} \left(m\mathbf {v} \right)}{\mathrm {d} t}},} where m is the mass and v {\displaystyle \mathbf {v} } is the velocity.: 9-1,9-2  If Newton's second law is applied to a system of constant mass, m may be moved outside the derivative operator. The equation then becomes F = m d v d t . {\displaystyle \mathbf {F} =m{\frac {\mathrm {d} \mathbf {v} }{\mathrm {d} t}}.} By substituting the definition of acceleration, the algebraic version of Newton's second law is derived: F = m a . {\displaystyle \mathbf {F} =m\mathbf {a} .} === Third law === Whenever one body exerts a force on another, the latter simultaneously exerts an equal and opposite force on the first. In vector form, if F 1 , 2 {\displaystyle \mathbf {F} _{1,2}} is the force of body 1 on body 2 and F 2 , 1 {\displaystyle \mathbf {F} _{2,1}} that of body 2 on body 1, then F 1 , 2 = − F 2 , 1 . {\displaystyle \mathbf {F} _{1,2}=-\mathbf {F} _{2,1}.} This law is sometimes referred to as the action-reaction law, with F 1 , 2 {\displaystyle \mathbf {F} _{1,2}} called the action and − F 2 , 1 {\displaystyle -\mathbf {F} _{2,1}} the reaction. Newton's third law is a result of applying symmetry to situations where forces can be attributed to the presence of different objects. The third law means that all forces are interactions between different bodies, and thus that there is no such thing as a unidirectional force or a force that acts on only one body. In a system composed of object 1 and object 2, the net force on the system due to their mutual interactions is zero: F 1 , 2 + F 2 , 1 = 0. {\displaystyle \mathbf {F} _{1,2}+\mathbf {F} _{2,1}=0.} More generally, in a closed system of particles, all internal forces are balanced. The particles may accelerate with respect to each other but the center of mass of the system will not accelerate. 
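A minimal numerical sketch of this point (not part of the original article; the masses, force, and time step are assumed example values) shows two bodies exchanging equal and opposite forces: each body accelerates, but the total momentum, and hence the motion of the center of mass, is unchanged.

```python
# Two point masses exert equal and opposite forces on each other (third law).
# Each momentum changes, but the total momentum stays constant, so the
# center of mass does not accelerate.
m1, m2 = 2.0, 3.0      # kg (assumed)
v1, v2 = 0.0, 0.0      # m/s, initial velocities
F = 5.0                # N, force of body 2 on body 1; body 1 feels +F, body 2 feels -F
dt, steps = 0.001, 1000

for _ in range(steps):
    v1 += (+F / m1) * dt   # second law applied to body 1
    v2 += (-F / m2) * dt   # second law applied to body 2 (opposite force)

print(m1 * v1 + m2 * v2)   # total momentum -> 0.0 (up to floating-point rounding)
```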
If an external force acts on the system, it will make the center of mass accelerate in proportion to the magnitude of the external force divided by the mass of the system.: 19-1  Combining Newton's second and third laws, it is possible to show that the linear momentum of any closed system is conserved. In a system of two particles, if p 1 {\displaystyle \mathbf {p} _{1}} is the momentum of object 1 and p 2 {\displaystyle \mathbf {p} _{2}} the momentum of object 2, then d p 1 d t + d p 2 d t = F 1 , 2 + F 2 , 1 = 0. {\displaystyle {\frac {\mathrm {d} \mathbf {p} _{1}}{\mathrm {d} t}}+{\frac {\mathrm {d} \mathbf {p} _{2}}{\mathrm {d} t}}=\mathbf {F} _{1,2}+\mathbf {F} _{2,1}=0.} Using similar arguments, this can be generalized to a system with an arbitrary number of particles. In general, as long as all forces are due to the interaction of objects with mass, it is possible to define a system such that net momentum is never lost nor gained.: ch.12  === Defining "force" === Some textbooks use Newton's second law as a definition of force. However, for the equation F = m a {\displaystyle \mathbf {F} =m\mathbf {a} } for a constant mass m {\displaystyle m} to then have any predictive content, it must be combined with further information.: 12-1  Moreover, inferring that a force is present because a body is accelerating is only valid in an inertial frame of reference.: 59  The question of which aspects of Newton's laws to take as definitions and which to regard as holding physical content has been answered in various ways,: vii  which ultimately do not affect how the theory is used in practice. Notable physicists, philosophers and mathematicians who have sought a more explicit definition of the concept of force include Ernst Mach and Walter Noll. == Combining forces == Forces act in a particular direction and have sizes dependent upon how strong the push or pull is. Because of these characteristics, forces are classified as "vector quantities". This means that forces follow a different set of mathematical rules than physical quantities that do not have direction (denoted scalar quantities). For example, when determining what happens when two forces act on the same object, it is necessary to know both the magnitude and the direction of both forces to calculate the result. If both of these pieces of information are not known for each force, the situation is ambiguous.: 197  Historically, forces were first quantitatively investigated in conditions of static equilibrium where several forces canceled each other out. Such experiments demonstrate the crucial properties that forces are additive vector quantities: they have magnitude and direction. When two forces act on a point particle, the resulting force, the resultant (also called the net force), can be determined by following the parallelogram rule of vector addition: the addition of two vectors represented by sides of a parallelogram gives an equivalent resultant vector that is equal in magnitude and direction to the diagonal of the parallelogram. The magnitude of the resultant varies from the difference of the magnitudes of the two forces to their sum, depending on the angle between their lines of action.: ch.12  Free-body diagrams can be used as a convenient way to keep track of forces acting on a system. Ideally, these diagrams are drawn with the angles and relative magnitudes of the force vectors preserved so that graphical vector addition can be done to determine the net force. 
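The parallelogram rule described above is equivalent to adding forces component by component. A brief sketch follows (not from the article; the magnitudes and angles are assumed example values).

```python
import math

def resultant(forces):
    """forces: list of (magnitude in N, direction in degrees) pairs.
    Returns (magnitude, direction) of the vector sum."""
    fx = sum(m * math.cos(math.radians(a)) for m, a in forces)
    fy = sum(m * math.sin(math.radians(a)) for m, a in forces)
    return math.hypot(fx, fy), math.degrees(math.atan2(fy, fx))

# 10 N toward the east (0 degrees) plus 10 N toward the north (90 degrees)
print(resultant([(10.0, 0.0), (10.0, 90.0)]))   # -> (about 14.14 N, 45.0 degrees)
```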
As well as being added, forces can also be resolved into independent components at right angles to each other. A horizontal force pointing northeast can therefore be split into two forces, one pointing north, and one pointing east. Summing these component forces using vector addition yields the original force. Resolving force vectors into components of a set of basis vectors is often a more mathematically clean way to describe forces than using magnitudes and directions. This is because, for orthogonal components, the components of the vector sum are uniquely determined by the scalar addition of the components of the individual vectors. Orthogonal components are independent of each other because forces acting at ninety degrees to each other have no effect on the magnitude or direction of the other. Choosing a set of orthogonal basis vectors is often done by considering what set of basis vectors will make the mathematics most convenient. Choosing a basis vector that is in the same direction as one of the forces is desirable, since that force would then have only one non-zero component. Orthogonal force vectors can be three-dimensional with the third component being at right angles to the other two.: ch.12  === Equilibrium === When all the forces that act upon an object are balanced, then the object is said to be in a state of equilibrium.: 566  Hence, equilibrium occurs when the resultant force acting on a point particle is zero (that is, the vector sum of all forces is zero). When dealing with an extended body, it is also necessary that the net torque be zero. A body is in static equilibrium with respect to a frame of reference if it is at rest and not accelerating, whereas a body in dynamic equilibrium is moving at a constant speed in a straight line, i.e., moving but not accelerating. What one observer sees as static equilibrium, another can see as dynamic equilibrium and vice versa.: 566  ==== Static ==== Static equilibrium was understood well before the invention of classical mechanics. Objects that are not accelerating have zero net force acting on them. The simplest case of static equilibrium occurs when two forces are equal in magnitude but opposite in direction. For example, an object on a level surface is pulled (attracted) downward toward the center of the Earth by the force of gravity. At the same time, a force is applied by the surface that resists the downward force with equal upward force (called a normal force). The situation produces zero net force and hence no acceleration. Pushing against an object that rests on a frictional surface can result in a situation where the object does not move because the applied force is opposed by static friction, generated between the object and the surface. For a situation with no movement, the static friction force exactly balances the applied force resulting in no acceleration. The static friction increases or decreases in response to the applied force up to an upper limit determined by the characteristics of the contact between the surface and the object. A static equilibrium between two forces is the most usual way of measuring forces, using simple devices such as weighing scales and spring balances. For example, an object suspended on a vertical spring scale experiences the force of gravity acting on the object balanced by the "spring reaction force", which equals the object's weight. 
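A small worked example of this kind of measurement (not from the article; the spring constant and observed stretch are assumed values) shows how a spring scale converts a displacement into a weight and a mass.

```python
k = 200.0      # N/m, assumed spring constant of the scale
g = 9.80665    # m/s^2, standard gravity
x = 0.049      # m, assumed observed stretch of the spring

weight = k * x        # N; in static equilibrium the spring force balances gravity
mass = weight / g     # kg, inferred from the balance condition k*x = m*g
print(f"weight = {weight:.2f} N, mass = {mass:.3f} kg")   # ~9.80 N, ~0.999 kg
```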
Using such tools, some quantitative force laws were discovered: that the force of gravity is proportional to volume for objects of constant density (widely exploited for millennia to define standard weights); Archimedes' principle for buoyancy; Archimedes' analysis of the lever; Boyle's law for gas pressure; and Hooke's law for springs. These were all formulated and experimentally verified before Isaac Newton expounded his three laws of motion.: ch.12  ==== Dynamic ==== Dynamic equilibrium was first described by Galileo who noticed that certain assumptions of Aristotelian physics were contradicted by observations and logic. Galileo realized that simple velocity addition demands that the concept of an "absolute rest frame" could not exist. Galileo concluded that motion at a constant velocity was completely equivalent to rest. This was contrary to Aristotle's notion of a "natural state" of rest that objects with mass naturally approached. Simple experiments showed that Galileo's understanding of the equivalence of constant velocity and rest was correct. For example, if a mariner dropped a cannonball from the crow's nest of a ship moving at a constant velocity, Aristotelian physics would have the cannonball fall straight down while the ship moved beneath it. Thus, in an Aristotelian universe, the falling cannonball would land behind the foot of the mast of a moving ship. When this experiment is actually conducted, the cannonball always falls at the foot of the mast, as if the cannonball knows to travel with the ship despite being separated from it. Since there is no forward horizontal force being applied on the cannonball as it falls, the only conclusion left is that the cannonball continues to move with the same velocity as the boat as it falls. Thus, no force is required to keep the cannonball moving at the constant forward velocity. Moreover, any object traveling at a constant velocity must be subject to zero net force (resultant force). This is the definition of dynamic equilibrium: when all the forces on an object balance but it still moves at a constant velocity. A simple case of dynamic equilibrium occurs in constant velocity motion across a surface with kinetic friction. In such a situation, a force is applied in the direction of motion while the kinetic friction force exactly opposes the applied force. This results in zero net force, but since the object started with a non-zero velocity, it continues to move with a non-zero velocity. Aristotle misinterpreted this motion as being caused by the applied force. When kinetic friction is taken into consideration it is clear that there is no net force causing constant velocity motion.: ch.12  == Examples of forces in classical mechanics == Some forces are consequences of the fundamental ones. In such situations, idealized models can be used to gain physical insight. For example, each solid object is considered a rigid body. === Gravitational force or Gravity === What we now call gravity was not identified as a universal force until the work of Isaac Newton. Before Newton, the tendency for objects to fall towards the Earth was not understood to be related to the motions of celestial objects. Galileo was instrumental in describing the characteristics of falling objects by determining that the acceleration of every object in free-fall was constant and independent of the mass of the object. 
Today, this acceleration due to gravity towards the surface of the Earth is usually designated as g {\displaystyle \mathbf {g} } and has a magnitude of about 9.81 meters per second squared (this measurement is taken from sea level and may vary depending on location), and points toward the center of the Earth. This observation means that the force of gravity on an object at the Earth's surface is directly proportional to the object's mass. Thus an object that has a mass of m {\displaystyle m} will experience a force: F = m g . {\displaystyle \mathbf {F} =m\mathbf {g} .} For an object in free-fall, this force is unopposed and the net force on the object is its weight. For objects not in free-fall, the force of gravity is opposed by the reaction forces applied by their supports. For example, a person standing on the ground experiences zero net force, since a normal force (a reaction force) is exerted by the ground upward on the person that counterbalances the person's weight, which is directed downward.: ch.12  Newton's contribution to gravitational theory was to unify the motions of heavenly bodies, which Aristotle had assumed were in a natural state of constant motion, with falling motion observed on the Earth. He proposed a law of gravity that could account for the celestial motions that had been described earlier using Kepler's laws of planetary motion. Newton came to realize that the effects of gravity might be observed in different ways at larger distances. In particular, Newton determined that the acceleration of the Moon around the Earth could be ascribed to the same force of gravity if the acceleration due to gravity decreased as an inverse square law. Further, Newton realized that the acceleration of a body due to gravity is proportional to the mass of the other attracting body. Combining these ideas gives a formula that relates the mass ( m ⊕ {\displaystyle m_{\oplus }} ) and the radius ( R ⊕ {\displaystyle R_{\oplus }} ) of the Earth to the gravitational acceleration: g = − G m ⊕ R ⊕ 2 r ^ , {\displaystyle \mathbf {g} =-{\frac {Gm_{\oplus }}{{R_{\oplus }}^{2}}}{\hat {\mathbf {r} }},} where the vector direction is given by r ^ {\displaystyle {\hat {\mathbf {r} }}} , the unit vector directed outward from the center of the Earth. In this equation, a dimensional constant G {\displaystyle G} is used to describe the relative strength of gravity. This constant has come to be known as the Newtonian constant of gravitation, though its value was unknown in Newton's lifetime. Not until 1798 was Henry Cavendish able to make the first measurement of G {\displaystyle G} using a torsion balance; this was widely reported in the press as a measurement of the mass of the Earth since knowing G {\displaystyle G} could allow one to solve for the Earth's mass given the above equation. Newton realized that since all celestial bodies followed the same laws of motion, his law of gravity had to be universal. Succinctly stated, Newton's law of gravitation states that the force on a spherical object of mass m 1 {\displaystyle m_{1}} due to the gravitational pull of mass m 2 {\displaystyle m_{2}} is F = − G m 1 m 2 r 2 r ^ , {\displaystyle \mathbf {F} =-{\frac {Gm_{1}m_{2}}{r^{2}}}{\hat {\mathbf {r} }},} where r {\displaystyle r} is the distance between the two objects' centers of mass and r ^ {\displaystyle {\hat {\mathbf {r} }}} is the unit vector pointed in the direction away from the center of the first object toward the center of the second object. 
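Evaluating these expressions with commonly quoted values for G, the Earth's mass, and the Earth's mean radius (assumed here for illustration; they are not given in the article) recovers the familiar surface value of g.

```python
G = 6.674e-11        # m^3 kg^-1 s^-2, Newtonian constant of gravitation
M_EARTH = 5.972e24   # kg, commonly quoted mass of the Earth (assumed)
R_EARTH = 6.371e6    # m, mean radius of the Earth (assumed)

g = G * M_EARTH / R_EARTH**2
print(f"g ~ {g:.2f} m/s^2")   # -> about 9.82 m/s^2

# Newton's law of gravitation for two 1 kg masses 1 m apart
F = G * 1.0 * 1.0 / 1.0**2
print(f"F ~ {F:.2e} N")       # -> about 6.67e-11 N
```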
This formula was powerful enough to stand as the basis for all subsequent descriptions of motion within the Solar System until the 20th century. During that time, sophisticated methods of perturbation analysis were invented to calculate the deviations of orbits due to the influence of multiple bodies on a planet, moon, comet, or asteroid. The formalism was exact enough to allow mathematicians to predict the existence of the planet Neptune before it was observed. === Electromagnetic === The electrostatic force was first described in 1784 by Coulomb as a force that existed intrinsically between two charges.: 519  The properties of the electrostatic force were that it varied as an inverse square law directed in the radial direction, was both attractive and repulsive (there was intrinsic polarity), was independent of the mass of the charged objects, and followed the superposition principle. Coulomb's law unifies all these observations into one succinct statement. Subsequent mathematicians and physicists found the construct of the electric field to be useful for determining the electrostatic force on an electric charge at any point in space. The electric field was based on using a hypothetical "test charge" anywhere in space and then using Coulomb's Law to determine the electrostatic force.: 4-6–4-8  Thus the electric field anywhere in space is defined as E = F q , {\displaystyle \mathbf {E} ={\mathbf {F} \over {q}},} where q {\displaystyle q} is the magnitude of the hypothetical test charge. Similarly, the idea of the magnetic field was introduced to express how magnets can influence one another at a distance. The Lorentz force law gives the force upon a body with charge q {\displaystyle q} due to electric and magnetic fields: F = q ( E + v × B ) , {\displaystyle \mathbf {F} =q\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right),} where F {\displaystyle \mathbf {F} } is the electromagnetic force, E {\displaystyle \mathbf {E} } is the electric field at the body's location, B {\displaystyle \mathbf {B} } is the magnetic field, and v {\displaystyle \mathbf {v} } is the velocity of the particle. The magnetic contribution to the Lorentz force is the cross product of the velocity vector with the magnetic field.: 482  The origin of electric and magnetic fields would not be fully explained until 1864 when James Clerk Maxwell unified a number of earlier theories into a set of 20 scalar equations, which were later reformulated into 4 vector equations by Oliver Heaviside and Josiah Willard Gibbs. These "Maxwell's equations" fully described the sources of the fields as being stationary and moving charges, and the interactions of the fields themselves. This led Maxwell to discover that electric and magnetic fields could be "self-generating" through a wave that traveled at a speed that he calculated to be the speed of light. This insight united the nascent fields of electromagnetic theory with optics and led directly to a complete description of the electromagnetic spectrum. === Normal === When objects are in contact, the force directly between them is called the normal force, the component of the total force in the system exerted normal to the interface between the objects.: 264  The normal force is closely related to Newton's third law. The normal force, for example, is responsible for the structural integrity of tables and floors as well as being the force that responds whenever an external force pushes on a solid object. 
An example of the normal force in action is the impact force on an object crashing into an immobile surface.: ch.12  === Friction === Friction is a force that opposes relative motion of two bodies. At the macroscopic scale, the frictional force is directly related to the normal force at the point of contact. There are two broad classifications of frictional forces: static friction and kinetic friction.: 267  The static friction force ( F s f {\displaystyle \mathbf {F} _{\mathrm {sf} }} ) will exactly oppose forces applied to an object parallel to a surface up to the limit specified by the coefficient of static friction ( μ s f {\displaystyle \mu _{\mathrm {sf} }} ) multiplied by the normal force ( F N {\displaystyle \mathbf {F} _{\text{N}}} ). In other words, the magnitude of the static friction force satisfies the inequality: 0 ≤ F s f ≤ μ s f F N . {\displaystyle 0\leq \mathbf {F} _{\mathrm {sf} }\leq \mu _{\mathrm {sf} }\mathbf {F} _{\mathrm {N} }.} The kinetic friction force ( F k f {\displaystyle F_{\mathrm {kf} }} ) is typically independent of both the forces applied and the movement of the object. Thus, the magnitude of the force equals: F k f = μ k f F N , {\displaystyle \mathbf {F} _{\mathrm {kf} }=\mu _{\mathrm {kf} }\mathbf {F} _{\mathrm {N} },} where μ k f {\displaystyle \mu _{\mathrm {kf} }} is the coefficient of kinetic friction. The coefficient of kinetic friction is normally less than the coefficient of static friction.: 267–271  === Tension === Tension forces can be modeled using ideal strings that are massless, frictionless, unbreakable, and do not stretch. They can be combined with ideal pulleys, which allow ideal strings to switch physical direction. Ideal strings transmit tension forces instantaneously in action–reaction pairs so that if two objects are connected by an ideal string, any force directed along the string by the first object is accompanied by a force directed along the string in the opposite direction by the second object. By connecting the same string multiple times to the same object through the use of a configuration that uses movable pulleys, the tension force on a load can be multiplied. For every string that acts on a load, another factor of the tension force in the string acts on the load. Such machines allow a mechanical advantage for a corresponding increase in the length of displaced string needed to move the load. These tandem effects result ultimately in the conservation of mechanical energy since the work done on the load is the same no matter how complicated the machine.: ch.12  === Spring === A simple elastic force acts to return a spring to its natural length. An ideal spring is taken to be massless, frictionless, unbreakable, and infinitely stretchable. Such springs exert forces that push when contracted, or pull when extended, in proportion to the displacement of the spring from its equilibrium position. This linear relationship was described by Robert Hooke in 1676, for whom Hooke's law is named. If Δ x {\displaystyle \Delta x} is the displacement, the force exerted by an ideal spring equals: F = − k Δ x , {\displaystyle \mathbf {F} =-k\Delta \mathbf {x} ,} where k {\displaystyle k} is the spring constant (or force constant), which is particular to the spring. 
The minus sign accounts for the tendency of the force to act in opposition to the applied load.: ch.12  === Centripetal === For an object in uniform circular motion, the net force acting on the object equals: F = − m v 2 r r ^ , {\displaystyle \mathbf {F} =-{\frac {mv^{2}}{r}}{\hat {\mathbf {r} }},} where m {\displaystyle m} is the mass of the object, v {\displaystyle v} is the velocity of the object, r {\displaystyle r} is the distance to the center of the circular path, and r ^ {\displaystyle {\hat {\mathbf {r} }}} is the unit vector pointing in the radial direction outwards from the center. This means that the net force felt by the object is always directed toward the center of the curving path. Such forces act perpendicular to the velocity vector associated with the motion of an object, and therefore do not change the speed of the object (magnitude of the velocity), but only the direction of the velocity vector. More generally, the net force that accelerates an object can be resolved into a component that is perpendicular to the path, and one that is tangential to the path. This yields both the tangential force, which accelerates the object by either slowing it down or speeding it up, and the radial (centripetal) force, which changes its direction.: ch.12  === Continuum mechanics === Newton's laws and Newtonian mechanics in general were first developed to describe how forces affect idealized point particles rather than three-dimensional objects. In real life, matter has extended structure and forces that act on one part of an object might affect other parts of an object. For situations where the lattice holding together the atoms in an object is able to flow, contract, expand, or otherwise change shape, the theories of continuum mechanics describe the way forces affect the material. For example, in extended fluids, differences in pressure result in forces being directed along the pressure gradients as follows: F V = − ∇ P , {\displaystyle {\frac {\mathbf {F} }{V}}=-\mathbf {\nabla } P,} where V {\displaystyle V} is the volume of the object in the fluid and P {\displaystyle P} is the scalar function that describes the pressure at all locations in space. Pressure gradients and differentials result in the buoyant force for fluids suspended in gravitational fields, winds in atmospheric science, and the lift associated with aerodynamics and flight.: ch.12  A specific instance of such a force that is associated with dynamic pressure is fluid resistance: a body force that resists the motion of an object through a fluid due to viscosity. For so-called "Stokes' drag" the force is approximately proportional to the velocity, but opposite in direction: F d = − b v , {\displaystyle \mathbf {F} _{\mathrm {d} }=-b\mathbf {v} ,} where: b {\displaystyle b} is a constant that depends on the properties of the fluid and the dimensions of the object (usually the cross-sectional area), and v {\displaystyle \mathbf {v} } is the velocity of the object.: ch.12  More formally, forces in continuum mechanics are fully described by a stress tensor with terms that are roughly defined as σ = F A , {\displaystyle \sigma ={\frac {F}{A}},} where A {\displaystyle A} is the relevant cross-sectional area for the volume for which the stress tensor is being calculated. 
This formalism includes pressure terms associated with forces that act normal to the cross-sectional area (the matrix diagonals of the tensor) as well as shear terms associated with forces that act parallel to the cross-sectional area (the off-diagonal elements). The stress tensor accounts for forces that cause all strains (deformations) including also tensile stresses and compressions.: 133–134 : 38-1–38-11  === Fictitious === There are forces that are frame dependent, meaning that they appear due to the adoption of non-Newtonian (that is, non-inertial) reference frames. Such forces include the centrifugal force and the Coriolis force. These forces are considered fictitious because they do not exist in frames of reference that are not accelerating.: ch.12  Because these forces are not genuine, they are also referred to as "pseudo forces".: 12-11  In general relativity, gravity becomes a fictitious force that arises in situations where spacetime deviates from a flat geometry. == Concepts derived from force == === Rotation and torque === Forces that cause extended objects to rotate are associated with torques. Mathematically, the torque of a force F {\displaystyle \mathbf {F} } is defined relative to an arbitrary reference point as the cross product: τ = r × F , {\displaystyle {\boldsymbol {\tau }}=\mathbf {r} \times \mathbf {F} ,} where r {\displaystyle \mathbf {r} } is the position vector of the force application point relative to the reference point.: 497  Torque is the rotational equivalent of force in the same way that angle is the rotational equivalent for position, angular velocity for velocity, and angular momentum for momentum. As a consequence of Newton's first law of motion, there exists rotational inertia that ensures that all bodies maintain their angular momentum unless acted upon by an unbalanced torque. Likewise, Newton's second law of motion can be used to derive an analogous equation for the instantaneous angular acceleration of the rigid body: τ = I α , {\displaystyle {\boldsymbol {\tau }}=I{\boldsymbol {\alpha }},} where I {\displaystyle I} is the moment of inertia of the body and α {\displaystyle {\boldsymbol {\alpha }}} is the angular acceleration of the body.: 502  This provides a definition for the moment of inertia, which is the rotational equivalent for mass. In more advanced treatments of mechanics, where the rotation over a time interval is described, the moment of inertia must be substituted by the moment of inertia tensor, which, when properly analyzed, fully determines the characteristics of rotations including precession and nutation.: 96–113  Equivalently, the differential form of Newton's second law provides an alternative definition of torque: τ = d L d t , {\displaystyle {\boldsymbol {\tau }}={\frac {\mathrm {d} \mathbf {L} }{\mathrm {dt} }},} where L {\displaystyle \mathbf {L} } is the angular momentum of the particle. Newton's third law of motion requires that all objects exerting torques themselves experience equal and opposite torques, and therefore also directly implies the conservation of angular momentum for closed systems that experience rotations and revolutions through the action of internal torques. === Yank === The yank is defined as the rate of change of force: 131  Y = d F d t {\displaystyle \mathbf {Y} ={\frac {\mathrm {d} \mathbf {F} }{\mathrm {d} t}}} The term is used in biomechanical analysis, athletic assessment and robotic control. The second ("tug"), third ("snatch"), fourth ("shake"), and higher derivatives are rarely used. 
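As a brief illustration of the torque relations above (not from the article; the lever arm, force, and moment of inertia are assumed example values), torque can be computed as a cross product and then converted to an angular acceleration.

```python
import numpy as np

r = np.array([0.5, 0.0, 0.0])    # m, lever arm from the pivot (assumed)
F = np.array([0.0, 10.0, 0.0])   # N, applied force (assumed)

tau = np.cross(r, F)             # torque about the pivot, tau = r x F
print(tau)                       # -> [0. 0. 5.]  (N*m, along the z axis)

I = 0.25                         # kg*m^2, assumed moment of inertia about z
alpha = tau[2] / I               # rad/s^2, from tau = I * alpha
print(alpha)                     # -> 20.0
```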
=== Kinematic integrals === Forces can be used to define a number of physical concepts by integrating with respect to kinematic variables. For example, integrating with respect to time gives the definition of impulse: J = ∫ t 1 t 2 F d t , {\displaystyle \mathbf {J} =\int _{t_{1}}^{t_{2}}{\mathbf {F} \,\mathrm {d} t},} which by Newton's second law must be equivalent to the change in momentum (yielding the Impulse momentum theorem). Similarly, integrating with respect to position gives a definition for the work done by a force:: 13-3  W = ∫ x 1 x 2 F ⋅ d x , {\displaystyle W=\int _{\mathbf {x} _{1}}^{\mathbf {x} _{2}}{\mathbf {F} \cdot {\mathrm {d} \mathbf {x} }},} which is equivalent to changes in kinetic energy (yielding the work energy theorem).: 13-3  Power P is the rate of change dW/dt of the work W, as the trajectory is extended by a position change d x {\displaystyle d\mathbf {x} } in a time interval dt:: 13-2  d W = d W d x ⋅ d x = F ⋅ d x , {\displaystyle \mathrm {d} W={\frac {\mathrm {d} W}{\mathrm {d} \mathbf {x} }}\cdot \mathrm {d} \mathbf {x} =\mathbf {F} \cdot \mathrm {d} \mathbf {x} ,} so P = d W d t = d W d x ⋅ d x d t = F ⋅ v , {\displaystyle P={\frac {\mathrm {d} W}{\mathrm {d} t}}={\frac {\mathrm {d} W}{\mathrm {d} \mathbf {x} }}\cdot {\frac {\mathrm {d} \mathbf {x} }{\mathrm {d} t}}=\mathbf {F} \cdot \mathbf {v} ,} with v = d x / d t {\displaystyle \mathbf {v} =\mathrm {d} \mathbf {x} /\mathrm {d} t} the velocity. === Potential energy === Instead of a force, often the mathematically related concept of a potential energy field is used. For instance, the gravitational force acting upon an object can be seen as the action of the gravitational field that is present at the object's location. Restating mathematically the definition of energy (via the definition of work), a potential scalar field U ( r ) {\displaystyle U(\mathbf {r} )} is defined as that field whose gradient is equal and opposite to the force produced at every point: F = − ∇ U . {\displaystyle \mathbf {F} =-\mathbf {\nabla } U.} Forces can be classified as conservative or nonconservative. Conservative forces are equivalent to the gradient of a potential while nonconservative forces are not.: ch.12  === Conservation === A conservative force that acts on a closed system has an associated mechanical work that allows energy to convert only between kinetic or potential forms. This means that for a closed system, the net mechanical energy is conserved whenever a conservative force acts on the system. The force, therefore, is related directly to the difference in potential energy between two different locations in space, and can be considered to be an artifact of the potential field in the same way that the direction and amount of a flow of water can be considered to be an artifact of the contour map of the elevation of an area.: ch.12  Conservative forces include gravity, the electromagnetic force, and the spring force. Each of these forces has models that are dependent on a position often given as a radial vector r {\displaystyle \mathbf {r} } emanating from spherically symmetric potentials. Examples of this follow: For gravity: F g = − G m 1 m 2 r 2 r ^ , {\displaystyle \mathbf {F} _{\text{g}}=-{\frac {Gm_{1}m_{2}}{r^{2}}}{\hat {\mathbf {r} }},} where G {\displaystyle G} is the gravitational constant, and m n {\displaystyle m_{n}} is the mass of object n. 
For electrostatic forces: F e = q 1 q 2 4 π ε 0 r 2 r ^ , {\displaystyle \mathbf {F} _{\text{e}}={\frac {q_{1}q_{2}}{4\pi \varepsilon _{0}r^{2}}}{\hat {\mathbf {r} }},} where ε 0 {\displaystyle \varepsilon _{0}} is the electric permittivity of free space, and q n {\displaystyle q_{n}} is the electric charge of object n. For spring forces: F s = − k r r ^ , {\displaystyle \mathbf {F} _{\text{s}}=-kr{\hat {\mathbf {r} }},} where k {\displaystyle k} is the spring constant.: ch.12  For certain physical scenarios, it is impossible to model forces as being due to a simple gradient of potentials. This is often due to a macroscopic statistical average of microstates. For example, static friction is caused by the gradients of numerous electrostatic potentials between the atoms, but manifests as a force model that is independent of any macroscale position vector. Nonconservative forces other than friction include other contact forces, tension, compression, and drag. For any sufficiently detailed description, all these forces are the results of conservative ones since each of these macroscopic forces are the net results of the gradients of microscopic potentials.: ch.12  The connection between macroscopic nonconservative forces and microscopic conservative forces is described by detailed treatment with statistical mechanics. In macroscopic closed systems, nonconservative forces act to change the internal energies of the system, and are often associated with the transfer of heat. According to the Second law of thermodynamics, nonconservative forces necessarily result in energy transformations within closed systems from ordered to more random conditions as entropy increases.: ch.12  == Units == The SI unit of force is the newton (symbol N), which is the force required to accelerate a one kilogram mass at a rate of one meter per second squared, or kg·m·s−2. The corresponding CGS unit is the dyne, the force required to accelerate a one gram mass by one centimeter per second squared, or g·cm·s−2. A newton is thus equal to 100,000 dynes. The gravitational foot-pound-second English unit of force is the pound-force (lbf), defined as the force exerted by gravity on a pound-mass in the standard gravitational field of 9.80665 m·s−2. The pound-force provides an alternative unit of mass: one slug is the mass that will accelerate by one foot per second squared when acted on by one pound-force. An alternative unit of force in a different foot–pound–second system, the absolute fps system, is the poundal, defined as the force required to accelerate a one-pound mass at a rate of one foot per second squared. The pound-force has a metric counterpart, less commonly used than the newton: the kilogram-force (kgf) (sometimes kilopond) is the force exerted by standard gravity on one kilogram of mass. The kilogram-force leads to an alternate, but rarely used unit of mass: the metric slug (sometimes mug or hyl) is that mass that accelerates at 1 m·s−2 when subjected to a force of 1 kgf. The kilogram-force is not a part of the modern SI system, and is generally deprecated, though it is sometimes used for expressing aircraft weight, jet thrust, bicycle spoke tension, torque wrench settings and engine output torque. See also Ton-force. == Revisions of the force concept == At the beginning of the 20th century, new physical ideas emerged to explain experimental results in astronomical and submicroscopic realms. 
As discussed below, relativity alters the definition of momentum and quantum mechanics reuses the concept of "force" in microscopic contexts where Newton's laws do not apply directly. === Special theory of relativity === In the special theory of relativity, mass and energy are equivalent (as can be seen by calculating the work required to accelerate an object). When an object's velocity increases, so does its energy and hence its mass equivalent (inertia). It thus requires more force to accelerate it the same amount than it did at a lower velocity. Newton's second law, F = d p d t , {\displaystyle \mathbf {F} ={\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}},} remains valid because it is a mathematical definition.: 855–876  But for momentum to be conserved at relativistic relative velocity, v {\displaystyle v} , momentum must be redefined as: p = m 0 v 1 − v 2 / c 2 , {\displaystyle \mathbf {p} ={\frac {m_{0}\mathbf {v} }{\sqrt {1-v^{2}/c^{2}}}},} where m 0 {\displaystyle m_{0}} is the rest mass and c {\displaystyle c} the speed of light. The expression relating force and acceleration for a particle with constant non-zero rest mass m {\displaystyle m} moving in the x {\displaystyle x} direction at velocity v {\displaystyle v} is:: 216  F = ( γ 3 m a x , γ m a y , γ m a z ) , {\displaystyle \mathbf {F} =\left(\gamma ^{3}ma_{x},\gamma ma_{y},\gamma ma_{z}\right),} where γ = 1 1 − v 2 / c 2 . {\displaystyle \gamma ={\frac {1}{\sqrt {1-v^{2}/c^{2}}}}.} is called the Lorentz factor. The Lorentz factor increases steeply as the relative velocity approaches the speed of light. Consequently, the greater and greater force must be applied to produce the same acceleration at extreme velocity. The relative velocity cannot reach c {\displaystyle c} .: 26 : §15–8  If v {\displaystyle v} is very small compared to c {\displaystyle c} , then γ {\displaystyle \gamma } is very close to 1 and F = m a {\displaystyle \mathbf {F} =m\mathbf {a} } is a close approximation. Even for use in relativity, one can restore the form of F μ = m A μ {\displaystyle F^{\mu }=mA^{\mu }} through the use of four-vectors. This relation is correct in relativity when F μ {\displaystyle F^{\mu }} is the four-force, m {\displaystyle m} is the invariant mass, and A μ {\displaystyle A^{\mu }} is the four-acceleration. The general theory of relativity incorporates a more radical departure from the Newtonian way of thinking about force, specifically gravitational force. This reimagining of the nature of gravity is described more fully below. === Quantum mechanics === Quantum mechanics is a theory of physics originally developed in order to understand microscopic phenomena: behavior at the scale of molecules, atoms or subatomic particles. Generally and loosely speaking, the smaller a system is, the more an adequate mathematical model will require understanding quantum effects. The conceptual underpinning of quantum physics is different from that of classical physics. Instead of thinking about quantities like position, momentum, and energy as properties that an object has, one considers what result might appear when a measurement of a chosen type is performed. Quantum mechanics allows the physicist to calculate the probability that a chosen measurement will elicit a particular result. The expectation value for a measurement is the average of the possible results it might yield, weighted by their probabilities of occurrence. In quantum mechanics, interactions are typically described in terms of energy rather than force. 
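Returning briefly to the special-relativity expressions above, a minimal numeric sketch (the rest mass, speed, and acceleration below are arbitrary illustrative values) of how the Lorentz factor modifies momentum and the force needed for a given acceleration along the direction of motion:

import math

c = 2.99792458e8   # speed of light, m/s
m0 = 1.0           # rest mass, kg (assumed)
v = 0.9 * c        # speed chosen for illustration

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)  # Lorentz factor, about 2.29
p_newtonian = m0 * v                         # classical momentum
p_relativistic = gamma * m0 * v              # relativistic momentum

a_x = 1.0                   # m/s², illustrative acceleration along the motion
F_x = gamma**3 * m0 * a_x   # about 12.1 N rather than 1 N, from F_x = γ³ m a_x

print(gamma, p_relativistic / p_newtonian, F_x)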
The Ehrenfest theorem provides a connection between quantum expectation values and the classical concept of force, a connection that is necessarily inexact, as quantum physics is fundamentally different from classical. In quantum physics, the Born rule is used to calculate the expectation values of a position measurement or a momentum measurement. These expectation values will generally change over time; that is, depending on the time at which (for example) a position measurement is performed, the probabilities for its different possible outcomes will vary. The Ehrenfest theorem says, roughly speaking, that the equations describing how these expectation values change over time have a form reminiscent of Newton's second law, with a force defined as the negative derivative of the potential energy. However, the more pronounced quantum effects are in a given situation, the more difficult it is to derive meaningful conclusions from this resemblance. Quantum mechanics also introduces two new constraints that interact with forces at the submicroscopic scale and which are especially important for atoms. Despite the strong attraction of the nucleus, the uncertainty principle limits the minimum extent of an electron probability distribution and the Pauli exclusion principle prevents electrons from sharing the same probability distribution. This gives rise to an emergent pressure known as degeneracy pressure. The dynamic equilibrium between the degeneracy pressure and the attractive electromagnetic force give atoms, molecules, liquids, and solids stability. === Quantum field theory === In modern particle physics, forces and the acceleration of particles are explained as a mathematical by-product of exchange of momentum-carrying gauge bosons. With the development of quantum field theory and general relativity, it was realized that force is a redundant concept arising from conservation of momentum (4-momentum in relativity and momentum of virtual particles in quantum electrodynamics). The conservation of momentum can be directly derived from the homogeneity or symmetry of space and so is usually considered more fundamental than the concept of a force. Thus the currently known fundamental forces are considered more accurately to be "fundamental interactions".: 199–128  While sophisticated mathematical descriptions are needed to predict, in full detail, the result of such interactions, there is a conceptually simple way to describe them through the use of Feynman diagrams. In a Feynman diagram, each matter particle is represented as a straight line (see world line) traveling through time, which normally increases up or to the right in the diagram. Matter and anti-matter particles are identical except for their direction of propagation through the Feynman diagram. World lines of particles intersect at interaction vertices, and the Feynman diagram represents any force arising from an interaction as occurring at the vertex with an associated instantaneous change in the direction of the particle world lines. Gauge bosons are emitted away from the vertex as wavy lines and, in the case of virtual particle exchange, are absorbed at an adjacent vertex. The utility of Feynman diagrams is that other types of physical phenomena that are part of the general picture of fundamental interactions but are conceptually separate from forces can also be described using the same rules. 
For example, a Feynman diagram can describe in succinct detail how a neutron decays into an electron, proton, and antineutrino, an interaction mediated by the same gauge boson that is responsible for the weak nuclear force. == Fundamental interactions == All of the known forces of the universe are classified into four fundamental interactions. The strong and the weak forces act only at very short distances, and are responsible for the interactions between subatomic particles, including nucleons and compound nuclei. The electromagnetic force acts between electric charges, and the gravitational force acts between masses. All other forces in nature derive from these four fundamental interactions operating within quantum mechanics, including the constraints introduced by the Schrödinger equation and the Pauli exclusion principle. For example, friction is a manifestation of the electromagnetic force acting between atoms of two surfaces. The forces in springs, modeled by Hooke's law, are also the result of electromagnetic forces. Centrifugal forces are acceleration forces that arise simply from the acceleration of rotating frames of reference.: 12-11 : 359  The fundamental theories for forces developed from the unification of different ideas. For example, Newton's universal theory of gravitation showed that the force responsible for objects falling near the surface of the Earth is also the force responsible for the falling of celestial bodies about the Earth (the Moon) and around the Sun (the planets). Michael Faraday and James Clerk Maxwell demonstrated that electric and magnetic forces were unified through a theory of electromagnetism. In the 20th century, the development of quantum mechanics led to a modern understanding that the first three fundamental forces (all except gravity) are manifestations of matter (fermions) interacting by exchanging virtual particles called gauge bosons. This Standard Model of particle physics assumes a similarity between the forces and led scientists to predict the unification of the weak and electromagnetic forces in electroweak theory, which was subsequently confirmed by observation. === Gravitational === Newton's law of gravitation is an example of action at a distance: one body, like the Sun, exerts an influence upon any other body, like the Earth, no matter how far apart they are. Moreover, this action at a distance is instantaneous. According to Newton's theory, the one body shifting position changes the gravitational pulls felt by all other bodies, all at the same instant of time. Albert Einstein recognized that this was inconsistent with special relativity and its prediction that influences cannot travel faster than the speed of light. So, he sought a new theory of gravitation that would be relativistically consistent. Mercury's orbit did not match that predicted by Newton's law of gravitation. Some astrophysicists predicted the existence of an undiscovered planet (Vulcan) that could explain the discrepancies. When Einstein formulated his theory of general relativity (GR) he focused on Mercury's problematic orbit and found that his theory added a correction, which could account for the discrepancy. This was the first time that Newton's theory of gravity had been shown to be inexact. Since then, general relativity has been acknowledged as the theory that best explains gravity. 
In GR, gravitation is not viewed as a force, but rather, objects moving freely in gravitational fields travel under their own inertia in straight lines through curved spacetime – defined as the shortest spacetime path between two spacetime events. From the perspective of the object, all motion occurs as if there were no gravitation whatsoever. It is only when observing the motion in a global sense that the curvature of spacetime can be observed and the force is inferred from the object's curved path. Thus, the straight line path in spacetime is seen as a curved line in space, and it is called the ballistic trajectory of the object. For example, a basketball thrown from the ground moves in a parabola, as it is in a uniform gravitational field. Its spacetime trajectory is almost a straight line, slightly curved (with a radius of curvature of the order of a few light-years). The time derivative of the changing momentum of the object is what we label as "gravitational force". === Electromagnetic === Maxwell's equations and the set of techniques built around them adequately describe a wide range of physics involving force in electricity and magnetism. This classical theory already includes relativity effects. Understanding quantized electromagnetic interactions between elementary particles requires quantum electrodynamics (QED). In QED, photons are fundamental exchange particles, describing all interactions relating to electromagnetism including the electromagnetic force. === Strong nuclear === There are two "nuclear forces", which today are usually described as interactions that take place in quantum theories of particle physics. The strong nuclear force is the force responsible for the structural integrity of atomic nuclei, and gains its name from its ability to overpower the electromagnetic repulsion between protons.: 940  The strong force is today understood to represent the interactions between quarks and gluons as detailed by the theory of quantum chromodynamics (QCD). The strong force is the fundamental force mediated by gluons, acting upon quarks, antiquarks, and the gluons themselves. The strong force only acts directly upon elementary particles. A residual force is observed between hadrons (notably, the nucleons in atomic nuclei), known as the nuclear force. Here the strong force acts indirectly, transmitted as gluons that form part of the virtual pi and rho mesons, the classical transmitters of the nuclear force. The failure of many searches for free quarks has shown that the elementary particles affected are not directly observable. This phenomenon is called color confinement.: 232  === Weak nuclear === Unique among the fundamental interactions, the weak nuclear force creates no bound states. The weak force is due to the exchange of the heavy W and Z bosons. Since the weak force is mediated by two types of bosons, it can be divided into two types of interaction or "vertices" — charged current, involving the electrically charged W⁺ and W⁻ bosons, and neutral current, involving electrically neutral Z⁰ bosons. The most familiar effect of the weak interaction is beta decay (of neutrons in atomic nuclei) and the associated radioactivity.: 951  This is a type of charged-current interaction. The word "weak" derives from the fact that the field strength is some 10¹³ times less than that of the strong force. Still, it is stronger than gravity over short distances.
A consistent electroweak theory has also been developed, which shows that electromagnetic forces and the weak force are indistinguishable at temperatures in excess of approximately 10¹⁵ K. Such temperatures occurred in the plasma collisions in the early moments of the Big Bang.: 201  == See also == Contact force – Force between two objects that are in physical contact Force control – Control of the force applied by a machine Force gauge – Instrument for measuring force Orders of magnitude (force) – Comparison of a wide range of physical forces Parallel force system – Situation in mechanical engineering Rigid body – Physical object which does not deform when forces or moments are exerted on it Specific force – Concept in physics == References == == External links == "Classical Mechanics, Week 2: Newton's Laws". MIT OpenCourseWare. Retrieved 2023-08-09. "Fundamentals of Physics I, Lecture 3: Newton's Laws of Motion". Open Yale Courses. Retrieved 2023-08-09.
Wikipedia/Yank_(physics)
In engineering, a parallel force system is a type of force system in which all forces are oriented along one axis. An example of this type of system is a seesaw. In a seesaw, the children apply the two forces at the ends, and the fulcrum in the middle gives the counter force to maintain the seesaw in a neutral position. Another example is the set of major vertical forces on an airplane in flight. == References == Kumar, K.L. (2003). Engineering Mechanics, 3e (in Dutch). McGraw-Hill Education (India) Pvt Limited. p. 90. ISBN 978-0-07-049473-2. Retrieved September 10, 2017. Blake, Alexander (1985). Handbook of Mechanics, Materials, and Structures. John Wiley & Sons. p. 128. ISBN 9780471862390. R.K. Bansal (December 2005). A Textbook of Engineering Mechanics. Laxmi Publications. p. 25. ISBN 978-81-7008-305-4. M. N. Shesha Prakash; Ganesh B. Mogaveer (30 July 2014). Elements of Civil Engineering and Engineering Mechanics. PHI Learning Pvt. Ltd. p. 25. ISBN 978-81-203-5001-4.
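As a numeric illustration of the seesaw example above (a sketch only; the masses and distances are assumed for illustration, and the balance-of-moments condition is the usual statics requirement for the neutral position rather than something stated in the article):

g = 9.81                 # m/s²
m1, m2 = 30.0, 20.0      # children's masses, kg (assumed)
d1, d2 = 1.0, 1.5        # distances of the children from the fulcrum, m (assumed)

F1, F2 = m1 * g, m2 * g  # the two parallel downward forces at the ends
R = F1 + F2              # fulcrum counter force making the net force zero

# A neutral (balanced) position additionally requires zero net moment about
# the fulcrum: F1*d1 = F2*d2.
balanced = abs(F1 * d1 - F2 * d2) < 1e-9
print(R, balanced)       # 490.5 N, True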
Wikipedia/Parallel_force_system
In statistical mechanics, a microstate is a specific configuration of a system that describes the precise positions and momenta of all the individual particles or components that make up the system. Each microstate has a certain probability of occurring during the course of the system's thermal fluctuations. In contrast, the macrostate of a system refers to its macroscopic properties, such as its temperature, pressure, volume and density. Treatments on statistical mechanics define a macrostate as follows: a particular set of values of energy, the number of particles, and the volume of an isolated thermodynamic system is said to specify a particular macrostate of it. In this description, microstates appear as different possible ways the system can achieve a particular macrostate. A macrostate is characterized by a probability distribution of possible states across a certain statistical ensemble of all microstates. This distribution describes the probability of finding the system in a certain microstate. In the thermodynamic limit, the microstates visited by a macroscopic system during its fluctuations all have the same macroscopic properties. In a quantum system, the microstate is simply the value of the wave function. == Microscopic definitions of thermodynamic concepts == Statistical mechanics links the empirical thermodynamic properties of a system to the statistical distribution of an ensemble of microstates. All macroscopic thermodynamic properties of a system may be calculated from the partition function that sums exp ( − E i / k B T ) {\displaystyle {\text{exp}}(-E_{i}/k_{\text{B}}T)} of all its microstates. At any moment a system is distributed across an ensemble of Ω {\displaystyle \Omega } microstates, each labeled by i {\displaystyle i} , and having a probability of occupation p i {\displaystyle p_{i}} , and an energy E i {\displaystyle E_{i}} . If the microstates are quantum-mechanical in nature, then these microstates form a discrete set as defined by quantum statistical mechanics, and E i {\displaystyle E_{i}} is an energy level of the system. === Internal energy === The internal energy of the macrostate is the mean over all microstates of the system's energy U := ⟨ E ⟩ = ∑ i = 1 Ω p i E i {\displaystyle U\,:=\,\langle E\rangle \,=\,\sum \limits _{i=1}^{\Omega }p_{i}\,E_{i}} This is a microscopic statement of the notion of energy associated with the first law of thermodynamics. === Entropy === For the more general case of the canonical ensemble, the absolute entropy depends exclusively on the probabilities of the microstates and is defined as S := − k B ∑ i = 1 Ω p i ln ⁡ ( p i ) {\displaystyle S\,:=\,-k_{\text{B}}\sum \limits _{i=1}^{\Omega }p_{i}\,\ln(p_{i})} where k B {\displaystyle k_{\text{B}}} is the Boltzmann constant. For the microcanonical ensemble, consisting of only those microstates with energy equal to the energy of the macrostate, this simplifies to S = k B ln ⁡ Ω {\displaystyle S=k_{B}\,\ln \Omega } with the number of microstates Ω = 1 / p i {\displaystyle \Omega =1/p_{i}} . This form for entropy appears on Ludwig Boltzmann's gravestone in Vienna. The second law of thermodynamics describes how the entropy of an isolated system changes in time. The third law of thermodynamics is consistent with this definition, since zero entropy means that the macrostate of the system reduces to a single microstate. === Heat and work === Heat and work can be distinguished if we take the underlying quantum nature of the system into account. 
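As a minimal numeric sketch of the internal-energy and entropy definitions above (the three energy levels and the temperature are arbitrary assumptions, and canonical Boltzmann occupation probabilities are used for illustration):

import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
E = [0.0, 1e-21, 2e-21]   # microstate energies in joules (assumed)
T = 300.0                 # temperature in kelvin (assumed)

# Canonical occupation probabilities p_i = exp(-E_i/(k_B*T)) / Z.
Z = sum(math.exp(-Ei / (k_B * T)) for Ei in E)
p = [math.exp(-Ei / (k_B * T)) / Z for Ei in E]

U = sum(pi * Ei for pi, Ei in zip(p, E))        # internal energy, U = sum p_i E_i
S = -k_B * sum(pi * math.log(pi) for pi in p)   # entropy, S = -k_B sum p_i ln p_i

print(U, S)   # if all p_i were equal, S would reduce to k_B ln Ω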
For a closed system (no transfer of matter), heat in statistical mechanics is the energy transfer associated with a disordered, microscopic action on the system, associated with jumps in occupation numbers of the quantum energy levels of the system, without change in the values of the energy levels themselves. Work is the energy transfer associated with an ordered, macroscopic action on the system. If this action acts very slowly, then the adiabatic theorem of quantum mechanics implies that this will not cause jumps between energy levels of the system. In this case, the internal energy of the system only changes due to a change of the system's energy levels. The microscopic, quantum definitions of heat and work are the following: δ W = ∑ i = 1 N p i d E i δ Q = ∑ i = 1 N E i d p i {\displaystyle {\begin{aligned}\delta W&=\sum _{i=1}^{N}p_{i}\,dE_{i}\\\delta Q&=\sum _{i=1}^{N}E_{i}\,dp_{i}\end{aligned}}} so that d U = δ W + δ Q . {\displaystyle ~dU=\delta W+\delta Q.} The two above definitions of heat and work are among the few expressions of statistical mechanics where the thermodynamic quantities defined in the quantum case find no analogous definition in the classical limit. The reason is that classical microstates are not defined in relation to a precise associated quantum microstate, which means that when work changes the total energy available for distribution among the classical microstates of the system, the energy levels (so to speak) of the microstates do not follow this change. == The microstate in phase space == === Classical phase space === The description of a classical system of F degrees of freedom may be stated in terms of a 2F dimensional phase space, whose coordinate axes consist of the F generalized coordinates qi of the system, and its F generalized momenta pi. The microstate of such a system will be specified by a single point in the phase space. But for a system with a huge number of degrees of freedom its exact microstate usually is not important. So the phase space can be divided into cells of the size h0 = ΔqiΔpi, each treated as a microstate. Now the microstates are discrete and countable and the internal energy U has no longer an exact value but is between U and U+δU, with δ U ≪ U {\textstyle \delta U\ll U} . The number of microstates Ω that a closed system can occupy is proportional to its phase space volume: Ω ( U ) = 1 h 0 F ∫ 1 δ U ( H ( x ) − U ) ∏ i = 1 F d q i d p i {\displaystyle \Omega (U)={\frac {1}{h_{0}^{\mathcal {F}}}}\int \mathbf {1} _{\delta U}(H(x)-U)\prod _{i=1}^{\mathcal {F}}dq_{i}dp_{i}} where 1 δ U ( H ( x ) − U ) {\textstyle \mathbf {1} _{\delta U}(H(x)-U)} is an Indicator function. It is 1 if the Hamilton function H(x) at the point x = (q,p) in phase space is between U and U+ δU and 0 if not. The constant 1 / h 0 F {\textstyle {1}/{h_{0}^{\mathcal {F}}}} makes Ω(U) dimensionless. For an ideal gas is Ω ( U ) ∝ F U F 2 − 1 δ U {\displaystyle \Omega (U)\propto {\mathcal {F}}U^{{\frac {\mathcal {F}}{2}}-1}\delta U} . In this description, the particles are distinguishable. If the position and momentum of two particles are exchanged, the new state will be represented by a different point in phase space. In this case a single point will represent a microstate. If a subset of M particles are indistinguishable from each other, then the M! possible permutations or possible exchanges of these particles will be counted as part of a single microstate. The set of possible microstates are also reflected in the constraints upon the thermodynamic system. 
For example, in the case of a simple gas of N particles with total energy U contained in a cube of volume V, in which a sample of the gas cannot be distinguished from any other sample by experimental means, a microstate will consist of the above-mentioned N! points in phase space, and the set of microstates will be constrained to have all position coordinates to lie inside the box, and the momenta to lie on a hyperspherical surface in momentum coordinates of radius U. If on the other hand, the system consists of a mixture of two different gases, samples of which can be distinguished from each other, say A and B, then the number of microstates is increased, since two points in which an A and B particle are exchanged in phase space are no longer part of the same microstate. Two particles that are identical may nevertheless be distinguishable based on, for example, their location. (See configurational entropy.) If the box contains identical particles, and is at equilibrium, and a partition is inserted, dividing the volume in half, particles in one box are now distinguishable from those in the second box. In phase space, the N/2 particles in each box are now restricted to a volume V/2, and their energy restricted to U/2, and the number of points describing a single microstate will change: the phase space description is not the same. This has implications in both the Gibbs paradox and correct Boltzmann counting. With regard to Boltzmann counting, it is the multiplicity of points in phase space which effectively reduces the number of microstates and renders the entropy extensive. With regard to Gibbs paradox, the important result is that the increase in the number of microstates (and thus the increase in entropy) resulting from the insertion of the partition is exactly matched by the decrease in the number of microstates (and thus the decrease in entropy) resulting from the reduction in volume available to each particle, yielding a net entropy change of zero. == See also == Quantum statistical mechanics Degrees of freedom (physics and chemistry) Ergodic hypothesis Phase space == References == == External links == Some illustrations of microstates vs. macrostates
Wikipedia/Microstate_(statistical_mechanics)
In nuclear physics and particle physics, the strong interaction, also called the strong force or strong nuclear force, is one of the four known fundamental interactions. It confines quarks into protons, neutrons, and other hadron particles, and also binds neutrons and protons to create atomic nuclei, where it is called the nuclear force. Most of the mass of a proton or neutron is the result of the strong interaction energy; the individual quarks provide only about 1% of the mass of a proton. At the range of 10⁻¹⁵ m (1 femtometer, slightly more than the radius of a nucleon), the strong force is approximately 100 times as strong as electromagnetism, 10⁶ times as strong as the weak interaction, and 10³⁸ times as strong as gravitation. In the context of atomic nuclei, the force binds protons and neutrons together to form a nucleus and is called the nuclear force (or residual strong force). Because the force is mediated by massive, short-lived mesons on this scale, the residual strong interaction obeys a distance-dependent behavior between nucleons that is quite different from when it is acting to bind quarks within hadrons. There are also differences in the binding energies of the nuclear force with regard to nuclear fusion versus nuclear fission. Nuclear fusion accounts for most energy production in the Sun and other stars. Nuclear fission allows for the decay of radioactive elements and isotopes, although it is often mediated by the weak interaction. Artificially, the energy associated with the nuclear force is partially released in nuclear power and nuclear weapons, both in uranium- or plutonium-based fission weapons and in fusion weapons like the hydrogen bomb. == History == Before 1971, physicists were uncertain as to how the atomic nucleus was bound together. It was known that the nucleus was composed of protons and neutrons and that protons possessed positive electric charge, while neutrons were electrically neutral. By the understanding of physics at that time, positive charges would repel one another and the positively charged protons should cause the nucleus to fly apart. However, this was never observed. New physics was needed to explain this phenomenon. A stronger attractive force was postulated to explain how the atomic nucleus was bound despite the protons' mutual electromagnetic repulsion. This hypothesized force was called the strong force, which was believed to be a fundamental force that acted on the protons and neutrons that make up the nucleus. In 1964, Murray Gell-Mann, and separately George Zweig, proposed that baryons, which include protons and neutrons, and mesons were composed of elementary particles. Zweig called the elementary particles "aces" while Gell-Mann called them "quarks"; the theory came to be called the quark model. The strong attraction between nucleons was the side-effect of a more fundamental force that bound the quarks together into protons and neutrons. The theory of quantum chromodynamics explains that quarks carry what is called a color charge, although it has no relation to visible color. Quarks with unlike color charge attract one another as a result of the strong interaction, and the particle that mediates this was called the gluon. == Behavior of the strong interaction == The strong interaction is observable at two ranges, and mediated by different force carriers in each one. On a scale less than about 0.8 fm (roughly the radius of a nucleon), the force is carried by gluons and holds quarks together to form protons, neutrons, and other hadrons.
On a larger scale, up to about 3 fm, the force is carried by mesons and binds nucleons (protons and neutrons) together to form the nucleus of an atom. In the former context, it is often known as the color force, and is so strong that if hadrons are struck by high-energy particles, they produce jets of massive particles instead of emitting their constituents (quarks and gluons) as freely moving particles. This property of the strong force is called color confinement. === Within hadrons === The word strong is used since the strong interaction is the "strongest" of the four fundamental forces. At a distance of 10⁻¹⁵ m, its strength is around 100 times that of the electromagnetic force, some 10⁶ times as great as that of the weak force, and about 10³⁸ times that of gravitation. The strong force is described by quantum chromodynamics (QCD), a part of the Standard Model of particle physics. Mathematically, QCD is a non-abelian gauge theory based on a local (gauge) symmetry group called SU(3). The force carrier particle of the strong interaction is the gluon, a massless gauge boson. Gluons are thought to interact with quarks and other gluons by way of a type of charge called color charge. Color charge is analogous to electromagnetic charge, but it comes in three types (±red, ±green, and ±blue) rather than one, which results in different rules of behavior. These rules are described by quantum chromodynamics (QCD), the theory of quark–gluon interactions. Unlike the photon in electromagnetism, which is neutral, the gluon carries a color charge. Quarks and gluons are the only fundamental particles that carry non-vanishing color charge, and hence they participate in strong interactions only with each other. The strong force is the expression of the gluon interaction with other quark and gluon particles. All quarks and gluons in QCD interact with each other through the strong force. The strength of interaction is parameterized by the strong coupling constant. This strength is modified by the gauge color charge of the particle, a group-theoretical property. The strong force acts between quarks. Unlike all other forces (electromagnetic, weak, and gravitational), the strong force does not diminish in strength with increasing distance between pairs of quarks. After a limiting distance (about the size of a hadron) has been reached, it remains at a strength of about 10,000 N, no matter how much farther the quarks are separated.: 164  As the separation between the quarks grows, the energy added to the pair creates new pairs of matching quarks between the original two; hence it is impossible to isolate quarks. The explanation is that the amount of work done against a force of 10,000 N is enough to create particle–antiparticle pairs within a very short distance. The energy added to the system by pulling two quarks apart would create a pair of new quarks that will pair up with the original ones. In QCD, this phenomenon is called color confinement; as a result, only hadrons, not individual free quarks, can be observed. The failure of all experiments that have searched for free quarks is considered to be evidence of this phenomenon. The elementary quark and gluon particles involved in a high-energy collision are not directly observable. The interaction produces jets of newly created hadrons that are observable.
Those hadrons are created, as a manifestation of mass–energy equivalence, when sufficient energy is deposited into a quark–quark bond, as when a quark in one proton is struck by a very fast quark of another impacting proton during a particle accelerator experiment. However, quark–gluon plasmas have been observed. === Between hadrons === While color confinement implies that the strong force acts without distance-diminishment between pairs of quarks in compact collections of bound quarks (hadrons), at distances approaching or greater than the radius of a proton, a residual force (described below) remains. It manifests as a force between the "colorless" hadrons, and is known as the nuclear force or residual strong force (and historically as the strong nuclear force). The nuclear force acts between hadrons, known as mesons and baryons. This "residual strong force", acting indirectly, transmits gluons that form part of the virtual π and ρ mesons, which, in turn, transmit the force between nucleons that holds the nucleus (beyond hydrogen-1 nucleus) together. The residual strong force is thus a minor residuum of the strong force that binds quarks together into protons and neutrons. This same force is much weaker between neutrons and protons, because it is mostly neutralized within them, in the same way that electromagnetic forces between neutral atoms (van der Waals forces) are much weaker than the electromagnetic forces that hold electrons in association with the nucleus, forming the atoms. Unlike the strong force, the residual strong force diminishes with distance, and does so rapidly. The decrease is approximately as a negative exponential power of distance, though there is no simple expression known for this; see Yukawa potential. The rapid decrease with distance of the attractive residual force and the less rapid decrease of the repulsive electromagnetic force acting between protons within a nucleus, causes the instability of larger atomic nuclei, such as all those with atomic numbers larger than 82 (the element lead). Although the nuclear force is weaker than the strong interaction itself, it is still highly energetic: transitions produce gamma rays. The mass of a nucleus is significantly different from the summed masses of the individual nucleons. This mass defect is due to the potential energy associated with the nuclear force. Differences between mass defects power nuclear fusion and nuclear fission. == Unification == The so-called Grand Unified Theories (GUT) aim to describe the strong interaction and the electroweak interaction as aspects of a single force, similarly to how the electromagnetic and weak interactions were unified by the Glashow–Weinberg–Salam model into electroweak interaction. The strong interaction has a property called asymptotic freedom, wherein the strength of the strong force diminishes at higher energies (or temperatures). The theorized energy where its strength becomes equal to the electroweak interaction is the grand unification energy. However, no Grand Unified Theory has yet been successfully formulated to describe this process, and Grand Unification remains an unsolved problem in physics. If GUT is correct, after the Big Bang and during the electroweak epoch of the universe, the electroweak force separated from the strong force. Accordingly, a grand unification epoch is hypothesized to have existed prior to this. 
== See also == Mathematical formulation of quantum mechanics Mathematical formulation of the Standard Model Nuclear binding energy QCD matter Quantum field theory Yukawa interaction == References == == Further reading == Christman, J.R. (2001). "MISN-0-280: The Strong Interaction" (PDF). Ding, Minghui; Roberts, Craig; Schmidt, Sebastian, eds. (2024). Strong Interactions in the Standard Model: Massless Bosons to Compact Stars. MDPI. ISBN 978-3-7258-1502-9. Halzen, F.; Martin, A.D. (1984). Quarks and Leptons: An Introductory Course in Modern Particle Physics. John Wiley & Sons. ISBN 978-0-471-88741-6. Kane, G.L. (1987). Modern Elementary Particle Physics. Perseus Books. ISBN 978-0-201-11749-3. Morris, R. (2003). The Last Sorcerers: The Path from Alchemy to the Periodic Table. Joseph Henry Press. ISBN 978-0-309-50593-2. == External links ==
Wikipedia/Strong_force
In mechanics, a displacement field is the assignment of displacement vectors for all points in a region or body that are displaced from one state to another. A displacement vector specifies the position of a point or a particle in reference to an origin or to a previous position. For example, a displacement field may be used to describe the effects of deformation on a solid body. == Formulation == Before considering displacement, the state before deformation must be defined. It is a state in which the coordinates of all points are known and described by the function: R → 0 : Ω → P {\displaystyle {\vec {R}}_{0}:\Omega \to P} where R → 0 {\displaystyle {\vec {R}}_{0}} is a placement vector Ω {\displaystyle \Omega } are all the points of the body P {\displaystyle P} are all the points in the space in which the body is present Most often it is a state of the body in which no forces are applied. Then given any other state of this body in which coordinates of all its points are described as R → 1 {\displaystyle {\vec {R}}_{1}} the displacement field is the difference between two body states: u → = R → 1 − R → 0 {\displaystyle {\vec {u}}={\vec {R}}_{1}-{\vec {R}}_{0}} where u → {\displaystyle {\vec {u}}} is a displacement field, which for each point of the body specifies a displacement vector. == Decomposition == The displacement of a body has two components: a rigid-body displacement and a deformation. A rigid-body displacement consists of a translation and rotation of the body without changing its shape or size. Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration κ 0 ( B ) {\displaystyle \kappa _{0}({\mathcal {B}})} to a current or deformed configuration κ t ( B ) {\displaystyle \kappa _{t}({\mathcal {B}})} (Figure 1). A change in the configuration of a continuum body can be described by a displacement field. A displacement field is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration with the undeformed configuration. The distance between any two particles changes if and only if deformation has occurred. If displacement occurs without deformation, then it is a rigid-body displacement. == Displacement gradient tensor == Two types of displacement gradient tensor may be defined, following the Lagrangian and Eulerian specifications. The displacement of particles indexed by variable i may be expressed as follows. The vector joining the positions of a particle in the undeformed configuration P i {\displaystyle P_{i}} and deformed configuration p i {\displaystyle p_{i}} is called the displacement vector, p i − P i {\displaystyle p_{i}-P_{i}} , denoted u i {\displaystyle u_{i}} or U i {\displaystyle U_{i}} below. === Material coordinates (Lagrangian description) === Using X {\displaystyle \mathbf {X} } in place of P i {\displaystyle P_{i}} and x {\displaystyle \mathbf {x} } in place of p i {\displaystyle p_{i}\,\!} , both of which are vectors from the origin of the coordinate system to each respective point, we have the Lagrangian description of the displacement vector: u ( X , t ) = u i e i {\displaystyle \mathbf {u} (\mathbf {X} ,t)=u_{i}\mathbf {e} _{i}} where e i {\displaystyle \mathbf {e} _{i}} are the unit vectors that define the basis of the material (body-frame) coordinate system. Expressed in terms of the material coordinates, i.e. 
u {\displaystyle \mathbf {u} } as a function of X {\displaystyle \mathbf {X} } , the displacement field is: u ( X , t ) = b ( t ) + x ( X , t ) − X or u i = α i J b J + x i − α i J X J {\displaystyle \mathbf {u} (\mathbf {X} ,t)=\mathbf {b} (t)+\mathbf {x} (\mathbf {X} ,t)-\mathbf {X} \qquad {\text{or}}\qquad u_{i}=\alpha _{iJ}b_{J}+x_{i}-\alpha _{iJ}X_{J}} where b ( t ) {\displaystyle \mathbf {b} (t)} is the displacement vector representing rigid-body translation. The partial derivative of the displacement vector with respect to the material coordinates yields the material displacement gradient tensor ∇ X u {\displaystyle \nabla _{\mathbf {X} }\mathbf {u} \,\!} . Thus we have, ∇ X u = ∇ X x − R = F − R or ∂ u i ∂ X K = ∂ x i ∂ X K − α i K = F i K − α i K {\displaystyle \nabla _{\mathbf {X} }\mathbf {u} =\nabla _{\mathbf {X} }\mathbf {x} -\mathbf {R} =\mathbf {F} -\mathbf {R} \qquad {\text{or}}\qquad {\frac {\partial u_{i}}{\partial X_{K}}}={\frac {\partial x_{i}}{\partial X_{K}}}-\alpha _{iK}=F_{iK}-\alpha _{iK}} where F {\displaystyle \mathbf {F} } is the material deformation gradient tensor and R {\displaystyle \mathbf {R} } is a rotation. === Spatial coordinates (Eulerian description) === In the Eulerian description, the vector extending from a particle P {\displaystyle P} in the undeformed configuration to its location in the deformed configuration is called the displacement vector: U ( x , t ) = U J E J {\displaystyle \mathbf {U} (\mathbf {x} ,t)=U_{J}\mathbf {E} _{J}} where E i {\displaystyle \mathbf {E} _{i}} are the orthonormal unit vectors that define the basis of the spatial (lab frame) coordinate system. Expressed in terms of spatial coordinates, i.e. U {\displaystyle \mathbf {U} } as a function of x {\displaystyle \mathbf {x} } , the displacement field is: U ( x , t ) = b ( t ) + x − X ( x , t ) or U J = b J + α J i x i − X J {\displaystyle \mathbf {U} (\mathbf {x} ,t)=\mathbf {b} (t)+\mathbf {x} -\mathbf {X} (\mathbf {x} ,t)\qquad {\text{or}}\qquad U_{J}=b_{J}+\alpha _{Ji}x_{i}-X_{J}} The spatial derivative, i.e., the partial derivative of the displacement vector with respect to the spatial coordinates, yields the spatial displacement gradient tensor ∇ x U {\displaystyle \nabla _{\mathbf {x} }\mathbf {U} \,\!} . Thus we have, ∇ x U = R T − ∇ x X = R T − F − 1 or ∂ U J ∂ x k = α J k − ∂ X J ∂ x k = α J k − F J k − 1 , {\displaystyle \nabla _{\mathbf {x} }\mathbf {U} =\mathbf {R} ^{T}-\nabla _{\mathbf {x} }\mathbf {X} =\mathbf {R} ^{T}-\mathbf {F} ^{-1}\qquad {\text{or}}\qquad {\frac {\partial U_{J}}{\partial x_{k}}}=\alpha _{Jk}-{\frac {\partial X_{J}}{\partial x_{k}}}=\alpha _{Jk}-F_{Jk}^{-1}\,,} where F − 1 = H {\displaystyle \mathbf {F} ^{-1}=\mathbf {H} } is the spatial deformation gradient tensor. === Relationship between the material and spatial coordinate systems === α J i {\displaystyle \alpha _{Ji}} are the direction cosines between the material and spatial coordinate systems with unit vectors E J {\displaystyle \mathbf {E} _{J}} and e i {\displaystyle \mathbf {e} _{i}\,\!} , respectively. 
Thus E J ⋅ e i = α J i = α i J {\displaystyle \mathbf {E} _{J}\cdot \mathbf {e} _{i}=\alpha _{Ji}=\alpha _{iJ}} The relationship between u i {\displaystyle u_{i}} and U J {\displaystyle U_{J}} is then given by u i = α i J U J or U J = α J i u i {\displaystyle u_{i}=\alpha _{iJ}U_{J}\qquad {\text{or}}\qquad U_{J}=\alpha _{Ji}u_{i}} Knowing that e i = α i J E J {\displaystyle \mathbf {e} _{i}=\alpha _{iJ}\mathbf {E} _{J}} then u ( X , t ) = u i e i = u i ( α i J E J ) = U J E J = U ( x , t ) {\displaystyle \mathbf {u} (\mathbf {X} ,t)=u_{i}\mathbf {e} _{i}=u_{i}(\alpha _{iJ}\mathbf {E} _{J})=U_{J}\mathbf {E} _{J}=\mathbf {U} (\mathbf {x} ,t)} === Combining the coordinate systems of deformed and undeformed configurations === It is common to superimpose the coordinate systems for the deformed and undeformed configurations, which results in b = 0 {\displaystyle \mathbf {b} =0\,\!} , and the direction cosines become Kronecker deltas, i.e., E J ⋅ e i = δ J i = δ i J {\displaystyle \mathbf {E} _{J}\cdot \mathbf {e} _{i}=\delta _{Ji}=\delta _{iJ}} Thus in material (undeformed) coordinates, the displacement may be expressed as: u ( X , t ) = x ( X , t ) − X or u i = x i − δ i J X J {\displaystyle \mathbf {u} (\mathbf {X} ,t)=\mathbf {x} (\mathbf {X} ,t)-\mathbf {X} \qquad {\text{or}}\qquad u_{i}=x_{i}-\delta _{iJ}X_{J}} And in spatial (deformed) coordinates, the displacement may be expressed as: U ( x , t ) = x − X ( x , t ) or U J = δ J i x i − X J {\displaystyle \mathbf {U} (\mathbf {x} ,t)=\mathbf {x} -\mathbf {X} (\mathbf {x} ,t)\qquad {\text{or}}\qquad U_{J}=\delta _{Ji}x_{i}-X_{J}} == See also == Stress Strain == References ==
Wikipedia/Displacement_field_(mechanics)
Coulomb's inverse-square law, or simply Coulomb's law, is an experimental law of physics that calculates the amount of force between two electrically charged particles at rest. This electric force is conventionally called the electrostatic force or Coulomb force. Although the law was known earlier, it was first published in 1785 by French physicist Charles-Augustin de Coulomb. Coulomb's law was essential to the development of the theory of electromagnetism and maybe even its starting point, as it allowed meaningful discussions of the amount of electric charge in a particle. The law states that the magnitude, or absolute value, of the attractive or repulsive electrostatic force between two point charges is directly proportional to the product of the magnitudes of their charges and inversely proportional to the square of the distance between them. Coulomb discovered that bodies with like electrical charges repel: It follows therefore from these three tests, that the repulsive force that the two balls – [that were] electrified with the same kind of electricity – exert on each other, follows the inverse proportion of the square of the distance. Coulomb also showed that oppositely charged bodies attract according to an inverse-square law: | F | = k e | q 1 | | q 2 | r 2 {\displaystyle |F|=k_{\text{e}}{\frac {|q_{1}||q_{2}|}{r^{2}}}} Here, ke is a constant, q1 and q2 are the quantities of each charge, and the scalar r is the distance between the charges. The force is along the straight line joining the two charges. If the charges have the same sign, the electrostatic force between them makes them repel; if they have different signs, the force between them makes them attract. Being an inverse-square law, the law is similar to Isaac Newton's inverse-square law of universal gravitation, but gravitational forces always make things attract, while electrostatic forces make charges attract or repel. Also, gravitational forces are much weaker than electrostatic forces. Coulomb's law can be used to derive Gauss's law, and vice versa. In the case of a single point charge at rest, the two laws are equivalent, expressing the same physical law in different ways. The law has been tested extensively, and observations have upheld the law on the scale from 10−16 m to 108 m. == History == Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers and pieces of paper. Thales of Miletus made the first recorded description of static electricity around 600 BC, when he noticed that friction could make a piece of amber attract small objects. In 1600, English scientist William Gilbert made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the Neo-Latin word electricus ("of amber" or "like amber", from ἤλεκτρον [elektron], the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646. 
Early investigators of the 18th century who suspected that the electrical force diminished with distance as the force of gravity did (i.e., as the inverse square of the distance) included Daniel Bernoulli and Alessandro Volta, both of whom measured the force between plates of a capacitor, and Franz Aepinus who supposed the inverse-square law in 1758. Based on experiments with electrically charged spheres, Joseph Priestley of England was among the first to propose that electrical force followed an inverse-square law, similar to Newton's law of universal gravitation. However, he did not generalize or elaborate on this. In 1767, he conjectured that the force between charges varied as the inverse square of the distance. In 1769, Scottish physicist John Robison announced that, according to his measurements, the force of repulsion between two spheres with charges of the same sign varied as x−2.06. In the early 1770s, the dependence of the force between charged bodies upon both distance and charge had already been discovered, but not published, by Henry Cavendish of England. In his notes, Cavendish wrote, "We may therefore conclude that the electric attraction and repulsion must be inversely as some power of the distance between that of the 2 + ⁠1/50⁠th and that of the 2 − ⁠1/50⁠th, and there is no reason to think that it differs at all from the inverse duplicate ratio". Finally, in 1785, the French physicist Charles-Augustin de Coulomb published his first three reports of electricity and magnetism where he stated his law. This publication was essential to the development of the theory of electromagnetism. He used a torsion balance to study the repulsion and attraction forces of charged particles, and determined that the magnitude of the electric force between two point charges is directly proportional to the product of the charges and inversely proportional to the square of the distance between them. The torsion balance consists of a bar suspended from its middle by a thin fiber. The fiber acts as a very weak torsion spring. In Coulomb's experiment, the torsion balance was an insulating rod with a metal-coated ball attached to one end, suspended by a silk thread. The ball was charged with a known charge of static electricity, and a second charged ball of the same polarity was brought near it. The two charged balls repelled one another, twisting the fiber through a certain angle, which could be read from a scale on the instrument. By knowing how much force it took to twist the fiber through a given angle, Coulomb was able to calculate the force between the balls and derive his inverse-square proportionality law. == Mathematical form == Coulomb's law states that the electrostatic force F 1 {\textstyle \mathbf {F} _{1}} experienced by a charge, q 1 {\displaystyle q_{1}} at position r 1 {\displaystyle \mathbf {r} _{1}} , in the vicinity of another charge, q 2 {\displaystyle q_{2}} at position r 2 {\displaystyle \mathbf {r} _{2}} , in a vacuum is equal to F 1 = q 1 q 2 4 π ε 0 r ^ 12 | r 12 | 2 {\displaystyle \mathbf {F} _{1}={\frac {q_{1}q_{2}}{4\pi \varepsilon _{0}}}{{\hat {\mathbf {r} }}_{12} \over {|\mathbf {r} _{12}|}^{2}}} where r 12 = r 1 − r 2 {\textstyle \mathbf {r_{12}=r_{1}-r_{2}} } is the displacement vector between the charges, r ^ 12 {\textstyle {\hat {\mathbf {r} }}_{12}} a unit vector pointing from q 2 {\textstyle q_{2}} to q 1 {\textstyle q_{1}} , and ε 0 {\displaystyle \varepsilon _{0}} the electric constant. 
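As a quick numeric check of the expression just given (a sketch only; the charge values and separation below are arbitrary illustrative inputs):

import math

eps0 = 8.8541878128e-12            # vacuum permittivity, F/m
k_e = 1.0 / (4.0 * math.pi * eps0)

q1, q2 = 1e-6, -2e-6               # charges in coulombs (assumed)
r = 0.05                           # separation in metres (assumed)

F = k_e * q1 * q2 / r**2           # signed component of F_1 along the unit vector from q2 to q1
print(F)                           # about -7.19 N; the negative sign means q1 is pulled toward q2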
Here, r ^ 12 {\textstyle \mathbf {\hat {r}} _{12}} is used for the vector notation. The electrostatic force F 2 {\textstyle \mathbf {F} _{2}} experienced by q 2 {\displaystyle q_{2}} , according to Newton's third law, is F 2 = − F 1 {\textstyle \mathbf {F} _{2}=-\mathbf {F} _{1}} . If both charges have the same sign (like charges) then the product q 1 q 2 {\displaystyle q_{1}q_{2}} is positive and the direction of the force on q 1 {\displaystyle q_{1}} is given by r ^ 12 {\textstyle {\widehat {\mathbf {r} }}_{12}} ; the charges repel each other. If the charges have opposite signs then the product q 1 q 2 {\displaystyle q_{1}q_{2}} is negative and the direction of the force on q 1 {\displaystyle q_{1}} is − r ^ 12 {\textstyle -{\hat {\mathbf {r} }}_{12}} ; the charges attract each other. === System of discrete charges === The law of superposition allows Coulomb's law to be extended to include any number of point charges. The force acting on a point charge due to a system of point charges is simply the vector addition of the individual forces acting alone on that point charge due to each one of the charges. The resulting force vector is parallel to the electric field vector at that point, with that point charge removed. Force F {\textstyle \mathbf {F} } on a small charge q {\displaystyle q} at position r {\displaystyle \mathbf {r} } , due to a system of n {\textstyle n} discrete charges in vacuum is F ( r ) = q 4 π ε 0 ∑ i = 1 n q i r ^ i | r i | 2 , {\displaystyle \mathbf {F} (\mathbf {r} )={q \over 4\pi \varepsilon _{0}}\sum _{i=1}^{n}q_{i}{{\hat {\mathbf {r} }}_{i} \over {|\mathbf {r} _{i}|}^{2}},} where q i {\displaystyle q_{i}} is the magnitude of the ith charge, r i {\textstyle \mathbf {r} _{i}} is the vector from its position to r {\displaystyle \mathbf {r} } and r ^ i {\textstyle {\hat {\mathbf {r} }}_{i}} is the unit vector in the direction of r i {\displaystyle \mathbf {r} _{i}} . === Continuous charge distribution === In this case, the principle of linear superposition is also used. For a continuous charge distribution, an integral over the region containing the charge is equivalent to an infinite summation, treating each infinitesimal element of space as a point charge d q {\displaystyle dq} . The distribution of charge is usually linear, surface or volumetric. For a linear charge distribution (a good approximation for charge in a wire) where λ ( r ′ ) {\displaystyle \lambda (\mathbf {r} ')} gives the charge per unit length at position r ′ {\displaystyle \mathbf {r} '} , and d ℓ ′ {\displaystyle d\ell '} is an infinitesimal element of length, d q ′ = λ ( r ′ ) d ℓ ′ . {\displaystyle dq'=\lambda (\mathbf {r'} )\,d\ell '.} For a surface charge distribution (a good approximation for charge on a plate in a parallel plate capacitor) where σ ( r ′ ) {\displaystyle \sigma (\mathbf {r} ')} gives the charge per unit area at position r ′ {\displaystyle \mathbf {r} '} , and d A ′ {\displaystyle dA'} is an infinitesimal element of area, d q ′ = σ ( r ′ ) d A ′ . {\displaystyle dq'=\sigma (\mathbf {r'} )\,dA'.} For a volume charge distribution (such as charge within a bulk metal) where ρ ( r ′ ) {\displaystyle \rho (\mathbf {r} ')} gives the charge per unit volume at position r ′ {\displaystyle \mathbf {r} '} , and d V ′ {\displaystyle dV'} is an infinitesimal element of volume, d q ′ = ρ ( r ′ ) d V ′ . 
{\displaystyle dq'=\rho ({\boldsymbol {r'}})\,dV'.} The force on a small test charge q {\displaystyle q} at position r {\displaystyle {\boldsymbol {r}}} in vacuum is given by the integral over the distribution of charge F ( r ) = q 4 π ε 0 ∫ d q ′ r − r ′ | r − r ′ | 3 . {\displaystyle \mathbf {F} (\mathbf {r} )={\frac {q}{4\pi \varepsilon _{0}}}\int dq'{\frac {\mathbf {r} -\mathbf {r'} }{|\mathbf {r} -\mathbf {r'} |^{3}}}.} The "continuous charge" version of Coulomb's law is never supposed to be applied to locations for which | r − r ′ | = 0 {\displaystyle |\mathbf {r} -\mathbf {r'} |=0} because that location would directly overlap with the location of a charged particle (e.g. electron or proton) which is not a valid location to analyze the electric field or potential classically. Charge is always discrete in reality, and the "continuous charge" assumption is just an approximation that is not supposed to allow | r − r ′ | = 0 {\displaystyle |\mathbf {r} -\mathbf {r'} |=0} to be analyzed. == Coulomb constant == The constant of proportionality, 1 4 π ε 0 {\displaystyle {\frac {1}{4\pi \varepsilon _{0}}}} , in Coulomb's law: F 1 = q 1 q 2 4 π ε 0 r ^ 12 | r 12 | 2 {\displaystyle \mathbf {F} _{1}={\frac {q_{1}q_{2}}{4\pi \varepsilon _{0}}}{{\hat {\mathbf {r} }}_{12} \over {|\mathbf {r} _{12}|}^{2}}} is a consequence of historical choices for units.: 4–2  The constant ε 0 {\displaystyle \varepsilon _{0}} is the vacuum electric permittivity. Using the CODATA 2022 recommended value for ε 0 {\displaystyle \varepsilon _{0}} , the Coulomb constant is k e = 1 4 π ε 0 = 8.987 551 7862 ( 14 ) × 10 9 N ⋅ m 2 ⋅ C − 2 {\displaystyle k_{\text{e}}={\frac {1}{4\pi \varepsilon _{0}}}=8.987\ 551\ 7862(14)\times 10^{9}\ \mathrm {N{\cdot }m^{2}{\cdot }C^{-2}} } == Limitations == There are three conditions to be fulfilled for the validity of Coulomb's inverse square law: The charges must have a spherically symmetric distribution (e.g. be point charges, or a charged metal sphere). The charges must not overlap (e.g. they must be distinct point charges). The charges must be stationary with respect to a nonaccelerating frame of reference. The last of these is known as the electrostatic approximation. When movement takes place, an extra factor is introduced, which alters the force produced on the two objects. This extra part of the force is called the magnetic force. For slow movement, the magnetic force is minimal and Coulomb's law can still be considered approximately correct. A more accurate approximation in this case is, however, the Weber force. When the charges are moving more quickly in relation to each other or accelerations occur, Maxwell's equations and Einstein's theory of relativity must be taken into consideration. == Electric field == An electric field is a vector field that associates to each point in space the Coulomb force experienced by a unit test charge. The strength and direction of the Coulomb force F {\textstyle \mathbf {F} } on a charge q t {\textstyle q_{t}} depends on the electric field E {\textstyle \mathbf {E} } established by other charges that it finds itself in, such that F = q t E {\textstyle \mathbf {F} =q_{t}\mathbf {E} } . In the simplest case, the field is considered to be generated solely by a single source point charge. More generally, the field can be generated by a distribution of charges who contribute to the overall by the principle of superposition. 
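A small sketch of the superposition principle just mentioned, summing the contributions of a few discrete point charges to the electric field at one observation point (all charges and positions are arbitrary illustrative values):

import math

eps0 = 8.8541878128e-12
k_e = 1.0 / (4.0 * math.pi * eps0)

# (charge in C, position in m) pairs -- assumed example values.
charges = [(1e-9, (0.0, 0.0)), (-2e-9, (0.1, 0.0)), (1e-9, (0.0, 0.1))]
r_obs = (0.05, 0.05)               # observation point

Ex = Ey = 0.0
for q, (x, y) in charges:
    dx, dy = r_obs[0] - x, r_obs[1] - y
    d = math.hypot(dx, dy)
    # Point-charge field k_e*q/d**2 directed along the unit vector (dx, dy)/d.
    Ex += k_e * q * dx / d**3
    Ey += k_e * q * dy / d**3

print(Ex, Ey)                      # net field components in V/m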
If the field is generated by a positive source point charge q {\textstyle q} , the direction of the electric field points along lines directed radially outwards from it, i.e. in the direction that a positive point test charge q t {\textstyle q_{t}} would move if placed in the field. For a negative point source charge, the direction is radially inwards. The magnitude of the electric field E can be derived from Coulomb's law. By choosing one of the point charges to be the source, and the other to be the test charge, it follows from Coulomb's law that the magnitude of the electric field E created by a single source point charge Q at a certain distance from it r in vacuum is given by | E | = k e | q | r 2 {\displaystyle |\mathbf {E} |=k_{\text{e}}{\frac {|q|}{r^{2}}}} A system of n discrete charges q i {\displaystyle q_{i}} stationed at r i = r − r i {\textstyle \mathbf {r} _{i}=\mathbf {r} -\mathbf {r} _{i}} produces an electric field whose magnitude and direction is, by superposition E ( r ) = 1 4 π ε 0 ∑ i = 1 n q i r ^ i | r i | 2 {\displaystyle \mathbf {E} (\mathbf {r} )={1 \over 4\pi \varepsilon _{0}}\sum _{i=1}^{n}q_{i}{{\hat {\mathbf {r} }}_{i} \over {|\mathbf {r} _{i}|}^{2}}} == Atomic forces == Coulomb's law holds even within atoms, correctly describing the force between the positively charged atomic nucleus and each of the negatively charged electrons. This simple law also correctly accounts for the forces that bind atoms together to form molecules and for the forces that bind atoms and molecules together to form solids and liquids. Generally, as the distance between ions increases, the force of attraction, and binding energy, approach zero and ionic bonding is less favorable. As the magnitude of opposing charges increases, energy increases and ionic bonding is more favorable. == Relation to Gauss's law == === Deriving Gauss's law from Coulomb's law === === Deriving Coulomb's law from Gauss's law === Strictly speaking, Coulomb's law cannot be derived from Gauss's law alone, since Gauss's law does not give any information regarding the curl of E (see Helmholtz decomposition and Faraday's law). However, Coulomb's law can be proven from Gauss's law if it is assumed, in addition, that the electric field from a point charge is spherically symmetric (this assumption, like Coulomb's law itself, is exactly true if the charge is stationary, and approximately true if the charge is in motion). == In relativity == Coulomb's law can be used to gain insight into the form of the magnetic field generated by moving charges since by special relativity, in certain cases the magnetic field can be shown to be a transformation of forces caused by the electric field. When no acceleration is involved in a particle's history, Coulomb's law can be assumed on any test particle in its own inertial frame, supported by symmetry arguments in solving Maxwell's equation, shown above. Coulomb's law can be expanded to moving test particles to be of the same form. This assumption is supported by Lorentz force law which, unlike Coulomb's law is not limited to stationary test charges. Considering the charge to be invariant of observer, the electric and magnetic fields of a uniformly moving point charge can hence be derived by the Lorentz transformation of the four force on the test charge in the charge's frame of reference given by Coulomb's law and attributing magnetic and electric fields by their definitions given by the form of Lorentz force. 
The fields hence found for uniformly moving point charges are given by: E = q 4 π ϵ 0 r 3 1 − β 2 ( 1 − β 2 sin 2 ⁡ θ ) 3 / 2 r {\displaystyle \mathbf {E} ={\frac {q}{4\pi \epsilon _{0}r^{3}}}{\frac {1-\beta ^{2}}{(1-\beta ^{2}\sin ^{2}\theta )^{3/2}}}\mathbf {r} } B = q 4 π ϵ 0 r 3 1 − β 2 ( 1 − β 2 sin 2 ⁡ θ ) 3 / 2 v × r c 2 = v × E c 2 {\displaystyle \mathbf {B} ={\frac {q}{4\pi \epsilon _{0}r^{3}}}{\frac {1-\beta ^{2}}{(1-\beta ^{2}\sin ^{2}\theta )^{3/2}}}{\frac {\mathbf {v} \times \mathbf {r} }{c^{2}}}={\frac {\mathbf {v} \times \mathbf {E} }{c^{2}}}} where q {\displaystyle q} is the charge of the point source, r {\displaystyle \mathbf {r} } is the position vector from the point source to the point in space, v {\displaystyle \mathbf {v} } is the velocity vector of the charged particle, β {\displaystyle \beta } is the ratio of speed of the charged particle divided by the speed of light and θ {\displaystyle \theta } is the angle between r {\displaystyle \mathbf {r} } and v {\displaystyle \mathbf {v} } . This form of solutions need not obey Newton's third law as is the case in the framework of special relativity (yet without violating relativistic-energy momentum conservation). Note that the expression for electric field reduces to Coulomb's law for non-relativistic speeds of the point charge and that the magnetic field in non-relativistic limit (approximating β ≪ 1 {\displaystyle \beta \ll 1} ) can be applied to electric currents to get the Biot–Savart law. These solutions, when expressed in retarded time also correspond to the general solution of Maxwell's equations given by solutions of Liénard–Wiechert potential, due to the validity of Coulomb's law within its specific range of application. Also note that the spherical symmetry for gauss law on stationary charges is not valid for moving charges owing to the breaking of symmetry by the specification of direction of velocity in the problem. Agreement with Maxwell's equations can also be manually verified for the above two equations. == Coulomb potential == === Quantum field theory === The Coulomb potential admits continuum states (with E > 0), describing electron-proton scattering, as well as discrete bound states, representing the hydrogen atom. It can also be derived within the non-relativistic limit between two charged particles, as follows: Under Born approximation, in non-relativistic quantum mechanics, the scattering amplitude A ( | p ⟩ → | p ′ ⟩ ) {\textstyle {\mathcal {A}}(|\mathbf {p} \rangle \to |\mathbf {p} '\rangle )} is: A ( | p ⟩ → | p ′ ⟩ ) − 1 = 2 π δ ( E p − E p ′ ) ( − i ) ∫ d 3 r V ( r ) e − i ( p − p ′ ) r {\displaystyle {\mathcal {A}}(|\mathbf {p} \rangle \to |\mathbf {p} '\rangle )-1=2\pi \delta (E_{p}-E_{p'})(-i)\int d^{3}\mathbf {r} \,V(\mathbf {r} )e^{-i(\mathbf {p} -\mathbf {p} ')\mathbf {r} }} This is to be compared to the: ∫ d 3 k ( 2 π ) 3 e i k r 0 ⟨ p ′ , k | S | p , k ⟩ {\displaystyle \int {\frac {d^{3}k}{(2\pi )^{3}}}e^{ikr_{0}}\langle p',k|S|p,k\rangle } where we look at the (connected) S-matrix entry for two electrons scattering off each other, treating one with "fixed" momentum as the source of the potential, and the other scattering off that potential. 
Using the Feynman rules to compute the S-matrix element, we obtain in the non-relativistic limit with m 0 ≫ | p | {\displaystyle m_{0}\gg |\mathbf {p} |} ⟨ p ′ , k | S | p , k ⟩ | c o n n = − i e 2 | p − p ′ | 2 − i ε ( 2 m ) 2 δ ( E p , k − E p ′ , k ) ( 2 π ) 4 δ ( p − p ′ ) {\displaystyle \langle p',k|S|p,k\rangle |_{conn}=-i{\frac {e^{2}}{|\mathbf {p} -\mathbf {p} '|^{2}-i\varepsilon }}(2m)^{2}\delta (E_{p,k}-E_{p',k})(2\pi )^{4}\delta (\mathbf {p} -\mathbf {p} ')} Comparing with the QM scattering, we have to discard the ( 2 m ) 2 {\displaystyle (2m)^{2}} factor, as it arises from the differing normalization of momentum eigenstates in QFT compared to QM, and obtain: ∫ V ( r ) e − i ( p − p ′ ) r d 3 r = e 2 | p − p ′ | 2 − i ε {\displaystyle \int V(\mathbf {r} )e^{-i(\mathbf {p} -\mathbf {p} ')\mathbf {r} }d^{3}\mathbf {r} ={\frac {e^{2}}{|\mathbf {p} -\mathbf {p} '|^{2}-i\varepsilon }}} where Fourier transforming both sides, solving the integral and taking ε → 0 {\displaystyle \varepsilon \to 0} at the end will yield V ( r ) = e 2 4 π r {\displaystyle V(r)={\frac {e^{2}}{4\pi r}}} as the Coulomb potential. However, the equivalent results of the classical Born derivations for the Coulomb problem are thought to be strictly accidental. The Coulomb potential, and its derivation, can be seen as a special case of the Yukawa potential, which is the case where the exchanged boson – the photon – has no rest mass. == Verification == It is possible to verify Coulomb's law with a simple experiment. Consider two small spheres of mass m {\displaystyle m} and same-sign charge q {\displaystyle q} , hanging from two ropes of negligible mass of length l {\displaystyle l} . Three forces act on each sphere: the weight m g {\displaystyle mg} , the rope tension T {\displaystyle \mathbf {T} } and the electric force F {\displaystyle \mathbf {F} } . In the equilibrium state: T sin θ1 = F1 (1) and T cos θ1 = mg (2). Dividing (1) by (2): tan θ1 = F1/(mg), so F1 = mg tan θ1. Let L 1 {\displaystyle \mathbf {L} _{1}} be the distance between the charged spheres; the repulsion force between them F 1 {\displaystyle \mathbf {F} _{1}} , assuming Coulomb's law is correct, is equal to F1 = q^2/(4πε0 L1^2), so: q^2/(4πε0 L1^2) = mg tan θ1 (4). If we now discharge one of the spheres, and we put it in contact with the charged sphere, each one of them acquires a charge q 2 {\textstyle {\frac {q}{2}}} . In the equilibrium state, the distance between the charges will be L 2 < L 1 {\textstyle \mathbf {L} _{2}<\mathbf {L} _{1}} and the repulsion force between them will be: F2 = (q/2)^2/(4πε0 L2^2) = (q^2/4)/(4πε0 L2^2). We know that F 2 = m g tan ⁡ θ 2 {\displaystyle \mathbf {F} _{2}=mg\tan \theta _{2}} and: q 2 4 4 π ε 0 L 2 2 = m g tan ⁡ θ 2 {\displaystyle {\frac {\frac {q^{2}}{4}}{4\pi \varepsilon _{0}L_{2}^{2}}}=mg\tan \theta _{2}} (5). Dividing (4) by (5), we get: 4 L2^2/L1^2 = tan θ1/tan θ2 (6). Measuring the angles θ 1 {\displaystyle \theta _{1}} and θ 2 {\displaystyle \theta _{2}} and the distance between the charges L 1 {\displaystyle \mathbf {L} _{1}} and L 2 {\displaystyle \mathbf {L} _{2}} is sufficient to verify that the equality is true, taking into account the experimental error. In practice, angles can be difficult to measure, so if the length of the ropes is sufficiently great, the angles will be small enough to make the following approximation: tan θ ≈ sin θ = (L/2)/l. Using this approximation, the relationship (6) becomes the much simpler expression: L1^3 ≈ 4 L2^3, that is, L1/L2 ≈ 4^(1/3). In this way, the verification is limited to measuring the distance between the charges and checking that the ratio approximates the theoretical value. == See also == == References == Spavieri, G., Gillies, G. T., & Rodriguez, M. (2004). Physical implications of Coulomb’s Law. 
Metrologia, 41(5), S159–S170. doi:10.1088/0026-1394/41/5/s06 == Related reading == Coulomb, Charles Augustin (1788) [1785]. "Premier mémoire sur l'électricité et le magnétisme". Histoire de l'Académie Royale des Sciences. Imprimerie Royale. pp. 569–577. Coulomb, Charles Augustin (1788) [1785]. "Second mémoire sur l'électricité et le magnétisme". Histoire de l'Académie Royale des Sciences. Imprimerie Royale. pp. 578–611. Coulomb, Charles Augustin (1788) [1785]. "Troisième mémoire sur l'électricité et le magnétisme". Histoire de l'Académie Royale des Sciences. Imprimerie Royale. pp. 612–638. Griffiths, David J. (1999). Introduction to Electrodynamics (3rd ed.). Prentice Hall. ISBN 978-0-13-805326-0. Tamm, Igor E. (1979) [1976]. Fundamentals of the Theory of Electricity (9th ed.). Moscow: Mir. pp. 23–27. Tipler, Paul A.; Mosca, Gene (2008). Physics for Scientists and Engineers (6th ed.). New York: W. H. Freeman and Company. ISBN 978-0-7167-8964-2. LCCN 2007010418. Young, Hugh D.; Freedman, Roger A. (2010). Sears and Zemansky's University Physics: With Modern Physics (13th ed.). Addison-Wesley (Pearson). ISBN 978-0-321-69686-1. == External links == Coulomb's Law on Project PHYSNET Electricity and the Atom Archived 2009-02-21 at the Wayback Machine—a chapter from an online textbook A maze game for teaching Coulomb's law—a game created by the Molecular Workbench software Electric Charges, Polarization, Electric Force, Coulomb's Law Walter Lewin, 8.02 Electricity and Magnetism, Spring 2002: Lecture 1 (video). MIT OpenCourseWare. License: Creative Commons Attribution-Noncommercial-Share Alike.
Wikipedia/Electrostatic_force
In nuclear physics and particle physics, the weak interaction, weak force or the weak nuclear force, is one of the four known fundamental interactions, with the others being electromagnetism, the strong interaction, and gravitation. It is the mechanism of interaction between subatomic particles that is responsible for the radioactive decay of atoms: The weak interaction participates in nuclear fission and nuclear fusion. The theory describing its behaviour and effects is sometimes called quantum flavordynamics (QFD); however, the term QFD is rarely used, because the weak force is better understood by electroweak theory (EWT). The effective range of the weak force is limited to subatomic distances and is less than the diameter of a proton. == Background == The Standard Model of particle physics provides a uniform framework for understanding electromagnetic, weak, and strong interactions. An interaction occurs when two particles (typically, but not necessarily, half-integer spin fermions) exchange integer-spin, force-carrying bosons. The fermions involved in such exchanges can be either elementary (e.g., electrons or quarks) or composite (e.g. protons or neutrons), although at the deepest levels, all weak interactions ultimately are between elementary particles. In the weak interaction, fermions can exchange three types of force carriers, namely W+, W−, and Z bosons. The masses of these bosons are far greater than the mass of a proton or neutron, which is consistent with the short range of the weak force. In fact, the force is termed weak because its field strength over any set distance is typically several orders of magnitude less than that of the electromagnetic force, which itself is further orders of magnitude less than the strong nuclear force. The weak interaction is the only fundamental interaction that breaks parity symmetry, and similarly, but far more rarely, the only interaction to break charge–parity symmetry. Quarks, which make up composite particles like neutrons and protons, come in six "flavours" – up, down, charm, strange, top and bottom – which give those composite particles their properties. The weak interaction is unique in that it allows quarks to swap their flavour for another. The swapping of those properties is mediated by the force carrier bosons. For example, during beta-minus decay, a down quark within a neutron is changed into an up quark, thus converting the neutron to a proton and resulting in the emission of an electron and an electron antineutrino. The weak interaction is important in the fusion of hydrogen into helium in a star. This is because it can convert a proton (hydrogen) into a neutron to form deuterium, which is important for the continuation of nuclear fusion to form helium. The accumulation of neutrons facilitates the buildup of heavy nuclei in a star. Most fermions decay by a weak interaction over time. Such decay makes radiocarbon dating possible, as carbon-14 decays through the weak interaction to nitrogen-14. It can also create radioluminescence, commonly used in tritium luminescence, and in the related field of betavoltaics (but not similar to radium luminescence). The electroweak force is believed to have separated into the electromagnetic and weak forces during the quark epoch of the early universe. == History == In 1933, Enrico Fermi proposed the first theory of the weak interaction, known as Fermi's interaction. He suggested that beta decay could be explained by a four-fermion interaction, involving a contact force with no range. 
In the mid-1950s, Chen-Ning Yang and Tsung-Dao Lee first suggested that the handedness of the spins of particles in the weak interaction might violate the conservation of parity symmetry. In 1957, the Wu experiment, carried out by Chien-Shiung Wu and collaborators, confirmed the symmetry violation. In the 1960s, Sheldon Glashow, Abdus Salam and Steven Weinberg unified the electromagnetic force and the weak interaction by showing them to be two aspects of a single force, now termed the electroweak force. The existence of the W and Z bosons was not directly confirmed until 1983. == Properties == The electrically charged weak interaction is unique in a number of respects: It is the only interaction that can change the flavour of quarks and leptons (i.e., change one type of quark into another). It is the only interaction that violates P, or parity symmetry. It is also the only one that violates charge–parity (CP) symmetry. Both the electrically charged and the electrically neutral interactions are mediated (propagated) by force carrier particles that have significant masses, an unusual feature which is explained in the Standard Model by the Higgs mechanism. Due to their large mass (approximately 90 GeV/c2), these carrier particles, called the W and Z bosons, are short-lived with a lifetime of under 10−24 seconds. The weak interaction has a coupling constant (an indicator of how frequently interactions occur) between 10−7 and 10−6, compared to the electromagnetic coupling constant of about 10−2 and the strong interaction coupling constant of about 1; consequently the weak interaction is "weak" in terms of intensity. The weak interaction has a very short effective range (around 10−17 to 10−16 m (0.01 to 0.1 fm)). At distances around 10−18 meters (0.001 fm), the weak interaction has an intensity of a similar magnitude to the electromagnetic force, but this starts to decrease exponentially with increasing distance. Scaled up by just one and a half orders of magnitude, at distances of around 3×10−17 m, the weak interaction becomes 10,000 times weaker. The weak interaction affects all the fermions of the Standard Model, as well as the Higgs boson; neutrinos interact only through gravity and the weak interaction. The weak interaction does not produce bound states, nor does it involve binding energy – something that gravity does on an astronomical scale, the electromagnetic force does at the molecular and atomic levels, and the strong nuclear force does only at the subatomic level, inside of nuclei. Its most noticeable effect is due to its first unique feature: The charged weak interaction causes flavour change. For example, a neutron is heavier than a proton (its partner nucleon) and can decay into a proton by changing the flavour (type) of one of its two down quarks to an up quark. Neither the strong interaction nor electromagnetism permits flavour changing, so this can only proceed by weak decay; without weak decay, quark properties such as strangeness and charm (associated with the strange quark and charm quark, respectively) would also be conserved across all interactions. 
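As a small numerical aside (not from the article; the particle masses below are standard reference values, rounded), the following Python snippet checks that the neutron-to-proton decay just described is energetically allowed, with the released energy shared between the electron and the antineutrino.

# Masses in MeV/c^2 (standard reference values, rounded).
M_NEUTRON = 939.565
M_PROTON = 938.272
M_ELECTRON = 0.511

# Energy released in n -> p + e- + anti-nu_e (the antineutrino mass is negligible).
q_value = M_NEUTRON - M_PROTON - M_ELECTRON
print(f"Q-value of neutron beta decay: {q_value:.3f} MeV")  # ~0.782 MeV > 0, so the decay can proceed

assert q_value > 0, "the decay would be forbidden if the products were heavier than the neutron"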
All mesons are unstable because of weak decay. In the process known as beta decay, a down quark in the neutron can change into an up quark by emitting a virtual W− boson, which then decays into an electron and an electron antineutrino. Another example is electron capture – a common variant of radioactive decay – wherein a proton and an electron within an atom interact and are changed to a neutron (an up quark is changed to a down quark), and an electron neutrino is emitted. Due to the large masses of the W bosons, particle transformations or decays (e.g., flavour change) that depend on the weak interaction typically occur much more slowly than transformations or decays that depend only on the strong or electromagnetic forces. For example, a neutral pion decays electromagnetically, and so has a lifetime of only about 10−16 seconds. In contrast, a charged pion can only decay through the weak interaction, and so lives about 10−8 seconds, or a hundred million times longer than a neutral pion. A particularly extreme example is the weak-force decay of a free neutron, which takes about 15 minutes. === Weak isospin and weak hypercharge === All particles have a property called weak isospin (symbol T3), which serves as an additive quantum number that restricts how the particle can interact with the W± of the weak force. Weak isospin plays the same role in the weak interaction with W± as electric charge does in electromagnetism, and color charge in the strong interaction; a different number with a similar name, weak charge, discussed below, is used for interactions with the Z0. All left-handed fermions have a weak isospin value of either +1/2 or −1/2; all right-handed fermions have 0 isospin. For example, the up quark has T3 = +1/2 and the down quark has T3 = −1/2. A quark never decays through the weak interaction into a quark of the same T3: Quarks with a T3 of +1/2 only decay into quarks with a T3 of −1/2 and conversely. In any given strong, electromagnetic, or weak interaction, weak isospin is conserved: The sum of the weak isospin numbers of the particles entering the interaction equals the sum of the weak isospin numbers of the particles exiting that interaction. For example, a (left-handed) π+, with a weak isospin of +1, normally decays into a νμ (with T3 = +1/2) and a μ+ (as a right-handed antiparticle, +1/2). For the development of the electroweak theory, another property, weak hypercharge, was invented, defined as Y W = 2 ( Q − T 3 ) , {\displaystyle Y_{\text{W}}=2\,(Q-T_{3}),} where YW is the weak hypercharge of a particle with electrical charge Q (in elementary charge units) and weak isospin T3. Weak hypercharge is the generator of the U(1) component of the electroweak gauge group; whereas some particles have a weak isospin of zero, all known spin-1/2 particles have a non-zero weak hypercharge. == Interaction types == There are two types of weak interaction (called vertices). The first type is called the "charged-current interaction" because the weakly interacting fermions form a current with total electric charge that is nonzero. The second type is called the "neutral-current interaction" because the weakly interacting fermions form a current with total electric charge of zero. It is responsible for the (rare) deflection of neutrinos. The two types of interaction follow different selection rules. 
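As a quick numerical check of the weak isospin and hypercharge assignments given in the previous section, the minimal Python sketch below evaluates Y_W = 2(Q − T3) for a few fermions; the assignments used are the standard left-handed ones and are supplied here for illustration rather than taken from the article.

from fractions import Fraction as F

# (name, electric charge Q, weak isospin T3)
fermions = [
    ("electron (left-handed)",    F(-1),    F(-1, 2)),
    ("electron neutrino",         F(0),     F(1, 2)),
    ("up quark (left-handed)",    F(2, 3),  F(1, 2)),
    ("down quark (left-handed)",  F(-1, 3), F(-1, 2)),
    ("electron (right-handed)",   F(-1),    F(0)),   # right-handed fermions have T3 = 0
]

for name, Q, T3 in fermions:
    Y_W = 2 * (Q - T3)   # weak hypercharge as defined in the text
    print(f"{name:28s}  Q = {Q!s:>4}  T3 = {T3!s:>4}  Y_W = {Y_W!s}")

All the listed values come out non-zero, consistent with the statement above that every known spin-1/2 particle has a non-zero weak hypercharge.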
This naming convention is often misunderstood to label the electric charge of the W and Z bosons, however the naming convention predates the concept of the mediator bosons, and clearly (at least in name) labels the charge of the current (formed from the fermions), not necessarily the bosons. === Charged-current interaction === In one type of charged current interaction, a charged lepton (such as an electron or a muon, having a charge of −1) can absorb a W+ boson (a particle with a charge of +1) and be thereby converted into a corresponding neutrino (with a charge of 0), where the type ("flavour") of neutrino (electron νe, muon νμ, or tau ντ) is the same as the type of lepton in the interaction, for example: μ − + W + → ν μ {\displaystyle \mu ^{-}+\mathrm {W} ^{+}\to \nu _{\mu }} Similarly, a down-type quark (d, s, or b, with a charge of ⁠−+ 1 /3⁠) can be converted into an up-type quark (u, c, or t, with a charge of ⁠++ 2 /3⁠), by emitting a W− boson or by absorbing a W+ boson. More precisely, the down-type quark becomes a quantum superposition of up-type quarks: that is to say, it has a possibility of becoming any one of the three up-type quarks, with the probabilities given in the CKM matrix tables. Conversely, an up-type quark can emit a W+ boson, or absorb a W− boson, and thereby be converted into a down-type quark, for example: d → u + W − d + W + → u c → s + W + c + W − → s {\displaystyle {\begin{aligned}\mathrm {d} &\to \mathrm {u} +\mathrm {W} ^{-}\\\mathrm {d} +\mathrm {W} ^{+}&\to \mathrm {u} \\\mathrm {c} &\to \mathrm {s} +\mathrm {W} ^{+}\\\mathrm {c} +\mathrm {W} ^{-}&\to \mathrm {s} \end{aligned}}} The W boson is unstable so will rapidly decay, with a very short lifetime. For example: W − → e − + ν ¯ e W + → e + + ν e {\displaystyle {\begin{aligned}\mathrm {W} ^{-}&\to \mathrm {e} ^{-}+{\bar {\nu }}_{\mathrm {e} }~\\\mathrm {W} ^{+}&\to \mathrm {e} ^{+}+\nu _{\mathrm {e} }~\end{aligned}}} Decay of a W boson to other products can happen, with varying probabilities. In the so-called beta decay of a neutron (see picture, above), a down quark within the neutron emits a virtual W− boson and is thereby converted into an up quark, converting the neutron into a proton. Because of the limited energy involved in the process (i.e., the mass difference between the down quark and the up quark), the virtual W− boson can only carry sufficient energy to produce an electron and an electron-antineutrino – the two lowest-possible masses among its prospective decay products. At the quark level, the process can be represented as: d → u + e − + ν ¯ e {\displaystyle \mathrm {d} \to \mathrm {u} +\mathrm {e} ^{-}+{\bar {\nu }}_{\mathrm {e} }~} === Neutral-current interaction === In neutral current interactions, a quark or a lepton (e.g., an electron or a muon) emits or absorbs a neutral Z boson. For example: e − → e − + Z 0 {\displaystyle \mathrm {e} ^{-}\to \mathrm {e} ^{-}+\mathrm {Z} ^{0}} Like the W± bosons, the Z0 boson also decays rapidly, for example: Z 0 → b + b ¯ {\displaystyle \mathrm {Z} ^{0}\to \mathrm {b} +{\bar {\mathrm {b} }}} Unlike the charged-current interaction, whose selection rules are strictly limited by chirality, electric charge, and / or weak isospin, the neutral-current Z0 interaction can cause any two fermions in the standard model to deflect: Either particles or anti-particles, with any electric charge, and both left- and right-chirality, although the strength of the interaction differs. 
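Relating back to the charged-current quark mixing described above, the sketch below uses approximate magnitudes of the first row of the CKM matrix (standard experimental values supplied for illustration, not taken from this article) to show that the probabilities for an up quark to convert into each down-type quark at a W vertex sum to one, as unitarity of the CKM matrix requires.

# Approximate magnitudes of the first-row CKM elements (illustrative experimental values):
# couplings of the up quark to the d, s and b quarks.
V_ud, V_us, V_ub = 0.974, 0.225, 0.004

probabilities = {"d": V_ud**2, "s": V_us**2, "b": V_ub**2}
for quark, p in probabilities.items():
    print(f"u -> {quark} via W: probability ~ {p:.4f}")

# Unitarity of the CKM matrix means the probabilities sum to 1 (within experimental error).
print("sum =", round(sum(probabilities.values()), 4))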
The quantum number weak charge (QW) serves the same role in the neutral current interaction with the Z0 that electric charge (Q, with no subscript) does in the electromagnetic interaction: It quantifies the vector part of the interaction. Its value is given by: Q w = 2 T 3 − 4 Q sin 2 ⁡ θ w = 2 T 3 − Q + ( 1 − 4 sin 2 ⁡ θ w ) Q . {\displaystyle Q_{\mathsf {w}}=2\,T_{3}-4\,Q\,\sin ^{2}\theta _{\mathsf {w}}=2\,T_{3}-Q+(1-4\,\sin ^{2}\theta _{\mathsf {w}})\,Q~.} Since the weak mixing angle ⁠ θ w ≈ 29 ∘ {\displaystyle \theta _{\mathsf {w}}\approx 29^{\circ }} ⁠, the parenthetic expression ⁠ ( 1 − 4 sin 2 ⁡ θ w ) ≈ 0.060 {\displaystyle (1-4\,\sin ^{2}\theta _{\mathsf {w}})\approx 0.060} ⁠, with its value varying slightly with the momentum difference (called "running") between the particles involved. Hence Q w ≈ 2 T 3 − Q = sgn ⁡ ( Q ) ( 1 − | Q | ) , {\displaystyle \ Q_{\mathsf {w}}\approx 2\ T_{3}-Q=\operatorname {sgn}(Q)\ {\big (}1-|Q|{\big )}\ ,} since by convention ⁠ sgn ⁡ T 3 ≡ sgn ⁡ Q {\displaystyle \operatorname {sgn} T_{3}\equiv \operatorname {sgn} Q} ⁠, and for all fermions involved in the weak interaction ⁠ T 3 = ± 1 2 {\displaystyle T_{3}=\pm {\tfrac {1}{2}}} ⁠. The weak charge of charged leptons is then close to zero, so these mostly interact with the Z boson through the axial coupling. == Electroweak theory == The Standard Model of particle physics describes the electromagnetic interaction and the weak interaction as two different aspects of a single electroweak interaction. This theory was developed around 1968 by Sheldon Glashow, Abdus Salam, and Steven Weinberg, and they were awarded the 1979 Nobel Prize in Physics for their work. The Higgs mechanism provides an explanation for the presence of three massive gauge bosons (W+, W−, Z0, the three carriers of the weak interaction), and the photon (γ, the massless gauge boson that carries the electromagnetic interaction). According to the electroweak theory, at very high energies, the universe has four components of the Higgs field whose interactions are carried by four massless scalar bosons forming a complex scalar Higgs field doublet. Likewise, there are four massless electroweak vector bosons, each similar to the photon. However, at low energies, this gauge symmetry is spontaneously broken down to the U(1) symmetry of electromagnetism, since one of the Higgs fields acquires a vacuum expectation value. Naïvely, the symmetry-breaking would be expected to produce three massless bosons, but instead those "extra" three Higgs bosons become incorporated into the three weak bosons, which then acquire mass through the Higgs mechanism. These three composite bosons are the W+, W−, and Z0 bosons actually observed in the weak interaction. The fourth electroweak gauge boson is the photon (γ) of electromagnetism, which does not couple to any of the Higgs fields and so remains massless. This theory has made a number of predictions, including a prediction of the masses of the Z and W bosons before their discovery and detection in 1983. On 4 July 2012, the CMS and the ATLAS experimental teams at the Large Hadron Collider independently announced that they had confirmed the formal discovery of a previously unknown boson of mass between 125 and 127 GeV/c2, whose behaviour so far was "consistent with" a Higgs boson, while adding a cautious note that further data and analysis were needed before positively identifying the new boson as being a Higgs boson of some type. By 14 March 2013, a Higgs boson was tentatively confirmed to exist. 
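The weak-charge formula given earlier in this section can be checked numerically. The following Python sketch (illustrative only; the weak mixing angle is the approximate 29° value quoted in the text, and the fermion assignments are the standard left-handed ones) compares the exact expression 2T3 − 4Q sin²θw with the approximation sgn(Q)(1 − |Q|).

import math

theta_w = math.radians(29)        # approximate weak mixing angle quoted in the text
s2 = math.sin(theta_w) ** 2
print(f"1 - 4 sin^2(theta_w) = {1 - 4 * s2:.3f}")   # ~0.060, as stated above

def weak_charge(T3, Q):
    return 2 * T3 - 4 * Q * s2    # exact expression from the text

def weak_charge_approx(Q):
    return math.copysign(1 - abs(Q), Q) if Q != 0 else None

# (name, T3, Q) for a few left-handed fermions
for name, T3, Q in [("neutrino", +0.5, 0.0), ("electron", -0.5, -1.0),
                    ("up quark", +0.5, +2/3), ("down quark", -0.5, -1/3)]:
    approx = weak_charge_approx(Q)
    line = f"{name:10s} Q_w = {weak_charge(T3, Q):+.3f}"
    if approx is not None:
        line += f"  (approx {approx:+.3f})"
    print(line)

The electron's weak charge comes out at about −0.06, illustrating the remark above that the weak charge of charged leptons is close to zero.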
In a speculative case where the electroweak symmetry breaking scale were lowered, the unbroken SU(2) interaction would eventually become confining. Alternative models where SU(2) becomes confining above that scale appear quantitatively similar to the Standard Model at lower energies, but dramatically different above symmetry breaking. == Violation of symmetry == The laws of nature were long thought to remain the same under mirror reflection. The results of an experiment viewed via a mirror were expected to be identical to the results of a separately constructed, mirror-reflected copy of the experimental apparatus watched through the mirror. This so-called law of parity conservation was known to be respected by classical gravitation, electromagnetism and the strong interaction; it was assumed to be a universal law. However, in the mid-1950s Chen-Ning Yang and Tsung-Dao Lee suggested that the weak interaction might violate this law. Chien Shiung Wu and collaborators in 1957 discovered that the weak interaction violates parity, earning Yang and Lee the 1957 Nobel Prize in Physics. Although the weak interaction was once described by Fermi's theory, the discovery of parity violation and renormalization theory suggested that a new approach was needed. In 1957, Robert Marshak and George Sudarshan and, somewhat later, Richard Feynman and Murray Gell-Mann proposed a V − A (vector minus axial vector or left-handed) Lagrangian for weak interactions. In this theory, the weak interaction acts only on left-handed particles (and right-handed antiparticles). Since the mirror reflection of a left-handed particle is right-handed, this explains the maximal violation of parity. The V − A theory was developed before the discovery of the Z boson, so it did not include the right-handed fields that enter in the neutral current interaction. However, this theory allowed a compound symmetry CP to be conserved. CP combines parity P (switching left to right) with charge conjugation C (switching particles with antiparticles). Physicists were again surprised when in 1964, James Cronin and Val Fitch provided clear evidence in kaon decays that CP symmetry could be broken too, winning them the 1980 Nobel Prize in Physics. In 1973, Makoto Kobayashi and Toshihide Maskawa showed that CP violation in the weak interaction required more than two generations of particles, effectively predicting the existence of a then unknown third generation. This discovery earned them half of the 2008 Nobel Prize in Physics. Unlike parity violation, CP violation occurs only in rare circumstances. Despite its limited occurrence under present conditions, it is widely believed to be the reason that there is much more matter than antimatter in the universe, and thus forms one of Andrei Sakharov's three conditions for baryogenesis. == See also == Weakless universe – the postulate that weak interactions are not anthropically necessary Gravity Strong interaction Electromagnetism == Footnotes == == References == == Sources == === Technical === === For general readers === == External links == Harry Cheung, The Weak Force @Fermilab Fundamental Forces @Hyperphysics, Georgia State University. Brian Koberlein, What is the weak force?
Wikipedia/Weak_force
In quantum mechanics, the expectation value is the probabilistic expected value of the result (measurement) of an experiment. It can be thought of as an average of all the possible outcomes of a measurement as weighted by their likelihood, and as such it is not the most probable value of a measurement; indeed the expectation value may have zero probability of occurring (e.g. measurements which can only yield integer values may have a non-integer mean), like the expected value from statistics. It is a fundamental concept in all areas of quantum physics. == Operational definition == Consider an operator A {\displaystyle A} . The expectation value is then ⟨ A ⟩ = ⟨ ψ | A | ψ ⟩ {\displaystyle \langle A\rangle =\langle \psi |A|\psi \rangle } in Dirac notation with | ψ ⟩ {\displaystyle |\psi \rangle } a normalized state vector. == Formalism in quantum mechanics == In quantum theory, an experimental setup is described by the observable A {\displaystyle A} to be measured, and the state σ {\displaystyle \sigma } of the system. The expectation value of A {\displaystyle A} in the state σ {\displaystyle \sigma } is denoted as ⟨ A ⟩ σ {\displaystyle \langle A\rangle _{\sigma }} . Mathematically, A {\displaystyle A} is a self-adjoint operator on a separable complex Hilbert space. In the most commonly used case in quantum mechanics, σ {\displaystyle \sigma } is a pure state, described by a normalized vector ψ {\displaystyle \psi } in the Hilbert space. The expectation value of A {\displaystyle A} in the state ψ {\displaystyle \psi } is defined as ⟨ A ⟩ ψ = ⟨ ψ | A | ψ ⟩ . (1) If dynamics is considered, either the vector ψ {\displaystyle \psi } or the operator A {\displaystyle A} is taken to be time-dependent, depending on whether the Schrödinger picture or Heisenberg picture is used. The evolution of the expectation value does not depend on this choice, however. If A {\displaystyle A} has a complete set of eigenvectors ϕ j {\displaystyle \phi _{j}} , with eigenvalues a j {\displaystyle a_{j}} so that A = ∑ j a j | ϕ j ⟩ ⟨ ϕ j | , {\displaystyle A=\sum _{j}a_{j}|\phi _{j}\rangle \langle \phi _{j}|,} then (1) can be expressed as ⟨ A ⟩ ψ = ∑ j a j | ⟨ ψ | ϕ j ⟩ | 2 . (2) This expression is similar to the arithmetic mean, and illustrates the physical meaning of the mathematical formalism: The eigenvalues a j {\displaystyle a_{j}} are the possible outcomes of the experiment, and their corresponding coefficient | ⟨ ψ | ϕ j ⟩ | 2 {\displaystyle |\langle \psi |\phi _{j}\rangle |^{2}} is the probability that this outcome will occur; it is often called the transition probability. A particularly simple case arises when A {\displaystyle A} is a projection, and thus has only the eigenvalues 0 and 1. This physically corresponds to a "yes-no" type of experiment. In this case, the expectation value is the probability that the experiment results in "1", and it can be computed as ⟨ A ⟩ ψ = ‖ A ψ ‖ 2 . (3) In quantum theory, it is also possible for an operator to have a non-discrete spectrum, such as the position operator X {\displaystyle X} in quantum mechanics. This operator has a completely continuous spectrum, with eigenvalues and eigenvectors depending on a continuous parameter, x {\displaystyle x} . Specifically, the operator X {\displaystyle X} acts on a spatial vector | x ⟩ {\displaystyle |x\rangle } as X | x ⟩ = x | x ⟩ {\displaystyle X|x\rangle =x|x\rangle } . In this case, the vector ψ {\displaystyle \psi } can be written as a complex-valued function ψ ( x ) {\displaystyle \psi (x)} on the spectrum of X {\displaystyle X} (usually the real line). 
This is formally achieved by projecting the state vector | ψ ⟩ {\displaystyle |\psi \rangle } onto the eigenstates of the operator, as in the discrete case ψ ( x ) ≡ ⟨ x | ψ ⟩ {\textstyle \psi (x)\equiv \langle x|\psi \rangle } . It happens that the eigenvectors of the position operator form a complete basis for the vector space of states, and therefore obey a completeness relation in quantum mechanics: ∫ | x ⟩ ⟨ x | d x ≡ I {\displaystyle \int |x\rangle \langle x|\,dx\equiv \mathbb {I} } The above may be used to derive the common, integral expression for the expected value (4), by inserting identities into the vector expression of expected value, then expanding in the position basis: ⟨ X ⟩ ψ = ⟨ ψ | X | ψ ⟩ = ⟨ ψ | I X I | ψ ⟩ = ∬ ⟨ ψ | x ⟩ ⟨ x | X | x ′ ⟩ ⟨ x ′ | ψ ⟩ d x d x ′ = ∬ ⟨ x | ψ ⟩ ∗ x ′ ⟨ x | x ′ ⟩ ⟨ x ′ | ψ ⟩ d x d x ′ = ∬ ⟨ x | ψ ⟩ ∗ x ′ δ ( x − x ′ ) ⟨ x ′ | ψ ⟩ d x d x ′ = ∫ ψ ( x ) ∗ x ψ ( x ) d x = ∫ x ψ ( x ) ∗ ψ ( x ) d x = ∫ x | ψ ( x ) | 2 d x {\displaystyle {\begin{aligned}\langle X\rangle _{\psi }&=\langle \psi |X|\psi \rangle =\langle \psi |\mathbb {I} X\mathbb {I} |\psi \rangle \\&=\iint \langle \psi |x\rangle \langle x|X|x'\rangle \langle x'|\psi \rangle dx\ dx'\\&=\iint \langle x|\psi \rangle ^{*}x'\langle x|x'\rangle \langle x'|\psi \rangle dx\ dx'\\&=\iint \langle x|\psi \rangle ^{*}x'\delta (x-x')\langle x'|\psi \rangle dx\ dx'\\&=\int \psi (x)^{*}x\psi (x)dx=\int x\psi (x)^{*}\psi (x)dx=\int x|\psi (x)|^{2}dx\end{aligned}}} Here the orthonormality relation of the position basis vectors, ⟨ x | x ′ ⟩ = δ ( x − x ′ ) {\displaystyle \langle x|x'\rangle =\delta (x-x')} , reduces the double integral to a single integral. The last line uses the modulus of a complex valued function to replace ψ ∗ ψ {\displaystyle \psi ^{*}\psi } with | ψ | 2 {\displaystyle |\psi |^{2}} , which is a common substitution in quantum-mechanical integrals. The expectation value may then be stated, where x is unbounded, as the formula ⟨ X ⟩ ψ = ∫ − ∞ ∞ x | ψ ( x ) | 2 d x . (4) A similar formula holds for the momentum operator, in systems where it has continuous spectrum. All the above formulas are valid for pure states σ {\displaystyle \sigma } only. Prominently in thermodynamics and quantum optics, mixed states are also of importance; these are described by a positive trace-class operator ρ = ∑ i p i | ψ i ⟩ ⟨ ψ i | {\textstyle \rho =\sum _{i}p_{i}|\psi _{i}\rangle \langle \psi _{i}|} , the statistical operator or density matrix. The expectation value then can be obtained as ⟨ A ⟩ = Tr ( ρ A ) . (5) == General formulation == In general, quantum states σ {\displaystyle \sigma } are described by positive normalized linear functionals on the set of observables, mathematically often taken to be a C*-algebra. The expectation value of an observable A {\displaystyle A} is then given by ⟨ A ⟩ σ = σ ( A ) . (6) If the algebra of observables acts irreducibly on a Hilbert space, and if σ {\displaystyle \sigma } is a normal functional, that is, it is continuous in the ultraweak topology, then it can be written as σ ( ⋅ ) = Tr ⁡ ( ρ ⋅ ) {\displaystyle \sigma (\cdot )=\operatorname {Tr} (\rho \;\cdot )} with a positive trace-class operator ρ {\displaystyle \rho } of trace 1. This gives formula (5) above. In the case of a pure state, ρ = | ψ ⟩ ⟨ ψ | {\displaystyle \rho =|\psi \rangle \langle \psi |} is a projection onto a unit vector ψ {\displaystyle \psi } . Then σ = ⟨ ψ | ⋅ ψ ⟩ {\displaystyle \sigma =\langle \psi |\cdot \;\psi \rangle } , which gives formula (1) above. A {\displaystyle A} is assumed to be a self-adjoint operator. 
In the general case, its spectrum will neither be entirely discrete nor entirely continuous. Still, one can write A {\displaystyle A} in a spectral decomposition, A = ∫ a d P ( a ) {\displaystyle A=\int a\,dP(a)} with a projection-valued measure P {\displaystyle P} . For the expectation value of A {\displaystyle A} in a pure state σ = ⟨ ψ | ⋅ ψ ⟩ {\displaystyle \sigma =\langle \psi |\cdot \,\psi \rangle } , this means ⟨ A ⟩ σ = ∫ a d ⟨ ψ | P ( a ) ψ ⟩ , {\displaystyle \langle A\rangle _{\sigma }=\int a\;d\langle \psi |P(a)\psi \rangle ,} which may be seen as a common generalization of formulas (2) and (4) above. In non-relativistic theories of finitely many particles (quantum mechanics, in the strict sense), the states considered are generally normal. However, in other areas of quantum theory, also non-normal states are in use: They appear, for example. in the form of KMS states in quantum statistical mechanics of infinitely extended media, and as charged states in quantum field theory. In these cases, the expectation value is determined only by the more general formula (6). == Example in configuration space == As an example, consider a quantum mechanical particle in one spatial dimension, in the configuration space representation. Here the Hilbert space is H = L 2 ( R ) {\displaystyle {\mathcal {H}}=L^{2}(\mathbb {R} )} , the space of square-integrable functions on the real line. Vectors ψ ∈ H {\displaystyle \psi \in {\mathcal {H}}} are represented by functions ψ ( x ) {\displaystyle \psi (x)} , called wave functions. The scalar product is given by ⟨ ψ 1 | ψ 2 ⟩ = ∫ ψ 1 ∗ ( x ) ψ 2 ( x ) d x {\textstyle \langle \psi _{1}|\psi _{2}\rangle =\int \psi _{1}^{\ast }(x)\psi _{2}(x)\,dx} . The wave functions have a direct interpretation as a probability distribution: ρ ( x ) d x = ψ ∗ ( x ) ψ ( x ) d x {\displaystyle \rho (x)dx=\psi ^{*}(x)\psi (x)dx} gives the probability of finding the particle in an infinitesimal interval of length d x {\displaystyle dx} about some point x {\displaystyle x} . As an observable, consider the position operator Q {\displaystyle Q} , which acts on wavefunctions ψ {\displaystyle \psi } by ( Q ψ ) ( x ) = x ψ ( x ) . {\displaystyle (Q\psi )(x)=x\psi (x).} The expectation value, or mean value of measurements, of Q {\displaystyle Q} performed on a very large number of identical independent systems will be given by ⟨ Q ⟩ ψ = ⟨ ψ | Q | ψ ⟩ = ∫ − ∞ ∞ ψ ∗ ( x ) x ψ ( x ) d x = ∫ − ∞ ∞ x ρ ( x ) d x . {\displaystyle \langle Q\rangle _{\psi }=\langle \psi |Q|\psi \rangle =\int _{-\infty }^{\infty }\psi ^{\ast }(x)\,x\,\psi (x)\,dx=\int _{-\infty }^{\infty }x\,\rho (x)\,dx.} The expectation value only exists if the integral converges, which is not the case for all vectors ψ {\displaystyle \psi } . This is because the position operator is unbounded, and ψ {\displaystyle \psi } has to be chosen from its domain of definition. In general, the expectation of any observable can be calculated by replacing Q {\displaystyle Q} with the appropriate operator. For example, to calculate the average momentum, one uses the momentum operator in configuration space, p = − i ℏ d d x {\textstyle \mathbf {p} =-i\hbar \,{\frac {d}{dx}}} . Explicitly, its expectation value is ⟨ p ⟩ ψ = − i ℏ ∫ − ∞ ∞ ψ ∗ ( x ) d ψ ( x ) d x d x . {\displaystyle \langle \mathbf {p} \rangle _{\psi }=-i\hbar \int _{-\infty }^{\infty }\psi ^{\ast }(x)\,{\frac {d\psi (x)}{dx}}\,dx.} Not all operators in general provide a measurable value. 
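The configuration-space example above can be checked numerically. The following Python sketch is illustrative only: it uses an arbitrary Gaussian wave packet and sets ħ = 1, evaluating ⟨Q⟩ and ⟨p⟩ by direct integration and comparing them with the packet's centre x0 and mean wavenumber k0.

import numpy as np

hbar = 1.0
x0, k0, sigma = 1.5, 2.0, 0.7     # packet centre, mean wavenumber, width (arbitrary choices)

x = np.linspace(-20.0, 20.0, 40001)
dx = x[1] - x[0]

# Normalized Gaussian wave packet: psi(x) = (2*pi*sigma^2)^(-1/4) exp(-(x - x0)^2 / (4 sigma^2)) exp(i k0 x)
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-(x - x0) ** 2 / (4 * sigma**2) + 1j * k0 * x)

prob = np.abs(psi) ** 2
norm = np.sum(prob) * dx                       # should be ~1
mean_x = np.sum(x * prob) * dx                 # <Q> = integral of x |psi(x)|^2 dx
mean_p = np.sum(psi.conj() * (-1j * hbar) * np.gradient(psi, dx)).real * dx   # <p> = integral of psi* (-i hbar d/dx) psi dx

print(f"norm = {norm:.6f}, <Q> = {mean_x:.6f} (expect {x0}), <p> = {mean_p:.6f} (expect {hbar * k0})")

The Gaussian packet is chosen because it is square-integrable, so both expectation values exist; for states outside the operators' domains the integrals would fail to converge, as noted above.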
An operator that has a pure real expectation value is called an observable and its value can be directly measured in experiment. == See also == Rayleigh quotient Uncertainty principle Virial theorem == Notes == == References == == Further reading == The expectation value, in particular as presented in the section "Formalism in quantum mechanics", is covered in most elementary textbooks on quantum mechanics. For a discussion of conceptual aspects, see: Isham, Chris J (1995). Lectures on Quantum Theory: Mathematical and Structural Foundations. Imperial College Press. ISBN 978-1-86094-001-9.
Wikipedia/Expectation_value_(quantum_mechanics)
When a fluid flows around an object, the fluid exerts a force on the object. Lift is the component of this force that is perpendicular to the oncoming flow direction. It contrasts with the drag force, which is the component of the force parallel to the flow direction. Lift conventionally acts in an upward direction in order to counter the force of gravity, but it is defined to act perpendicular to the flow and therefore can act in any direction. If the surrounding fluid is air, the force is called an aerodynamic force. In water or any other liquid, it is called a hydrodynamic force. Dynamic lift is distinguished from other kinds of lift in fluids. Aerostatic lift or buoyancy, in which an internal fluid is lighter than the surrounding fluid, does not require movement and is used by balloons, blimps, dirigibles, boats, and submarines. Planing lift, in which only the lower portion of the body is immersed in a liquid flow, is used by motorboats, surfboards, windsurfers, sailboats, and water-skis. == Overview == A fluid flowing around the surface of a solid object applies a force on it. It does not matter whether the object is moving through a stationary fluid (e.g. an aircraft flying through the air) or whether the object is stationary and the fluid is moving (e.g. a wing in a wind tunnel) or whether both are moving (e.g. a sailboat using the wind to move forward). Lift is the component of this force that is perpendicular to the oncoming flow direction. Lift is always accompanied by a drag force, which is the component of the surface force parallel to the flow direction. Lift is mostly associated with the wings of fixed-wing aircraft, although it is more widely generated by many other streamlined bodies such as propellers, kites, helicopter rotors, racing car wings, maritime sails, wind turbines, and by sailboat keels, ship's rudders, and hydrofoils in water. Lift is also used by flying and gliding animals, especially by birds, bats, and insects, and even in the plant world by the seeds of certain trees. While the common meaning of the word "lift" assumes that lift opposes weight, lift can be in any direction with respect to gravity, since it is defined with respect to the direction of flow rather than to the direction of gravity. When an aircraft is cruising in straight and level flight, the lift opposes gravity. However, when an aircraft is climbing, descending, or banking in a turn the lift is tilted with respect to the vertical. Lift may also act as downforce on the wing of a fixed-wing aircraft at the top of an aerobatic loop, and on the horizontal stabiliser of an aircraft. Lift may also be largely horizontal, for instance on a sailing ship. The lift discussed in this article is mainly in relation to airfoils; marine hydrofoils and propellers share the same physical principles and work in the same way, despite differences between air and water such as density, compressibility, and viscosity. The flow around a lifting airfoil is a fluid mechanics phenomenon that can be understood on essentially two levels: There are mathematical theories, which are based on established laws of physics and represent the flow accurately, but which require solving equations. And there are physical explanations without math, which are less rigorous. Correctly explaining lift in these qualitative terms is difficult because the cause-and-effect relationships involved are subtle. A comprehensive explanation that captures all of the essential aspects is necessarily complex. 
There are also many simplified explanations, but all leave significant parts of the phenomenon unexplained, while some also have elements that are simply incorrect. == Simplified physical explanations of lift on an airfoil == An airfoil is a streamlined shape that is capable of generating significantly more lift than drag. A flat plate can generate lift, but not as much as a streamlined airfoil, and with somewhat higher drag. Most simplified explanations follow one of two basic approaches, based either on Newton's laws of motion or on Bernoulli's principle. === Explanation based on flow deflection and Newton's laws === An airfoil generates lift by exerting a downward force on the air as it flows past. According to Newton's third law, the air must exert an equal and opposite (upward) force on the airfoil, which is lift. As the airflow approaches the airfoil it is curving upward, but as it passes the airfoil it changes direction and follows a path that is curved downward. According to Newton's second law, this change in flow direction requires a downward force applied to the air by the airfoil. Then Newton's third law requires the air to exert an upward force on the airfoil; thus a reaction force, lift, is generated opposite to the directional change. In the case of an airplane wing, the wing exerts a downward force on the air and the air exerts an upward force on the wing. The downward turning of the flow is not produced solely by the lower surface of the airfoil, and the air flow above the airfoil accounts for much of the downward-turning action. This explanation is correct but it is incomplete. It does not explain how the airfoil can impart downward turning to a much deeper swath of the flow than it actually touches. Furthermore, it does not mention that the lift force is exerted by pressure differences, and does not explain how those pressure differences are sustained. ==== Controversy regarding the Coandă effect ==== Some versions of the flow-deflection explanation of lift cite the Coandă effect as the reason the flow is able to follow the convex upper surface of the airfoil. The conventional definition in the aerodynamics field is that the Coandă effect refers to the tendency of a fluid jet to stay attached to an adjacent surface that curves away from the flow, and the resultant entrainment of ambient air into the flow. More broadly, some consider the effect to include the tendency of any fluid boundary layer to adhere to a curved surface, not just the boundary layer accompanying a fluid jet. It is in this broader sense that the Coandă effect is used by some popular references to explain why airflow remains attached to the top side of an airfoil. This is a controversial use of the term "Coandă effect"; the flow following the upper surface simply reflects an absence of boundary-layer separation, thus it is not an example of the Coandă effect. Regardless of whether this broader definition of the "Coandă effect" is applicable, calling it the "Coandă effect" does not provide an explanation, it just gives the phenomenon a name. The ability of a fluid flow to follow a curved path is not dependent on shear forces, viscosity of the fluid, or the presence of a boundary layer. Air flowing around an airfoil, adhering to both upper and lower surfaces, and generating lift, is accepted as a phenomenon in inviscid flow. 
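As a rough, purely illustrative order-of-magnitude sketch of the flow-deflection (Newton's laws) picture described above, lift can be estimated as the rate at which downward momentum is imparted to the air, L ≈ ṁ·Δw. None of the numbers below come from the article, and the size of the affected stream tube is a crude assumption.

import math

rho = 1.2          # air density near sea level, kg/m^3
v = 50.0           # flight speed, m/s (assumed)
span = 10.0        # wingspan, m (assumed)
delta_w = 1.5      # average downward velocity imparted to the affected air, m/s (assumed)

# Crude assumption: the wing influences a roughly circular stream tube with a
# diameter comparable to the wingspan.
area = math.pi * span**2 / 4.0          # cross-section of the affected air, m^2
mass_flow = rho * v * area              # mass of air processed per second, kg/s
lift = mass_flow * delta_w              # Newton's second law: rate of change of downward momentum

print(f"mass flow ~ {mass_flow:.0f} kg/s, lift ~ {lift/1000:.1f} kN (supports ~{lift/9.81:.0f} kg)")

With these assumed numbers the estimate comes out at a few kilonewtons, the right scale for a light aircraft, which is the point of the flow-deflection explanation: a modest downward turning of a large mass flow of air is enough to support the vehicle's weight.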
=== Explanations based on an increase in flow speed and Bernoulli's principle === There are two common versions of this explanation, one based on "equal transit time", and one based on "obstruction" of the airflow. ==== False explanation based on equal transit-time ==== The "equal transit time" explanation starts by arguing that the flow over the upper surface is faster than the flow over the lower surface because the path length over the upper surface is longer and must be traversed in equal transit time. Bernoulli's principle states that under certain conditions increased flow speed is associated with reduced pressure. It is concluded that the reduced pressure over the upper surface results in upward lift. While it is true that the flow speeds up, a serious flaw in this explanation is that it does not correctly explain what causes the flow to speed up. The longer-path-length explanation is incorrect. No difference in path length is needed, and even when there is a difference, it is typically much too small to explain the observed speed difference. This is because the assumption of equal transit time is wrong when applied to a body generating lift. There is no physical principle that requires equal transit time in all situations and experimental results confirm that for a body generating lift the transit times are not equal. In fact, the air moving past the top of an airfoil generating lift moves much faster than equal transit time predicts. The much higher flow speed over the upper surface can be clearly seen in this animated flow visualization. ==== Obstruction of the airflow ==== Like the equal transit time explanation, the "obstruction" or "streamtube pinching" explanation argues that the flow over the upper surface is faster than the flow over the lower surface, but gives a different reason for the difference in speed. It argues that the curved upper surface acts as more of an obstacle to the flow, forcing the streamlines to pinch closer together, making the streamtubes narrower. When streamtubes become narrower, conservation of mass requires that flow speed must increase. Reduced upper-surface pressure and upward lift follow from the higher speed by Bernoulli's principle, just as in the equal transit time explanation. Sometimes an analogy is made to a venturi nozzle, claiming the upper surface of the wing acts like a venturi nozzle to constrict the flow. One serious flaw in the obstruction explanation is that it does not explain how streamtube pinching comes about, or why it is greater over the upper surface than the lower surface. For conventional wings that are flat on the bottom and curved on top this makes some intuitive sense, but it does not explain how flat plates, symmetric airfoils, sailboat sails, or conventional airfoils flying upside down can generate lift, and attempts to calculate lift based on the amount of constriction or obstruction do not predict experimental results. Another flaw is that conservation of mass is not a satisfying physical reason why the flow would speed up. Effectively explaining the acceleration of an object requires identifying the force that accelerates it. ==== Issues common to both versions of the Bernoulli-based explanation ==== A serious flaw common to all the Bernoulli-based explanations is that they imply that a speed difference can arise from causes other than a pressure difference, and that the speed difference then leads to a pressure difference, by Bernoulli's principle. This implied one-way causation is a misconception. 
The real relationship between pressure and flow speed is a mutual interaction. As explained below under a more comprehensive physical explanation, producing a lift force requires maintaining pressure differences in both the vertical and horizontal directions. The Bernoulli-only explanations do not explain how the pressure differences in the vertical direction are sustained. That is, they leave out the flow-deflection part of the interaction. Although the two simple Bernoulli-based explanations above are incorrect, there is nothing incorrect about Bernoulli's principle or the fact that the air goes faster on the top of the wing, and Bernoulli's principle can be used correctly as part of a more complicated explanation of lift. == Basic attributes of lift == Lift is a result of pressure differences and depends on angle of attack, airfoil shape, air density, and airspeed. === Pressure differences === Pressure is the normal force per unit area exerted by the air on itself and on surfaces that it touches. The lift force is transmitted through the pressure, which acts perpendicular to the surface of the airfoil. Thus, the net force manifests itself as pressure differences. The direction of the net force implies that the average pressure on the upper surface of the airfoil is lower than the average pressure on the underside. These pressure differences arise in conjunction with the curved airflow. When a fluid follows a curved path, there is a pressure gradient perpendicular to the flow direction with higher pressure on the outside of the curve and lower pressure on the inside. This direct relationship between curved streamlines and pressure differences, sometimes called the streamline curvature theorem, was derived from Newton's second law by Leonhard Euler in 1754: d ⁡ p d ⁡ R = ρ v 2 R {\displaystyle {\frac {\operatorname {d} p}{\operatorname {d} R}}=\rho {\frac {v^{2}}{R}}} The left side of this equation represents the pressure difference perpendicular to the fluid flow. On the right side of the equation, ρ is the density, v is the velocity, and R is the radius of curvature. This formula shows that higher velocities and tighter curvatures create larger pressure differentials and that for straight flow (R → ∞), the pressure difference is zero. === Angle of attack === The angle of attack is the angle between the chord line of an airfoil and the oncoming airflow. A symmetrical airfoil generates zero lift at zero angle of attack. But as the angle of attack increases, the air is deflected through a larger angle and the vertical component of the airstream velocity increases, resulting in more lift. For small angles, a symmetrical airfoil generates a lift force roughly proportional to the angle of attack. As the angle of attack increases, the lift reaches a maximum at some angle; increasing the angle of attack beyond this critical angle of attack causes the upper-surface flow to separate from the wing; there is less deflection downward so the airfoil generates less lift. The airfoil is said to be stalled. === Airfoil shape === The maximum lift force that can be generated by an airfoil at a given airspeed depends on the shape of the airfoil, especially the amount of camber (curvature such that the upper surface is more convex than the lower surface, as illustrated at right). Increasing the camber generally increases the maximum lift at a given airspeed. Cambered airfoils generate lift at zero angle of attack. 
When the chord line is horizontal, the trailing edge has a downward direction and since the air follows the trailing edge it is deflected downward. When a cambered airfoil is upside down, the angle of attack can be adjusted so that the lift force is upward. This explains how a plane can fly upside down. === Flow conditions === The ambient flow conditions which affect lift include the fluid density, viscosity and speed of flow. Density is affected by temperature, and by the medium's acoustic velocity – i.e. by compressibility effects. === Air speed and density === Lift is proportional to the density of the air and approximately proportional to the square of the flow speed. Lift also depends on the size of the wing, being generally proportional to the wing's area projected in the lift direction. In calculations it is convenient to quantify lift in terms of a lift coefficient based on these factors. === Boundary layer and profile drag === No matter how smooth the surface of an airfoil seems, any surface is rough on the scale of air molecules. Air molecules flying into the surface bounce off the rough surface in random directions relative to their original velocities. The result is that when the air is viewed as a continuous material, it is seen to be unable to slide along the surface, and the air's velocity relative to the airfoil decreases to nearly zero at the surface (i.e., the air molecules "stick" to the surface instead of sliding along it), something known as the no-slip condition. Because the air at the surface has near-zero velocity but the air away from the surface is moving, there is a thin boundary layer in which air close to the surface is subjected to a shearing motion. The air's viscosity resists the shearing, giving rise to a shear stress at the airfoil's surface called skin friction drag. Over most of the surface of most airfoils, the boundary layer is naturally turbulent, which increases skin friction drag. Under usual flight conditions, the boundary layer remains attached to both the upper and lower surfaces all the way to the trailing edge, and its effect on the rest of the flow is modest. Compared to the predictions of inviscid flow theory, in which there is no boundary layer, the attached boundary layer reduces the lift by a modest amount and modifies the pressure distribution somewhat, which results in a viscosity-related pressure drag over and above the skin friction drag. The total of the skin friction drag and the viscosity-related pressure drag is usually called the profile drag. === Stalling === An airfoil's maximum lift at a given airspeed is limited by boundary-layer separation. As the angle of attack is increased, a point is reached where the boundary layer can no longer remain attached to the upper surface. When the boundary layer separates, it leaves a region of recirculating flow above the upper surface, as illustrated in the flow-visualization photo at right. This is known as the stall, or stalling. At angles of attack above the stall, lift is significantly reduced, though it does not drop to zero. The maximum lift that can be achieved before stall, in terms of the lift coefficient, is generally less than 1.5 for single-element airfoils and can be more than 3.0 for airfoils with high-lift slotted flaps and leading-edge devices deployed. === Bluff bodies === The flow around bluff bodies – i.e. without a streamlined shape, or stalling airfoils – may also generate lift, in addition to a strong drag force. 
This lift may be steady, or it may oscillate due to vortex shedding. Interaction of the object's flexibility with the vortex shedding may enhance the effects of fluctuating lift and cause vortex-induced vibrations. For instance, the flow around a circular cylinder generates a Kármán vortex street: vortices being shed in an alternating fashion from the cylinder's sides. The oscillatory nature of the flow produces a fluctuating lift force on the cylinder, even though the net (mean) force is negligible. The lift force frequency is characterised by the dimensionless Strouhal number, which depends on the Reynolds number of the flow. For a flexible structure, this oscillatory lift force may induce vortex-induced vibrations. Under certain conditions – for instance resonance or strong spanwise correlation of the lift force – the resulting motion of the structure due to the lift fluctuations may be strongly enhanced. Such vibrations may pose problems and threaten collapse in tall man-made structures like industrial chimneys. In the Magnus effect, a lift force is generated by a spinning cylinder in a freestream. Here the mechanical rotation acts on the boundary layer, causing it to separate at different locations on the two sides of the cylinder. The asymmetric separation changes the effective shape of the cylinder as far as the flow is concerned such that the cylinder acts like a lifting airfoil with circulation in the outer flow. == A more comprehensive physical explanation == As described above under "Simplified physical explanations of lift on an airfoil", there are two main popular explanations: one based on downward deflection of the flow (Newton's laws), and one based on pressure differences accompanied by changes in flow speed (Bernoulli's principle). Either of these, by itself, correctly identifies some aspects of the lifting flow but leaves other important aspects of the phenomenon unexplained. A more comprehensive explanation involves both downward deflection and pressure differences (including changes in flow speed associated with the pressure differences), and requires looking at the flow in more detail. === Lift at the airfoil surface === The airfoil shape and angle of attack work together so that the airfoil exerts a downward force on the air as it flows past. According to Newton's third law, the air must then exert an equal and opposite (upward) force on the airfoil, which is the lift. The net force exerted by the air occurs as a pressure difference over the airfoil's surfaces. Pressure in a fluid is always positive in an absolute sense, so that pressure must always be thought of as pushing, and never as pulling. The pressure thus pushes inward on the airfoil everywhere on both the upper and lower surfaces. The flowing air reacts to the presence of the wing by reducing the pressure on the wing's upper surface and increasing the pressure on the lower surface. The pressure on the lower surface pushes up harder than the reduced pressure on the upper surface pushes down, and the net result is upward lift. The pressure difference which results in lift acts directly on the airfoil surfaces; however, understanding how the pressure difference is produced requires understanding what the flow does over a wider area. === The wider flow around the airfoil === An airfoil affects the speed and direction of the flow over a wide area, producing a pattern called a velocity field. 
When an airfoil produces lift, the flow ahead of the airfoil is deflected upward, the flow above and below the airfoil is deflected downward leaving the air far behind the airfoil in the same state as the oncoming flow far ahead. The flow above the upper surface is sped up, while the flow below the airfoil is slowed down. Together with the upward deflection of air in front and the downward deflection of the air immediately behind, this establishes a net circulatory component of the flow. The downward deflection and the changes in flow speed are pronounced and extend over a wide area, as can be seen in the flow animation on the right. These differences in the direction and speed of the flow are greatest close to the airfoil and decrease gradually far above and below. All of these features of the velocity field also appear in theoretical models for lifting flows. The pressure is also affected over a wide area, in a pattern of non-uniform pressure called a pressure field. When an airfoil produces lift, there is a diffuse region of low pressure above the airfoil, and usually a diffuse region of high pressure below, as illustrated by the isobars (curves of constant pressure) in the drawing. The pressure difference that acts on the surface is just part of this pressure field. === Mutual interaction of pressure differences and changes in flow velocity === The non-uniform pressure exerts forces on the air in the direction from higher pressure to lower pressure. The direction of the force is different at different locations around the airfoil, as indicated by the block arrows in the pressure field around an airfoil figure. Air above the airfoil is pushed toward the center of the low-pressure region, and air below the airfoil is pushed outward from the center of the high-pressure region. According to Newton's second law, a force causes air to accelerate in the direction of the force. Thus the vertical arrows in the accompanying pressure field diagram indicate that air above and below the airfoil is accelerated, or turned downward, and that the non-uniform pressure is thus the cause of the downward deflection of the flow visible in the flow animation. To produce this downward turning, the airfoil must have a positive angle of attack or have sufficient positive camber. Note that the downward turning of the flow over the upper surface is the result of the air being pushed downward by higher pressure above it than below it. Some explanations that refer to the "Coandă effect" suggest that viscosity plays a key role in the downward turning, but this is false. (see above under "Controversy regarding the Coandă effect"). The arrows ahead of the airfoil indicate that the flow ahead of the airfoil is deflected upward, and the arrows behind the airfoil indicate that the flow behind is deflected upward again, after being deflected downward over the airfoil. These deflections are also visible in the flow animation. The arrows ahead of the airfoil and behind also indicate that air passing through the low-pressure region above the airfoil is sped up as it enters, and slowed back down as it leaves. Air passing through the high-pressure region below the airfoil is slowed down as it enters and then sped back up as it leaves. Thus the non-uniform pressure is also the cause of the changes in flow speed visible in the flow animation. 
The changes in flow speed are consistent with Bernoulli's principle, which states that in a steady flow without viscosity, lower pressure means higher speed, and higher pressure means lower speed. Thus changes in flow direction and speed are directly caused by the non-uniform pressure. But this cause-and-effect relationship is not just one-way; it works in both directions simultaneously. The air's motion is affected by the pressure differences, but the existence of the pressure differences depends on the air's motion. The relationship is thus a mutual, or reciprocal, interaction: Air flow changes speed or direction in response to pressure differences, and the pressure differences are sustained by the air's resistance to changing speed or direction. A pressure difference can exist only if something is there for it to push against. In aerodynamic flow, the pressure difference pushes against the air's inertia, as the air is accelerated by the pressure difference. This is why the air's mass is part of the calculation, and why lift depends on air density. Sustaining the pressure difference that exerts the lift force on the airfoil surfaces requires sustaining a pattern of non-uniform pressure in a wide area around the airfoil. This requires maintaining pressure differences in both the vertical and horizontal directions, and thus requires both downward turning of the flow and changes in flow speed according to Bernoulli's principle. The pressure differences and the changes in flow direction and speed sustain each other in a mutual interaction. The pressure differences follow naturally from Newton's second law and from the fact that flow along the surface follows the predominantly downward-sloping contours of the airfoil. And the fact that the air has mass is crucial to the interaction. === How simpler explanations fall short === Producing a lift force requires both downward turning of the flow and changes in flow speed consistent with Bernoulli's principle. Each of the simplified explanations given above in Simplified physical explanations of lift on an airfoil falls short by trying to explain lift in terms of only one or the other, thus explaining only part of the phenomenon and leaving other parts unexplained. == Quantifying lift == === Pressure integration === When the pressure distribution on the airfoil surface is known, determining the total lift requires adding up the contributions to the pressure force from local elements of the surface, each with its own local value of pressure. The total lift is thus the integral of the pressure, in the direction perpendicular to the farfield flow, over the airfoil surface. {\displaystyle L=\oint p\mathbf {n} \cdot \mathbf {k} \;\mathrm {d} S,} where: S is the projected (planform) area of the airfoil, measured normal to the mean airflow; n is the normal unit vector pointing into the wing; k is the vertical unit vector, normal to the freestream direction. The above lift equation neglects the skin friction forces, which are small compared to the pressure forces. By using the streamwise vector i parallel to the freestream in place of k in the integral, we obtain an expression for the pressure drag Dp (which includes the pressure portion of the profile drag and, if the wing is three-dimensional, the induced drag). If we use the spanwise vector j, we obtain the side force Y. {\displaystyle {\begin{aligned}D_{p}&=\oint p\mathbf {n} \cdot \mathbf {i} \;\mathrm {d} S,\\Y&=\oint p\mathbf {n} \cdot \mathbf {j} \;\mathrm {d} S.\end{aligned}}} The validity of this integration generally requires the airfoil shape to be a closed curve that is piecewise smooth. === Lift coefficient === Lift depends on the size of the wing, being approximately proportional to the wing area. It is often convenient to quantify the lift of a given airfoil by its lift coefficient {\displaystyle C_{L}}, which defines its overall lift in terms of a unit area of the wing. If the value of {\displaystyle C_{L}} for a wing at a specified angle of attack is given, then the lift produced for specific flow conditions can be determined: {\displaystyle L={\tfrac {1}{2}}\rho v^{2}SC_{L}} where {\displaystyle L} is the lift force, {\displaystyle \rho } is the air density, {\displaystyle v} is the velocity or true airspeed, {\displaystyle S} is the planform (projected) wing area, and {\displaystyle C_{L}} is the lift coefficient at the desired angle of attack, Mach number, and Reynolds number. == Mathematical theories of lift == Mathematical theories of lift are based on continuum fluid mechanics, assuming that air flows as a continuous fluid. Lift is generated in accordance with the fundamental principles of physics, the most relevant being the following three principles: Conservation of momentum, which is a consequence of Newton's laws of motion, especially Newton's second law which relates the net force on an element of air to its rate of momentum change, Conservation of mass, including the assumption that the airfoil's surface is impermeable for the air flowing around, and Conservation of energy, which says that energy is neither created nor destroyed. Because an airfoil affects the flow in a wide area around it, the conservation laws of mechanics are embodied in the form of partial differential equations combined with a set of boundary condition requirements which the flow has to satisfy at the airfoil surface and far away from the airfoil. To predict lift requires solving the equations for a particular airfoil shape and flow condition, which generally requires calculations that are so voluminous that they are practical only on a computer, through the methods of computational fluid dynamics (CFD). Determining the net aerodynamic force from a CFD solution requires "adding up" (integrating) the forces due to pressure and shear determined by the CFD over every surface element of the airfoil as described under "pressure integration". The Navier–Stokes equations (NS) provide the potentially most accurate theory of lift, but in practice, capturing the effects of turbulence in the boundary layer on the airfoil surface requires sacrificing some accuracy, and requires use of the Reynolds-averaged Navier–Stokes equations (RANS). Simpler but less accurate theories have also been developed. === Navier–Stokes (NS) equations === These equations represent conservation of mass, Newton's second law (conservation of momentum), conservation of energy, the Newtonian law for the action of viscosity, the Fourier heat conduction law, an equation of state relating density, temperature, and pressure, and formulas for the viscosity and thermal conductivity of the fluid. In principle, the NS equations, combined with boundary conditions of no through-flow and no slip at the airfoil surface, could be used to predict lift with high accuracy in any situation in ordinary atmospheric flight.
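The two relations above, the surface pressure integral and the lift-coefficient formula, can be illustrated with a short sketch. All of the inputs (wing area, speed, lift coefficient, and the toy surface-pressure distribution) are assumed values chosen for illustration, not data from this article.

```python
import math

# --- Lift from the lift-coefficient formula L = 1/2 * rho * v^2 * S * C_L ---
rho, v, S, C_L = 1.2, 60.0, 16.0, 0.45   # illustrative values only
L = 0.5 * rho * v**2 * S * C_L
print(f"lift from coefficient formula:    {L/1000:.1f} kN")

# --- Lift as a discretized pressure integral: sum of p * (n . k) over surface panels ---
# A toy "airfoil": gauge pressure sampled at N chordwise stations, each with an
# upper-surface and a lower-surface panel.  The pressures are made up so that the
# upper surface carries suction and the lower surface slight overpressure, as in
# the qualitative description above.
N = 100
panel_area = S / N                                 # area of each panel (toy discretization)
lift_sum = 0.0
for i in range(N):
    x = (i + 0.5) / N                              # chordwise position, 0..1
    p_upper = -900.0 * math.sin(math.pi * x)       # gauge pressure on upper surface, Pa (suction)
    p_lower = +200.0 * math.sin(math.pi * x)       # gauge pressure on lower surface, Pa
    # n points into the wing, so n.k is -1 on the upper surface and +1 on the lower surface
    lift_sum += (-p_upper + p_lower) * panel_area
print(f"lift from summed panel pressures: {lift_sum/1000:.1f} kN")
```

The two results use independent made-up inputs, so they are not expected to agree; the point is only the structure of each calculation. In principle, the Navier–Stokes equations just mentioned could supply such surface pressures with high accuracy.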
However, airflows in practical situations always involve turbulence in the boundary layer next to the airfoil surface, at least over the aft portion of the airfoil. Predicting lift by solving the NS equations in their raw form would require the calculations to resolve the details of the turbulence, down to the smallest eddy. This is not yet possible, even on the most powerful computer. So in principle the NS equations provide a complete and very accurate theory of lift, but practical prediction of lift requires that the effects of turbulence be modeled in the RANS equations rather than computed directly. === Reynolds-averaged Navier–Stokes (RANS) equations === These are the NS equations with the turbulence motions averaged over time, and the effects of the turbulence on the time-averaged flow represented by turbulence modeling (an additional set of equations based on a combination of dimensional analysis and empirical information on how turbulence affects a boundary layer in a time-averaged sense). A RANS solution consists of the time-averaged velocity vector, pressure, density, and temperature defined at a dense grid of points surrounding the airfoil. The amount of computation required is a minuscule fraction (billionths) of what would be required to resolve all of the turbulence motions in a raw NS calculation, and with large computers available it is now practical to carry out RANS calculations for complete airplanes in three dimensions. Because turbulence models are not perfect, the accuracy of RANS calculations is imperfect, but it is adequate for practical aircraft design. Lift predicted by RANS is usually within a few percent of the actual lift. === Inviscid-flow equations (Euler or potential) === The Euler equations are the NS equations without the viscosity, heat conduction, and turbulence effects. As with a RANS solution, an Euler solution consists of the velocity vector, pressure, density, and temperature defined at a dense grid of points surrounding the airfoil. While the Euler equations are simpler than the NS equations, they do not lend themselves to exact analytic solutions. Further simplification is available through potential flow theory, which reduces the number of unknowns to be determined, and makes analytic solutions possible in some cases, as described below. Either Euler or potential-flow calculations predict the pressure distribution on the airfoil surfaces roughly correctly for angles of attack below stall, where they might miss the total lift by as much as 10–20%. At angles of attack above stall, inviscid calculations do not predict that stall has happened, and as a result they grossly overestimate the lift. In potential-flow theory, the flow is assumed to be irrotational, i.e. that small fluid parcels have no net rate of rotation. Mathematically, this is expressed by the statement that the curl of the velocity vector field is everywhere equal to zero. Irrotational flows have the convenient property that the velocity can be expressed as the gradient of a scalar function called a potential. A flow represented in this way is called potential flow. In potential-flow theory, the flow is assumed to be incompressible. Incompressible potential-flow theory has the advantage that the equation (Laplace's equation) to be solved for the potential is linear, which allows solutions to be constructed by superposition of other known solutions.
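Because Laplace's equation is linear, a lifting flow can be assembled by superposing elementary solutions. The sketch below uses the standard textbook construction of a uniform stream, a doublet, and a vortex to represent flow past a circular cylinder with circulation; the freestream speed, radius, and circulation are arbitrary illustrative values, and the closed-form surface-speed expression is the classical result for this configuration rather than anything derived in this article. Integrating the resulting surface pressure reproduces the Kutta–Joukowski result L' = ρVΓ discussed below.

```python
import numpy as np

# Potential-flow sketch: uniform stream + doublet + vortex superposed to give the
# classical lifting flow past a circular cylinder.  All values are illustrative.
rho   = 1.2      # air density, kg/m^3
V     = 30.0     # freestream speed, m/s
a     = 1.0      # cylinder radius, m
Gamma = 40.0     # circulation, m^2/s (a free parameter of the solution)

# Tangential speed on the cylinder surface r = a for this superposed solution (standard result)
theta = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
v_s = 2.0 * V * np.sin(theta) + Gamma / (2.0 * np.pi * a)

# Bernoulli gives the surface (gauge) pressure from the local speed
p = 0.5 * rho * (V**2 - v_s**2)

# Integrate the vertical component of the surface pressure force (per unit span)
dtheta = theta[1] - theta[0]
lift_per_span = -np.sum(p * np.sin(theta)) * a * dtheta

print(f"lift from pressure integration  : {lift_per_span:10.3f} N/m")
print(f"Kutta-Joukowski  rho * V * Gamma: {rho * V * Gamma:10.3f} N/m")
# The two agree; the circulation (and hence the lift) is not fixed by the potential-flow
# equations themselves, which is the indeterminacy discussed later in the text.
```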
The incompressible-potential-flow equation can also be solved by conformal mapping, a method based on the theory of functions of a complex variable. In the early 20th century, before computers were available, conformal mapping was used to generate solutions to the incompressible potential-flow equation for a class of idealized airfoil shapes, providing some of the first practical theoretical predictions of the pressure distribution on a lifting airfoil. A solution of the potential equation directly determines only the velocity field. The pressure field is deduced from the velocity field through Bernoulli's equation. Applying potential-flow theory to a lifting flow requires special treatment and an additional assumption. The problem arises because lift on an airfoil in inviscid flow requires circulation in the flow around the airfoil (See "Circulation and the Kutta–Joukowski theorem" below), but a single potential function that is continuous throughout the domain around the airfoil cannot represent a flow with nonzero circulation. The solution to this problem is to introduce a branch cut, a curve or line from some point on the airfoil surface out to infinite distance, and to allow a jump in the value of the potential across the cut. The jump in the potential imposes circulation in the flow equal to the potential jump and thus allows nonzero circulation to be represented. However, the potential jump is a free parameter that is not determined by the potential equation or the other boundary conditions, and the solution is thus indeterminate. A potential-flow solution exists for any value of the circulation and any value of the lift. One way to resolve this indeterminacy is to impose the Kutta condition, which is that, of all the possible solutions, the physically reasonable solution is the one in which the flow leaves the trailing edge smoothly. The streamline sketches illustrate one flow pattern with zero lift, in which the flow goes around the trailing edge and leaves the upper surface ahead of the trailing edge, and another flow pattern with positive lift, in which the flow leaves smoothly at the trailing edge in accordance with the Kutta condition. === Linearized potential flow === This is potential-flow theory with the further assumptions that the airfoil is very thin and the angle of attack is small. The linearized theory predicts the general character of the airfoil pressure distribution and how it is influenced by airfoil shape and angle of attack, but is not accurate enough for design work. For a 2D airfoil, such calculations can be done in a fraction of a second in a spreadsheet on a PC. === Circulation and the Kutta–Joukowski theorem === When an airfoil generates lift, several components of the overall velocity field contribute to a net circulation of air around it: the upward flow ahead of the airfoil, the accelerated flow above, the decelerated flow below, and the downward flow behind. The circulation can be understood as the total amount of "spinning" (or vorticity) of an inviscid fluid around the airfoil. The Kutta–Joukowski theorem relates the lift per unit width of span of a two-dimensional airfoil to this circulation component of the flow. It is a key element in an explanation of lift that follows the development of the flow around an airfoil as the airfoil starts its motion from rest and a starting vortex is formed and left behind, leading to the formation of circulation around the airfoil. Lift is then inferred from the Kutta-Joukowski theorem. 
This explanation is largely mathematical, and its general progression is based on logical inference, not physical cause-and-effect. The Kutta–Joukowski model does not predict how much circulation or lift a two-dimensional airfoil produces. Calculating the lift per unit span using Kutta–Joukowski requires a known value for the circulation. In particular, if the Kutta condition is met, in which the rear stagnation point moves to the airfoil trailing edge and attaches there for the duration of flight, the lift can be calculated theoretically through the conformal mapping method. The lift generated by a conventional airfoil is dictated by both its design and the flight conditions, such as forward velocity, angle of attack and air density. Lift can be increased by artificially increasing the circulation, for example by boundary-layer blowing or the use of blown flaps. In the Flettner rotor the entire airfoil is circular and spins about a spanwise axis to create the circulation. == Three-dimensional flow == The flow around a three-dimensional wing involves significant additional issues, especially relating to the wing tips. For a wing of low aspect ratio, such as a typical delta wing, two-dimensional theories may provide a poor model and three-dimensional flow effects can dominate. Even for wings of high aspect ratio, the three-dimensional effects associated with finite span can affect the whole span, not just close to the tips. === Wing tips and spanwise distribution === The vertical pressure gradient at the wing tips causes air to flow sideways, out from under the wing then up and back over the upper surface. This reduces the pressure gradient at the wing tip, therefore also reducing lift. The lift tends to decrease in the spanwise direction from root to tip, and the pressure distributions around the airfoil sections change accordingly in the spanwise direction. Pressure distributions in planes perpendicular to the flight direction tend to look like the illustration at right. This spanwise-varying pressure distribution is sustained by a mutual interaction with the velocity field. Flow below the wing is accelerated outboard, flow outboard of the tips is accelerated upward, and flow above the wing is accelerated inboard, which results in the flow pattern illustrated at right. There is more downward turning of the flow than there would be in a two-dimensional flow with the same airfoil shape and sectional lift, and a higher sectional angle of attack is required to achieve the same lift compared to a two-dimensional flow. The wing is effectively flying in a downdraft of its own making, as if the freestream flow were tilted downward, with the result that the total aerodynamic force vector is tilted backward slightly compared to what it would be in two dimensions. The additional backward component of the force vector is called lift-induced drag. The difference in the spanwise component of velocity above and below the wing (between being in the inboard direction above and in the outboard direction below) persists at the trailing edge and into the wake downstream. After the flow leaves the trailing edge, this difference in velocity takes place across a relatively thin shear layer called a vortex sheet. === Horseshoe vortex system === The wingtip flow leaving the wing creates a tip vortex. As the main vortex sheet passes downstream from the trailing edge, it rolls up at its outer edges, merging with the tip vortices. 
The combination of the wingtip vortices and the vortex sheets feeding them is called the vortex wake. In addition to the vorticity in the trailing vortex wake there is vorticity in the wing's boundary layer, called 'bound vorticity', which connects the trailing sheets from the two sides of the wing into a vortex system in the general form of a horseshoe. The horseshoe form of the vortex system was recognized by the British aeronautical pioneer Lanchester in 1907. Given the distribution of bound vorticity and the vorticity in the wake, the Biot–Savart law (a vector-calculus relation) can be used to calculate the velocity perturbation anywhere in the field, caused by the lift on the wing. Approximate theories for the lift distribution and lift-induced drag of three-dimensional wings are based on such analysis applied to the wing's horseshoe vortex system. In these theories, the bound vorticity is usually idealized and assumed to reside at the camber surface inside the wing. Because the velocity is deduced from the vorticity in such theories, some authors describe the situation to imply that the vorticity is the cause of the velocity perturbations, using terms such as "the velocity induced by the vortex", for example. But attributing mechanical cause-and-effect between the vorticity and the velocity in this way is not consistent with the physics. The velocity perturbations in the flow around a wing are in fact produced by the pressure field. == Manifestations of lift in the farfield == === Integrated force/momentum balance in lifting flows === The flow around a lifting airfoil must satisfy Newton's second law regarding conservation of momentum, both locally at every point in the flow field, and in an integrated sense over any extended region of the flow. For an extended region, Newton's second law takes the form of the momentum theorem for a control volume, where a control volume can be any region of the flow chosen for analysis. The momentum theorem states that the integrated force exerted at the boundaries of the control volume (a surface integral), is equal to the integrated time rate of change (material derivative) of the momentum of fluid parcels passing through the interior of the control volume. For a steady flow, this can be expressed in the form of the net surface integral of the flux of momentum through the boundary. The lifting flow around a 2D airfoil is usually analyzed in a control volume that completely surrounds the airfoil, so that the inner boundary of the control volume is the airfoil surface, where the downward force per unit span − L ′ {\displaystyle -L'} is exerted on the fluid by the airfoil. The outer boundary is usually either a large circle or a large rectangle. At this outer boundary distant from the airfoil, the velocity and pressure are well represented by the velocity and pressure associated with a uniform flow plus a vortex, and viscous stress is negligible, so that the only force that must be integrated over the outer boundary is the pressure. The free-stream velocity is usually assumed to be horizontal, with lift vertically upward, so that the vertical momentum is the component of interest. For the free-air case (no ground plane), the force − L ′ {\displaystyle -L'} exerted by the airfoil on the fluid is manifested partly as momentum fluxes and partly as pressure differences at the outer boundary, in proportions that depend on the shape of the outer boundary, as shown in the diagram at right. 
For a flat horizontal rectangle that is much longer than it is tall, the fluxes of vertical momentum through the front and back are negligible, and the lift is accounted for entirely by the integrated pressure differences on the top and bottom. For a square or circle, the momentum fluxes and pressure differences account for half the lift each. For a vertical rectangle that is much taller than it is wide, the unbalanced pressure forces on the top and bottom are negligible, and lift is accounted for entirely by momentum fluxes, with a flux of upward momentum that enters the control volume through the front accounting for half the lift, and a flux of downward momentum that exits the control volume through the back accounting for the other half. The results of all of the control-volume analyses described above are consistent with the Kutta–Joukowski theorem described above. Both the tall rectangle and circle control volumes have been used in derivations of the theorem. === Lift reacted by overpressure on the ground under an airplane === An airfoil produces a pressure field in the surrounding air, as explained under "The wider flow around the airfoil" above. The pressure differences associated with this field die off gradually, becoming very small at large distances, but never disappearing altogether. Below the airplane, the pressure field persists as a positive pressure disturbance that reaches the ground, forming a pattern of slightly-higher-than-ambient pressure on the ground, as shown on the right. Although the pressure differences are very small far below the airplane, they are spread over a wide area and add up to a substantial force. For steady, level flight, the integrated force due to the pressure differences is equal to the total aerodynamic lift of the airplane and to the airplane's weight. According to Newton's third law, this pressure force exerted on the ground by the air is matched by an equal-and-opposite upward force exerted on the air by the ground, which offsets all of the downward force exerted on the air by the airplane. The net force due to the lift, acting on the atmosphere as a whole, is therefore zero, and thus there is no integrated accumulation of vertical momentum in the atmosphere, as was noted by Lanchester early in the development of modern aerodynamics. == See also == Drag coefficient Flow separation Fluid dynamics Foil (fluid mechanics) Küssner effect Lift-to-drag ratio Lifting-line theory Spoiler (automotive) == Footnotes == == References == == Further reading == == External links == Discussion of the apparent "conflict" between the various explanations of lift Archived July 25, 2021, at the Wayback Machine NASA tutorial, with animation, describing lift Archived March 9, 2009, at the Wayback Machine NASA FoilSim II 1.5 beta. Lift simulator Explanation of Lift with animation of fluid flow around an airfoil Archived June 13, 2021, at the Wayback Machine A treatment of why and how wings generate lift that focuses on pressure Archived December 19, 2006, at the Wayback Machine Physics of Flight – reviewed Archived March 9, 2021, at the Wayback Machine. Online paper by Prof. Dr. Klaus Weltner How do Wings Work? Holger Babinsky Bernoulli Or Newton: Who's Right About Lift? Archived September 24, 2015, at the Wayback Machine Plane and Pilot magazine One Minute Physics How Does a Wing actually work? 
Archived May 20, 2021, at the Wayback Machine (YouTube video) How wings really work, University of Cambridge Archived June 14, 2021, at the Wayback Machine Holger Babinsky (referred by "One Minute Physics How Does a Wing actually work?" YouTube video) From Summit to Seafloor – Lifted Weight as a Function of Altitude and Depth by Rolf Steinegger Joukowski Transform Interactive WebApp Archived October 19, 2019, at the Wayback Machine How Planes Fly Archived June 11, 2021, at the Wayback Machine YouTube video presentation by Krzysztof Fidkowski, associate professor of Aerospace Engineering at the University of Michigan
Wikipedia/Lift_(physics)
Aristotelian physics is the form of natural philosophy described in the works of the Greek philosopher Aristotle (384–322 BC). In his work Physics, Aristotle intended to establish general principles of change that govern all natural bodies, both living and inanimate, celestial and terrestrial – including all motion (change with respect to place), quantitative change (change with respect to size or number), qualitative change, and substantial change ("coming to be" [coming into existence, 'generation'] or "passing away" [no longer existing, 'corruption']). To Aristotle, 'physics' was a broad field including subjects which would now be called the philosophy of mind, sensory experience, memory, anatomy and biology. It constitutes the foundation of the thought underlying many of his works. Key concepts of Aristotelian physics include the structuring of the cosmos into concentric spheres, with the Earth at the centre and celestial spheres around it. The terrestrial sphere was made of four elements, namely earth, air, fire, and water, subject to change and decay. The celestial spheres were made of a fifth element, an unchangeable aether. Objects made of these elements have natural motions: those of earth and water tend to fall; those of air and fire, to rise. The speed of such motion depends on their weights and the density of the medium. Aristotle argued that a vacuum could not exist as speeds would become infinite. Aristotle described four causes or explanations of change as seen on earth: the material, formal, efficient, and final causes of things. As regards living things, Aristotle's biology relied on observation of what he considered to be ‘natural kinds’, both those he considered basic and the groups to which he considered these belonged. He did not conduct experiments in the modern sense, but relied on amassing data, observational procedures such as dissection, and making hypotheses about relationships between measurable quantities such as body size and lifespan. == Methods == According to Aristotle, "nature is everywhere the cause of order." While consistent with common human experience, Aristotle's principles were not based on controlled, quantitative experiments, so they do not describe our universe in the precise, quantitative way now expected of science. Contemporaries of Aristotle like Aristarchus rejected these principles in favor of heliocentrism, but their ideas were not widely accepted. Aristotle's principles were difficult to disprove merely through casual everyday observation, but later development of the scientific method challenged his views with experiments and careful measurement, using increasingly advanced technology such as the telescope and vacuum pump. In claiming novelty for their doctrines, those natural philosophers who developed the "new science" of the seventeenth century frequently contrasted "Aristotelian" physics with their own. Physics of the former sort, so they claimed, emphasized the qualitative at the expense of the quantitative, neglected mathematics and its proper role in physics (particularly in the analysis of local motion), and relied on such suspect explanatory principles as final causes and "occult" essences. Yet in his Physics Aristotle characterizes physics or the "science of nature" as pertaining to magnitudes (megethê), motion (or "process" or "gradual change" – kinêsis), and time (chronon) (Phys III.4 202b30–1).
Indeed, the Physics is largely concerned with an analysis of motion, particularly local motion, and the other concepts that Aristotle believes are requisite to that analysis. There are clear differences between modern and Aristotelian physics, the main being the use of mathematics, largely absent in Aristotle. Some recent studies, however, have re-evaluated Aristotle's physics, stressing both its empirical validity and its continuity with modern physics. == Concepts == === Elements and spheres === Aristotle divided his universe into "terrestrial spheres" which were "corruptible" and where humans lived, and moving but otherwise unchanging celestial spheres. Aristotle believed that four classical elements make up everything in the terrestrial spheres: earth, air, fire and water. He also held that the heavens are made of a special weightless and incorruptible (i.e. unchangeable) fifth element called "aether". Aether also has the name "quintessence", meaning, literally, "fifth being". Aristotle considered heavy matter such as iron and other metals to consist primarily of the element earth, with a smaller amount of the other three terrestrial elements. Other, lighter objects, he believed, have less earth, relative to the other three elements in their composition. The four classical elements were not invented by Aristotle; they were originated by Empedocles. During the Scientific Revolution, the ancient theory of classical elements was found to be incorrect, and was replaced by the empirically tested concept of chemical elements. ==== Celestial spheres ==== According to Aristotle, the Sun, Moon, planets and stars are embedded in perfectly concentric "crystal spheres" that rotate eternally at fixed rates. Because the celestial spheres are incapable of any change except rotation, the terrestrial sphere of fire must account for the heat, starlight and occasional meteorites. The lowest, lunar sphere is the only celestial sphere that actually comes in contact with the sublunary orb's changeable, terrestrial matter, dragging the rarefied fire and air along underneath as it rotates. Just as Homer's æthere (αἰθήρ) – the "pure air" of Mount Olympus – was the divine counterpart of the air breathed by mortal beings (άήρ, aer), the celestial spheres are composed of the special element aether, eternal and unchanging, the sole capability of which is a uniform circular motion at a given rate (relative to the diurnal motion of the outermost sphere of fixed stars). The concentric, aetherial, cheek-by-jowl "crystal spheres" that carry the Sun, Moon and stars move eternally with unchanging circular motion. Spheres are embedded within spheres to account for the "wandering stars" (i.e. the planets, which, in comparison with the Sun, Moon and stars, appear to move erratically). Mercury, Venus, Mars, Jupiter, and Saturn are the only planets (including minor planets) which were visible before the invention of the telescope, which is why Neptune and Uranus are not included, nor are any asteroids. Later, the belief that all spheres are concentric was forsaken in favor of Ptolemy's deferent and epicycle model. Aristotle submits to the calculations of astronomers regarding the total number of spheres and various accounts give a number in the neighborhood of fifty spheres. An unmoved mover is assumed for each sphere, including a "prime mover" for the sphere of fixed stars. The unmoved movers do not push the spheres (nor could they, being immaterial and dimensionless) but are the final cause of the spheres' motion, i.e.
they explain it in a way that's similar to the explanation "the soul is moved by beauty". === Terrestrial change === Unlike the eternal and unchanging celestial aether, each of the four terrestrial elements is capable of changing into either of the two elements it shares a property with: e.g. the cold and wet (water) can transform into the hot and wet (air) or the cold and dry (earth). Any apparent change from cold and wet into the hot and dry (fire) is actually a two-step process, as first one of the properties changes, then the other. These properties are predicated of an actual substance relative to the work it is able to do; that of heating or chilling and of desiccating or moistening. The four elements exist only with regard to this capacity and relative to some potential work. The celestial element is eternal and unchanging, so only the four terrestrial elements account for "coming to be" and "passing away" – or, in the terms of Aristotle's On Generation and Corruption (Περὶ γενέσεως καὶ φθορᾶς), "generation" and "corruption". === Natural place === The Aristotelian explanation of gravity is that all bodies move toward their natural place. For the elements earth and water, that place is the center of the (geocentric) universe; the natural place of water is a concentric shell around the Earth because earth is heavier; it sinks in water. The natural place of air is likewise a concentric shell surrounding that of water; bubbles rise in water. Finally, the natural place of fire is higher than that of air but below the innermost celestial sphere (carrying the Moon). In Book Delta of his Physics (IV.5), Aristotle defines topos (place) in terms of two bodies, one of which contains the other: a "place" is where the inner surface of the former (the containing body) touches the contained body. This definition remained dominant until the beginning of the 17th century, even though it had been questioned and debated by philosophers since antiquity. The most significant early critique was made in terms of geometry by the 11th-century Arab polymath al-Hasan Ibn al-Haytham (Alhazen) in his Discourse on Place. === Natural motion === Terrestrial objects rise or fall, to a greater or lesser extent, according to the ratio of the four elements of which they are composed. For example, earth, the heaviest element, and water fall toward the center of the cosmos; hence the Earth, and for the most part its oceans, will have already come to rest there. At the opposite extreme, the lightest elements, air and especially fire, rise up and away from the center. The elements are not proper substances in Aristotelian theory (or the modern sense of the word). Instead, they are abstractions used to explain the varying natures and behaviors of actual materials in terms of ratios between them. Motion and change are closely related in Aristotelian physics. Motion, according to Aristotle, involved a change from potentiality to actuality. He gave examples of four types of change, namely change in substance, in quality, in quantity and in place. Aristotle proposed that the speed at which two identically shaped objects sink or fall is directly proportional to their weights and inversely proportional to the density of the medium through which they move. While describing their terminal velocity, Aristotle must stipulate that there would be no limit at which to compare the speed of atoms falling through a vacuum (they could move indefinitely fast because there would be no particular place for them to come to rest in the void).
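Aristotle's proportionality for natural motion, with speed increasing with weight and decreasing with the density of the medium, can be written as a one-line rule. The sketch below is only a schematic of that claim; the constant of proportionality and the sample weights and densities are invented for illustration.

```python
# Schematic of the Aristotelian rule: fall speed ~ weight / medium density.
# k is an arbitrary constant; the weights and densities are illustrative only.
def aristotelian_fall_speed(weight, medium_density, k=1.0):
    return k * weight / medium_density

for weight in (1.0, 10.0):                    # a light and a heavy body (arbitrary units)
    for medium, density in (("water", 1000.0), ("air", 1.2)):
        speed = aristotelian_fall_speed(weight, density)
        print(f"weight {weight:4.1f} in {medium:5s}: predicted speed {speed:8.3f}")
# The rule predicts that the 10x heavier body falls 10x faster, and that the speed grows
# without bound as the medium density approaches zero, which is the basis of Aristotle's
# argument that motion in a void would be indefinitely fast.
```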
Now, however, it is understood that at any time prior to achieving terminal velocity in a relatively resistance-free medium like air, two such objects are expected to have nearly identical speeds because both are experiencing a force of gravity proportional to their masses and have thus been accelerating at nearly the same rate. This became especially apparent from the eighteenth century when partial vacuum experiments began to be made, but some two hundred years earlier Galileo had already demonstrated that objects of different weights reach the ground in similar times. === Unnatural motion === Apart from the natural tendency of terrestrial exhalations to rise and objects to fall, unnatural or forced motion from side to side results from the turbulent collision and sliding of the objects as well as transmutation between the elements (On Generation and Corruption). Aristotle phrased this principle as: "Everything that moves is moved by something else. (Omne quod movetur ab alio movetur.)" When the cause ceases, so does the effect. The cause, according to Aristotle, must be a power (i.e., force) that drives the body as long as the external agent remains in direct contact. Aristotle went on to say that the velocity of the body is directly proportional to the force imparted and inversely proportional to the resistance of the medium in which the motion takes place. This gives the law in today's notation: {\displaystyle {\text{velocity}}\propto {\frac {\text{imparted power}}{\text{resistance}}}} This law presented three difficulties that Aristotle was aware of. The first is that if the imparted power is less than the resistance, then in reality it will not move the body, but Aristotle's relation says otherwise. Second, what is the source of the increase in imparted power required to increase the velocity of a freely falling body? Third, what is the imparted power that keeps a projectile in motion after it leaves the agent of projection? Aristotle, in his book Physics, Book 8, Chapter 10, 267a 4, proposed the following solution to the third problem in the case of a shot arrow. The bowstring or hand imparts a certain 'power of being a movent' to the air in contact with it, so that this imparted force is transmitted to the next layer of air, and so on, thus keeping the arrow in motion until the power gradually dissipates. ==== Chance ==== In his Physics Aristotle examines accidents (συμβεβηκός, symbebekòs) that have no cause but chance. "Nor is there any definite cause for an accident, but only chance (τύχη, týche), namely an indefinite (ἀόριστον, aóriston) cause" (Metaphysics V, 1025a25). It is obvious that there are principles and causes which are generable and destructible apart from the actual processes of generation and destruction; for if this is not true, everything will be of necessity: that is, if there must necessarily be some cause, other than accidental, of that which is generated and destroyed. Will this be, or not? Yes, if this happens; otherwise not (Metaphysics VI, 1027a29). === Continuum and vacuum === Aristotle argues against the indivisibles of Democritus (which differ considerably from the historical and the modern use of the term "atom"). As a place without anything existing at or within it, Aristotle argued against the possibility of a vacuum or void.
Because he believed that the speed of an object's motion is proportional to the force being applied (or, in the case of natural motion, the object's weight) and inversely proportional to the density of the medium, he reasoned that objects moving in a void would move indefinitely fast – and thus any and all objects surrounding the void would immediately fill it. The void, therefore, could never form. The "voids" of modern-day astronomy (such as the Local Void adjacent to our own galaxy) have the opposite effect: ultimately, bodies off-center are ejected from the void due to the gravity of the material outside. === Four causes === According to Aristotle, there are four ways to explain the aitia or causes of change. He writes that "we do not have knowledge of a thing until we have grasped its why, that is to say, its cause." Aristotle held that there were four kinds of causes. ==== Material ==== The material cause of a thing is that of which it is made. For a table, that might be wood; for a statue, that might be bronze or marble. "In one way we say that the aition is that out of which, as existing, something comes to be, like the bronze for the statue, the silver for the phial, and their genera" (194b23—6). By "genera", Aristotle means more general ways of classifying the matter (e.g. "metal"; "material"); and that will become important. A little later on, he broadens the range of the material cause to include letters (of syllables), fire and the other elements (of physical bodies), parts (of wholes), and even premises (of conclusions: Aristotle re-iterates this claim, in slightly different terms, in An. Post II. 11). ==== Formal ==== The formal cause of a thing is the essential property that makes it the kind of thing it is. In Metaphysics Book Α Aristotle emphasizes that form is closely related to essence and definition. He says for example that the ratio 2:1, and number in general, is the cause of the octave. "Another [cause] is the form and the exemplar: this is the formula (logos) of the essence (to ti en einai), and its genera, for instance the ratio 2:1 of the octave" (Phys II.3 194b26—8)... Form is not just shape... We are asking (and this is the connection with essence, particularly in its canonical Aristotelian formulation) what it is to be some thing. And it is a feature of musical harmonics (first noted and wondered at by the Pythagoreans) that intervals of this type do indeed exhibit this ratio in some form in the instruments used to create them (the length of pipes, of strings, etc.). In some sense, the ratio explains what all the intervals have in common, why they turn out the same. ==== Efficient ==== The efficient cause of a thing is the primary agency by which its matter took its form. For example, the efficient cause of a baby is a parent of the same species and that of a table is a carpenter, who knows the form of the table. In his Physics II, 194b29—32, Aristotle writes: "there is that which is the primary originator of the change and of its cessation, such as the deliberator who is responsible [sc. for the action] and the father of the child, and in general the producer of the thing produced and the changer of the thing changed". Aristotle’s examples here are instructive: one case of mental and one of physical causation, followed by a perfectly general characterization. But they conceal (or at any rate fail to make patent) a crucial feature of Aristotle’s concept of efficient causation, and one which serves to distinguish it from most modern homonyms.
For Aristotle, any process requires a constantly operative efficient cause as long as it continues. This commitment appears most starkly to modern eyes in Aristotle’s discussion of projectile motion: what keeps the projectile moving after it leaves the hand? "Impetus", "momentum", much less "inertia", are not possible answers. There must be a mover, distinct (at least in some sense) from the thing moved, which is exercising its motive capacity at every moment of the projectile’s flight (see Phys VIII. 10 266b29—267a11). Similarly, in every case of animal generation, there is always some thing responsible for the continuity of that generation, although it may do so by way of some intervening instrument (Phys II.3 194b35—195a3). ==== Final ==== The final cause is that for the sake of which something takes place, its aim or teleological purpose: for a germinating seed, it is the adult plant, for a ball at the top of a ramp, it is coming to rest at the bottom, for an eye, it is seeing, for a knife, it is cutting. Goals have an explanatory function: that is a commonplace, at least in the context of action-ascriptions. Less of a commonplace is the view espoused by Aristotle, that finality and purpose are to be found throughout nature, which is for him the realm of those things which contain within themselves principles of movement and rest (i.e. efficient causes); thus it makes sense to attribute purposes not only to natural things themselves, but also to their parts: the parts of a natural whole exist for the sake of the whole. As Aristotle himself notes, "for the sake of" locutions are ambiguous: "A is for the sake of B" may mean that A exists or is undertaken in order to bring B about; or it may mean that A is for B’s benefit (An II.4 415b2—3, 20—1); but both types of finality have, he thinks, a crucial role to play in natural, as well as deliberative, contexts. Thus a man may exercise for the sake of his health: and so "health", and not just the hope of achieving it, is the cause of his action (this distinction is not trivial). But the eyelids are for the sake of the eye (to protect it: PA II.1 3) and the eye for the sake of the animal as a whole (to help it function properly: cf. An II.7). === Biology === According to Aristotle, the science of living things proceeds by gathering observations about each natural kind of animal, organizing them into genera and species (the differentiae in History of Animals) and then going on to study the causes (in Parts of Animals and Generation of Animals, his three main biological works). The four causes of animal generation can be summarized as follows. The mother and father represent the material and efficient causes, respectively. The mother provides the matter out of which the embryo is formed, while the father provides the agency that informs that material and triggers its development. The formal cause is the definition of the animal’s substantial being (GA I.1 715a4: ho logos tês ousias). The final cause is the adult form, which is the end for the sake of which development takes place. ==== Organism and mechanism ==== The four elements make up the uniform materials such as blood, flesh and bone, which are themselves the matter out of which are created the non-uniform organs of the body (e.g. the heart, liver and hands) "which in turn, as parts, are matter for the functioning body as a whole (PA II. 1 646a 13—24)". 
[There] is a certain obvious conceptual economy about the view that in natural processes naturally constituted things simply seek to realize in full actuality the potentials contained within them (indeed, this is what it is for them to be natural); on the other hand, as the detractors of Aristotelianism from the seventeenth century on were not slow to point out, this economy is won at the expense of any serious empirical content. Mechanism, at least as practiced by Aristotle’s contemporaries and predecessors, may have been explanatorily inadequate – but at least it was an attempt at a general account given in reductive terms of the lawlike connections between things. Simply introducing what later reductionists were to scoff at as "occult qualities" does not explain – it merely, in the manner of Molière’s famous satirical joke, serves to re-describe the effect. Formal talk, or so it is said, is vacuous. Things are not however quite as bleak as this. For one thing, there’s no point in trying to engage in reductionist science if you don’t have the wherewithal, empirical and conceptual, to do so successfully: science shouldn't be simply unsubstantiated speculative metaphysics. But more than that, there is a point to describing the world in such teleologically loaded terms: it makes sense of things in a way that atomist speculations do not. And further, Aristotle’s talk of species-forms is not as empty as his opponents would insinuate. He doesn't simply say that things do what they do because that's the sort of thing they do: the whole point of his classificatory biology, most clearly exemplified in PA, is to show what sorts of function go with what, which presuppose which and which are subservient to which. And in this sense, formal or functional biology is susceptible of a type of reductionism. We start, he tells us, with the basic animal kinds which we all pre-theoretically (although not indefeasibly) recognize (cf. PA I.4): but we then go on to show how their parts relate to one another: why it is, for instance, that only blooded creatures have lungs, and how certain structures in one species are analogous or homologous to those in another (such as scales in fish, feathers in birds, hair in mammals). And the answers, for Aristotle, are to be found in the economy of functions, and how they all contribute to the overall well-being (the final cause in this sense) of the animal. See also Organic form. ==== Psychology ==== According to Aristotle, perception and thought are similar, though not exactly alike in that perception is concerned only with the external objects that are acting on our sense organs at any given time, whereas we can think about anything we choose. Thought is about universal forms, in so far as they have been successfully understood, based on our memory of having encountered instances of those forms directly. Aristotle’s theory of cognition rests on two central pillars: his account of perception and his account of thought. Together, they make up a significant portion of his psychological writings, and his discussion of other mental states depends critically on them. These two activities, moreover, are conceived of in an analogous manner, at least with regard to their most basic forms. Each activity is triggered by its object – each, that is, is about the very thing that brings it about.
This simple causal account explains the reliability of cognition: perception and thought are, in effect, transducers, bringing information about the world into our cognitive systems, because, at least in their most basic forms, they are infallibly about the causes that bring them about (An III.4 429a13–18). Other, more complex mental states are far from infallible. But they are still tethered to the world, in so far as they rest on the unambiguous and direct contact perception and thought enjoy with their objects. == Medieval commentary == The Aristotelian theory of motion came under criticism and modification during the Middle Ages. Modifications began with John Philoponus in the 6th century, who partly accepted Aristotle's theory that "continuation of motion depends on continued action of a force" but modified it to include his idea that a hurled body also acquires an inclination (or "motive power") for movement away from whatever caused it to move, an inclination that secures its continued motion. This impressed virtue would be temporary and self-expending, meaning that all motion would tend toward the form of Aristotle's natural motion. In The Book of Healing (1027), the 11th-century Persian polymath Avicenna developed Philoponean theory into the first coherent alternative to Aristotelian theory. Inclinations in the Avicennan theory of motion were not self-consuming but permanent forces whose effects were dissipated only as a result of external agents such as air resistance, making him "the first to conceive such a permanent type of impressed virtue for non-natural motion". Such a self-motion (mayl) is "almost the opposite of the Aristotelian conception of violent motion of the projectile type, and it is rather reminiscent of the principle of inertia, i.e. Newton's first law of motion." The eldest Banū Mūsā brother, Ja'far Muhammad ibn Mūsā ibn Shākir (800-873), wrote the Astral Motion and The Force of Attraction. The Persian physicist, Ibn al-Haytham (965-1039) discussed the theory of attraction between bodies. It seems that he was aware of the magnitude of acceleration due to gravity and he discovered that the heavenly bodies "were accountable to the laws of physics". During his debate with Avicenna, al-Biruni also criticized the Aristotelian theory of gravity firstly for denying the existence of levity or gravity in the celestial spheres; and, secondly, for its notion of circular motion being an innate property of the heavenly bodies. Hibat Allah Abu'l-Barakat al-Baghdaadi (1080–1165) wrote al-Mu'tabar, a critique of Aristotelian physics where he negated Aristotle's idea that a constant force produces uniform motion, as he realized that a force applied continuously produces acceleration, a fundamental law of classical mechanics and an early foreshadowing of Newton's second law of motion. Like Newton, he described acceleration as the rate of change of speed. In the 14th century, Jean Buridan developed the theory of impetus as an alternative to the Aristotelian theory of motion. The theory of impetus was a precursor to the concepts of inertia and momentum in classical mechanics. Buridan and Albert of Saxony also refer to Abu'l-Barakat in explaining that the acceleration of a falling body is a result of its increasing impetus. In the 16th century, Al-Birjandi discussed the possibility of the Earth's rotation and, in his analysis of what might occur if the Earth were rotating, developed a hypothesis similar to Galileo's notion of "circular inertia". 
He described it in terms of the following observational test: "The small or large rock will fall to the Earth along the path of a line that is perpendicular to the plane (sath) of the horizon; this is witnessed by experience (tajriba). And this perpendicular is away from the tangent point of the Earth’s sphere and the plane of the perceived (hissi) horizon. This point moves with the motion of the Earth and thus there will be no difference in place of fall of the two rocks." == Life and death of Aristotelian physics == The reign of Aristotelian physics, the earliest known speculative theory of physics, lasted almost two millennia. After the work of many pioneers such as Copernicus, Tycho Brahe, Galileo, Kepler, Descartes and Newton, it became generally accepted that Aristotelian physics was neither correct nor viable. Despite this, it survived as a scholastic pursuit well into the seventeenth century, until universities amended their curricula. In Europe, Aristotle's theory was first convincingly discredited by Galileo's studies. Using a telescope, Galileo observed that the Moon was not entirely smooth, but had craters and mountains, contradicting the Aristotelian idea of the incorruptibly perfect smooth Moon. Galileo also criticized this notion theoretically; a perfectly smooth Moon would reflect light unevenly like a shiny billiard ball, so that the edges of the moon's disk would have a different brightness than the point where a tangent plane reflects sunlight directly to the eye. A rough moon reflects in all directions equally, leading to a disk of approximately equal brightness, which is what is observed. Galileo also observed that Jupiter has moons – i.e. objects revolving around a body other than the Earth – and noted the phases of Venus, which demonstrated that Venus (and, by implication, Mercury) traveled around the Sun, not the Earth. According to legend, Galileo dropped balls of various densities from the Tower of Pisa and found that lighter and heavier ones fell at almost the same speed. His experiments actually took place using balls rolling down inclined planes, a form of falling sufficiently slow to be measured without advanced instruments. In a relatively dense medium such as water, a heavier body falls faster than a lighter one. This led Aristotle to speculate that the rate of falling is proportional to the weight and inversely proportional to the density of the medium. From his experience with objects falling in water, Aristotle concluded that water is approximately ten times denser than air. By weighing a volume of compressed air, Galileo showed that this overestimates the density of air by a factor of forty. From his experiments with inclined planes, he concluded that if friction is neglected, all bodies fall at the same rate (which is also not strictly true, since not only friction but also the density of the medium relative to the density of the bodies has to be negligible: Aristotle correctly noticed that medium density is a factor but focused on body weight instead of density, while Galileo neglected medium density, which led him to the correct conclusion for a vacuum). Galileo also advanced a theoretical argument to support his conclusion. He asked: if two bodies of different weights and different rates of fall are tied by a string, does the combined system fall faster because it is now more massive, or does the lighter body in its slower fall hold back the heavier body? The only convincing answer is neither: all the systems fall at the same rate. 
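Galileo's tied-bodies argument can be made concrete with a small numerical sketch (illustrative only, not from the source): assume the Aristotelian rule that fall speed is proportional to weight and compare the two predictions it forces for the tied pair.

```python
# Illustrative sketch (not from the source): Galileo's tied-bodies argument,
# assuming the Aristotelian rule "fall speed is proportional to weight".
def aristotelian_speed(weight, c=1.0):
    """Hypothetical fall speed under the assumption v = c * weight."""
    return c * weight

heavy, light = 8.0, 1.0
v_heavy = aristotelian_speed(heavy)
v_light = aristotelian_speed(light)

# Prediction 1: the tied pair is a single heavier body, so it falls faster
# than either part alone.
v_as_one_body = aristotelian_speed(heavy + light)

# Prediction 2: the slow light body retards the fast heavy one, so the pair
# falls at some intermediate speed.
v_low, v_high = v_light, v_heavy

print(f"heavy alone: {v_heavy}, light alone: {v_light}")
print(f"prediction 1 (single body of weight 9): {v_as_one_body}")
print(f"prediction 2 (intermediate): between {v_low} and {v_high}")
# The two predictions are incompatible (9 is not between 1 and 8), so the
# premise that speed is proportional to weight must be rejected.
```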
Followers of Aristotle were aware that the motion of falling bodies was not uniform, but picked up speed with time. Since time is an abstract quantity, the peripatetics postulated that the speed was proportional to the distance. Galileo established experimentally that the speed is proportional to the time, but he also gave a theoretical argument that the speed could not possibly be proportional to the distance. In modern terms, if the rate of fall is proportional to the distance, the differential expression for the distance y travelled after time t is: d y d t ∝ y {\displaystyle {dy \over dt}\propto y} with the condition that y ( 0 ) = 0 {\displaystyle y(0)=0} . Galileo demonstrated that this system would stay at y = 0 {\displaystyle y=0} for all time. If a perturbation set the system into motion somehow, the object would pick up speed exponentially in time, not linearly. Standing on the surface of the Moon in 1971, David Scott famously repeated Galileo's experiment by dropping a feather and a hammer from each hand at the same time. In the absence of a substantial atmosphere, the two objects fell and hit the Moon's surface at the same time. The first convincing mathematical theory of gravity – in which two masses are attracted toward each other by a force whose effect decreases according to the inverse square of the distance between them – was Newton's law of universal gravitation. This, in turn, was replaced by the General theory of relativity due to Albert Einstein. == Modern evaluations of Aristotle's physics == Modern scholars differ in their opinions of whether Aristotle's physics were sufficiently based on empirical observations to qualify as science, or else whether they were derived primarily from philosophical speculation and thus fail to satisfy the scientific method. Carlo Rovelli has argued that Aristotle's physics are an accurate and non-intuitive representation of a particular domain (motion in fluids), and thus are just as scientific as Newton's laws of motion, which also are accurate in some domains while failing in others (i.e. special and general relativity). == As listed in the Corpus Aristotelicum == == See also == Minima naturalia, a hylomorphic concept suggested by Aristotle broadly analogous in Peripatetic and Scholastic physical speculation to the atoms of Epicureanism == Notes == == References == == Sources == H. Carteron (1965) "Does Aristotle Have a Mechanics?" in Articles on Aristotle 1. Science eds. Jonathan Barnes, Malcolm Schofield, Richard Sorabji (London: General Duckworth and Company Limited), 161–174. Ragep, F. Jamil (2001). "Tusi and Copernicus: The Earth's Motion in Context". Science in Context. 14 (1–2). Cambridge University Press: 145–163. doi:10.1017/s0269889701000060. S2CID 145372613. Ragep, F. Jamil; Al-Qushji, Ali (2001). "Freeing Astronomy from Philosophy: An Aspect of Islamic Influence on Science". Osiris. 2nd Series. 16 (Science in Theistic Contexts: Cognitive Dimensions): 49–64 and 66–71. Bibcode:2001Osir...16...49R. doi:10.1086/649338. S2CID 142586786. == Further reading == Katalin Martinás, "Aristotelian Thermodynamics" in Thermodynamics: history and philosophy: facts, trends, debates (Veszprém, Hungary 23–28 July 1990), pp. 285–303.
Wikipedia/Aristotelian_theory_of_gravity
The second law of thermodynamics is a physical law based on universal empirical observation concerning heat and energy interconversions. A simple statement of the law is that heat always flows spontaneously from hotter to colder regions of matter (or 'downhill' in terms of the temperature gradient). Another statement is: "Not all heat can be converted into work in a cyclic process." The second law of thermodynamics establishes the concept of entropy as a physical property of a thermodynamic system. It predicts whether processes are forbidden despite obeying the requirement of conservation of energy as expressed in the first law of thermodynamics and provides necessary criteria for spontaneous processes. For example, the first law allows the process of a cup falling off a table and breaking on the floor, as well as allowing the reverse process of the cup fragments coming back together and 'jumping' back onto the table, while the second law allows the former and denies the latter. The second law may be formulated by the observation that the entropy of isolated systems left to spontaneous evolution cannot decrease, as they always tend toward a state of thermodynamic equilibrium where the entropy is highest at the given internal energy. An increase in the combined entropy of system and surroundings accounts for the irreversibility of natural processes, often referred to in the concept of the arrow of time. Historically, the second law was an empirical finding that was accepted as an axiom of thermodynamic theory. Statistical mechanics provides a microscopic explanation of the law in terms of probability distributions of the states of large assemblies of atoms or molecules. The second law has been expressed in many ways. Its first formulation, which preceded the proper definition of entropy and was based on caloric theory, is Carnot's theorem, formulated by the French scientist Sadi Carnot, who in 1824 showed that the efficiency of conversion of heat to work in a heat engine has an upper limit. The first rigorous definition of the second law based on the concept of entropy came from German scientist Rudolf Clausius in the 1850s and included his statement that heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time. The second law of thermodynamics allows the definition of the concept of thermodynamic temperature, but this has been formally delegated to the zeroth law of thermodynamics. == Introduction == The first law of thermodynamics provides the definition of the internal energy of a thermodynamic system, and expresses its change for a closed system in terms of work and heat. It can be linked to the law of conservation of energy. Conceptually, the first law describes the fundamental principle that systems do not consume or 'use up' energy, that energy is neither created nor destroyed, but is simply converted from one form to another. The second law is concerned with the direction of natural processes. It asserts that a natural process runs only in one sense, and is not reversible. That is, the state of a natural system itself can be reversed, but only at the cost of increasing the entropy of the system's surroundings; the state of the system and the state of its surroundings cannot both be fully restored together, since that would imply the destruction of entropy. For example, when a path for conduction or radiation is made available, heat always flows spontaneously from a hotter to a colder body. 
Such phenomena are accounted for in terms of entropy change. A heat pump can reverse this heat flow, but the reversal process and the original process, both cause entropy production, thereby increasing the entropy of the system's surroundings. If an isolated system containing distinct subsystems is held initially in internal thermodynamic equilibrium by internal partitioning by impermeable walls between the subsystems, and then some operation makes the walls more permeable, then the system spontaneously evolves to reach a final new internal thermodynamic equilibrium, and its total entropy, S {\displaystyle S} , increases. In a reversible or quasi-static, idealized process of transfer of energy as heat to a closed thermodynamic system of interest, (which allows the entry or exit of energy – but not transfer of matter), from an auxiliary thermodynamic system, an infinitesimal increment ( d S {\displaystyle \mathrm {d} S} ) in the entropy of the system of interest is defined to result from an infinitesimal transfer of heat ( δ Q {\displaystyle \delta Q} ) to the system of interest, divided by the common thermodynamic temperature ( T ) {\displaystyle (T)} of the system of interest and the auxiliary thermodynamic system: d S = δ Q T (closed system; idealized, reversible process) . {\displaystyle \mathrm {d} S={\frac {\delta Q}{T}}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\text{(closed system; idealized, reversible process)}}.} Different notations are used for an infinitesimal amount of heat ( δ ) {\displaystyle (\delta )} and infinitesimal change of entropy ( d ) {\displaystyle (\mathrm {d} )} because entropy is a function of state, while heat, like work, is not. For an actually possible infinitesimal process without exchange of mass with the surroundings, the second law requires that the increment in system entropy fulfills the inequality d S > δ Q T surr (closed system; actually possible, irreversible process). {\displaystyle \mathrm {d} S>{\frac {\delta Q}{T_{\text{surr}}}}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\text{(closed system; actually possible, irreversible process).}}} This is because a general process for this case (no mass exchange between the system and its surroundings) may include work being done on the system by its surroundings, which can have frictional or viscous effects inside the system, because a chemical reaction may be in progress, or because heat transfer actually occurs only irreversibly, driven by a finite difference between the system temperature (T) and the temperature of the surroundings (Tsurr). The equality still applies for pure heat flow (only heat flow, no change in chemical composition and mass), d S = δ Q T (actually possible quasistatic irreversible process without composition change). {\displaystyle \mathrm {d} S={\frac {\delta Q}{T}}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\text{(actually possible quasistatic irreversible process without composition change).}}} which is the basis of the accurate determination of the absolute entropy of pure substances from measured heat capacity curves and entropy changes at phase transitions, i.e. by calorimetry. The zeroth law of thermodynamics in its usual short statement allows recognition that two bodies in a relation of thermal equilibrium have the same temperature, especially that a test body has the same temperature as a reference thermometric body. 
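As a small numerical check of the inequality d S ≥ δQ/T_surr stated above, the following sketch (hypothetical heat capacity and temperatures) follows a warm block as it equilibrates irreversibly with a colder reservoir: the block's entropy change exceeds the Clausius bound, and the combined entropy of block and reservoir increases.

```python
# Illustrative check (hypothetical values) of dS >= deltaQ/T_surr for an
# irreversible process: a block of constant heat capacity C equilibrating
# with a colder reservoir at T_surr.
import math

C = 1000.0      # J/K, heat capacity of the block (assumed constant)
T1 = 400.0      # K, initial block temperature
T_surr = 300.0  # K, reservoir (surroundings) temperature

Q_into_block = C * (T_surr - T1)          # heat received by the block (< 0 here)
dS_block = C * math.log(T_surr / T1)      # entropy change of the block
dS_bound = Q_into_block / T_surr          # Clausius bound, integral of deltaQ/T_surr
dS_surroundings = -Q_into_block / T_surr  # reservoir absorbs the heat at T_surr

print(f"dS_block = {dS_block:.1f} J/K")
print(f"Q/T_surr = {dS_bound:.1f} J/K  (dS_block exceeds this, as required)")
print(f"dS_total = {dS_block + dS_surroundings:.1f} J/K  (non-negative)")
```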
For a body in thermal equilibrium with another, there are indefinitely many empirical temperature scales, in general respectively depending on the properties of a particular reference thermometric body. The second law allows a distinguished temperature scale, which defines an absolute, thermodynamic temperature, independent of the properties of any particular reference thermometric body. == Various statements of the law == The second law of thermodynamics may be expressed in many specific ways, the most prominent classical statements being the statement by Rudolf Clausius (1854), the statement by Lord Kelvin (1851), and the statement in axiomatic thermodynamics by Constantin Carathéodory (1909). These statements cast the law in general physical terms citing the impossibility of certain processes. The Clausius and the Kelvin statements have been shown to be equivalent. === Carnot's principle === The historical origin of the second law of thermodynamics was in Sadi Carnot's theoretical analysis of the flow of heat in steam engines (1824). The centerpiece of that analysis, now known as a Carnot engine, is an ideal heat engine fictively operated in the limiting mode of extreme slowness known as quasi-static, so that the heat and work transfers are between subsystems that are always in their own internal states of thermodynamic equilibrium. It represents the theoretical maximum efficiency of a heat engine operating between any two given thermal or heat reservoirs at different temperatures. Carnot's principle was recognized by Carnot at a time when the caloric theory represented the dominant understanding of the nature of heat, before the recognition of the first law of thermodynamics, and before the mathematical expression of the concept of entropy. Interpreted in the light of the first law, Carnot's analysis is physically equivalent to the second law of thermodynamics, and remains valid today. Some samples from his book are: ...wherever there exists a difference of temperature, motive power can be produced. The production of motive power is then due in steam engines not to an actual consumption of caloric, but to its transportation from a warm body to a cold body ... The motive power of heat is independent of the agents employed to realize it; its quantity is fixed solely by the temperatures of the bodies between which is effected, finally, the transfer of caloric. In modern terms, Carnot's principle may be stated more precisely: The efficiency of a quasi-static or reversible Carnot cycle depends only on the temperatures of the two heat reservoirs, and is the same, whatever the working substance. A Carnot engine operated in this way is the most efficient possible heat engine using those two temperatures. === Clausius statement === The German scientist Rudolf Clausius laid the foundation for the second law of thermodynamics in 1850 by examining the relation between heat transfer and work. His formulation of the second law, which was published in German in 1854, is known as the Clausius statement: Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time. The statement by Clausius uses the concept of 'passage of heat'. As is usual in thermodynamic discussions, this means 'net transfer of energy as heat', and does not refer to contributory transfers one way and the other. 
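Carnot's principle stated above reduces to a one-line calculation once the reservoir temperatures are fixed; the sketch below uses hypothetical temperatures and is illustrative only.

```python
# A minimal sketch (hypothetical reservoir temperatures) of Carnot's principle:
# the limiting efficiency of a reversible engine depends only on the two
# reservoir temperatures, not on the working substance.
def carnot_efficiency(t_hot, t_cold):
    """Maximum efficiency of a heat engine between absolute temperatures t_hot > t_cold."""
    return 1.0 - t_cold / t_hot

print(carnot_efficiency(t_hot=800.0, t_cold=300.0))   # 0.625
# Any real (irreversible) engine between the same reservoirs does worse.
```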
Heat cannot spontaneously flow from cold regions to hot regions without external work being performed on the system, which is evident from ordinary experience of refrigeration, for example. In a refrigerator, heat is transferred from cold to hot, but only when forced by an external agent, the refrigeration system. === Kelvin statements === Lord Kelvin expressed the second law in several wordings. It is impossible for a self-acting machine, unaided by any external agency, to convey heat from one body to another at a higher temperature. It is impossible, by means of inanimate material agency, to derive mechanical effect from any portion of matter by cooling it below the temperature of the coldest of the surrounding objects. === Equivalence of the Clausius and the Kelvin statements === Suppose there is an engine violating the Kelvin statement: i.e., one that drains heat and converts it completely into work (the drained heat is fully converted to work) in a cyclic fashion without any other result. Now pair it with a reversed Carnot engine as shown by the right figure. The efficiency of a normal heat engine is η and so the efficiency of the reversed heat engine is 1/η. The net and sole effect of the combined pair of engines is to transfer heat Δ Q = Q ( 1 η − 1 ) {\textstyle \Delta Q=Q\left({\frac {1}{\eta }}-1\right)} from the cooler reservoir to the hotter one, which violates the Clausius statement. This is a consequence of the first law of thermodynamics, as for the total system's energy to remain the same; Input + Output = 0 ⟹ ( Q + Q c ) − Q η = 0 {\textstyle {\text{Input}}+{\text{Output}}=0\implies (Q+Q_{c})-{\frac {Q}{\eta }}=0} , so therefore Q c = Q ( 1 η − 1 ) {\textstyle Q_{c}=Q\left({\frac {1}{\eta }}-1\right)} , where (1) the sign convention of heat is used in which heat entering into (leaving from) an engine is positive (negative) and (2) Q η {\displaystyle {\frac {Q}{\eta }}} is obtained by the definition of efficiency of the engine when the engine operation is not reversed. Thus a violation of the Kelvin statement implies a violation of the Clausius statement, i.e. the Clausius statement implies the Kelvin statement. We can prove in a similar manner that the Kelvin statement implies the Clausius statement, and hence the two are equivalent. === Planck's proposition === Planck offered the following proposition as derived directly from experience. This is sometimes regarded as his statement of the second law, but he regarded it as a starting point for the derivation of the second law. It is impossible to construct an engine which will work in a complete cycle, and produce no effect except the production of work and cooling of a heat reservoir. === Relation between Kelvin's statement and Planck's proposition === It is almost customary in textbooks to speak of the "Kelvin–Planck statement" of the law, as for example in the text by ter Haar and Wergeland. This version, also known as the heat engine statement, of the second law states that It is impossible to devise a cyclically operating device, the sole effect of which is to absorb energy in the form of heat from a single thermal reservoir and to deliver an equivalent amount of work. === Planck's statement === Max Planck stated the second law as follows. Every process occurring in nature proceeds in the sense in which the sum of the entropies of all bodies taking part in the process is increased. In the limit, i.e. for reversible processes, the sum of the entropies remains unchanged. 
Rather like Planck's statement is that of George Uhlenbeck and G. W. Ford for irreversible phenomena. ... in an irreversible or spontaneous change from one equilibrium state to another (as for example the equalization of temperature of two bodies A and B, when brought in contact) the entropy always increases. === Principle of Carathéodory === Constantin Carathéodory formulated thermodynamics on a purely mathematical axiomatic foundation. His statement of the second law is known as the Principle of Carathéodory, which may be formulated as follows: In every neighborhood of any state S of an adiabatically enclosed system there are states inaccessible from S. With this formulation, he described the concept of adiabatic accessibility for the first time and provided the foundation for a new subfield of classical thermodynamics, often called geometrical thermodynamics. It follows from Carathéodory's principle that quantity of energy quasi-statically transferred as heat is a holonomic process function, in other words, δ Q = T d S {\displaystyle \delta Q=TdS} . Though it is almost customary in textbooks to say that Carathéodory's principle expresses the second law and to treat it as equivalent to the Clausius or to the Kelvin-Planck statements, such is not the case. To get all the content of the second law, Carathéodory's principle needs to be supplemented by Planck's principle, that isochoric work always increases the internal energy of a closed system that was initially in its own internal thermodynamic equilibrium. === Planck's principle === In 1926, Max Planck wrote an important paper on the basics of thermodynamics. He indicated the principle The internal energy of a closed system is increased by an adiabatic process, throughout the duration of which, the volume of the system remains constant. This formulation does not mention heat and does not mention temperature, nor even entropy, and does not necessarily implicitly rely on those concepts, but it implies the content of the second law. A closely related statement is that "Frictional pressure never does positive work." Planck wrote: "The production of heat by friction is irreversible." Not mentioning entropy, this principle of Planck is stated in physical terms. It is very closely related to the Kelvin statement given just above. It is relevant that for a system at constant volume and mole numbers, the entropy is a monotonic function of the internal energy. Nevertheless, this principle of Planck is not actually Planck's preferred statement of the second law, which is quoted above, in a previous sub-section of the present section of this present article, and relies on the concept of entropy. A statement that in a sense is complementary to Planck's principle is made by Claus Borgnakke and Richard E. Sonntag. They do not offer it as a full statement of the second law: ... there is only one way in which the entropy of a [closed] system can be decreased, and that is to transfer heat from the system. Differing from Planck's just foregoing principle, this one is explicitly in terms of entropy change. Removal of matter from a system can also decrease its entropy. === Relating the second law to the definition of temperature === The second law has been shown to be equivalent to the internal energy U defined as a convex function of the other extensive properties of the system. That is, when a system is described by stating its internal energy U, an extensive variable, as a function of its entropy S, volume V, and mol number N, i.e. 
U = U (S, V, N), then the temperature is equal to the partial derivative of the internal energy with respect to the entropy (essentially equivalent to the first TdS equation for V and N held constant): T = ( ∂ U ∂ S ) V , N {\displaystyle T=\left({\frac {\partial U}{\partial S}}\right)_{V,N}} === Second law statements, such as the Clausius inequality, involving radiative fluxes === The Clausius inequality, as well as some other statements of the second law, must be re-stated to have general applicability for all forms of heat transfer, i.e. scenarios involving radiative fluxes. For example, the integrand (đQ/T) of the Clausius expression applies to heat conduction and convection, and the case of ideal infinitesimal blackbody radiation (BR) transfer, but does not apply to most radiative transfer scenarios and in some cases has no physical meaning whatsoever. Consequently, the Clausius inequality was re-stated so that it is applicable to cycles with processes involving any form of heat transfer. The entropy transfer with radiative fluxes ( δ S NetRad \delta S_{\text{NetRad}} ) is taken separately from that due to heat transfer by conduction and convection ( δ Q C C \delta Q_{CC} ), where the temperature is evaluated at the system boundary where the heat transfer occurs. The modified Clausius inequality, for all heat transfer scenarios, can then be expressed as, ∫ cycle ( δ Q C C T b + δ S NetRad ) ≤ 0 {\displaystyle \int _{\text{cycle}}({\frac {\delta Q_{CC}}{T_{b}}}+\delta S_{\text{NetRad}})\leq 0} In a nutshell, the Clausius inequality is saying that when a cycle is completed, the change in the state property S will be zero, so the entropy that was produced during the cycle must have transferred out of the system by heat transfer. The δ \delta (or đ) indicates a path dependent integration. Due to the inherent emission of radiation from all matter, most entropy flux calculations involve incident, reflected and emitted radiative fluxes. The energy and entropy of unpolarized blackbody thermal radiation, is calculated using the spectral energy and entropy radiance expressions derived by Max Planck using equilibrium statistical mechanics, K ν = 2 h c 2 ν 3 exp ⁡ ( h ν k T ) − 1 , {\displaystyle K_{\nu }={\frac {2h}{c^{2}}}{\frac {\nu ^{3}}{\exp \left({\frac {h\nu }{kT}}\right)-1}},} L ν = 2 k ν 2 c 2 ( ( 1 + c 2 K ν 2 h ν 3 ) ln ⁡ ( 1 + c 2 K ν 2 h ν 3 ) − ( c 2 K ν 2 h ν 3 ) ln ⁡ ( c 2 K ν 2 h ν 3 ) ) {\displaystyle L_{\nu }={\frac {2k\nu ^{2}}{c^{2}}}((1+{\frac {c^{2}K_{\nu }}{2h\nu ^{3}}})\ln(1+{\frac {c^{2}K_{\nu }}{2h\nu ^{3}}})-({\frac {c^{2}K_{\nu }}{2h\nu ^{3}}})\ln({\frac {c^{2}K_{\nu }}{2h\nu ^{3}}}))} where c is the speed of light, k is the Boltzmann constant, h is the Planck constant, ν is frequency, and the quantities Kv and Lv are the energy and entropy fluxes per unit frequency, area, and solid angle. In deriving this blackbody spectral entropy radiance, with the goal of deriving the blackbody energy formula, Planck postulated that the energy of a photon was quantized (partly to simplify the mathematics), thereby starting quantum theory. A non-equilibrium statistical mechanics approach has also been used to obtain the same result as Planck, indicating it has wider significance and represents a non-equilibrium entropy. A plot of Kv versus frequency (v) for various values of temperature (T) gives a family of blackbody radiation energy spectra, and likewise for the entropy spectra. 
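The spectral energy and entropy radiances quoted above can be evaluated directly. The following sketch is illustrative only: it uses the standard SI values of the constants and an arbitrary temperature and frequency.

```python
# A minimal numerical sketch of Planck's spectral energy radiance K_nu and the
# corresponding spectral entropy radiance L_nu quoted above (SI units;
# temperature and frequency chosen arbitrarily for illustration).
import math

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def K_nu(nu, T):
    """Blackbody spectral energy radiance per unit frequency and solid angle."""
    return (2.0 * h / c**2) * nu**3 / (math.exp(h * nu / (k * T)) - 1.0)

def L_nu(K, nu):
    """Spectral entropy radiance for a given spectral energy radiance K."""
    x = c**2 * K / (2.0 * h * nu**3)   # mean photon occupation number
    return (2.0 * k * nu**2 / c**2) * ((1.0 + x) * math.log(1.0 + x) - x * math.log(x))

T, nu = 5800.0, 5.0e14   # roughly solar temperature and a visible-light frequency
K = K_nu(nu, T)
print(K, L_nu(K, nu))
```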
For non-blackbody radiation (NBR) emission fluxes, the spectral entropy radiance Lv is found by substituting Kv spectral energy radiance data into the Lv expression (noting that emitted and reflected entropy fluxes are, in general, not independent). For the emission of NBR, including graybody radiation (GR), the resultant emitted entropy flux, or radiance L, has a higher ratio of entropy-to-energy (L/K), than that of BR. That is, the entropy flux of NBR emission is farther removed from the conduction and convection q/T result, than that for BR emission. This observation is consistent with Max Planck's blackbody radiation energy and entropy formulas and is consistent with the fact that blackbody radiation emission represents the maximum emission of entropy for all materials with the same temperature, as well as the maximum entropy emission for all radiation with the same energy radiance. === Generalized conceptual statement of the second law principle === Second law analysis is valuable in scientific and engineering analysis in that it provides a number of benefits over energy analysis alone, including the basis for determining energy quality (exergy content), understanding fundamental physical phenomena, and improving performance evaluation and optimization. As a result, a conceptual statement of the principle is very useful in engineering analysis. Thermodynamic systems can be categorized by the four combinations of either entropy (S) up or down, and uniformity (Y) – between system and its environment – up or down. This 'special' category of processes, category IV, is characterized by movement in the direction of low disorder and low uniformity, counteracting the second law tendency towards uniformity and disorder. The second law can be conceptually stated as follows: Matter and energy have the tendency to reach a state of uniformity or internal and external equilibrium, a state of maximum disorder (entropy). Real non-equilibrium processes always produce entropy, causing increased disorder in the universe, while idealized reversible processes produce no entropy and no process is known to exist that destroys entropy. The tendency of a system to approach uniformity may be counteracted, and the system may become more ordered or complex, by the combination of two things, a work or exergy source and some form of instruction or intelligence. Where 'exergy' is the thermal, mechanical, electric or chemical work potential of an energy source or flow, and 'instruction or intelligence', although subjective, is in the context of the set of category IV processes. Consider a category IV example of robotic manufacturing and assembly of vehicles in a factory. The robotic machinery requires electrical work input and instructions, but when completed, the manufactured products have less uniformity with their surroundings, or more complexity (higher order) relative to the raw materials they were made from. Thus, system entropy or disorder decreases while the tendency towards uniformity between the system and its environment is counteracted. In this example, the instructions, as well as the source of work may be internal or external to the system, and they may or may not cross the system boundary. To illustrate, the instructions may be pre-coded and the electrical work may be stored in an energy storage system on-site. Alternatively, the control of the machinery may be by remote operation over a communications network, while the electric work is supplied to the factory from the local electric grid. 
In addition, humans may directly play, in whole or in part, the role that the robotic machinery plays in manufacturing. In this case, instructions may be involved, but intelligence is either directly responsible, or indirectly responsible, for the direction or application of work in such a way as to counteract the tendency towards disorder and uniformity. There are also situations where the entropy spontaneously decreases by means of energy and entropy transfer. When thermodynamic constraints are not present, energy or mass, as well as the accompanying entropy, may be transferred spontaneously out of a system as it progresses toward external equilibrium, that is, toward uniformity in the intensive properties of the system and its surroundings. This occurs spontaneously because the energy or mass transferred from the system to its surroundings results in a higher entropy in the surroundings, that is, it results in higher overall entropy of the system plus its surroundings. Note that this transfer of entropy requires disequilibrium in properties, such as a temperature difference. One example of this is the cooling crystallization of water that can occur when the system's surroundings are below freezing temperatures. Unconstrained heat transfer can spontaneously occur, leading to water molecules freezing into a crystallized structure of reduced disorder (sticking together in a certain order due to molecular attraction). The entropy of the system decreases, but the system approaches uniformity with its surroundings (category III). On the other hand, consider the refrigeration of water in a warm environment. Due to refrigeration, as heat is extracted from the water, the temperature and entropy of the water decrease, as the system moves further away from uniformity with its warm surroundings or environment (category IV). The main take-away is that refrigeration not only requires a source of work, it requires designed equipment, as well as pre-coded or direct operational intelligence or instructions to achieve the desired refrigeration effect. == Corollaries == === Perpetual motion of the second kind === Before the establishment of the second law, many people who were interested in inventing a perpetual motion machine had tried to circumvent the restrictions of the first law of thermodynamics by extracting the massive internal energy of the environment as the power of the machine. Such a machine is called a "perpetual motion machine of the second kind". The second law declared the impossibility of such machines. === Carnot's theorem === Carnot's theorem (1824) is a principle that limits the maximum efficiency for any possible engine. The efficiency solely depends on the temperature difference between the hot and cold thermal reservoirs. Carnot's theorem states: All irreversible heat engines between two heat reservoirs are less efficient than a Carnot engine operating between the same reservoirs. All reversible heat engines between two heat reservoirs are as efficient as a Carnot engine operating between the same reservoirs. In his ideal model, the heat of caloric converted into work could be reinstated by reversing the motion of the cycle, a concept subsequently known as thermodynamic reversibility. Carnot, however, further postulated that some caloric is lost, not being converted to mechanical work. Hence, no real heat engine could realize the Carnot cycle's reversibility and was condemned to be less efficient. 
Though formulated in terms of caloric (see the obsolete caloric theory), rather than entropy, this was an early insight into the second law. === Clausius inequality === The Clausius theorem (1854) states that in a cyclic process ∮ δ Q T surr ≤ 0. {\displaystyle \oint {\frac {\delta Q}{T_{\text{surr}}}}\leq 0.} The equality holds in the reversible case and the strict inequality holds in the irreversible case, with Tsurr as the temperature of the heat bath (surroundings) here. The reversible case is used to introduce the state function entropy. This is because in a cyclic process the variation of a state function is zero, by the very definition of a state function. === Thermodynamic temperature === For an arbitrary heat engine, the efficiency is: η = W n q H = | q H | − | q C | | q H | = 1 − | q C | | q H | {\displaystyle \eta ={\frac {W_{n}}{q_{H}}}={\frac {|q_{H}|-|q_{C}|}{|q_{H}|}}=1-{\frac {|q_{C}|}{|q_{H}|}}} where Wn is the net work done by the engine per cycle, qH > 0 is the heat added to the engine from a hot reservoir, and qC = −|qC| < 0 is waste heat given off to a cold reservoir from the engine. Thus the efficiency depends only on the ratio |qC| / |qH|. Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, any reversible heat engine operating between temperatures TH and TC must have the same efficiency, that is to say, the efficiency is a function of temperatures only: | q C | | q H | = f ( T H , T C ) {\displaystyle {\frac {|q_{C}|}{|q_{H}|}}=f(T_{H},T_{C})} In addition, a reversible heat engine operating between temperatures T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and another (intermediate) temperature T2, and the second between T2 and T3, where T1 > T2 > T3. This is because, if a part of the two-cycle engine is hidden such that it is recognized as an engine between the reservoirs at the temperatures T1 and T3, then the efficiency of this engine must be the same as that of the other engine between the same reservoirs. If we choose the engines such that the work done by the one-cycle engine and the two-cycle engine is the same, then the efficiency of each heat engine is written as below. η 1 = 1 − | q 3 | | q 1 | = 1 − f ( T 1 , T 3 ) {\displaystyle \eta _{1}=1-{\frac {|q_{3}|}{|q_{1}|}}=1-f(T_{1},T_{3})} , η 2 = 1 − | q 2 | | q 1 | = 1 − f ( T 1 , T 2 ) {\displaystyle \eta _{2}=1-{\frac {|q_{2}|}{|q_{1}|}}=1-f(T_{1},T_{2})} , η 3 = 1 − | q 3 | | q 2 | = 1 − f ( T 2 , T 3 ) {\displaystyle \eta _{3}=1-{\frac {|q_{3}|}{|q_{2}|}}=1-f(T_{2},T_{3})} . Here, engine 1 is the one-cycle engine, and engines 2 and 3 together make up the two-cycle engine with the intermediate reservoir at T2. We have also used the fact that the heat q 2 {\displaystyle q_{2}} passes through the intermediate thermal reservoir at T 2 {\displaystyle T_{2}} without losing its energy. (I.e., q 2 {\displaystyle q_{2}} is not lost during its passage through the reservoir at T 2 {\displaystyle T_{2}} .) This fact can be proved by the following. η 2 = 1 − | q 2 | | q 1 | → | w 2 | = | q 1 | − | q 2 | , η 3 = 1 − | q 3 | | q 2 ∗ | → | w 3 | = | q 2 ∗ | − | q 3 | , | w 2 | + | w 3 | = ( | q 1 | − | q 2 | ) + ( | q 2 ∗ | − | q 3 | ) , η 1 = 1 − | q 3 | | q 1 | = ( | w 2 | + | w 3 | ) | q 1 | = ( | q 1 | − | q 2 | ) + ( | q 2 ∗ | − | q 3 | ) | q 1 | . 
{\displaystyle {\begin{aligned}&{{\eta }_{2}}=1-{\frac {|{{q}_{2}}|}{|{{q}_{1}}|}}\to |{{w}_{2}}|=|{{q}_{1}}|-|{{q}_{2}}|,\\&{{\eta }_{3}}=1-{\frac {|{{q}_{3}}|}{|{{q}_{2}}^{*}|}}\to |{{w}_{3}}|=|{{q}_{2}}^{*}|-|{{q}_{3}}|,\\&|{{w}_{2}}|+|{{w}_{3}}|=(|{{q}_{1}}|-|{{q}_{2}}|)+(|{{q}_{2}}^{*}|-|{{q}_{3}}|),\\&{{\eta }_{1}}=1-{\frac {|{{q}_{3}}|}{|{{q}_{1}}|}}={\frac {(|{{w}_{2}}|+|{{w}_{3}}|)}{|{{q}_{1}}|}}={\frac {(|{{q}_{1}}|-|{{q}_{2}}|)+(|{{q}_{2}}^{*}|-|{{q}_{3}}|)}{|{{q}_{1}}|}}.\\\end{aligned}}} In order to have the consistency in the last equation, the heat q 2 {\displaystyle q_{2}} flown from the engine 2 to the intermediate reservoir must be equal to the heat q 2 ∗ {\displaystyle q_{2}^{*}} flown out from the reservoir to the engine 3. Then f ( T 1 , T 3 ) = | q 3 | | q 1 | = | q 2 | | q 3 | | q 1 | | q 2 | = f ( T 1 , T 2 ) f ( T 2 , T 3 ) . {\displaystyle f(T_{1},T_{3})={\frac {|q_{3}|}{|q_{1}|}}={\frac {|q_{2}||q_{3}|}{|q_{1}||q_{2}|}}=f(T_{1},T_{2})f(T_{2},T_{3}).} Now consider the case where T 1 {\displaystyle T_{1}} is a fixed reference temperature: the temperature of the triple point of water as 273.16 K; T 1 = 273.16 K {\displaystyle T_{1}=\mathrm {273.16~K} } . Then for any T2 and T3, f ( T 2 , T 3 ) = f ( T 1 , T 3 ) f ( T 1 , T 2 ) = 273.16 K ⋅ f ( T 1 , T 3 ) 273.16 K ⋅ f ( T 1 , T 2 ) . {\displaystyle f(T_{2},T_{3})={\frac {f(T_{1},T_{3})}{f(T_{1},T_{2})}}={\frac {273.16{\text{ K}}\cdot f(T_{1},T_{3})}{273.16{\text{ K}}\cdot f(T_{1},T_{2})}}.} Therefore, if thermodynamic temperature T* is defined by T ∗ = 273.16 K ⋅ f ( T 1 , T ) {\displaystyle T^{*}=273.16{\text{ K}}\cdot f(T_{1},T)} then the function f, viewed as a function of thermodynamic temperatures, is simply f ( T 2 , T 3 ) = f ( T 2 ∗ , T 3 ∗ ) = T 3 ∗ T 2 ∗ , {\displaystyle f(T_{2},T_{3})=f(T_{2}^{*},T_{3}^{*})={\frac {T_{3}^{*}}{T_{2}^{*}}},} and the reference temperature T1* = 273.16 K × f(T1,T1) = 273.16 K. (Any reference temperature and any positive numerical value could be used – the choice here corresponds to the Kelvin scale.) === Entropy === According to the Clausius equality, for a reversible process ∮ δ Q T = 0 {\displaystyle \oint {\frac {\delta Q}{T}}=0} That means the line integral ∫ L δ Q T {\displaystyle \int _{L}{\frac {\delta Q}{T}}} is path independent for reversible processes. So we can define a state function S called entropy, which for a reversible process or for pure heat transfer satisfies d S = δ Q T {\displaystyle dS={\frac {\delta Q}{T}}} With this we can only obtain the difference of entropy by integrating the above formula. To obtain the absolute value, we need the third law of thermodynamics, which states that S = 0 at absolute zero for perfect crystals. For any irreversible process, since entropy is a state function, we can always connect the initial and terminal states with an imaginary reversible process and integrating on that path to calculate the difference in entropy. Now reverse the reversible process and combine it with the said irreversible process. Applying the Clausius inequality on this loop, with Tsurr as the temperature of the surroundings, − Δ S + ∫ δ Q T surr = ∮ δ Q T surr ≤ 0 {\displaystyle -\Delta S+\int {\frac {\delta Q}{T_{\text{surr}}}}=\oint {\frac {\delta Q}{T_{\text{surr}}}}\leq 0} Thus, Δ S ≥ ∫ δ Q T surr {\displaystyle \Delta S\geq \int {\frac {\delta Q}{T_{\text{surr}}}}} where the equality holds if the transformation is reversible. 
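The composition property f(T1, T3) = f(T1, T2) f(T2, T3) and the definition T* = 273.16 K · f(T1, T) used in the thermodynamic-temperature construction above can be checked numerically. A minimal sketch follows, with hypothetical temperatures and f taken, as the construction concludes, to be the ratio of the thermodynamic temperatures.

```python
# Consistency check (hypothetical temperatures) of the composition property
# f(T1, T3) = f(T1, T2) * f(T2, T3), with f(T_hot, T_cold) = |q_cold|/|q_hot|
# equal to T_cold/T_hot for reversible Carnot engines.
def f(t_hot, t_cold):
    """Heat ratio |q_cold|/|q_hot| of a reversible engine between the two temperatures."""
    return t_cold / t_hot

T1, T2, T3 = 600.0, 450.0, 300.0
print(f(T1, T3), f(T1, T2) * f(T2, T3))   # both 0.5

T_ref = 273.16                            # triple point of water, the reference temperature
print(T_ref * f(T_ref, 373.15))           # T* = 273.16 K * f(T_ref, T) recovers 373.15 K
```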
If the process is an adiabatic process, then δ Q = 0 {\displaystyle \delta Q=0} , so Δ S ≥ 0 {\displaystyle \Delta S\geq 0} . === Energy, available useful work === An important and revealing idealized special case is to consider applying the second law to the scenario of an isolated system (called the total system or universe), made up of two parts: a sub-system of interest, and the sub-system's surroundings. These surroundings are imagined to be so large that they can be considered as an unlimited heat reservoir at temperature TR and pressure PR – so that no matter how much heat is transferred to (or from) the sub-system, the temperature of the surroundings will remain TR; and no matter how much the volume of the sub-system expands (or contracts), the pressure of the surroundings will remain PR. Whatever changes to dS and dSR occur in the entropies of the sub-system and the surroundings individually, the entropy Stot of the isolated total system must not decrease according to the second law of thermodynamics: d S t o t = d S + d S R ≥ 0 {\displaystyle dS_{\mathrm {tot} }=dS+dS_{\text{R}}\geq 0} According to the first law of thermodynamics, the change dU in the internal energy of the sub-system is the sum of the heat δq added to the sub-system, minus any work δw done by the sub-system, plus any net chemical energy entering the sub-system d ΣμiRNi, so that: d U = δ q − δ w + d ( ∑ μ i R N i ) {\displaystyle dU=\delta q-\delta w+d\left(\sum \mu _{iR}N_{i}\right)} where μiR are the chemical potentials of chemical species in the external surroundings. Now the heat leaving the reservoir and entering the sub-system is δ q = T R ( − d S R ) ≤ T R d S {\displaystyle \delta q=T_{\text{R}}(-dS_{\text{R}})\leq T_{\text{R}}dS} where we have first used the definition of entropy in classical thermodynamics (alternatively, in statistical thermodynamics, the relation between entropy change, temperature and absorbed heat can be derived); and then the second law inequality from above. It therefore follows that any net work δw done by the sub-system must obey δ w ≤ − d U + T R d S + ∑ μ i R d N i {\displaystyle \delta w\leq -dU+T_{\text{R}}dS+\sum \mu _{iR}dN_{i}} It is useful to separate the work δw done by the subsystem into the useful work δwu that can be done by the sub-system, over and beyond the work pR dV done merely by the sub-system expanding against the surrounding external pressure, giving the following relation for the useful work (exergy) that can be done: δ w u ≤ − d ( U − T R S + p R V − ∑ μ i R N i ) {\displaystyle \delta w_{u}\leq -d\left(U-T_{\text{R}}S+p_{\text{R}}V-\sum \mu _{iR}N_{i}\right)} It is convenient to define the right-hand-side as the exact derivative of a thermodynamic potential, called the availability or exergy E of the subsystem, E = U − T R S + p R V − ∑ μ i R N i {\displaystyle E=U-T_{\text{R}}S+p_{\text{R}}V-\sum \mu _{iR}N_{i}} The second law therefore implies that for any process which can be considered as divided simply into a subsystem, and an unlimited temperature and pressure reservoir with which it is in contact, d E + δ w u ≤ 0 {\displaystyle dE+\delta w_{u}\leq 0} i.e. the change in the subsystem's exergy plus the useful work done by the subsystem (or, the change in the subsystem's exergy less any work, additional to that done by the pressure reservoir, done on the system) must be less than or equal to zero. 
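As a rough numerical illustration of the availability bound δw_u ≤ −dE derived above, consider a hot block of constant heat capacity equilibrating with reservoir surroundings at T_R. The sketch assumes constant volume and no exchange of matter, so the p_R V and chemical terms drop out, and all numbers are hypothetical.

```python
# Illustrative sketch (hypothetical block, constant volume, no chemical terms)
# of the exergy bound: the maximum useful work obtainable while a hot block
# equilibrates with reservoir surroundings at T_R equals the decrease in
# E = U - T_R*S (the p_R*V term is constant here and cancels).
import math

C = 500.0      # J/K, heat capacity of the block (assumed constant)
T_i = 600.0    # K, initial block temperature
T_R = 300.0    # K, reservoir temperature fixed by the surroundings

dU = C * (T_R - T_i)            # internal-energy change of the block
dS = C * math.log(T_R / T_i)    # entropy change of the block
dE = dU - T_R * dS              # change in availability E = U - T_R*S
w_useful_max = -dE              # second law: delta_w_u <= -dE

print(f"heat released by the block: {-dU:.0f} J")
print(f"maximum useful work:        {w_useful_max:.0f} J")
# The useful work is well below the heat released, as the second law requires.
```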
In sum, if a proper infinite-reservoir-like reference state is chosen as the system surroundings in the real world, then the second law predicts a decrease in E for an irreversible process and no change for a reversible process. d S tot ≥ 0 {\displaystyle dS_{\text{tot}}\geq 0} is equivalent to d E + δ w u ≤ 0 {\displaystyle dE+\delta w_{u}\leq 0} This expression together with the associated reference state permits a design engineer working at the macroscopic scale (above the thermodynamic limit) to utilize the second law without directly measuring or considering entropy change in a total isolated system (see also Process engineer). Those changes have already been considered by the assumption that the system under consideration can reach equilibrium with the reference state without altering the reference state. An efficiency for a process or collection of processes that compares it to the reversible ideal may also be found (see Exergy efficiency). This approach to the second law is widely utilized in engineering practice, environmental accounting, systems ecology, and other disciplines. == Direction of spontaneous processes == The second law determines whether a proposed physical or chemical process is forbidden or may occur spontaneously. For isolated systems, no energy is provided by the surroundings and the second law requires that the entropy of the system alone cannot decrease: ΔS ≥ 0. Examples of spontaneous physical processes in isolated systems include the following: 1) Heat can be transferred from a region of higher temperature to a lower temperature (but not the reverse). 2) Mechanical energy can be converted to thermal energy (but not the reverse). 3) A solute can move from a region of higher concentration to a region of lower concentration (but not the reverse). However, for some non-isolated systems which can exchange energy with their surroundings, the surroundings exchange enough heat with the system, or do sufficient work on the system, so that the processes occur in the opposite direction. In such a case, the reverse process can occur because it is coupled to a simultaneous process that increases the entropy of the surroundings. The coupled process will go forward provided that the total entropy change of the system and surroundings combined is nonnegative as required by the second law: ΔStot = ΔS + ΔSR ≥ 0. For the three examples given above: 1) Heat can be transferred from a region of lower temperature to a higher temperature by a refrigerator or heat pump, provided that the device delivers sufficient mechanical work to the system and converts it to thermal energy inside the system. 2) Thermal energy can be converted by a heat engine to mechanical work within a system at a single temperature, provided that the heat engine transfers a sufficient amount of heat from the system to a lower-temperature region in the surroundings. 3) A solute can travel from a region of lower concentration to a region of higher concentration in the biochemical process of active transport, if sufficient work is provided by a concentration gradient of a chemical such as ATP or by an electrochemical gradient. === Second law in chemical thermodynamics === For a spontaneous chemical process in a closed system at constant temperature and pressure without non-PV work, the Clausius inequality ΔS > Q/Tsurr transforms into a condition for the change in Gibbs free energy Δ G < 0 {\displaystyle \Delta G<0} or dG < 0. 
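A minimal sketch of the Gibbs criterion just stated, with hypothetical (not measured) enthalpy and entropy changes, shows how the sign of ΔG = ΔH − TΔS decides spontaneity at constant temperature and pressure.

```python
# Hypothetical numbers (not data for any specific reaction) illustrating the
# criterion: at constant T and p, a process is spontaneous only if
# dG = dH - T*dS < 0.
def gibbs_change(dH, dS, T):
    """Gibbs free-energy change for enthalpy change dH (J/mol),
    entropy change dS (J/(mol K)) and temperature T (K)."""
    return dH - T * dS

dH, dS = -50_000.0, -100.0      # exothermic but entropy-decreasing (assumed values)
for T in (200.0, 400.0, 600.0):
    dG = gibbs_change(dH, dS, T)
    print(T, dG, "spontaneous" if dG < 0 else "non-spontaneous")
# At low T the enthalpy term dominates (dG < 0); at high T the -T*dS term
# makes dG > 0 and the forward process is no longer spontaneous.
```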
For a similar process at constant temperature and volume, the change in Helmholtz free energy must be negative, Δ A < 0 {\displaystyle \Delta A<0} . Thus, a negative value of the change in free energy (G or A) is a necessary condition for a process to be spontaneous. This is the most useful form of the second law of thermodynamics in chemistry, where free-energy changes can be calculated from tabulated enthalpies of formation and standard molar entropies of reactants and products. The chemical equilibrium condition at constant T and p without electrical work is dG = 0. == History == The first theory of the conversion of heat into mechanical work is due to Nicolas Léonard Sadi Carnot in 1824. He was the first to realize correctly that the efficiency of this conversion depends on the difference of temperature between an engine and its surroundings. Recognizing the significance of James Prescott Joule's work on the conservation of energy, Rudolf Clausius was the first to formulate the second law during 1850, in this form: heat does not flow spontaneously from cold to hot bodies. While common knowledge now, this was contrary to the caloric theory of heat popular at the time, which considered heat as a fluid. From there he was able to infer the principle of Sadi Carnot and the definition of entropy (1865). Established during the 19th century, the Kelvin-Planck statement of the second law says, "It is impossible for any device that operates on a cycle to receive heat from a single reservoir and produce a net amount of work." This statement was shown to be equivalent to the statement of Clausius. The ergodic hypothesis is also important for the Boltzmann approach. It says that, over long periods of time, the time spent in some region of the phase space of microstates with the same energy is proportional to the volume of this region, i.e. that all accessible microstates are equally probable over a long period of time. Equivalently, it says that time average and average over the statistical ensemble are the same. There is a traditional doctrine, starting with Clausius, that entropy can be understood in terms of molecular 'disorder' within a macroscopic system. This doctrine is obsolescent. === Account given by Clausius === In 1865, the German physicist Rudolf Clausius stated what he called the "second fundamental theorem in the mechanical theory of heat" in the following form: ∫ δ Q T = − N {\displaystyle \int {\frac {\delta Q}{T}}=-N} where Q is heat, T is temperature and N is the "equivalence-value" of all uncompensated transformations involved in a cyclical process. Later, in 1865, Clausius would come to define "equivalence-value" as entropy. On the heels of this definition, that same year, the most famous version of the second law was read in a presentation at the Philosophical Society of Zurich on April 24, in which, in the end of his presentation, Clausius concludes: The entropy of the universe tends to a maximum. This statement is the best-known phrasing of the second law. Because of the looseness of its language, e.g. universe, as well as lack of specific conditions, e.g. open, closed, or isolated, many people take this simple statement to mean that the second law of thermodynamics applies virtually to every subject imaginable. This is not true; this statement is only a simplified version of a more extended and precise description. 
In terms of time variation, the mathematical statement of the second law for an isolated system undergoing an arbitrary transformation is: d S d t ≥ 0 {\displaystyle {\frac {dS}{dt}}\geq 0} where S is the entropy of the system and t is time. The equality sign applies after equilibration. An alternative way of formulating the second law for isolated systems is: d S d t = S ˙ i {\displaystyle {\frac {dS}{dt}}={\dot {S}}_{\text{i}}} with S ˙ i ≥ 0 {\displaystyle {\dot {S}}_{\text{i}}\geq 0} where S ˙ i {\displaystyle {\dot {S}}_{\text{i}}} is the sum of the rate of entropy production by all processes inside the system. The advantage of this formulation is that it shows the effect of the entropy production. The rate of entropy production is a very important concept since it determines (limits) the efficiency of thermal machines. Multiplied by the ambient temperature T a {\displaystyle T_{\text{a}}} it gives the so-called dissipated energy P diss = T a S ˙ i {\displaystyle P_{\text{diss}}=T_{\text{a}}{\dot {S}}_{\text{i}}} . The expression of the second law for closed systems (so, allowing heat exchange and moving boundaries, but not exchange of matter) is: d S d t = Q ˙ T + S ˙ i {\displaystyle {\frac {dS}{dt}}={\frac {\dot {Q}}{T}}+{\dot {S}}_{\text{i}}} with S ˙ i ≥ 0 {\displaystyle {\dot {S}}_{\text{i}}\geq 0} Here, Q ˙ {\displaystyle {\dot {Q}}} is the heat flow into the system and T {\displaystyle T} is the temperature at the point where the heat enters the system. The equality sign holds in the case that only reversible processes take place inside the system. If irreversible processes take place (which is the case in real systems in operation) the >-sign holds. If heat is supplied to the system at several places we have to take the algebraic sum of the corresponding terms. For open systems (also allowing exchange of matter): d S d t = Q ˙ T + S ˙ + S ˙ i {\displaystyle {\frac {dS}{dt}}={\frac {\dot {Q}}{T}}+{\dot {S}}+{\dot {S}}_{\text{i}}} with S ˙ i ≥ 0 {\displaystyle {\dot {S}}_{\text{i}}\geq 0} Here, S ˙ {\displaystyle {\dot {S}}} is the flow of entropy into the system associated with the flow of matter entering the system. It should not be confused with the time derivative of the entropy. If matter is supplied at several places we have to take the algebraic sum of these contributions. == Statistical mechanics == Statistical mechanics gives an explanation for the second law by postulating that a material is composed of atoms and molecules which are in constant motion. A particular set of positions and velocities for each particle in the system is called a microstate of the system and because of the constant motion, the system is constantly changing its microstate. Statistical mechanics postulates that, in equilibrium, each microstate that the system might be in is equally likely to occur, and when this assumption is made, it leads directly to the conclusion that the second law must hold in a statistical sense. That is, the second law will hold on average, with a statistical variation on the order of 1/√N where N is the number of particles in the system. For everyday (macroscopic) situations, the probability that the second law will be violated is practically zero. However, for systems with a small number of particles, thermodynamic parameters, including the entropy, may show significant statistical deviations from that predicted by the second law. Classical thermodynamic theory does not deal with these statistical variations. 
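As a small numerical illustration of the rate formulation given earlier in this section (hypothetical numbers), consider steady heat conduction through a rod connecting two reservoirs: the rod's own entropy is constant in time, so the entropy production rate inside it is fixed by the entropy carried in and out with the heat.

```python
# Illustrative sketch (hypothetical values): entropy production for steady heat
# conduction through a rod between two reservoirs. In steady state the rod's
# entropy is constant, so 0 = Q/T_hot - Q/T_cold + S_dot_i fixes S_dot_i.
Q_dot = 50.0                   # W, heat flowing steadily through the rod
T_hot, T_cold = 400.0, 300.0   # K, reservoir temperatures
T_ambient = 300.0              # K, used for the dissipated-energy estimate

S_dot_in = Q_dot / T_hot        # entropy entering with the heat at the hot end
S_dot_out = Q_dot / T_cold      # entropy leaving with the heat at the cold end
S_dot_i = S_dot_out - S_dot_in  # entropy production inside the rod (>= 0)
P_diss = T_ambient * S_dot_i    # dissipated energy rate P_diss = T_a * S_dot_i

print(f"entropy production rate: {S_dot_i:.4f} W/K")
print(f"dissipated power:        {P_diss:.1f} W")
```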
== Derivation from statistical mechanics == The first mechanical argument of the Kinetic theory of gases that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium was due to James Clerk Maxwell in 1860; Ludwig Boltzmann with his H-theorem of 1872 also argued that due to collisions gases should over time tend toward the Maxwell–Boltzmann distribution. Due to Loschmidt's paradox, derivations of the second law have to make an assumption regarding the past, namely that the system is uncorrelated at some time in the past; this allows for simple probabilistic treatment. This assumption is usually thought as a boundary condition, and thus the second law is ultimately a consequence of the initial conditions somewhere in the past, probably at the beginning of the universe (the Big Bang), though other scenarios have also been suggested. Given these assumptions, in statistical mechanics, the second law is not a postulate, rather it is a consequence of the fundamental postulate, also known as the equal prior probability postulate, so long as one is clear that simple probability arguments are applied only to the future, while for the past there are auxiliary sources of information which tell us that it was low entropy. The first part of the second law, which states that the entropy of a thermally isolated system can only increase, is a trivial consequence of the equal prior probability postulate, if we restrict the notion of the entropy to systems in thermal equilibrium. The entropy of an isolated system in thermal equilibrium containing an amount of energy of E {\displaystyle E} is: S = k B ln ⁡ [ Ω ( E ) ] {\displaystyle S=k_{\mathrm {B} }\ln \left[\Omega \left(E\right)\right]} where Ω ( E ) {\displaystyle \Omega \left(E\right)} is the number of quantum states in a small interval between E {\displaystyle E} and E + δ E {\displaystyle E+\delta E} . Here δ E {\displaystyle \delta E} is a macroscopically small energy interval that is kept fixed. Strictly speaking this means that the entropy depends on the choice of δ E {\displaystyle \delta E} . However, in the thermodynamic limit (i.e. in the limit of infinitely large system size), the specific entropy (entropy per unit volume or per unit mass) does not depend on δ E {\displaystyle \delta E} . Suppose we have an isolated system whose macroscopic state is specified by a number of variables. These macroscopic variables can, e.g., refer to the total volume, the positions of pistons in the system, etc. Then Ω {\displaystyle \Omega } will depend on the values of these variables. If a variable is not fixed, (e.g. we do not clamp a piston in a certain position), then because all the accessible states are equally likely in equilibrium, the free variable in equilibrium will be such that Ω {\displaystyle \Omega } is maximized at the given energy of the isolated system as that is the most probable situation in equilibrium. If the variable was initially fixed to some value then upon release and when the new equilibrium has been reached, the fact the variable will adjust itself so that Ω {\displaystyle \Omega } is maximized, implies that the entropy will have increased or it will have stayed the same (if the value at which the variable was fixed happened to be the equilibrium value). Suppose we start from an equilibrium situation and we suddenly remove a constraint on a variable. 
Then right after we do this, there are a number Ω {\displaystyle \Omega } of accessible microstates, but equilibrium has not yet been reached, so the actual probabilities of the system being in some accessible state are not yet equal to the prior probability of 1 / Ω {\displaystyle 1/\Omega } . We have already seen that in the final equilibrium state, the entropy will have increased or have stayed the same relative to the previous equilibrium state. Boltzmann's H-theorem, however, proves that the quantity H decreases monotonically (so that the corresponding entropy increases monotonically) as a function of time during the intermediate out-of-equilibrium state. === Derivation of the entropy change for reversible processes === The second part of the second law states that the entropy change of a system undergoing a reversible process is given by: d S = δ Q T {\displaystyle dS={\frac {\delta Q}{T}}} where the temperature is defined as: 1 k B T ≡ β ≡ d ln ⁡ [ Ω ( E ) ] d E {\displaystyle {\frac {1}{k_{\mathrm {B} }T}}\equiv \beta \equiv {\frac {d\ln \left[\Omega \left(E\right)\right]}{dE}}} See Microcanonical ensemble for the justification for this definition. Suppose that the system has some external parameter, x, that can be changed. In general, the energy eigenstates of the system will depend on x. According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in. The generalized force, X, corresponding to the external variable x is defined such that X d x {\displaystyle Xdx} is the work performed by the system if x is increased by an amount dx. For example, if x is the volume, then X is the pressure. The generalized force for a system known to be in energy eigenstate E r {\displaystyle E_{r}} is given by: X = − d E r d x {\displaystyle X=-{\frac {dE_{r}}{dx}}} Since the system can be in any energy eigenstate within an interval of δ E {\displaystyle \delta E} , we define the generalized force for the system as the expectation value of the above expression: X = − ⟨ d E r d x ⟩ {\displaystyle X=-\left\langle {\frac {dE_{r}}{dx}}\right\rangle \,} To evaluate the average, we partition the Ω ( E ) {\displaystyle \Omega \left(E\right)} energy eigenstates by counting how many of them have a value for d E r d x {\displaystyle {\frac {dE_{r}}{dx}}} within a range between Y {\displaystyle Y} and Y + δ Y {\displaystyle Y+\delta Y} . Calling this number Ω Y ( E ) {\displaystyle \Omega _{Y}\left(E\right)} , we have: Ω ( E ) = ∑ Y Ω Y ( E ) {\displaystyle \Omega \left(E\right)=\sum _{Y}\Omega _{Y}\left(E\right)\,} The average defining the generalized force can now be written: X = − 1 Ω ( E ) ∑ Y Y Ω Y ( E ) {\displaystyle X=-{\frac {1}{\Omega \left(E\right)}}\sum _{Y}Y\Omega _{Y}\left(E\right)\,} We can relate this to the derivative of the entropy with respect to x at constant energy E as follows. Suppose we change x to x + dx. Then Ω ( E ) {\displaystyle \Omega \left(E\right)} will change because the energy eigenstates depend on x, causing energy eigenstates to move into or out of the range between E {\displaystyle E} and E + δ E {\displaystyle E+\delta E} . Let's focus again on the energy eigenstates for which d E r d x {\textstyle {\frac {dE_{r}}{dx}}} lies within the range between Y {\displaystyle Y} and Y + δ Y {\displaystyle Y+\delta Y} .
Since these energy eigenstates increase in energy by Y dx, all such energy eigenstates that are in the interval ranging from E – Y dx to E move from below E to above E. There are N Y ( E ) = Ω Y ( E ) δ E Y d x {\displaystyle N_{Y}\left(E\right)={\frac {\Omega _{Y}\left(E\right)}{\delta E}}Ydx\,} such energy eigenstates. If Y d x ≤ δ E {\displaystyle Ydx\leq \delta E} , all these energy eigenstates will move into the range between E {\displaystyle E} and E + δ E {\displaystyle E+\delta E} and contribute to an increase in Ω {\displaystyle \Omega } . The number of energy eigenstates that move from below E + δ E {\displaystyle E+\delta E} to above E + δ E {\displaystyle E+\delta E} is given by N Y ( E + δ E ) {\displaystyle N_{Y}\left(E+\delta E\right)} . The difference N Y ( E ) − N Y ( E + δ E ) {\displaystyle N_{Y}\left(E\right)-N_{Y}\left(E+\delta E\right)\,} is thus the net contribution to the increase in Ω {\displaystyle \Omega } . If Y dx is larger than δ E {\displaystyle \delta E} there will be the energy eigenstates that move from below E to above E + δ E {\displaystyle E+\delta E} . They are counted in both N Y ( E ) {\displaystyle N_{Y}\left(E\right)} and N Y ( E + δ E ) {\displaystyle N_{Y}\left(E+\delta E\right)} , therefore the above expression is also valid in that case. Expressing the above expression as a derivative with respect to E and summing over Y yields the expression: ( ∂ Ω ∂ x ) E = − ∑ Y Y ( ∂ Ω Y ∂ E ) x = ( ∂ ( Ω X ) ∂ E ) x {\displaystyle \left({\frac {\partial \Omega }{\partial x}}\right)_{E}=-\sum _{Y}Y\left({\frac {\partial \Omega _{Y}}{\partial E}}\right)_{x}=\left({\frac {\partial \left(\Omega X\right)}{\partial E}}\right)_{x}\,} The logarithmic derivative of Ω {\displaystyle \Omega } with respect to x is thus given by: ( ∂ ln ⁡ ( Ω ) ∂ x ) E = β X + ( ∂ X ∂ E ) x {\displaystyle \left({\frac {\partial \ln \left(\Omega \right)}{\partial x}}\right)_{E}=\beta X+\left({\frac {\partial X}{\partial E}}\right)_{x}\,} The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and will thus vanish in the thermodynamic limit. We have thus found that: ( ∂ S ∂ x ) E = X T {\displaystyle \left({\frac {\partial S}{\partial x}}\right)_{E}={\frac {X}{T}}\,} Combining this with ( ∂ S ∂ E ) x = 1 T {\displaystyle \left({\frac {\partial S}{\partial E}}\right)_{x}={\frac {1}{T}}\,} gives: d S = ( ∂ S ∂ E ) x d E + ( ∂ S ∂ x ) E d x = d E T + X T d x = δ Q T {\displaystyle dS=\left({\frac {\partial S}{\partial E}}\right)_{x}dE+\left({\frac {\partial S}{\partial x}}\right)_{E}dx={\frac {dE}{T}}+{\frac {X}{T}}dx={\frac {\delta Q}{T}}\,} === Derivation for systems described by the canonical ensemble === If a system is in thermal contact with a heat bath at some temperature T then, in equilibrium, the probability distribution over the energy eigenvalues are given by the canonical ensemble: P j = exp ⁡ ( − E j k B T ) Z {\displaystyle P_{j}={\frac {\exp \left(-{\frac {E_{j}}{k_{\mathrm {B} }T}}\right)}{Z}}} Here Z is a factor that normalizes the sum of all the probabilities to 1, this function is known as the partition function. We now consider an infinitesimal reversible change in the temperature and in the external parameters on which the energy levels depend. 
It follows from the general formula for the entropy: S = − k B ∑ j P j ln ⁡ ( P j ) {\displaystyle S=-k_{\mathrm {B} }\sum _{j}P_{j}\ln \left(P_{j}\right)} that d S = − k B ∑ j ln ⁡ ( P j ) d P j {\displaystyle dS=-k_{\mathrm {B} }\sum _{j}\ln \left(P_{j}\right)dP_{j}} Inserting the formula for P j {\displaystyle P_{j}} for the canonical ensemble in here gives: d S = 1 T ∑ j E j d P j = 1 T ∑ j d ( E j P j ) − 1 T ∑ j P j d E j = d E + δ W T = δ Q T {\displaystyle dS={\frac {1}{T}}\sum _{j}E_{j}dP_{j}={\frac {1}{T}}\sum _{j}d\left(E_{j}P_{j}\right)-{\frac {1}{T}}\sum _{j}P_{j}dE_{j}={\frac {dE+\delta W}{T}}={\frac {\delta Q}{T}}} === Initial conditions at the Big Bang === As elaborated above, it is thought that the second law of thermodynamics is a result of the very low-entropy initial conditions at the Big Bang. From a statistical point of view, these were very special conditions. On the other hand, they were quite simple, as the universe - or at least the part thereof from which the observable universe developed - seems to have been extremely uniform. This may seem somewhat paradoxical, since in many physical systems uniform conditions (e.g. mixed rather than separated gases) have high entropy. The paradox is solved once realizing that gravitational systems have negative heat capacity, so that when gravity is important, uniform conditions (e.g. gas of uniform density) in fact have lower entropy compared to non-uniform ones (e.g. black holes in empty space). Yet another approach is that the universe had high (or even maximal) entropy given its size, but as the universe grew it rapidly came out of thermodynamic equilibrium, its entropy only slightly increased compared to the increase in maximal possible entropy, and thus it has arrived at a very low entropy when compared to the much larger possible maximum given its later size. As for the reason why initial conditions were such, one suggestion is that cosmological inflation was enough to wipe off non-smoothness, while another is that the universe was created spontaneously where the mechanism of creation implies low-entropy initial conditions. == Living organisms == There are two principal ways of formulating thermodynamics, (a) through passages from one state of thermodynamic equilibrium to another, and (b) through cyclic processes, by which the system is left unchanged, while the total entropy of the surroundings is increased. These two ways help to understand the processes of life. The thermodynamics of living organisms has been considered by many authors, including Erwin Schrödinger (in his book What is Life?) and Léon Brillouin. To a fair approximation, living organisms may be considered as examples of (b). Approximately, an animal's physical state cycles by the day, leaving the animal nearly unchanged. Animals take in food, water, and oxygen, and, as a result of metabolism, give out breakdown products and heat. Plants take in radiative energy from the sun, which may be regarded as heat, and carbon dioxide and water. They give out oxygen. In this way they grow. Eventually they die, and their remains rot away, turning mostly back into carbon dioxide and water. This can be regarded as a cyclic process. Overall, the sunlight is from a high temperature source, the sun, and its energy is passed to a lower temperature sink, i.e. radiated into space. This is an increase of entropy of the surroundings of the plant. Thus animals and plants obey the second law of thermodynamics, considered in terms of cyclic processes. 
Furthermore, the ability of living organisms to grow and increase in complexity, as well as to form correlations with their environment in the form of adaptation and memory, is not opposed to the second law – rather, it is akin to general results following from it: Under some definitions, an increase in entropy also results in an increase in complexity, and for a finite system interacting with finite reservoirs, an increase in entropy is equivalent to an increase in correlations between the system and the reservoirs. Living organisms may be considered as open systems, because matter passes into and out of them. Thermodynamics of open systems is currently often considered in terms of passages from one state of thermodynamic equilibrium to another, or in terms of flows in the approximation of local thermodynamic equilibrium. The problem for living organisms may be further simplified by the approximation of assuming a steady state with unchanging flows. General principles of entropy production for such approximations are a subject of ongoing research. == Gravitational systems == Commonly, systems for which gravity is not important have a positive heat capacity, meaning that their temperature rises with their internal energy. Therefore, when energy flows from a high-temperature object to a low-temperature object, the source temperature decreases while the sink temperature increases; hence temperature differences tend to diminish over time. This is not always the case for systems in which the gravitational force is important: systems that are bound by their own gravity, such as stars, can have negative heat capacities. As they contract, both their total energy and their entropy decrease but their internal temperature may increase. This can be significant for protostars and even gas giant planets such as Jupiter. When the entropy of the black-body radiation emitted by the bodies is included, however, the total entropy of the system can be shown to increase even as the entropy of the planet or star decreases. == Non-equilibrium states == The theory of classical or equilibrium thermodynamics is idealized. A main postulate or assumption, often not even explicitly stated, is the existence of systems in their own internal states of thermodynamic equilibrium. In general, a region of space containing a physical system at a given time, that may be found in nature, is not in thermodynamic equilibrium, read in the most stringent terms. In looser terms, nothing in the entire universe is or has ever been truly in exact thermodynamic equilibrium. For purposes of physical analysis, it is often convenient to make an assumption of thermodynamic equilibrium. Such an assumption may rely on trial and error for its justification. If the assumption is justified, it can often be very valuable and useful because it makes available the theory of thermodynamics. Elements of the equilibrium assumption are that a system is observed to be unchanging over an indefinitely long time, and that there are so many particles in a system that its particulate nature can be entirely ignored. Under such an equilibrium assumption, in general, there are no macroscopically detectable fluctuations. There is an exception, the case of critical states, which exhibit to the naked eye the phenomenon of critical opalescence. For laboratory studies of critical states, exceptionally long observation times are needed.
In all cases, the assumption of thermodynamic equilibrium, once made, implies as a consequence that no putative candidate "fluctuation" alters the entropy of the system. It can easily happen that a physical system exhibits internal macroscopic changes that are fast enough to invalidate the assumption of the constancy of the entropy. Or that a physical system has so few particles that the particulate nature is manifest in observable fluctuations. Then the assumption of thermodynamic equilibrium is to be abandoned. There is no unqualified general definition of entropy for non-equilibrium states. There are intermediate cases, in which the assumption of local thermodynamic equilibrium is a very good approximation, but strictly speaking it is still an approximation, not theoretically ideal. For non-equilibrium situations in general, it may be useful to consider statistical mechanical definitions of other quantities that may be conveniently called 'entropy', but they should not be confused or conflated with thermodynamic entropy properly defined for the second law. These other quantities indeed belong to statistical mechanics, not to thermodynamics, the primary realm of the second law. The physics of macroscopically observable fluctuations is beyond the scope of this article. == Arrow of time == The second law of thermodynamics is a physical law that is not symmetric to reversal of the time direction. This does not conflict with symmetries observed in the fundamental laws of physics (particularly CPT symmetry) since the second law applies statistically on time-asymmetric boundary conditions. The second law has been related to the difference between moving forwards and backwards in time, or to the principle that cause precedes effect (the causal arrow of time, or causality). == Irreversibility == Irreversibility in thermodynamic processes is a consequence of the asymmetric character of thermodynamic operations, and not of any internally irreversible microscopic properties of the bodies. Thermodynamic operations are macroscopic external interventions imposed on the participating bodies, not derived from their internal properties. There are reputed "paradoxes" that arise from failure to recognize this. === Loschmidt's paradox === Loschmidt's paradox, also known as the reversibility paradox, is the objection that it should not be possible to deduce an irreversible process from the time-symmetric dynamics that describe the microscopic evolution of a macroscopic system. In the opinion of Schrödinger, "It is now quite obvious in what manner you have to reformulate the law of entropy – or for that matter, all other irreversible statements – so that they be capable of being derived from reversible models. You must not speak of one isolated system but at least of two, which you may for the moment consider isolated from the rest of the world, but not always from each other." The two systems are isolated from each other by the wall, until it is removed by the thermodynamic operation, as envisaged by the law. The thermodynamic operation is externally imposed, not subject to the reversible microscopic dynamical laws that govern the constituents of the systems. It is the cause of the irreversibility. The statement of the law in this present article complies with Schrödinger's advice. The cause–effect relation is logically prior to the second law, not derived from it. 
This is consistent with the postulates that are the cornerstone of special and general relativity: the flow of time is irreversible, but it is relative. Cause must precede effect, but only within the constraints defined explicitly within general relativity (or special relativity, depending on the local spacetime conditions). Good examples of this are the ladder paradox, and the time dilation and length contraction exhibited by objects approaching the velocity of light or in the proximity of a super-dense region of mass/energy, e.g. black holes, neutron stars, magnetars and quasars. === Poincaré recurrence theorem === The Poincaré recurrence theorem considers a theoretical microscopic description of an isolated physical system. This may be considered as a model of a thermodynamic system after a thermodynamic operation has removed an internal wall. The system will, after a sufficiently long time, return to a microscopically defined state very close to the initial one. The Poincaré recurrence time is the length of time elapsed until the return. It is exceedingly long, likely longer than the life of the universe, and depends sensitively on the geometry of the wall that was removed by the thermodynamic operation. The recurrence theorem may be perceived as apparently contradicting the second law of thermodynamics. More obviously, however, it is simply a microscopic model of thermodynamic equilibrium in an isolated system formed by removal of a wall between two systems. For a typical thermodynamical system, the recurrence time is so large (many, many times longer than the lifetime of the universe) that, for all practical purposes, one cannot observe the recurrence. One might wish, nevertheless, to imagine that one could wait for the Poincaré recurrence, and then re-insert the wall that was removed by the thermodynamic operation. It is then evident that the appearance of irreversibility is due to the utter unpredictability of the Poincaré recurrence given only that the initial state was one of thermodynamic equilibrium, as is the case in macroscopic thermodynamics. Even if one could wait for it, one has no practical possibility of picking the right instant at which to re-insert the wall. The Poincaré recurrence theorem provides a solution to Loschmidt's paradox. If an isolated thermodynamic system could be monitored over increasingly many multiples of the average Poincaré recurrence time, the thermodynamic behavior of the system would become invariant under time reversal. === Maxwell's demon === James Clerk Maxwell imagined one container divided into two parts, A and B. Both parts are filled with the same gas at equal temperatures and placed next to each other, separated by a wall. Observing the molecules on both sides, an imaginary demon guards a microscopic trapdoor in the wall. When a faster-than-average molecule from A flies towards the trapdoor, the demon opens it, and the molecule will fly from A to B. The average speed of the molecules in B will have increased while in A they will have slowed down on average. Since average molecular speed corresponds to temperature, the temperature decreases in A and increases in B, contrary to the second law of thermodynamics. One response to this question was suggested in 1929 by Leó Szilárd and later by Léon Brillouin. Szilárd pointed out that a real-life Maxwell's demon would need to have some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energy.
Likewise, Brillouin demonstrated that the decrease in entropy caused by the demon would be less than the entropy produced by choosing molecules based on their speed. Maxwell's 'demon' repeatedly alters the permeability of the wall between A and B. It is therefore performing thermodynamic operations on a microscopic scale, not just observing ordinary spontaneous or natural macroscopic thermodynamic processes. == Quotations == The law that entropy always increases holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations – then so much the worse for Maxwell's equations. If it is found to be contradicted by observation – well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation. There have been nearly as many formulations of the second law as there have been discussions of it. Clausius is the author of the sibyllic utterance, "The energy of the universe is constant; the entropy of the universe tends to a maximum." The objectives of continuum thermomechanics stop far short of explaining the "universe", but within that theory we may easily derive an explicit statement in some ways reminiscent of Clausius, but referring only to a modest object: an isolated body of finite size. == See also == == References == === Sources === == Further reading == Goldstein, Martin, and Inge F., 1993. The Refrigerator and the Universe. Harvard Univ. Press. Chpts. 4–9 contain an introduction to the second law, one a bit less technical than this entry. ISBN 978-0-674-75324-2 Leff, Harvey S., and Rex, Andrew F. (eds.) 2003. Maxwell's Demon 2 : Entropy, classical and quantum information, computing. Bristol UK; Philadelphia PA: Institute of Physics. ISBN 978-0-585-49237-7 Halliwell, J.J. (1994). Physical Origins of Time Asymmetry. Cambridge. ISBN 978-0-521-56837-1.(technical). Carnot, Sadi (1890). Thurston, Robert Henry (ed.). Reflections on the Motive Power of Heat and on Machines Fitted to Develop That Power. New York: J. Wiley & Sons. (full text of 1897 ed.) (html Archived 2007-08-18 at the Wayback Machine) Stephen Jay Kline (1999). The Low-Down on Entropy and Interpretive Thermodynamics, La Cañada, CA: DCW Industries. ISBN 1-928729-01-0. Kostic, M (2011). Revisiting The Second Law of Energy Degradation and Entropy Generation: From Sadi Carnot's Ingenious Reasoning to Holistic Generalization. AIP Conference Proceedings. Vol. 1411. pp. 327–350. Bibcode:2011AIPC.1411..327K. CiteSeerX 10.1.1.405.1945. doi:10.1063/1.3665247. ISBN 978-0-7354-0985-9. == External links == Stanford Encyclopedia of Philosophy: "Philosophy of Statistical Mechanics" – by Lawrence Sklar. Second law of thermodynamics in the MIT Course Unified Thermodynamics and Propulsion from Prof. Z. S. Spakovszky E.T. Jaynes, 1988, "The evolution of Carnot's principle," in G. J. Erickson and C. R. Smith (eds.) Maximum-Entropy and Bayesian Methods in Science and Engineering, Vol. 1: p. 267. Caratheodory, C., "Examination of the foundations of thermodynamics," trans. by D. H. Delphenich The second law of Thermodynamics, BBC Radio 4 discussion with John Gribbin, Peter Atkins & Monica Grady (In Our Time, December 16, 2004) The Journal of the International Society for the History of Philosophy of Science, 2012
Wikipedia/Second_law_of_thermodynamics
Tension is the pulling or stretching force transmitted axially along an object such as a string, rope, chain, rod, truss member, or other object, so as to stretch or pull apart the object. In terms of force, it is the opposite of compression. Tension might also be described as the action-reaction pair of forces acting at each end of an object. At the atomic level, when atoms or molecules are pulled apart from each other and gain potential energy with a restoring force still existing, the restoring force might create what is also called tension. Each end of a string or rod under such tension could pull on the object it is attached to, in order to restore the string/rod to its relaxed length. Tension (as a transmitted force, as an action-reaction pair of forces, or as a restoring force) is measured in newtons in the International System of Units (or pounds-force in Imperial units). The ends of a string or other object transmitting tension will exert forces on the objects to which the string or rod is connected, in the direction of the string at the point of attachment. These forces due to tension are also called "passive forces". There are two basic possibilities for systems of objects held by strings: either acceleration is zero and the system is therefore in equilibrium, or there is acceleration, and therefore a net force is present in the system. == Tension in one dimension == Tension in a string is a non-negative vector quantity. Zero tension is slack. A string or rope is often idealized as one dimension, having fixed length but being massless with zero cross section. If there are no bends in the string, as occur with vibrations or pulleys, then tension is a constant along the string, equal to the magnitude of the forces applied by the ends of the string. By Newton's third law, these are the same forces exerted on the ends of the string by the objects to which the ends are attached. If the string curves around one or more pulleys, it will still have constant tension along its length in the idealized situation that the pulleys are massless and frictionless. A vibrating string vibrates with a set of frequencies that depend on the string's tension. These frequencies can be derived from Newton's laws of motion. Each microscopic segment of the string pulls on and is pulled upon by its neighboring segments, with a force equal to the tension at that position along the string. If the string has curvature, then the two pulls on a segment by its two neighbors will not add to zero, and there will be a net force on that segment of the string, causing an acceleration. This net force is a restoring force, and the motion of the string can include transverse waves that solve the equation central to Sturm–Liouville theory: − d d x [ τ ( x ) d ρ ( x ) d x ] + v ( x ) ρ ( x ) = ω 2 σ ( x ) ρ ( x ) {\displaystyle -{\frac {\mathrm {d} }{\mathrm {d} x}}{\bigg [}\tau (x){\frac {\mathrm {d} \rho (x)}{\mathrm {d} x}}{\bigg ]}+v(x)\rho (x)=\omega ^{2}\sigma (x)\rho (x)} where v ( x ) {\displaystyle v(x)} is the force constant per unit length [units force per area] and ω 2 {\displaystyle \omega ^{2}} are the eigenvalues for resonances of transverse displacement ρ ( x ) {\displaystyle \rho (x)} on the string, with solutions that include the various harmonics on a stringed instrument. == Tension of three dimensions == Tension is also used to describe the force exerted by the ends of a three-dimensional, continuous material such as a rod or truss member. In this context, tension is analogous to negative pressure. 
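Returning briefly to the vibrating string discussed above, the Sturm–Liouville eigenvalue problem can be checked numerically in the special case of a uniform string with both ends fixed, taking the tension τ(x) and mass density σ(x) constant and v(x) = 0; the length, tension and density below are assumed values chosen only for illustration. Discretising the second derivative turns the problem into a matrix eigenvalue problem whose lowest eigenfrequencies should approach the familiar harmonics ω_n = (nπ/L)√(τ/σ).

# Minimal sketch of the string eigenvalue problem above, specialised to a
# uniform fixed-end string:  -tau * rho'' = omega^2 * sigma * rho.
import numpy as np

L, tau, sigma = 1.0, 100.0, 0.01     # length [m], tension [N], density [kg/m] (assumed)
n_pts = 500                          # interior grid points
dx = L / (n_pts + 1)

# Second-difference operator with rho = 0 at both ends (fixed string).
main = -2.0 * np.ones(n_pts)
off = np.ones(n_pts - 1)
d2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2

omega_sq = np.linalg.eigvalsh(-(tau / sigma) * d2)   # eigenvalues of -(tau/sigma) d^2/dx^2
omega_numeric = np.sqrt(np.sort(omega_sq))[:4]

omega_exact = np.array([n * np.pi / L * np.sqrt(tau / sigma) for n in (1, 2, 3, 4)])
print(omega_numeric)   # roughly [314.1, 628.3, 942.4, 1256.5] rad/s
print(omega_exact)     # harmonics of the fundamental, f1 = omega1 / (2*pi) = 50 Hz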
A rod under tension elongates. The amount of elongation and the load that will cause failure both depend on the force per cross-sectional area rather than the force alone, so stress = axial force / cross sectional area is more useful for engineering purposes than tension. Stress is a 3x3 matrix called a tensor, and the σ 11 {\displaystyle \sigma _{11}} element of the stress tensor is tensile force per area, or compression force per area, denoted as a negative number for this element, if the rod is being compressed rather than elongated. Thus, one can obtain a scalar analogous to tension by taking the trace of the stress tensor. == System in equilibrium == A system is in equilibrium when the sum of all forces is zero. ∑ F → = 0 {\displaystyle \sum {\vec {F}}=0} For example, consider a system consisting of an object that is being lowered vertically by a string with tension, T, at a constant velocity. The system has a constant velocity and is therefore in equilibrium because the tension in the string, which is pulling up on the object, is equal to the weight force, mg ("m" is mass, "g" is the acceleration caused by the gravity of Earth), which is pulling down on the object. ∑ F → = T → + m g → = 0 {\displaystyle \sum {\vec {F}}={\vec {T}}+m{\vec {g}}=0} == System under net force == A system has a net force when an unbalanced force is exerted on it, in other words the sum of all forces is not zero. Acceleration and net force always exist together. ∑ F → ≠ 0 {\displaystyle \sum {\vec {F}}\neq 0} For example, consider the same system as above but suppose the object is now being lowered with an increasing velocity downwards (that is, it accelerates downwards), so there exists a net force somewhere in the system. In this case, the downward acceleration indicates that | m g | > | T | {\displaystyle |mg|>|T|} . ∑ F → = T → − m g → ≠ 0 {\displaystyle \sum {\vec {F}}={\vec {T}}-m{\vec {g}}\neq 0} In another example, suppose that two bodies A and B having masses m 1 {\displaystyle m_{1}} and m 2 {\displaystyle m_{2}} , respectively, are connected with each other by an inextensible string over a frictionless pulley. There are two forces acting on the body A: its weight ( w 1 = m 1 g {\displaystyle w_{1}=m_{1}g} ) pulling down, and the tension T {\displaystyle T} in the string pulling up. Therefore, the net force F 1 {\displaystyle F_{1}} on body A is w 1 − T {\displaystyle w_{1}-T} , so m 1 a = m 1 g − T {\displaystyle m_{1}a=m_{1}g-T} . In an extensible string, Hooke's law applies. == Strings in modern physics == String-like objects in relativistic theories, such as the strings used in some models of interactions between quarks, or those used in the modern string theory, also possess tension. These strings are analyzed in terms of their world sheet, and the energy is then typically proportional to the length of the string. As a result, the tension in such strings is independent of the amount of stretching. == See also == Continuum mechanics Fall factor Surface tension Tensile strength Traction (mechanics) Hydrostatic pressure == References ==
Wikipedia/Tension_(physics)
In physics, a Unified Field Theory (UFT) or “Theory of Everything” is a type of field theory that allows all fundamental forces of nature, including gravity, and all elementary particles to be written in terms of a single physical field. According to quantum field theory, particles are themselves the quanta of fields. Different fields in physics include vector fields such as the electromagnetic field, spinor fields whose quanta are fermionic particles such as electrons, and tensor fields such as the metric tensor field that describes the shape of spacetime and gives rise to gravitation in general relativity. Unified field theories attempt to organize these fields into a single mathematical structure. For over a century, unified field theory has remained an open line of research. The term was coined by Albert Einstein, who attempted to unify his general theory of relativity with electromagnetism. Einstein attempted to create a classical unified field theory. Among other difficulties, this required a new explanation of particles as singularities or solitons instead of field quanta. Later attempts to unify general relativity with other forces incorporate quantum mechanics. The concepts of a "Theory of Everything" and a Grand Unified Theory are closely related to unified field theory. A theory of everything attempts to create a complete picture of all events in nature. Grand Unified Theories do not attempt to include the gravitational force and can therefore operate entirely within quantum field theory. The goal of a unified field theory has led to significant progress in theoretical physics. == Introduction == Unified field theory attempts to give a single elegant description of the following fields: === Forces === All four of the known fundamental forces are mediated by fields. In the Standard Model of particle physics, three of these result from the exchange of gauge bosons. These are: Strong interaction: the interaction responsible for holding quarks together to form hadrons, and holding neutrons and also protons together to form atomic nuclei. The exchange particle that mediates this force is the gluon. Electromagnetic interaction: the familiar interaction that acts on electrically charged particles. The photon is the exchange particle for this force. Weak interaction: a short-range interaction responsible for some forms of radioactivity, that acts on electrons, neutrinos, and quarks. It is mediated by the W and Z bosons. General relativity likewise describes gravitation as the result of the metric tensor field, which describes the shape of spacetime: Gravitational interaction: a long-range attractive interaction that acts on all particles. In hypothetical quantum versions of GR, the postulated exchange particle has been named the graviton. === Matter === In the Standard Model, the "matter" particles (electrons, quarks, neutrinos, etc.) are described as the quanta of spinor fields. Gauge boson fields also have quanta, such as photons for the electromagnetic field. === Higgs === The Standard Model has a unique fundamental scalar field, the Higgs field, the quanta of which are called Higgs bosons. == History == === Classic theory === The first successful classical unified field theory was developed by James Clerk Maxwell. In 1820, Hans Christian Ørsted discovered that electric currents exerted forces on magnets, while in 1831, Michael Faraday made the observation that time-varying magnetic fields could induce electric currents.
Until then, electricity and magnetism had been thought of as unrelated phenomena. In 1864, Maxwell published his famous paper on a dynamical theory of the electromagnetic field. This was the first example of a theory that was able to encompass previously separate field theories (namely electricity and magnetism) to provide a unifying theory of electromagnetism. By 1905, Albert Einstein had used the constancy of the speed-of-light in Maxwell's theory to unify our notions of space and time into an entity we now call spacetime. In 1915, he expanded this theory of special relativity to a description of gravity, general relativity, using a field to describe the curving geometry of four-dimensional (4D) spacetime. In the years following the creation of the general theory, a large number of physicists and mathematicians enthusiastically participated in the attempt to unify the then-known fundamental interactions. Given later developments in this domain, of particular interest are the theories of Hermann Weyl of 1919, who introduced the concept of an (electromagnetic) gauge field in a classical field theory and, two years later, that of Theodor Kaluza, who extended General Relativity to five dimensions. Continuing in this latter direction, Oscar Klein proposed in 1926 that the fourth spatial dimension be curled up into a small, unobserved circle. In Kaluza–Klein theory, the gravitational curvature of the extra spatial direction behaves as an additional force similar to electromagnetism. These and other models of electromagnetism and gravity were pursued by Albert Einstein in his attempts at a classical unified field theory. By 1930 Einstein had already considered the Einstein-Maxwell–Dirac System [Dongen]. This system is (heuristically) the super-classical [Varadarajan] limit of (the not mathematically well-defined) quantum electrodynamics. One can extend this system to include the weak and strong nuclear forces to get the Einstein–Yang-Mills–Dirac System. The French physicist Marie-Antoinette Tonnelat published a paper in the early 1940s on the standard commutation relations for the quantized spin-2 field. She continued this work in collaboration with Erwin Schrödinger after World War II. In the 1960s Mendel Sachs proposed a generally covariant field theory that did not require recourse to renormalization or perturbation theory. In 1965, Tonnelat published a book on the state of research on unified field theories. === Modern progress === In 1963, American physicist Sheldon Glashow proposed that the weak nuclear force, electricity, and magnetism could arise from a partially unified electroweak theory. In 1967, Pakistani Abdus Salam and American Steven Weinberg independently revised Glashow's theory by having the masses for the W particle and Z particle arise through spontaneous symmetry breaking with the Higgs mechanism. This unified theory modelled the electroweak interaction as a force mediated by four particles: the photon for the electromagnetic aspect, a neutral Z particle, and two charged W particles for the weak aspect. As a result of the spontaneous symmetry breaking, the weak force becomes short-range and the W and Z bosons acquire masses of 80.4 and 91.2 GeV/c2, respectively. Their theory was first given experimental support by the discovery of weak neutral currents in 1973. In 1983, the Z and W bosons were first produced at CERN by Carlo Rubbia's team. For their insights, Glashow, Salam, and Weinberg were awarded the Nobel Prize in Physics in 1979. 
Carlo Rubbia and Simon van der Meer received the Prize in 1984. After Gerardus 't Hooft showed the Glashow–Weinberg–Salam electroweak interactions to be mathematically consistent, the electroweak theory became a template for further attempts at unifying forces. In 1974, Sheldon Glashow and Howard Georgi proposed unifying the strong and electroweak interactions into the Georgi–Glashow model, the first Grand Unified Theory, which would have observable effects for energies much above 100 GeV. Since then there have been several proposals for Grand Unified Theories, e.g. the Pati–Salam model, although none is currently universally accepted. A major problem for experimental tests of such theories is the energy scale involved, which is well beyond the reach of current accelerators. Grand Unified Theories make predictions for the relative strengths of the strong, weak, and electromagnetic forces, and in 1991 LEP determined that supersymmetric theories have the correct ratio of couplings for a Georgi–Glashow Grand Unified Theory. Many Grand Unified Theories (but not Pati–Salam) predict that the proton can decay, and if this were to be seen, details of the decay products could give hints at more aspects of the Grand Unified Theory. It is at present unknown if the proton can decay, although experiments have determined a lower bound of 10³⁵ years for its lifetime. === Current status === Theoretical physicists have not yet formulated a widely accepted, consistent theory that combines general relativity and quantum mechanics to form a theory of everything. Trying to combine the graviton with the strong and electroweak interactions leads to fundamental difficulties and the resulting theory is not renormalizable. The incompatibility of the two theories remains an outstanding problem in the field of physics. == See also == Sheldon Glashow Unification (physics) == References == == Further reading == Jeroen van Dongen, Einstein's Unification, Cambridge University Press (July 26, 2010) Varadarajan, V.S. Supersymmetry for Mathematicians: An Introduction (Courant Lecture Notes), American Mathematical Society (July 2004) == External links == On the History of Unified Field Theories, by Hubert F. M. Goenner
Wikipedia/Unified_field_theory
As described by the third of Newton's laws of motion of classical mechanics, all forces occur in pairs such that if one object exerts a force on another object, then the second object exerts an equal and opposite reaction force on the first. The third law is also more generally stated as: "To every action there is always opposed an equal reaction: or the mutual actions of two bodies upon each other are always equal, and directed to contrary parts." The attribution of which of the two forces is the action and which is the reaction is arbitrary. Either of the two can be considered the action, while the other is its associated reaction. == Examples == === Interaction with ground === When something is exerting force on the ground, the ground will push back with equal force in the opposite direction. In certain fields of applied physics, such as biomechanics, this force by the ground is called 'ground reaction force'; the force by the object on the ground is viewed as the 'action'. When someone wants to jump, he or she exerts additional downward force on the ground ('action'). Simultaneously, the ground exerts upward force on the person ('reaction'). If this upward force is greater than the person's weight, this will result in upward acceleration. When these forces are perpendicular to the ground, they are also called a normal force. Likewise, the spinning wheels of a vehicle attempt to slide backward across the ground. If the ground is not too slippery, this results in a pair of friction forces: the 'action' by the wheel on the ground in backward direction, and the 'reaction' by the ground on the wheel in forward direction. This forward force propels the vehicle. === Gravitational forces === The Earth, among other planets, orbits the Sun because the Sun exerts a gravitational pull that acts as a centripetal force, holding the Earth to it; otherwise the Earth would go shooting off into space. If the Sun's pull is considered an action, then Earth simultaneously exerts a reaction as a gravitational pull on the Sun. Earth's pull has the same magnitude as the Sun's pull but acts in the opposite direction. Since the Sun's mass is so much larger than Earth's, the Sun does not generally appear to react to the pull of Earth, but in fact it does. A correct way of describing the combined motion of both objects (ignoring all other celestial bodies for the moment) is to say that they both orbit around the center of mass, referred to in astronomy as the barycenter, of the combined system. === Supported mass === Any mass on earth is pulled down by the gravitational force of the earth; this force is also called its weight. The corresponding 'reaction' is the gravitational force that mass exerts on the planet. If the object is supported so that it remains at rest, for instance by a cable from which it is hanging, or by a surface underneath, or by a liquid on which it is floating, there is also a support force in upward direction (tension force, normal force, buoyant force, respectively). This support force is an 'equal and opposite' force; we know this not because of Newton's third law, but because the object remains at rest, so that the forces must be balanced. To this support force there is also a 'reaction': the object pulls down on the supporting cable, or pushes down on the supporting surface or liquid. In this case, there are therefore four forces of equal magnitude: F1. gravitational force by earth on object (downward) F2.
gravitational force by object on earth (upward) F3. force by support on object (upward) F4. force by object on support (downward) Forces F1 and F2 are equal, due to Newton's third law; the same is true for forces F3 and F4. Forces F1 and F3 are equal if and only if the object is in equilibrium, and no other forces are applied. (This has nothing to do with Newton's third law.) === Mass on a spring === If a mass is hanging from a spring, the same considerations apply as before. However, if this system is then perturbed (e.g., the mass is given a slight kick upwards or downwards, say), the mass starts to oscillate up and down. Because of these accelerations (and subsequent decelerations), we conclude from Newton's second law that a net force is responsible for the observed change in velocity. The gravitational force pulling down on the mass is no longer equal to the upward elastic force of the spring. In the terminology of the previous section, F1 and F3 are no longer equal. However, it is still true that F1 = F2 and F3 = F4, as this is required by Newton's third law. == Causal misinterpretation == The terms 'action' and 'reaction' have the misleading suggestion of causality, as if the 'action' is the cause and 'reaction' is the effect. It is therefore easy to think of the second force as being there because of the first, and even happening some time after the first. This is incorrect; the forces are perfectly simultaneous, and are there for the same reason. When the forces are caused by a person's volition (e.g. a soccer player kicks a ball), this volitional cause often leads to an asymmetric interpretation, where the force by the player on the ball is considered the 'action' and the force by the ball on the player, the 'reaction'. But physically, the situation is symmetric. The forces on ball and player are both explained by their nearness, which results in a pair of contact forces (ultimately due to electric repulsion). That this nearness is caused by a decision of the player has no bearing on the physical analysis. As far as the physics is concerned, the labels 'action' and 'reaction' can be flipped. === 'Equal and opposite' === One problem frequently observed by physics educators is that students tend to apply Newton's third law to pairs of 'equal and opposite' forces acting on the same object. This is incorrect; the third law refers to forces on two different objects. In contrast, a book lying on a table is subject to a downward gravitational force (exerted by the earth) and to an upward normal force by the table, both forces acting on the same book. Since the book is not accelerating, these forces must be exactly balanced, according to Newton's second law. They are therefore 'equal and opposite', yet they are acting on the same object, hence they are not action-reaction forces in the sense of Newton's third law. The actual action-reaction forces in the sense of Newton's third law are the weight of the book (the attraction of the Earth on the book) and the book's upward gravitational force on the earth. The book also pushes down on the table and the table pushes upwards on the book. Moreover, the forces acting on the book are not always equally strong; they will be different if the book is pushed down by a third force, or if the table is slanted, or if the table-and-book system is in an accelerating elevator. The case of any number of forces acting on the same object is covered by considering the sum of all forces. 
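A minimal numerical sketch of the book-on-table discussion above (the mass and the extra pushes are made-up illustrative values): the table's normal force on the book equals the book's weight only in the special case where nothing else pushes on the book, whereas the genuine third-law pair, the book's push on the table and the table's push on the book, is equal in magnitude in every case.

# Minimal sketch of the book-on-table example (illustrative numbers only).
g = 9.81        # m/s^2
m_book = 1.2    # kg, assumed

weight = m_book * g                      # force of the earth on the book (downward)

for extra_push in (0.0, 5.0, 12.0):      # additional downward push on the book, N
    # The book stays at rest, so the table must supply N = weight + extra push
    # (Newton's second law with zero acceleration: the forces on the book balance).
    normal_on_book = weight + extra_push
    # Newton's third law: the book pushes down on the table with the same magnitude,
    # regardless of how hard anything else presses on the book.
    book_on_table = normal_on_book

    equals_weight = abs(normal_on_book - weight) < 1e-12
    print(f"extra push {extra_push:5.1f} N | normal force {normal_on_book:6.2f} N | "
          f"equals weight? {equals_weight} | third-law pair equal? {book_on_table == normal_on_book}")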
A possible cause of this problem is that the third law is often stated in an abbreviated form: For every action there is an equal and opposite reaction, without the details, namely that these forces act on two different objects. Moreover, there is a causal connection between the weight of something and the normal force: if an object had no weight, it would not experience support force from the table, and the weight dictates how strong the support force will be. This causal relationship is not due to the third law but to other physical relations in the system. === Centripetal and centrifugal force === Another common mistake is to state that "the centrifugal force that an object experiences is the reaction to the centripetal force on that object." If an object were simultaneously subject to both a centripetal force and an equal and opposite centrifugal force, the resultant force would vanish and the object could not experience a circular motion. The centrifugal force is sometimes called a fictitious force or pseudo force, to underscore the fact that such a force only appears when calculations or measurements are conducted in non-inertial reference frames. == See also == Ground reaction force Reactive centrifugal force Isaac Newton Ibn Bajjah Reaction engine/jet engine Shear force == References == == Bibliography == Feynman, R. P., Leighton and Sands (1970) The Feynman Lectures on Physics, Volume 1, Addison Wesley Longman, ISBN 0-201-02115-3. Resnick, R. and D. Halliday (1966) Physics, Part 1, John Wiley & Sons, New York, 646 pp + Appendices. Warren, J. W. (1965) The Teaching of Physics, Butterworths, London,130 pp.
Wikipedia/Reaction_(physics)
In differential geometry, the radius of curvature, R, is the reciprocal of the curvature. For a curve, it equals the radius of the circular arc which best approximates the curve at that point. For surfaces, the radius of curvature is the radius of a circle that best fits a normal section or combinations thereof. == Definition == In the case of a space curve, the radius of curvature is the length of the curvature vector. In the case of a plane curve, then R is the absolute value of R ≡ | d s d φ | = 1 κ , {\displaystyle R\equiv \left|{\frac {ds}{d\varphi }}\right|={\frac {1}{\kappa }},} where s is the arc length from a fixed point on the curve, φ is the tangential angle and κ is the curvature. == Formula == === In two dimensions === If the curve is given in Cartesian coordinates as y(x), i.e., as the graph of a function, then the radius of curvature is (assuming the curve is differentiable up to order 2) R = | ( 1 + y ′ 2 ) 3 2 y ″ | , {\displaystyle R=\left|{\frac {\left(1+y'^{\,2}\right)^{\frac {3}{2}}}{y''}}\right|\,,} where y ′ = d y d x , {\textstyle y'={\frac {dy}{dx}}\,,} y ″ = d 2 y d x 2 , {\textstyle y''={\frac {d^{2}y}{dx^{2}}},} and |z| denotes the absolute value of z. If the curve is given parametrically by functions x(t) and y(t), then the radius of curvature is R = | d s d φ | = | ( x ˙ 2 + y ˙ 2 ) 3 2 x ˙ y ¨ − y ˙ x ¨ | {\displaystyle R=\left|{\frac {ds}{d\varphi }}\right|=\left|{\frac {\left({{\dot {x}}^{2}+{\dot {y}}^{2}}\right)^{\frac {3}{2}}}{{\dot {x}}{\ddot {y}}-{\dot {y}}{\ddot {x}}}}\right|} where x ˙ = d x d t , {\textstyle {\dot {x}}={\frac {dx}{dt}},} x ¨ = d 2 x d t 2 , {\textstyle {\ddot {x}}={\frac {d^{2}x}{dt^{2}}},} y ˙ = d y d t , {\textstyle {\dot {y}}={\frac {dy}{dt}},} and y ¨ = d 2 y d t 2 . {\textstyle {\ddot {y}}={\frac {d^{2}y}{dt^{2}}}.} Heuristically, this result can be interpreted as R = | v | 3 | v × v ˙ | , {\displaystyle R={\frac {\left|\mathbf {v} \right|^{3}}{\left|\mathbf {v} \times \mathbf {\dot {v}} \right|}}\,,} where | v | = | ( x ˙ , y ˙ ) | = R d φ d t . {\displaystyle \left|\mathbf {v} \right|={\big |}({\dot {x}},{\dot {y}}){\big |}=R{\frac {d\varphi }{dt}}\,.} === In n dimensions === If γ : ℝ → ℝn is a parametrized curve in ℝn then the radius of curvature at each point of the curve, ρ : ℝ → ℝ, is given by ρ = | γ ′ | 3 | γ ′ | 2 | γ ″ | 2 − ( γ ′ ⋅ γ ″ ) 2 . {\displaystyle \rho ={\frac {\left|{\boldsymbol {\gamma }}'\right|^{3}}{\sqrt {\left|{\boldsymbol {\gamma }}'\right|^{2}\,\left|{\boldsymbol {\gamma }}''\right|^{2}-\left({\boldsymbol {\gamma }}'\cdot {\boldsymbol {\gamma }}''\right)^{2}}}}\,.} As a special case, if f(t) is a function from ℝ to ℝ, then the radius of curvature of its graph, γ(t) = (t, f (t)), is ρ ( t ) = | 1 + f ′ 2 ( t ) | 3 2 | f ″ ( t ) | . {\displaystyle \rho (t)={\frac {\left|1+f'^{\,2}(t)\right|^{\frac {3}{2}}}{\left|f''(t)\right|}}.} === Derivation === Let γ be as above, and fix t. We want to find the radius ρ of a parametrized circle which matches γ in its zeroth, first, and second derivatives at t. Clearly the radius will not depend on the position γ(t), only on the velocity γ′(t) and acceleration γ″(t). There are only three independent scalars that can be obtained from two vectors v and w, namely v · v, v · w, and w · w. Thus the radius of curvature must be a function of the three scalars |γ′(t)|2, |γ″(t)|2 and γ′(t) · γ″(t). 
The general equation for a parametrized circle in ℝn is g ( u ) = a cos ⁡ ( h ( u ) ) + b sin ⁡ ( h ( u ) ) + c {\displaystyle \mathbf {g} (u)=\mathbf {a} \cos(h(u))+\mathbf {b} \sin(h(u))+\mathbf {c} } where c ∈ ℝn is the center of the circle (irrelevant since it disappears in the derivatives), a,b ∈ ℝn are perpendicular vectors of length ρ (that is, a · a = b · b = ρ2 and a · b = 0), and h : ℝ → ℝ is an arbitrary function which is twice differentiable at t. The relevant derivatives of g work out to be | g ′ | 2 = ρ 2 ( h ′ ) 2 g ′ ⋅ g ″ = ρ 2 h ′ h ″ | g ″ | 2 = ρ 2 ( ( h ′ ) 4 + ( h ″ ) 2 ) {\displaystyle {\begin{aligned}|\mathbf {g} '|^{2}&=\rho ^{2}(h')^{2}\\\mathbf {g} '\cdot \mathbf {g} ''&=\rho ^{2}h'h''\\|\mathbf {g} ''|^{2}&=\rho ^{2}\left((h')^{4}+(h'')^{2}\right)\end{aligned}}} If we now equate these derivatives of g to the corresponding derivatives of γ at t we obtain | γ ′ ( t ) | 2 = ρ 2 h ′ 2 ( t ) γ ′ ( t ) ⋅ γ ″ ( t ) = ρ 2 h ′ ( t ) h ″ ( t ) | γ ″ ( t ) | 2 = ρ 2 ( h ′ 4 ( t ) + h ″ 2 ( t ) ) {\displaystyle {\begin{aligned}|{\boldsymbol {\gamma }}'(t)|^{2}&=\rho ^{2}h'^{\,2}(t)\\{\boldsymbol {\gamma }}'(t)\cdot {\boldsymbol {\gamma }}''(t)&=\rho ^{2}h'(t)h''(t)\\|{\boldsymbol {\gamma }}''(t)|^{2}&=\rho ^{2}\left(h'^{\,4}(t)+h''^{\,2}(t)\right)\end{aligned}}} These three equations in three unknowns (ρ, h′(t) and h″(t)) can be solved for ρ, giving the formula for the radius of curvature: ρ ( t ) = | γ ′ ( t ) | 3 | γ ′ ( t ) | 2 | γ ″ ( t ) | 2 − ( γ ′ ( t ) ⋅ γ ″ ( t ) ) 2 , {\displaystyle \rho (t)={\frac {\left|{\boldsymbol {\gamma }}'(t)\right|^{3}}{\sqrt {\left|{\boldsymbol {\gamma }}'(t)\right|^{2}\,\left|{\boldsymbol {\gamma }}''(t)\right|^{2}-{\big (}{\boldsymbol {\gamma }}'(t)\cdot {\boldsymbol {\gamma }}''(t){\big )}^{2}}}}\,,} or, omitting the parameter t for readability, ρ = | γ ′ | 3 | γ ′ | 2 | γ ″ | 2 − ( γ ′ ⋅ γ ″ ) 2 . {\displaystyle \rho ={\frac {\left|{\boldsymbol {\gamma }}'\right|^{3}}{\sqrt {\left|{\boldsymbol {\gamma }}'\right|^{2}\;\left|{\boldsymbol {\gamma }}''\right|^{2}-\left({\boldsymbol {\gamma }}'\cdot {\boldsymbol {\gamma }}''\right)^{2}}}}\,.} == Examples == === Semicircles and circles === For a semi-circle of radius a in the upper half-plane with R = | − a | = a , {\textstyle R=|-a|=a\,,} y = a 2 − x 2 y ′ = − x a 2 − x 2 y ″ = − a 2 ( a 2 − x 2 ) 3 2 . {\displaystyle {\begin{aligned}y&={\sqrt {a^{2}-x^{2}}}\\y'&={\frac {-x}{\sqrt {a^{2}-x^{2}}}}\\y''&={\frac {-a^{2}}{\left(a^{2}-x^{2}\right)^{\frac {3}{2}}}}\,.\end{aligned}}} For a semi-circle of radius a in the lower half-plane y = − a 2 − x 2 . {\displaystyle y=-{\sqrt {a^{2}-x^{2}}}\,.} The circle of radius a has a radius of curvature equal to a. === Ellipses === In an ellipse with major axis 2a and minor axis 2b, the vertices on the major axis have the smallest radius of curvature of any points, R = b 2 a {\textstyle R={b^{2} \over a}} ; and the vertices on the minor axis have the largest radius of curvature of any points, R = ⁠a2/b⁠. The radius of curvature of an ellipse as a function of the geocentric coordinate t {\displaystyle t} with tan ⁡ t = y x {\displaystyle \tan t={\frac {y}{x}}} is R ( t ) = ( b 2 cos 2 ⁡ t + a 2 sin 2 ⁡ t ) 3 / 2 a b . {\displaystyle R(t)={\frac {(b^{2}\cos ^{2}t+a^{2}\sin ^{2}t)^{3/2}}{ab}}.} It has its minima at t = 0 {\displaystyle t=0} and t = 180 ∘ {\displaystyle t=180^{\circ }} and its maxima at t = ± 90 ∘ {\displaystyle t=\pm 90^{\circ }} . == Applications == For the use in differential geometry, see Cesàro equation. 
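A quick numerical sanity check of the parametric formula above can also be useful. The sketch below (with assumed semi-axes) evaluates R = (ẋ² + ẏ²)^(3/2) / |ẋÿ − ẏẍ| for a circle, where it must return the radius at every parameter value, and for an ellipse written in the standard parametrization (x, y) = (a cos t, b sin t), where the vertex values b²/a and a²/b quoted above should be recovered.

# Minimal check (assumed semi-axes) of the parametric radius-of-curvature
# formula R = (x'^2 + y'^2)^(3/2) / |x' y'' - y' x''| against known values.
import math

def radius_parametric(xd, yd, xdd, ydd):
    return (xd**2 + yd**2) ** 1.5 / abs(xd * ydd - yd * xdd)

def ellipse_radius(a, b, t):
    """Curve (a cos t, b sin t): plug its derivatives into the general formula."""
    xd, yd = -a * math.sin(t), b * math.cos(t)
    xdd, ydd = -a * math.cos(t), -b * math.sin(t)
    return radius_parametric(xd, yd, xdd, ydd)

# Circle of radius 2 (a = b = 2): R should equal 2 at every parameter value.
print([round(ellipse_radius(2.0, 2.0, t), 6) for t in (0.0, 0.7, 1.9, 3.0)])

# Ellipse with a = 3, b = 2 (assumed): at the major-axis vertex (t = 0) the radius
# should be b^2/a, and at the minor-axis vertex (t = pi/2) it should be a^2/b.
a, b = 3.0, 2.0
print(ellipse_radius(a, b, 0.0), b**2 / a)              # both about 1.3333
print(ellipse_radius(a, b, math.pi / 2), a**2 / b)      # both about 4.5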
For the radius of curvature of the Earth (approximated by an oblate ellipsoid), see also arc measurement. The radius of curvature is also used in a three-part equation for the bending of beams. Other applications include the radius of curvature in optics, thin-film technologies, printed electronics, the minimum railway curve radius, and AFM probes. === Stress in semiconductor structures === Stress in semiconductor structures involving evaporated thin films usually results from thermal expansion (thermal stress) during the manufacturing process. Thermal stress occurs because film depositions are usually made above room temperature. Upon cooling from the deposition temperature to room temperature, the difference between the thermal expansion coefficients of the substrate and the film causes thermal stress. Intrinsic stress results from the microstructure created in the film as atoms are deposited on the substrate. Tensile stress results from microvoids (small holes, considered to be defects) in the thin film, because of the attractive interaction of atoms across the voids. The stress in thin-film semiconductor structures results in buckling of the wafers. The radius of curvature of the stressed structure is related to the stress tensor in the structure and can be described by the modified Stoney formula. The topography of the stressed structure, including the radii of curvature, can be measured using optical scanner methods. Modern scanner tools can measure the full topography of the substrate and both principal radii of curvature, with accuracy on the order of 0.1% for radii of curvature of 90 meters and more. == See also == == References == == Further reading == do Carmo, Manfredo (1976). Differential Geometry of Curves and Surfaces. Prentice-Hall. ISBN 0-13-212589-7. == External links == The Geometry Center: Principal Curvatures 15.3 Curvature and Radius of Curvature Archived 2021-04-29 at the Wayback Machine Weisstein, Eric W. "Principal Curvatures". MathWorld. Weisstein, Eric W. "Principal Radius of Curvature". MathWorld.
Wikipedia/Radius_of_curvature_(applications)
In physics, the fundamental interactions or fundamental forces are interactions in nature that appear not to be reducible to more basic interactions. There are four fundamental interactions known to exist: gravity electromagnetism weak interaction strong interaction The gravitational and electromagnetic interactions produce long-range forces whose effects can be seen directly in everyday life. The strong and weak interactions produce forces at subatomic scales and govern nuclear interactions inside atoms. Some scientists hypothesize that a fifth force might exist, but these hypotheses remain speculative. Each of the known fundamental interactions can be described mathematically as a field. The gravitational interaction is attributed to the curvature of spacetime, described by Einstein's general theory of relativity. The other three are discrete quantum fields, and their interactions are mediated by elementary particles described by the Standard Model of particle physics. Within the Standard Model, the strong interaction is carried by a particle called the gluon and is responsible for quarks binding together to form hadrons, such as protons and neutrons. As a residual effect, it creates the nuclear force that binds the latter particles to form atomic nuclei. The weak interaction is carried by particles called W and Z bosons, and also acts on the nucleus of atoms, mediating radioactive decay. The electromagnetic force, carried by the photon, creates electric and magnetic fields, which are responsible for the attraction between orbital electrons and atomic nuclei which holds atoms together, as well as chemical bonding and electromagnetic waves, including visible light, and forms the basis for electrical technology. Although the electromagnetic force is far stronger than gravity, it tends to cancel itself out within large objects, so over large (astronomical) distances gravity tends to be the dominant force, and is responsible for holding together the large scale structures in the universe, such as planets, stars, and galaxies. The historical success of models that show relationships between fundamental interactions have led to efforts to go beyond the Standard Model and combine all four forces in to a theory of everything. == History == === Classical theory === In his 1687 theory, Isaac Newton postulated space as an infinite and unalterable physical structure existing before, within, and around all objects while their states and relations unfold at a constant pace everywhere, thus absolute space and time. Inferring that all objects bearing mass approach at a constant rate, but collide by impact proportional to their masses, Newton inferred that matter exhibits an attractive force. His law of universal gravitation implied there to be instant interaction among all objects. As conventionally interpreted, Newton's theory of motion modelled a central force without a communicating medium. Thus Newton's theory violated the tradition, going back to Descartes, that there should be no action at a distance. Conversely, during the 1820s, when explaining magnetism, Michael Faraday inferred a field filling space and transmitting that force. Faraday conjectured that ultimately, all forces unified into one. In 1873, James Clerk Maxwell unified electricity and magnetism as effects of an electromagnetic field whose third consequence was light, travelling at constant speed in vacuum. 
If his electromagnetic field theory held true in all inertial frames of reference, this would contradict Newton's theory of motion, which relied on Galilean relativity. If, instead, his field theory only applied to reference frames at rest relative to a mechanical luminiferous aether—presumed to fill all space whether within matter or in vacuum and to manifest the electromagnetic field—then it could be reconciled with Galilean relativity and Newton's laws. (However, such a "Maxwell aether" was later disproven; Newton's laws did, in fact, have to be replaced.) === Standard Model === The Standard Model of particle physics was developed throughout the latter half of the 20th century. In the Standard Model, the electromagnetic, strong, and weak interactions associate with elementary particles, whose behaviours are modelled in quantum mechanics (QM). For predictive success with QM's probabilistic outcomes, particle physics conventionally models QM events across a field set to special relativity, altogether relativistic quantum field theory (QFT). Force particles, called gauge bosons—force carriers or messenger particles of underlying fields—interact with matter particles, called fermions. Everyday matter is atoms, composed of three fermion types: up-quarks and down-quarks constituting, as well as electrons orbiting, the atom's nucleus. Atoms interact, form molecules, and manifest further properties through electromagnetic interactions among their electrons absorbing and emitting photons, the electromagnetic field's force carrier, which if unimpeded traverse potentially infinite distance. Electromagnetism's QFT is quantum electrodynamics (QED). The force carriers of the weak interaction are the massive W and Z bosons. Electroweak theory (EWT) covers both electromagnetism and the weak interaction. At the high temperatures shortly after the Big Bang, the weak interaction, the electromagnetic interaction, and the Higgs boson were originally mixed components of a different set of ancient pre-symmetry-breaking fields. As the early universe cooled, these fields split into the long-range electromagnetic interaction, the short-range weak interaction, and the Higgs boson. In the Higgs mechanism, the Higgs field manifests Higgs bosons that interact with some quantum particles in a way that endows those particles with mass. The strong interaction, whose force carrier is the gluon, traversing minuscule distance among quarks, is modeled in quantum chromodynamics (QCD). EWT, QCD, and the Higgs mechanism comprise particle physics' Standard Model (SM). Predictions are usually made using calculational approximation methods, although such perturbation theory is inadequate to model some experimental observations (for instance bound states and solitons). Still, physicists widely accept the Standard Model as science's most experimentally confirmed theory. Beyond the Standard Model, some theorists work to unite the electroweak and strong interactions within a Grand Unified Theory (GUT). Some attempts at GUTs hypothesize "shadow" particles, such that every known matter particle associates with an undiscovered force particle, and vice versa, altogether supersymmetry (SUSY). Other theorists seek to quantize the gravitational field by the modelling behaviour of its hypothetical force carrier, the graviton and achieve quantum gravity (QG). One approach to QG is loop quantum gravity (LQG). 
Still other theorists seek both QG and GUT within one framework, reducing all four fundamental interactions to a Theory of Everything (ToE). The most prevalent aim at a ToE is string theory, although to model matter particles, it added SUSY to force particles—and so, strictly speaking, became superstring theory. Multiple, seemingly disparate superstring theories were unified on a backbone, M-theory. Theories beyond the Standard Model remain highly speculative, lacking great experimental support. == Overview of the fundamental interactions == In the conceptual model of fundamental interactions, matter consists of fermions, which carry properties called charges and spin ±1⁄2 (intrinsic angular momentum ±ħ⁄2, where ħ is the reduced Planck constant). They attract or repel each other by exchanging bosons. The interaction of any pair of fermions in perturbation theory can then be modelled thus: Two fermions go in → interaction by boson exchange → two changed fermions go out. The exchange of bosons always carries energy and momentum between the fermions, thereby changing their speed and direction. The exchange may also transport a charge between the fermions, changing the charges of the fermions in the process (e.g., turn them from one type of fermion to another). Since bosons carry one unit of angular momentum, the fermion's spin direction will flip from +1⁄2 to −1⁄2 (or vice versa) during such an exchange (in units of the reduced Planck constant). Since such interactions result in a change in momentum, they can give rise to classical Newtonian forces. In quantum mechanics, physicists often use the terms "force" and "interaction" interchangeably; for example, the weak interaction is sometimes referred to as the "weak force". According to the present understanding, there are four fundamental interactions or forces: gravitation, electromagnetism, the weak interaction, and the strong interaction. Their magnitude and behaviour vary greatly, as described in the table below. Modern physics attempts to explain every observed physical phenomenon by these fundamental interactions. Moreover, reducing the number of different interaction types is seen as desirable. Two cases in point are the unification of: Electric and magnetic force into electromagnetism; The electromagnetic interaction and the weak interaction into the electroweak interaction; see below. Both magnitude ("relative strength") and "range" of the associated potential, as given in the table, are meaningful only within a rather complex theoretical framework. The table below lists properties of a conceptual scheme that remains the subject of ongoing research. The modern (perturbative) quantum mechanical view of the fundamental forces other than gravity is that particles of matter (fermions) do not directly interact with each other, but rather carry a charge, and exchange virtual particles (gauge bosons), which are the interaction carriers or force mediators. For example, photons mediate the interaction of electric charges, and gluons mediate the interaction of color charges. The full theory includes perturbations beyond simply fermions exchanging bosons; these additional perturbations can involve bosons that exchange fermions, as well as the creation or destruction of particles: see Feynman diagrams for examples. == Interactions == === Gravity === Gravitation is the weakest of the four interactions at the atomic scale, where electromagnetic interactions dominate. 
Gravitation is the most important of the four fundamental forces for astronomical objects over astronomical distances for two reasons. First, gravitation has an infinite effective range, like electromagnetism but unlike the strong and weak interactions. Second, gravity always attracts and never repels; in contrast, astronomical bodies tend toward a near-neutral net electric charge, such that the attraction to one type of charge and the repulsion from the opposite charge mostly cancel each other out. Even though electromagnetism is far stronger than gravitation, electrostatic attraction is not relevant for large celestial bodies, such as planets, stars, and galaxies, simply because such bodies contain equal numbers of protons and electrons and so have a net electric charge of zero. Nothing "cancels" gravity, since it is only attractive, unlike electric forces which can be attractive or repulsive. On the other hand, all objects having mass are subject to the gravitational force, which only attracts. Therefore, only gravitation matters on the large-scale structure of the universe. The long range of gravitation makes it responsible for such large-scale phenomena as the structure of galaxies and black holes and, being only attractive, it slows down the expansion of the universe. Gravitation also explains astronomical phenomena on more modest scales, such as planetary orbits, as well as everyday experience: objects fall; heavy objects act as if they were glued to the ground, and animals can only jump so high. Gravitation was the first interaction to be described mathematically. In ancient times, Aristotle hypothesized that objects of different masses fall at different rates. During the Scientific Revolution, Galileo Galilei experimentally determined that this hypothesis was wrong under certain circumstances—neglecting the friction due to air resistance and buoyancy forces if an atmosphere is present (e.g. the case of a dropped air-filled balloon vs a water-filled balloon), all objects accelerate toward the Earth at the same rate. Isaac Newton's law of Universal Gravitation (1687) was a good approximation of the behaviour of gravitation. Present-day understanding of gravitation stems from Einstein's General Theory of Relativity of 1915, a more accurate (especially for cosmological masses and distances) description of gravitation in terms of the geometry of spacetime. Merging general relativity and quantum mechanics (or quantum field theory) into a more general theory of quantum gravity is an area of active research. It is hypothesized that gravitation is mediated by a massless spin-2 particle called the graviton. Although general relativity has been experimentally confirmed (at least for weak fields, i.e. not black holes) on all but the smallest scales, there are alternatives to general relativity. These theories must reduce to general relativity in some limit, and the focus of observational work is to establish limits on what deviations from general relativity are possible. Proposed extra dimensions could explain why the gravity force is so weak. === Electroweak interaction === Electromagnetism and weak interaction appear to be very different at everyday low energies. They can be modeled using two different theories. However, above unification energy, on the order of 100 GeV, they would merge into a single electroweak force. The electroweak theory is very important for modern cosmology, particularly on how the universe evolved. 
This is because shortly after the Big Bang, when the temperature was still above approximately 10^15 K, the electromagnetic force and the weak force were still merged as a combined electroweak force. For contributions to the unification of the weak and electromagnetic interaction between elementary particles, Abdus Salam, Sheldon Glashow and Steven Weinberg were awarded the Nobel Prize in Physics in 1979. ==== Electromagnetism ==== Electromagnetism is the force that acts between electrically charged particles. This phenomenon includes the electrostatic force acting between charged particles at rest, and the combined effect of electric and magnetic forces acting between charged particles moving relative to each other. Electromagnetism has an infinite range, as gravity does, but is vastly stronger. It is the force that binds electrons to atoms, and it holds molecules together. It is responsible for everyday phenomena like light, magnets, electricity, and friction. Electromagnetism fundamentally determines all macroscopic, and many atomic-level, properties of the chemical elements. In a four kilogram (~1 gallon) jug of water, there is roughly 2 × 10^8 coulombs of total electron charge. Thus, if we place two such jugs a meter apart, the electrons in one of the jugs repel those in the other jug with a force on the order of 10^26 newtons. This force is many times larger than the weight of the planet Earth. The atomic nuclei in one jug also repel those in the other with the same force. However, these repulsive forces are canceled by the attraction of the electrons in jug A with the nuclei in jug B and the attraction of the nuclei in jug A with the electrons in jug B, resulting in no net force. Electromagnetic forces are tremendously stronger than gravity, but tend to cancel out so that for astronomical-scale bodies, gravity dominates. Electrical and magnetic phenomena have been observed since ancient times, but it was only in the 19th century that James Clerk Maxwell discovered that electricity and magnetism are two aspects of the same fundamental interaction. By 1864, Maxwell's equations had rigorously quantified this unified interaction. Maxwell's theory, restated using vector calculus, is the classical theory of electromagnetism, suitable for most technological purposes. The constant speed of light in vacuum (customarily denoted with a lowercase letter c) can be derived from Maxwell's equations, which are consistent with the theory of special relativity. Albert Einstein's 1905 theory of special relativity, however, which follows from the observation that the speed of light is constant no matter how fast the observer is moving, showed that the theoretical result implied by Maxwell's equations has profound implications far beyond electromagnetism on the very nature of time and space. In another work that departed from classical electromagnetism, Einstein also explained the photoelectric effect by utilizing Max Planck's discovery that light was transmitted in 'quanta' of specific energy content based on the frequency, which we now call photons. Starting around 1927, Paul Dirac combined quantum mechanics with the relativistic theory of electromagnetism. Further work in the 1940s, by Richard Feynman, Freeman Dyson, Julian Schwinger, and Sin-Itiro Tomonaga, completed this theory, which is now called quantum electrodynamics, the revised theory of electromagnetism.
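The rough magnitudes quoted in the jug-of-water comparison earlier in this section can be reproduced with a few lines of arithmetic. The sketch below assumes 4 kg of pure water, ten electrons per H2O molecule, and standard constants; all values are computed estimates, not quantities taken from the article's sources.

```python
# Rough estimate of the electron charge in 4 kg of water and the Coulomb
# repulsion between two such charges 1 m apart (illustrative assumptions).
N_A      = 6.022e23        # Avogadro's number, 1/mol
M_WATER  = 0.018           # molar mass of H2O, kg/mol
E_CHARGE = 1.602e-19       # elementary charge, C
K_E      = 8.988e9         # Coulomb constant, N m^2 / C^2

mass = 4.0                                   # kg of water in one jug
molecules = mass / M_WATER * N_A             # ~1.3e26 molecules
electrons = 10 * molecules                   # 10 electrons per H2O molecule
q = electrons * E_CHARGE                     # total electron charge, ~2e8 C

force = K_E * q**2 / 1.0**2                  # repulsion at 1 m separation, N
earth_weight = 5.97e24 * 9.81                # Earth's mass times g, ~6e25 N

print(f"charge per jug  ~ {q:.1e} C")                         # ~2.1e8 C
print(f"repulsive force ~ {force:.1e} N")                      # ~4e26 N
print(f"force / Earth-weight ~ {force / earth_weight:.0f}")    # ~7
```

As the text explains, this enormous repulsion is almost exactly cancelled by the attraction between the electrons of one jug and the nuclei of the other, which is why gravity dominates at astronomical scales.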
Quantum electrodynamics and quantum mechanics provide a theoretical basis for electromagnetic behavior such as quantum tunneling, in which a certain percentage of electrically charged particles move in ways that would be impossible under the classical electromagnetic theory, that is necessary for everyday electronic devices such as transistors to function. ==== Weak interaction ==== The weak interaction or weak nuclear force is responsible for some nuclear phenomena such as beta decay. Electromagnetism and the weak force are now understood to be two aspects of a unified electroweak interaction — this discovery was the first step toward the unified theory known as the Standard Model. In the theory of the electroweak interaction, the carriers of the weak force are the massive gauge bosons called the W and Z bosons. The weak interaction is the only known interaction that does not conserve parity; it is left–right asymmetric. The weak interaction even violates CP symmetry but does conserve CPT. === Strong interaction === The strong interaction, or strong nuclear force, is the most complicated interaction, mainly because of the way it varies with distance. The nuclear force is powerfully attractive between nucleons at distances of about 1 femtometre (fm, or 10−15 metres), but it rapidly decreases to insignificance at distances beyond about 2.5 fm. At distances less than 0.7 fm, the nuclear force becomes repulsive. This repulsive component is responsible for the physical size of nuclei, since the nucleons can come no closer than the force allows. After the nucleus was discovered in 1908, it was clear that a new force, today known as the nuclear force, was needed to overcome the electrostatic repulsion, a manifestation of electromagnetism, of the positively charged protons. Otherwise, the nucleus could not exist. Moreover, the force had to be strong enough to squeeze the protons into a volume whose diameter is about 10−15 m, much smaller than that of the entire atom. From the short range of this force, Hideki Yukawa predicted that it was associated with a massive force particle, whose mass is approximately 100 MeV. The 1947 discovery of the pion ushered in the modern era of particle physics. Hundreds of hadrons were discovered from the 1940s to 1960s, and an extremely complicated theory of hadrons as strongly interacting particles was developed. Most notably: The pions were understood to be oscillations of vacuum condensates; Jun John Sakurai proposed the rho and omega vector bosons to be force carrying particles for approximate symmetries of isospin and hypercharge; Geoffrey Chew, Edward K. Burdett and Steven Frautschi grouped the heavier hadrons into families that could be understood as vibrational and rotational excitations of strings. While each of these approaches offered insights, no approach led directly to a fundamental theory. Murray Gell-Mann along with George Zweig first proposed fractionally charged quarks in 1961. Throughout the 1960s, different authors considered theories similar to the modern fundamental theory of quantum chromodynamics (QCD) as simple models for the interactions of quarks. The first to hypothesize the gluons of QCD were Moo-Young Han and Yoichiro Nambu, who introduced the quark color charge. Han and Nambu hypothesized that it might be associated with a force-carrying field. At that time, however, it was difficult to see how such a model could permanently confine quarks. 
Han and Nambu also assigned each quark color an integer electrical charge, so that the quarks were fractionally charged only on average, and they did not expect the quarks in their model to be permanently confined. In 1971, Murray Gell-Mann and Harald Fritzsch proposed that the Han/Nambu color gauge field was the correct theory of the short-distance interactions of fractionally charged quarks. A little later, David Gross, Frank Wilczek, and David Politzer discovered that this theory had the property of asymptotic freedom, allowing them to make contact with experimental evidence. They concluded that QCD was the complete theory of the strong interactions, correct at all distance scales. The discovery of asymptotic freedom led most physicists to accept QCD since it became clear that even the long-distance properties of the strong interactions could be consistent with experiment if the quarks are permanently confined: the strong force increases indefinitely with distance, trapping quarks inside the hadrons. Assuming that quarks are confined, Mikhail Shifman, Arkady Vainshtein and Valentine Zakharov were able to compute the properties of many low-lying hadrons directly from QCD, with only a few extra parameters to describe the vacuum. In 1980, Kenneth G. Wilson published computer calculations based on the first principles of QCD, establishing, to a level of confidence tantamount to certainty, that QCD will confine quarks. Since then, QCD has been the established theory of strong interactions. QCD is a theory of fractionally charged quarks interacting by means of 8 bosonic particles called gluons. The gluons also interact with each other, not just with the quarks, and at long distances the lines of force collimate into strings, loosely modeled by a linear potential, a constant attractive force. In this way, the mathematical theory of QCD not only explains how quarks interact over short distances but also the string-like behavior, discovered by Chew and Frautschi, which they manifest over longer distances. === Higgs interaction === Conventionally, the Higgs interaction is not counted among the four fundamental forces. Nonetheless, although not a gauge interaction nor generated by any diffeomorphism symmetry, the Higgs field's cubic Yukawa coupling produces a weakly attractive fifth interaction. After spontaneous symmetry breaking via the Higgs mechanism, Yukawa terms remain of the form λ i 2 ψ ¯ ϕ ′ ψ = m i ν ψ ¯ ϕ ′ ψ {\displaystyle {\frac {\lambda _{i}}{\sqrt {2}}}{\bar {\psi }}\phi '\psi ={\frac {m_{i}}{\nu }}{\bar {\psi }}\phi '\psi } , with Yukawa coupling λ i {\displaystyle \lambda _{i}} , particle mass m i {\displaystyle m_{i}} (in eV), and Higgs vacuum expectation value 246.22 GeV. Hence coupled particles can exchange a virtual Higgs boson, yielding classical potentials of the form V ( r ) = − m i m j m H 2 1 4 π r e − m H c r / ℏ {\displaystyle V(r)=-{\frac {m_{i}m_{j}}{m_{\rm {H}}^{2}}}{\frac {1}{4\pi r}}e^{-m_{\rm {H}}\,c\,r/\hbar }} , with Higgs mass 125.18 GeV. Because the reduced Compton wavelength of the Higgs boson is so small (1.576×10−18 m, comparable to the W and Z bosons), this potential has an effective range of a few attometers. Between two electrons, it begins roughly 1011 times weaker than the weak interaction, and grows exponentially weaker at non-zero distances. === Beyond the Standard Model === The fundamental forces may become unified into a single force at very high energies and on a minuscule scale, the Planck scale. 
Particle accelerators cannot produce the enormous energies required to experimentally probe this regime. The weak and electromagnetic forces have already been unified with the electroweak theory of Sheldon Glashow, Abdus Salam, and Steven Weinberg, for which they received the 1979 Nobel Prize in physics. Numerous theoretical efforts have been made to systematize the existing four fundamental interactions on the model of electroweak unification. Grand Unified Theories (GUTs) are proposals to show that each of the three fundamental interactions described by the Standard Model is a different manifestation of a single interaction with symmetries that break down and create separate interactions below some extremely high level of energy. GUTs are also expected to predict some of the relationships between constants of nature that the Standard Model treats as unrelated and gauge coupling unification for the relative strengths of the electromagnetic, weak, and strong forces. A so-called theory of everything, which would integrate GUTs with a quantum gravity theory, faces a greater barrier because no quantum gravity theory (e.g., string theory, loop quantum gravity, and twistor theory) has secured wide acceptance. Some theories look for a graviton to complete the Standard Model list of force-carrying particles, while others, like loop quantum gravity, emphasize the possibility that time-space itself may have a quantum aspect to it. Some theories beyond the Standard Model include a hypothetical fifth force, and the search for such a force is an ongoing line of experimental physics research. In supersymmetric theories, some particles, known as moduli, acquire their masses only through supersymmetry breaking effects and can mediate new forces. Another reason to look for new forces is the discovery that the expansion of the universe is accelerating (also known as dark energy), creating a need to explain a nonzero cosmological constant and possibly other modifications of general relativity. Fifth forces have also been suggested to explain phenomena such as CP violations, dark matter, and dark flow. == See also == Quintessence, a hypothesized fifth force Gerardus 't Hooft Edward Witten Howard Georgi == References == == Bibliography == Davies, Paul (1986), The Forces of Nature, Cambridge Univ. Press 2nd ed. Feynman, Richard (1967), The Character of Physical Law, MIT Press, ISBN 978-0-262-56003-0 Schumm, Bruce A. (2004), Deep Down Things, Johns Hopkins University Press While all interactions are discussed, discussion is especially thorough on the weak. Weinberg, Steven (1993), The First Three Minutes: A Modern View of the Origin of the Universe, Basic Books, ISBN 978-0-465-02437-7 Weinberg, Steven (1994), Dreams of a Final Theory, Basic Books, ISBN 978-0-679-74408-5 Padmanabhan, T. (1998), After The First Three Minutes: The Story of Our Universe, Cambridge Univ. Press, ISBN 978-0-521-62972-0 Perkins, Donald H. (2000), Introduction to High Energy Physics (4th ed.), Cambridge Univ. Press, ISBN 978-0-521-62196-0 Riazuddin (December 29, 2009). "Non-standard interactions" (PDF). NCP 5th Particle Physics Sypnoisis. 1 (1): 1–25. Archived from the original (PDF) on March 3, 2016. Retrieved March 19, 2011.
Wikipedia/Fundamental_forces
Specific force (SF) is a mass-specific quantity defined as the quotient of force per unit mass. S F = F / m {\displaystyle \mathrm {SF} =F/m} It is a physical quantity of the same kind as acceleration, with the dimension of length per time squared and units of metre per second squared (m/s²). It is normally applied to forces other than gravity, to emulate the relationship between gravitational acceleration and gravitational force. It can also be called mass-specific weight (weight per unit mass), as the weight of an object is equal to the magnitude of the gravity force acting on it. The g-force is an instance of specific force measured in units of the standard gravity (g) instead of m/s², i.e., in multiples of g (e.g., "3 g"). == Type of acceleration == The (mass-)specific force is not a coordinate acceleration, but rather a proper acceleration, which is the acceleration relative to free-fall. Forces, specific forces, and proper accelerations are the same in all reference frames, but coordinate accelerations are frame-dependent. For free bodies, the specific force is the cause of, and a measure of, the body's proper acceleration. The acceleration of an object falling freely towards the Earth depends on the reference frame (it disappears in the free-fall frame, also called the inertial frame), but any g-force "acceleration" will be present in all frames. This specific force is zero for freely falling objects, since gravity acting alone does not produce g-forces or specific forces. Accelerometers on the surface of the Earth measure a constant 9.8 m/s² even when they are not accelerating (that is, when they do not undergo coordinate acceleration). This is because accelerometers measure the proper acceleration produced by the g-force exerted by the ground (gravity acting alone never produces g-force or specific force). Accelerometers measure specific force (proper acceleration), which is the acceleration relative to free-fall, not the "standard" acceleration that is relative to a coordinate system. == Hydraulics == In open channel hydraulics, specific force ( F s {\displaystyle F_{s}} ) has a different meaning: F s = Q 2 g A + z A {\displaystyle F_{s}={\frac {Q^{2}}{gA}}+zA} where Q is the discharge, g is the acceleration due to gravity, A is the cross-sectional area of flow, and z is the depth of the centroid of flow area A. == See also == Acceleration Proper acceleration == References ==
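As a worked example of the open-channel expression defined above, the following sketch evaluates F_s = Q²/(gA) + zA for a rectangular channel; the discharge and channel dimensions are illustrative assumptions, not values from the article.

```python
# Specific force in open-channel hydraulics: Fs = Q^2/(g*A) + z*A
# Illustrative rectangular channel (assumed values).
g = 9.81          # gravitational acceleration, m/s^2
Q = 6.0           # discharge, m^3/s
width = 3.0       # channel width, m
depth = 1.2       # flow depth, m

A = width * depth        # cross-sectional flow area, m^2
z = depth / 2.0          # depth of the centroid of A below the free surface, m

Fs = Q**2 / (g * A) + z * A
print(f"specific force Fs = {Fs:.3f} m^3")   # ~3.18 m^3 (force per unit weight of water)
```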
Wikipedia/Specific_force
In physics and engineering, mass flow rate is the rate at which mass of a substance changes over time. Its unit is kilogram per second (kg/s) in SI units, and slug per second or pound per second in US customary units. The common symbol is m ˙ {\displaystyle {\dot {m}}} (pronounced "m-dot"), although sometimes μ {\displaystyle \mu } (Greek lowercase mu) is used. Sometimes, mass flow rate as defined here is termed "mass flux" or "mass current". Confusingly, "mass flow" is also a term for mass flux, the rate of mass flow per unit of area. == Formulation == Mass flow rate is defined by the limit m ˙ = lim Δ t → 0 Δ m Δ t = d m d t , {\displaystyle {\dot {m}}=\lim _{\Delta t\to 0}{\frac {\Delta m}{\Delta t}}={\frac {dm}{dt}},} i.e., the flow of mass Δ m {\displaystyle \Delta m} through a surface per time Δ t {\displaystyle \Delta t} . The overdot on m ˙ {\displaystyle {\dot {m}}} is Newton's notation for a time derivative. Since mass is a scalar quantity, the mass flow rate (the time derivative of mass) is also a scalar quantity. The change in mass is the amount that flows after crossing the boundary for some time duration, not the initial amount of mass at the boundary minus the final amount at the boundary, since the change in mass flowing through the area would be zero for steady flow. == Alternative equations == Mass flow rate can also be calculated by m ˙ = ρ ⋅ V ˙ = ρ ⋅ v ⋅ A = j m ⋅ A , {\displaystyle {\dot {m}}=\rho \cdot {\dot {V}}=\rho \cdot \mathbf {v} \cdot \mathbf {A} =\mathbf {j} _{\text{m}}\cdot \mathbf {A} ,} where The above equation is only true for a flat, plane area. In general, including cases where the area is curved, the equation becomes a surface integral: m ˙ = ∬ A ρ v ⋅ d A = ∬ A j m ⋅ d A . {\displaystyle {\dot {m}}=\iint _{A}\rho \mathbf {v} \cdot d\mathbf {A} =\iint _{A}\mathbf {j} _{\text{m}}\cdot d\mathbf {A} .} The area required to calculate the mass flow rate is real or imaginary, flat or curved, either as a cross-sectional area or a surface, e.g. for substances passing through a filter or a membrane, the real surface is the (generally curved) surface area of the filter, macroscopically - ignoring the area spanned by the holes in the filter/membrane. The spaces would be cross-sectional areas. For liquids passing through a pipe, the area is the cross-section of the pipe, at the section considered. The vector area is a combination of the magnitude of the area through which the mass passes through, A {\displaystyle A} , and a unit vector normal to the area, n ^ {\displaystyle \mathbf {\hat {n}} } . The relation is A = A n ^ {\displaystyle \mathbf {A} =A\mathbf {\hat {n}} } . The reason for the dot product is as follows. The only mass flowing through the cross-section is the amount normal to the area, i.e. parallel to the unit normal. This amount is m ˙ = ρ v A cos ⁡ θ , {\displaystyle {\dot {m}}=\rho vA\cos \theta ,} where θ {\displaystyle \theta } is the angle between the unit normal n ^ {\displaystyle \mathbf {\hat {n}} } and the velocity of mass elements. The amount passing through the cross-section is reduced by the factor cos ⁡ θ {\displaystyle \cos \theta } , as θ {\displaystyle \theta } increases less mass passes through. All mass which passes in tangential directions to the area, that is perpendicular to the unit normal, doesn't actually pass through the area, so the mass passing through the area is zero. This occurs when θ = π / 2 {\displaystyle \theta =\pi /2} : m ˙ = ρ v A cos ⁡ ( π / 2 ) = 0. 
{\displaystyle {\dot {m}}=\rho vA\cos(\pi /2)=0.} These results are equivalent to the equation containing the dot product. Sometimes these equations are used to define the mass flow rate. Considering flow through porous media, a special quantity, superficial mass flow rate, can be introduced. It is related with superficial velocity, v s {\displaystyle v_{s}} , with the following relationship: m ˙ s = v s ⋅ ρ = m ˙ / A {\displaystyle {\dot {m}}_{s}=v_{s}\cdot \rho ={\dot {m}}/A} The quantity can be used in particle Reynolds number or mass transfer coefficient calculation for fixed and fluidized bed systems. == Usage == In the elementary form of the continuity equation for mass, in hydrodynamics: ρ 1 v 1 ⋅ A 1 = ρ 2 v 2 ⋅ A 2 . {\displaystyle \rho _{1}\mathbf {v} _{1}\cdot \mathbf {A} _{1}=\rho _{2}\mathbf {v} _{2}\cdot \mathbf {A} _{2}.} In elementary classical mechanics, mass flow rate is encountered when dealing with objects of variable mass, such as a rocket ejecting spent fuel. Often, descriptions of such objects erroneously invoke Newton's second law F = d ( m v ) / d t {\displaystyle \mathbf {F} =d(m\mathbf {v} )/dt} by treating both the mass m {\displaystyle m} and the velocity v {\displaystyle \mathbf {v} } as time-dependent and then applying the derivative product rule. A correct description of such an object requires the application of Newton's second law to the entire, constant-mass system consisting of both the object and its ejected mass. Mass flow rate can be used to calculate the energy flow rate of a fluid: E ˙ = m ˙ e , {\displaystyle {\dot {E}}={\dot {m}}e,} where e {\displaystyle e} is the unit mass energy of a system. Energy flow rate has SI units of kilojoule per second or kilowatt. == See also == Continuity equation Fluid dynamics Mass flow controller Mass flow meter Mass flux Orifice plate Standard cubic centimetres per minute Thermal mass flow meter Volumetric flow rate == Notes == == References ==
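As a minimal numerical illustration of the relations above (ṁ = ρvA for flow normal to a flat cross-section, and the energy flow rate Ė = ṁe), here is a short sketch; all input values are illustrative assumptions.

```python
import math

# Mass flow rate of water through a circular pipe, flow normal to the cross-section.
# All numbers are illustrative assumptions.
rho = 1000.0                    # density of water, kg/m^3
v = 2.0                         # flow speed, m/s
diameter = 0.05                 # pipe diameter, m

A = math.pi * (diameter / 2) ** 2      # cross-sectional area, m^2
mdot = rho * v * A                     # mass flow rate, kg/s (theta = 0, so cos(theta) = 1)
print(f"mdot = {mdot:.3f} kg/s")       # ~3.93 kg/s

# Energy flow rate for a given energy per unit mass e
# (here, an assumed specific heat times a 10 K temperature rise).
e = 4186.0 * 10.0                      # J/kg
print(f"Edot = {mdot * e / 1e3:.1f} kW")   # ~164 kW
```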
Wikipedia/Mass_flow_rate
The symmetry of a physical system is a physical or mathematical feature of the system (observed or intrinsic) that is preserved or remains unchanged under some transformation. A family of particular transformations may be continuous (such as rotation of a circle) or discrete (e.g., reflection of a bilaterally symmetric figure, or rotation of a regular polygon). Continuous and discrete transformations give rise to corresponding types of symmetries. Continuous symmetries can be described by Lie groups while discrete symmetries are described by finite groups (see Symmetry group). These two concepts, Lie and finite groups, are the foundation for the fundamental theories of modern physics. Symmetries are frequently amenable to mathematical formulations such as group representations and can, in addition, be exploited to simplify many problems. Arguably the most important example of a symmetry in physics is that the speed of light has the same value in all frames of reference, which is described in special relativity by a group of transformations of the spacetime known as the Poincaré group. Another important example is the invariance of the form of physical laws under arbitrary differentiable coordinate transformations, which is an important idea in general relativity. == As a kind of invariance == Invariance is specified mathematically by transformations that leave some property (e.g. quantity) unchanged. This idea can apply to basic real-world observations. For example, temperature may be homogeneous throughout a room. Since the temperature does not depend on the position of an observer within the room, we say that the temperature is invariant under a shift in an observer's position within the room. Similarly, a uniform sphere rotated about its center will appear exactly as it did before the rotation. The sphere is said to exhibit spherical symmetry. A rotation about any axis of the sphere will preserve the shape of its surface from any given vantage point. === Invariance in force === The above ideas lead to the useful idea of invariance when discussing observed physical symmetry; this can be applied to symmetries in forces as well. For example, an electric field due to an electrically charged wire of infinite length is said to exhibit cylindrical symmetry, because the electric field strength at a given distance r from the wire will have the same magnitude at each point on the surface of a cylinder (whose axis is the wire) with radius r. Rotating the wire about its own axis does not change its position or charge density, hence it will preserve the field. The field strength at a rotated position is the same. This is not true in general for an arbitrary system of charges. In Newton's theory of mechanics, given two bodies, each with mass m, starting at the origin and moving along the x-axis in opposite directions, one with speed v1 and the other with speed v2 the total kinetic energy of the system (as calculated from an observer at the origin) is ⁠1/2⁠m(v12 + v22) and remains the same if the velocities are interchanged. The total kinetic energy is preserved under a reflection in the y-axis. The last example above illustrates another way of expressing symmetries, namely through the equations that describe some aspect of the physical system. The above example shows that the total kinetic energy will be the same if v1 and v2 are interchanged. == Local and global == Symmetries may be broadly classified as global or local. 
A global symmetry is one that keeps a property invariant for a transformation that is applied simultaneously at all points of spacetime, whereas a local symmetry is one that keeps a property invariant when a possibly different symmetry transformation is applied at each point of spacetime; specifically a local symmetry transformation is parameterised by the spacetime coordinates, whereas a global symmetry is not. This implies that a global symmetry is also a local symmetry. Local symmetries play an important role in physics as they form the basis for gauge theories. == Continuous == The two examples of rotational symmetry described above – spherical and cylindrical – are each instances of continuous symmetry. These are characterised by invariance following a continuous change in the geometry of the system. For example, the wire may be rotated through any angle about its axis and the field strength will be the same on a given cylinder. Mathematically, continuous symmetries are described by transformations that change continuously as a function of their parameterization. An important subclass of continuous symmetries in physics are spacetime symmetries. === Spacetime === Continuous spacetime symmetries are symmetries involving transformations of space and time. These may be further classified as spatial symmetries, involving only the spatial geometry associated with a physical system; temporal symmetries, involving only changes in time; or spatio-temporal symmetries, involving changes in both space and time. Time translation: A physical system may have the same features over a certain interval of time Δt; this is expressed mathematically as invariance under the transformation t → t + a for any real parameters t and t + a in the interval. For example, in classical mechanics, a particle solely acted upon by gravity will have gravitational potential energy mgh when suspended from a height h above the Earth's surface. Assuming no change in the height of the particle, this will be the total gravitational potential energy of the particle at all times. In other words, by considering the state of the particle at some time t0 and also at t0 + a, the particle's total gravitational potential energy will be preserved. Spatial translation: These spatial symmetries are represented by transformations of the form r→ → r→ + a→ and describe those situations where a property of the system does not change with a continuous change in location. For example, the temperature in a room may be independent of where the thermometer is located in the room. Spatial rotation: These spatial symmetries are classified as proper rotations and improper rotations. The former are just the 'ordinary' rotations; mathematically, they are represented by square matrices with unit determinant. The latter are represented by square matrices with determinant −1 and consist of a proper rotation combined with a spatial reflection (inversion). For example, a sphere has proper rotational symmetry. Other types of spatial rotations are described in the article Rotation symmetry. Poincaré transformations: These are spatio-temporal symmetries which preserve distances in Minkowski spacetime, i.e. they are isometries of Minkowski space. They are studied primarily in special relativity. Those isometries that leave the origin fixed are called Lorentz transformations and give rise to the symmetry known as Lorentz covariance. Projective symmetries: These are spatio-temporal symmetries which preserve the geodesic structure of spacetime. 
They may be defined on any smooth manifold, but find many applications in the study of exact solutions in general relativity. Inversion transformations: These are spatio-temporal symmetries which generalise Poincaré transformations to include other conformal one-to-one transformations on the space-time coordinates. Lengths are not invariant under inversion transformations but there is a cross-ratio on four points that is invariant. Mathematically, spacetime symmetries are usually described by smooth vector fields on a smooth manifold. The underlying local diffeomorphisms associated with the vector fields correspond more directly to the physical symmetries, but the vector fields themselves are more often used when classifying the symmetries of the physical system. Some of the most important vector fields are Killing vector fields which are those spacetime symmetries that preserve the underlying metric structure of a manifold. In rough terms, Killing vector fields preserve the distance between any two points of the manifold and often go by the name of isometries. == Discrete == A discrete symmetry is a symmetry that describes non-continuous changes in a system. For example, a square possesses discrete rotational symmetry, as only rotations by multiples of right angles will preserve the square's original appearance. Discrete symmetries sometimes involve some type of 'swapping', these swaps usually being called reflections or interchanges. Time reversal: Many laws of physics describe real phenomena when the direction of time is reversed. Mathematically, this is represented by the transformation, t → − t {\displaystyle t\,\rightarrow -t} . For example, Newton's second law of motion still holds if, in the equation F = m r ¨ {\displaystyle F\,=m{\ddot {r}}} , t {\displaystyle t} is replaced by − t {\displaystyle -t} . This may be illustrated by recording the motion of an object thrown up vertically (neglecting air resistance) and then playing it back. The object will follow the same parabolic trajectory through the air, whether the recording is played normally or in reverse. Thus, position is symmetric with respect to the instant that the object is at its maximum height. Spatial inversion: These are represented by transformations of the form r → → − r → {\displaystyle {\vec {r}}\,\rightarrow -{\vec {r}}} and indicate an invariance property of a system when the coordinates are 'inverted'. Stated another way, these are symmetries between a certain object and its mirror image. Glide reflection: These are represented by a composition of a translation and a reflection. These symmetries occur in some crystals and in some planar symmetries, known as wallpaper symmetries. === C, P, and T === The Standard Model of particle physics has three related natural near-symmetries. These state that the universe in which we live should be indistinguishable from one where a certain type of change is introduced. C-symmetry (charge symmetry), a universe where every particle is replaced with its antiparticle. P-symmetry (parity symmetry), a universe where everything is mirrored along the three physical axes. This excludes weak interactions as demonstrated by Chien-Shiung Wu. T-symmetry (time reversal symmetry), a universe where the direction of time is reversed. T-symmetry is counterintuitive (the future and the past are not symmetrical) but explained by the fact that the Standard Model describes local properties, not global ones like entropy. 
To properly reverse the direction of time, one would have to put the Big Bang and the resulting low-entropy state in the "future". Since we perceive the "past" ("future") as having lower (higher) entropy than the present, the inhabitants of this hypothetical time-reversed universe would perceive the future in the same way as we perceive the past, and vice versa. These symmetries are near-symmetries because each is broken in the present-day universe. However, the Standard Model predicts that the combination of the three (that is, the simultaneous application of all three transformations) must be a symmetry, called CPT symmetry. CP violation, the violation of the combination of C- and P-symmetry, is necessary for the presence of significant amounts of baryonic matter in the universe. CP violation is a fruitful area of current research in particle physics. === Supersymmetry === A type of symmetry known as supersymmetry has been used to try to make theoretical advances in the Standard Model. Supersymmetry is based on the idea that there is another physical symmetry beyond those already developed in the Standard Model, specifically a symmetry between bosons and fermions. Supersymmetry asserts that each type of boson has, as a supersymmetric partner, a fermion, called a superpartner, and vice versa. Supersymmetry has not yet been experimentally verified: no known particle has the correct properties to be a superpartner of any other known particle. Currently LHC is preparing for a run which tests supersymmetry. == Generalized symmetries == Generalized symmetries encompass a number of recently recognized generalizations of the concept of a global symmetry. These include higher form symmetries, higher group symmetries, non-invertible symmetries, and subsystem symmetries. == Mathematics of physical symmetry == The transformations describing physical symmetries typically form a mathematical group. Group theory is an important area of mathematics for physicists. Continuous symmetries are specified mathematically by continuous groups (called Lie groups). Many physical symmetries are isometries and are specified by symmetry groups. Sometimes this term is used for more general types of symmetries. The set of all proper rotations (about any angle) through any axis of a sphere form a Lie group called the special orthogonal group SO(3). (The '3' refers to the three-dimensional space of an ordinary sphere.) Thus, the symmetry group of the sphere with proper rotations is SO(3). Any rotation preserves distances on the surface of the ball. The set of all Lorentz transformations form a group called the Lorentz group (this may be generalised to the Poincaré group). Discrete groups describe discrete symmetries. For example, the symmetries of an equilateral triangle are characterized by the symmetric group S3. A type of physical theory based on local symmetries is called a gauge theory and the symmetries natural to such a theory are called gauge symmetries. Gauge symmetries in the Standard Model, used to describe three of the fundamental interactions, are based on the SU(3) × SU(2) × U(1) group. (Roughly speaking, the symmetries of the SU(3) group describe the strong force, the SU(2) group describes the weak interaction and the U(1) group describes the electromagnetic force.) 
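As a concrete illustration of the SO(3) example above, the following sketch (assuming NumPy is available) constructs a proper rotation, checks the defining properties of the group (orthogonality and unit determinant), and verifies that the rotation preserves the distance between two points.

```python
import numpy as np

def rotation_z(theta):
    """Proper rotation about the z-axis by angle theta (an element of SO(3))."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

R = rotation_z(0.83)

# Defining properties of SO(3): R^T R = I and det R = +1.
print(np.allclose(R.T @ R, np.eye(3)))    # True
print(np.isclose(np.linalg.det(R), 1.0))  # True

# Invariance: the distance between two points is preserved by the rotation.
p, q = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.6, 0.8])
print(np.isclose(np.linalg.norm(p - q), np.linalg.norm(R @ p - R @ q)))  # True
```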
Also, the reduction by symmetry of the energy functional under the action by a group and spontaneous symmetry breaking of transformations of symmetric groups appear to elucidate topics in particle physics (for example, the unification of electromagnetism and the weak force in physical cosmology). === Conservation laws and symmetry === The symmetry properties of a physical system are intimately related to the conservation laws characterizing that system. Noether's theorem gives a precise description of this relation. The theorem states that each continuous symmetry of a physical system implies that some physical property of that system is conserved. Conversely, each conserved quantity has a corresponding symmetry. For example, spatial translation symmetry (i.e. homogeneity of space) gives rise to conservation of (linear) momentum, and temporal translation symmetry (i.e. homogeneity of time) gives rise to conservation of energy. The following table summarizes some fundamental symmetries and the associated conserved quantity. == Mathematics == Continuous symmetries in physics preserve transformations. One can specify a symmetry by showing how a very small transformation affects various particle fields. The commutator of two of these infinitesimal transformations is equivalent to a third infinitesimal transformation of the same kind hence they form a Lie algebra. A general coordinate transformation described as the general field h ( x ) {\displaystyle h(x)} (also known as a diffeomorphism) has the infinitesimal effect on a scalar ϕ ( x ) {\displaystyle \phi (x)} , spinor ψ ( x ) {\displaystyle \psi (x)} or vector field A ( x ) {\displaystyle A(x)} that can be expressed (using the Einstein summation convention): δ ϕ ( x ) = h μ ( x ) ∂ μ ϕ ( x ) {\displaystyle \delta \phi (x)=h^{\mu }(x)\partial _{\mu }\phi (x)} δ ψ α ( x ) = h μ ( x ) ∂ μ ψ α ( x ) + ∂ μ h ν ( x ) σ μ ν α β ψ β ( x ) {\displaystyle \delta \psi ^{\alpha }(x)=h^{\mu }(x)\partial _{\mu }\psi ^{\alpha }(x)+\partial _{\mu }h_{\nu }(x)\sigma _{\mu \nu }^{\alpha \beta }\psi ^{\beta }(x)} δ A μ ( x ) = h ν ( x ) ∂ ν A μ ( x ) + A ν ( x ) ∂ μ h ν ( x ) {\displaystyle \delta A_{\mu }(x)=h^{\nu }(x)\partial _{\nu }A_{\mu }(x)+A_{\nu }(x)\partial _{\mu }h^{\nu }(x)} Without gravity only the Poincaré symmetries are preserved which restricts h ( x ) {\displaystyle h(x)} to be of the form: h μ ( x ) = M μ ν x ν + P μ {\displaystyle h^{\mu }(x)=M^{\mu \nu }x_{\nu }+P^{\mu }} where M is an antisymmetric matrix (giving the Lorentz and rotational symmetries) and P is a general vector (giving the translational symmetries). Other symmetries affect multiple fields simultaneously. For example, local gauge transformations apply to both a vector and spinor field: δ ψ α ( x ) = λ ( x ) . τ α β ψ β ( x ) {\displaystyle \delta \psi ^{\alpha }(x)=\lambda (x).\tau ^{\alpha \beta }\psi ^{\beta }(x)} δ A μ ( x ) = ∂ μ λ ( x ) , {\displaystyle \delta A_{\mu }(x)=\partial _{\mu }\lambda (x),} where τ {\displaystyle \tau } are generators of a particular Lie group. So far the transformations on the right have only included fields of the same type. Supersymmetries are defined according to how the mix fields of different types. 
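The statement earlier in this section, that the commutator of two infinitesimal transformations is another infinitesimal transformation of the same kind so that they form a Lie algebra, can be checked directly for rotations. The sketch below (assuming NumPy is available) verifies the so(3) commutation relations for the standard rotation generators; the matrices are the usual textbook choice, not taken from the text.

```python
import numpy as np

# Generators of infinitesimal rotations in 3D (a basis of the Lie algebra so(3)).
Jx = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
Jy = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
Jz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

def commutator(a, b):
    return a @ b - b @ a

# The commutator of two infinitesimal rotations is again an infinitesimal rotation:
# [Jx, Jy] = Jz and cyclic permutations, i.e. the closure property of a Lie algebra.
print(np.allclose(commutator(Jx, Jy), Jz))  # True
print(np.allclose(commutator(Jy, Jz), Jx))  # True
print(np.allclose(commutator(Jz, Jx), Jy))  # True
```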
Another symmetry which is part of some theories of physics and not in others is scale invariance, which involves Weyl transformations of the following kind: δ ϕ ( x ) = Ω ( x ) ϕ ( x ) {\displaystyle \delta \phi (x)=\Omega (x)\phi (x)} If the fields have this symmetry then it can be shown that the field theory is almost certainly conformally invariant as well. This means that in the absence of gravity h(x) would be restricted to the form: h μ ( x ) = M μ ν x ν + P μ + D x μ + K μ | x | 2 − 2 K ν x ν x μ , {\displaystyle h^{\mu }(x)=M^{\mu \nu }x_{\nu }+P^{\mu }+Dx_{\mu }+K^{\mu }|x|^{2}-2K^{\nu }x_{\nu }x_{\mu },} with D generating scale transformations and K generating special conformal transformations. For example, N = 4 supersymmetric Yang–Mills theory has this symmetry, while general relativity does not, although other theories of gravity such as conformal gravity do. The 'action' of a field theory is invariant under all the symmetries of the theory. Much of modern theoretical physics is concerned with speculating on the various symmetries the Universe may have and finding the invariants needed to construct field theories as models. In string theories, since a string can be decomposed into an infinite number of particle fields, the symmetries on the string world sheet are equivalent to special transformations which mix an infinite number of fields. == See also == == References == === General readers === === Technical readers === == External links == The Feynman Lectures on Physics Vol. I Ch. 52: Symmetry in Physical Laws Stanford Encyclopedia of Philosophy: "Symmetry"—by K. Brading and E. Castellani. Pedagogic Aids to Quantum Field Theory Click on link to Chapter 6: Symmetry, Invariance, and Conservation for a simplified, step-by-step introduction to symmetry in physics.
Wikipedia/Symmetry_in_physics
In the special theory of relativity, four-force is a four-vector that replaces the classical force. == In special relativity == The four-force is defined as the rate of change in the four-momentum of a particle with respect to the particle's proper time. Hence,: F = d P d τ . {\displaystyle \mathbf {F} ={\mathrm {d} \mathbf {P} \over \mathrm {d} \tau }.} For a particle of constant invariant mass m > 0 {\displaystyle m>0} , the four-momentum is given by the relation P = m U {\displaystyle \mathbf {P} =m\mathbf {U} } , where U = γ ( c , u ) {\displaystyle \mathbf {U} =\gamma (c,\mathbf {u} )} is the four-velocity. In analogy to Newton's second law, we can also relate the four-force to the four-acceleration, A {\displaystyle \mathbf {A} } , by equation: F = m A = ( γ f ⋅ u c , γ f ) . {\displaystyle \mathbf {F} =m\mathbf {A} =\left(\gamma {\mathbf {f} \cdot \mathbf {u} \over c},\gamma {\mathbf {f} }\right).} Here f = d d t ( γ m u ) = d p d t {\displaystyle {\mathbf {f} }={\mathrm {d} \over \mathrm {d} t}\left(\gamma m{\mathbf {u} }\right)={\mathrm {d} \mathbf {p} \over \mathrm {d} t}} and f ⋅ u = d d t ( γ m c 2 ) = d E d t . {\displaystyle {\mathbf {f} \cdot \mathbf {u} }={\mathrm {d} \over \mathrm {d} t}\left(\gamma mc^{2}\right)={\mathrm {d} E \over \mathrm {d} t}.} where u {\displaystyle \mathbf {u} } , p {\displaystyle \mathbf {p} } and f {\displaystyle \mathbf {f} } are 3-space vectors describing the velocity, the momentum of the particle and the force acting on it respectively; and E {\displaystyle E} is the total energy of the particle. == Including thermodynamic interactions == From the formulae of the previous section it appears that the time component of the four-force is the power expended, f ⋅ u {\displaystyle \mathbf {f} \cdot \mathbf {u} } , apart from relativistic corrections γ / c {\displaystyle \gamma /c} . This is only true in purely mechanical situations, when heat exchanges vanish or can be neglected. In the full thermo-mechanical case, not only work, but also heat contributes to the change in energy, which is the time component of the energy–momentum covector. The time component of the four-force includes in this case a heating rate h {\displaystyle h} , besides the power f ⋅ u {\displaystyle \mathbf {f} \cdot \mathbf {u} } . Note that work and heat cannot be meaningfully separated, though, as they both carry inertia. This fact extends also to contact forces, that is, to the stress–energy–momentum tensor. Therefore, in thermo-mechanical situations the time component of the four-force is not proportional to the power f ⋅ u {\displaystyle \mathbf {f} \cdot \mathbf {u} } but has a more generic expression, to be given case by case, which represents the supply of internal energy from the combination of work and heat, and which in the Newtonian limit becomes h + f ⋅ u {\displaystyle h+\mathbf {f} \cdot \mathbf {u} } . == In general relativity == In general relativity the relation between four-force, and four-acceleration remains the same, but the elements of the four-force are related to the elements of the four-momentum through a covariant derivative with respect to proper time. F λ := D P λ d τ = d P λ d τ + Γ λ μ ν U μ P ν {\displaystyle F^{\lambda }:={\frac {DP^{\lambda }}{d\tau }}={\frac {dP^{\lambda }}{d\tau }}+\Gamma ^{\lambda }{}_{\mu \nu }U^{\mu }P^{\nu }} In addition, we can formulate force using the concept of coordinate transformations between different coordinate systems. 
Assume that we know the correct expression for force in a coordinate system in which the particle is momentarily at rest. Then we can perform a transformation to another system to get the corresponding expression of force. In special relativity the transformation will be a Lorentz transformation between coordinate systems moving with a relative constant velocity whereas in general relativity it will be a general coordinate transformation. Consider the four-force F μ = ( F 0 , F ) {\displaystyle F^{\mu }=(F^{0},\mathbf {F} )} acting on a particle of mass m {\displaystyle m} which is momentarily at rest in a coordinate system. The relativistic force f μ {\displaystyle f^{\mu }} in another coordinate system, moving with constant velocity v {\displaystyle v} relative to the first, is obtained using a Lorentz transformation: f = F + ( γ − 1 ) v v ⋅ F v 2 , f 0 = γ β ⋅ F = β ⋅ f . {\displaystyle {\begin{aligned}\mathbf {f} &=\mathbf {F} +(\gamma -1)\mathbf {v} {\mathbf {v} \cdot \mathbf {F} \over v^{2}},\\f^{0}&=\gamma {\boldsymbol {\beta }}\cdot \mathbf {F} ={\boldsymbol {\beta }}\cdot \mathbf {f} .\end{aligned}}} where β = v / c {\displaystyle {\boldsymbol {\beta }}=\mathbf {v} /c} . In general relativity, the expression for force becomes f μ = m D U μ d τ {\displaystyle f^{\mu }=m{DU^{\mu } \over d\tau }} with covariant derivative D / d τ {\displaystyle D/d\tau } . The equation of motion becomes m d 2 x μ d τ 2 = f μ − m Γ ν λ μ d x ν d τ d x λ d τ , {\displaystyle m{d^{2}x^{\mu } \over d\tau ^{2}}=f^{\mu }-m\Gamma _{\nu \lambda }^{\mu }{dx^{\nu } \over d\tau }{dx^{\lambda } \over d\tau },} where Γ ν λ μ {\displaystyle \Gamma _{\nu \lambda }^{\mu }} is the Christoffel symbol. If there is no external force, this becomes the equation for geodesics in the curved space-time. The second term in the above equation plays the role of a gravitational force. If f f α {\displaystyle f_{f}^{\alpha }} is the correct expression for force in a freely falling frame ξ α {\displaystyle \xi ^{\alpha }} , we can then use the equivalence principle to write the four-force in arbitrary coordinates x μ {\displaystyle x^{\mu }} : f μ = ∂ x μ ∂ ξ α f f α . {\displaystyle f^{\mu }={\partial x^{\mu } \over \partial \xi ^{\alpha }}f_{f}^{\alpha }.} == Examples == In special relativity, Lorentz four-force (four-force acting on a charged particle situated in an electromagnetic field) can be expressed as: f μ = q F μ ν U ν , {\displaystyle f_{\mu }=qF_{\mu \nu }U^{\nu },} where F μ ν {\displaystyle F_{\mu \nu }} is the electromagnetic tensor, U ν {\displaystyle U^{\nu }} is the four-velocity, and q {\displaystyle q} is the electric charge. == See also == four-vector four-velocity four-acceleration four-momentum four-gradient == References == Rindler, Wolfgang (1991). Introduction to Special Relativity (2nd ed.). Oxford: Oxford University Press. ISBN 0-19-853953-3.
Wikipedia/Four-force
In mechanics, strain is defined as relative deformation, compared to a reference position configuration. Different equivalent choices may be made for the expression of a strain field depending on whether it is defined with respect to the initial or the final configuration of the body and on whether the metric tensor or its dual is considered. Strain has dimension of a length ratio, with SI base units of meter per meter (m/m). Hence strains are dimensionless and are usually expressed as a decimal fraction or a percentage. Parts-per notation is also used, e.g., parts per million or parts per billion (sometimes called "microstrains" and "nanostrains", respectively), corresponding to μm/m and nm/m. Strain can be formulated as the spatial derivative of displacement: ε ≐ ∂ ∂ X ( x − X ) = F ′ − I , {\displaystyle {\boldsymbol {\varepsilon }}\doteq {\cfrac {\partial }{\partial \mathbf {X} }}\left(\mathbf {x} -\mathbf {X} \right)={\boldsymbol {F}}'-{\boldsymbol {I}},} where I is the identity tensor. The displacement of a body may be expressed in the form x = F(X), where X is the reference position of material points of the body; displacement has units of length and does not distinguish between rigid body motions (translations and rotations) and deformations (changes in shape and size) of the body. The spatial derivative of a uniform translation is zero, thus strains measure how much a given displacement differs locally from a rigid-body motion. A strain is in general a tensor quantity. Physical insight into strains can be gained by observing that a given strain can be decomposed into normal and shear components. The amount of stretch or compression along material line elements or fibers is the normal strain, and the amount of distortion associated with the sliding of plane layers over each other is the shear strain, within a deforming body. This could be applied by elongation, shortening, or volume changes, or angular distortion. The state of strain at a material point of a continuum body is defined as the totality of all the changes in length of material lines or fibers, the normal strain, which pass through that point and also the totality of all the changes in the angle between pairs of lines initially perpendicular to each other, the shear strain, radiating from this point. However, it is sufficient to know the normal and shear components of strain on a set of three mutually perpendicular directions. If there is an increase in length of the material line, the normal strain is called tensile strain; otherwise, if there is reduction or compression in the length of the material line, it is called compressive strain. == Strain regimes == Depending on the amount of strain, or local deformation, the analysis of deformation is subdivided into three deformation theories: Finite strain theory, also called large strain theory, large deformation theory, deals with deformations in which both rotations and strains are arbitrarily large. In this case, the undeformed and deformed configurations of the continuum are significantly different and a clear distinction has to be made between them. This is commonly the case with elastomers, plastically-deforming materials and other fluids and biological soft tissue. Infinitesimal strain theory, also called small strain theory, small deformation theory, small displacement theory, or small displacement-gradient theory where strains and rotations are both small. In this case, the undeformed and deformed configurations of the body can be assumed identical. 
The infinitesimal strain theory is used in the analysis of deformations of materials exhibiting elastic behavior, such as materials found in mechanical and civil engineering applications, e.g. concrete and steel. Large-displacement or large-rotation theory, which assumes small strains but large rotations and displacements. == Strain measures == In each of these theories the strain is then defined differently. The engineering strain is the most common definition applied to materials used in mechanical and structural engineering, which are subjected to very small deformations. On the other hand, for some materials, e.g., elastomers and polymers, subjected to large deformations, the engineering definition of strain is not applicable, e.g. typical engineering strains greater than 1%; thus other more complex definitions of strain are required, such as stretch, logarithmic strain, Green strain, and Almansi strain. === Engineering strain === Engineering strain, also known as Cauchy strain, is expressed as the ratio of total deformation to the initial dimension of the material body on which forces are applied. In the case of a material line element or fiber axially loaded, its elongation gives rise to an engineering normal strain or engineering extensional strain e, which equals the relative elongation or the change in length ΔL per unit of the original length L of the line element or fibers (in meters per meter). The normal strain is positive if the material fibers are stretched and negative if they are compressed. Thus, we have e = Δ L L = l − L L {\displaystyle e={\frac {\Delta L}{L}}={\frac {l-L}{L}}} , where e is the engineering normal strain, L is the original length of the fiber and l is the final length of the fiber. The true shear strain is defined as the change in the angle (in radians) between two material line elements initially perpendicular to each other in the undeformed or initial configuration. The engineering shear strain is defined as the tangent of that angle, and is equal to the length of deformation at its maximum divided by the perpendicular length in the plane of force application, which sometimes makes it easier to calculate. === Stretch ratio === The stretch ratio or extension ratio (symbol λ) is an alternative measure related to the extensional or normal strain of an axially loaded differential line element. It is defined as the ratio between the final length l and the initial length L of the material line. λ = l L {\displaystyle \lambda ={\frac {l}{L}}} The extension ratio λ is related to the engineering strain e by e = λ − 1 {\displaystyle e=\lambda -1} This equation implies that when the normal strain is zero, so that there is no deformation, the stretch ratio is equal to unity. The stretch ratio is used in the analysis of materials that exhibit large deformations, such as elastomers, which can sustain stretch ratios of 3 or 4 before they fail. On the other hand, traditional engineering materials, such as concrete or steel, fail at much lower stretch ratios. === Logarithmic strain === The logarithmic strain ε, also called, true strain or Hencky strain. 
Considering an incremental strain (Ludwik) δ ε = δ l l {\displaystyle \delta \varepsilon ={\frac {\delta l}{l}}} the logarithmic strain is obtained by integrating this incremental strain: ∫ δ ε = ∫ L l δ l l ε = ln ⁡ ( l L ) = ln ⁡ ( λ ) = ln ⁡ ( 1 + e ) = e − e 2 2 + e 3 3 − ⋯ {\displaystyle {\begin{aligned}\int \delta \varepsilon &=\int _{L}^{l}{\frac {\delta l}{l}}\\\varepsilon &=\ln \left({\frac {l}{L}}\right)=\ln(\lambda )\\&=\ln(1+e)\\&=e-{\frac {e^{2}}{2}}+{\frac {e^{3}}{3}}-\cdots \end{aligned}}} where e is the engineering strain. The logarithmic strain provides the correct measure of the final strain when deformation takes place in a series of increments, taking into account the influence of the strain path. === Green strain === The Green strain is defined as: ε G = 1 2 ( l 2 − L 2 L 2 ) = 1 2 ( λ 2 − 1 ) {\displaystyle \varepsilon _{G}={\tfrac {1}{2}}\left({\frac {l^{2}-L^{2}}{L^{2}}}\right)={\tfrac {1}{2}}(\lambda ^{2}-1)} === Almansi strain === The Euler-Almansi strain is defined as ε E = 1 2 ( l 2 − L 2 l 2 ) = 1 2 ( 1 − 1 λ 2 ) {\displaystyle \varepsilon _{E}={\tfrac {1}{2}}\left({\frac {l^{2}-L^{2}}{l^{2}}}\right)={\tfrac {1}{2}}\left(1-{\frac {1}{\lambda ^{2}}}\right)} == Strain tensor == The (infinitesimal) strain tensor (symbol ε {\displaystyle {\boldsymbol {\varepsilon }}} ) is defined in the International System of Quantities (ISQ), more specifically in ISO 80000-4 (Mechanics), as a "tensor quantity representing the deformation of matter caused by stress. Strain tensor is symmetric and has three linear strain and three shear strain (Cartesian) components." ISO 80000-4 further defines linear strain as the "quotient of change in length of an object and its length" and shear strain as the "quotient of parallel displacement of two surfaces of a layer and the thickness of the layer". Thus, strains are classified as either normal or shear. A normal strain is perpendicular to the face of an element, and a shear strain is parallel to it. These definitions are consistent with those of normal stress and shear stress. The strain tensor can then be expressed in terms of normal and shear components as: ε _ _ = [ ε x x ε x y ε x z ε y x ε y y ε y z ε z x ε z y ε z z ] = [ ε x x 1 2 γ x y 1 2 γ x z 1 2 γ y x ε y y 1 2 γ y z 1 2 γ z x 1 2 γ z y ε z z ] {\displaystyle {\underline {\underline {\boldsymbol {\varepsilon }}}}={\begin{bmatrix}\varepsilon _{xx}&\varepsilon _{xy}&\varepsilon _{xz}\\\varepsilon _{yx}&\varepsilon _{yy}&\varepsilon _{yz}\\\varepsilon _{zx}&\varepsilon _{zy}&\varepsilon _{zz}\\\end{bmatrix}}={\begin{bmatrix}\varepsilon _{xx}&{\tfrac {1}{2}}\gamma _{xy}&{\tfrac {1}{2}}\gamma _{xz}\\{\tfrac {1}{2}}\gamma _{yx}&\varepsilon _{yy}&{\tfrac {1}{2}}\gamma _{yz}\\{\tfrac {1}{2}}\gamma _{zx}&{\tfrac {1}{2}}\gamma _{zy}&\varepsilon _{zz}\\\end{bmatrix}}} === Geometric setting === Consider a two-dimensional, infinitesimal, rectangular material element with dimensions dx × dy, which, after deformation, takes the form of a rhombus. The deformation is described by the displacement field u. 
From the geometry of the adjacent figure we have l e n g t h ( A B ) = d x {\displaystyle \mathrm {length} (AB)=dx} and l e n g t h ( a b ) = ( d x + ∂ u x ∂ x d x ) 2 + ( ∂ u y ∂ x d x ) 2 = d x 2 ( 1 + ∂ u x ∂ x ) 2 + d x 2 ( ∂ u y ∂ x ) 2 = d x ( 1 + ∂ u x ∂ x ) 2 + ( ∂ u y ∂ x ) 2 {\displaystyle {\begin{aligned}\mathrm {length} (ab)&={\sqrt {\left(dx+{\frac {\partial u_{x}}{\partial x}}dx\right)^{2}+\left({\frac {\partial u_{y}}{\partial x}}dx\right)^{2}}}\\&={\sqrt {dx^{2}\left(1+{\frac {\partial u_{x}}{\partial x}}\right)^{2}+dx^{2}\left({\frac {\partial u_{y}}{\partial x}}\right)^{2}}}\\&=dx~{\sqrt {\left(1+{\frac {\partial u_{x}}{\partial x}}\right)^{2}+\left({\frac {\partial u_{y}}{\partial x}}\right)^{2}}}\end{aligned}}} For very small displacement gradients the squares of the derivative of u y {\displaystyle u_{y}} and u x {\displaystyle u_{x}} are negligible and we have l e n g t h ( a b ) ≈ d x ( 1 + ∂ u x ∂ x ) = d x + ∂ u x ∂ x d x {\displaystyle \mathrm {length} (ab)\approx dx\left(1+{\frac {\partial u_{x}}{\partial x}}\right)=dx+{\frac {\partial u_{x}}{\partial x}}dx} === Normal strain === For an isotropic material that obeys Hooke's law, a normal stress will cause a normal strain. Normal strains produce dilations. The normal strain in the x-direction of the rectangular element is defined by ε x = extension original length = l e n g t h ( a b ) − l e n g t h ( A B ) l e n g t h ( A B ) = ∂ u x ∂ x {\displaystyle \varepsilon _{x}={\frac {\text{extension}}{\text{original length}}}={\frac {\mathrm {length} (ab)-\mathrm {length} (AB)}{\mathrm {length} (AB)}}={\frac {\partial u_{x}}{\partial x}}} Similarly, the normal strain in the y- and z-directions becomes ε y = ∂ u y ∂ y , ε z = ∂ u z ∂ z {\displaystyle \varepsilon _{y}={\frac {\partial u_{y}}{\partial y}}\quad ,\qquad \varepsilon _{z}={\frac {\partial u_{z}}{\partial z}}} === Shear strain === The engineering shear strain (γxy) is defined as the change in angle between lines AC and AB. Therefore, γ x y = α + β {\displaystyle \gamma _{xy}=\alpha +\beta } From the geometry of the figure, we have tan ⁡ α = ∂ u y ∂ x d x d x + ∂ u x ∂ x d x = ∂ u y ∂ x 1 + ∂ u x ∂ x tan ⁡ β = ∂ u x ∂ y d y d y + ∂ u y ∂ y d y = ∂ u x ∂ y 1 + ∂ u y ∂ y {\displaystyle {\begin{aligned}\tan \alpha &={\frac {{\tfrac {\partial u_{y}}{\partial x}}dx}{dx+{\tfrac {\partial u_{x}}{\partial x}}dx}}={\frac {\tfrac {\partial u_{y}}{\partial x}}{1+{\tfrac {\partial u_{x}}{\partial x}}}}\\\tan \beta &={\frac {{\tfrac {\partial u_{x}}{\partial y}}dy}{dy+{\tfrac {\partial u_{y}}{\partial y}}dy}}={\frac {\tfrac {\partial u_{x}}{\partial y}}{1+{\tfrac {\partial u_{y}}{\partial y}}}}\end{aligned}}} For small displacement gradients we have ∂ u x ∂ x ≪ 1 ; ∂ u y ∂ y ≪ 1 {\displaystyle {\frac {\partial u_{x}}{\partial x}}\ll 1~;~~{\frac {\partial u_{y}}{\partial y}}\ll 1} For small rotations, i.e. α and β are ≪ 1 we have tan α ≈ α, tan β ≈ β. Therefore, α ≈ ∂ u y ∂ x ; β ≈ ∂ u x ∂ y {\displaystyle \alpha \approx {\frac {\partial u_{y}}{\partial x}}~;~~\beta \approx {\frac {\partial u_{x}}{\partial y}}} thus γ x y = α + β = ∂ u y ∂ x + ∂ u x ∂ y {\displaystyle \gamma _{xy}=\alpha +\beta ={\frac {\partial u_{y}}{\partial x}}+{\frac {\partial u_{x}}{\partial y}}} By interchanging x and y and ux and uy, it can be shown that γxy = γyx. 
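As an illustration of the small-strain formulas just derived, the following sketch (Python with sympy) evaluates the normal strains and the engineering shear strain for a simple hypothetical two-dimensional displacement field; the coefficients a, b and c are arbitrary symbols chosen only for illustration.

```python
import sympy as sp

x, y = sp.symbols('x y')
a, b, c = sp.symbols('a b c')        # arbitrary small coefficients (illustrative)

# Hypothetical displacement field: stretching in x and y plus a shear contribution
u_x = a*x + c*y
u_y = b*y + c*x

eps_x = sp.diff(u_x, x)                        # normal strain in the x-direction
eps_y = sp.diff(u_y, y)                        # normal strain in the y-direction
gamma_xy = sp.diff(u_y, x) + sp.diff(u_x, y)   # engineering shear strain

strain = sp.Matrix([[eps_x, gamma_xy/2],
                    [gamma_xy/2, eps_y]])      # infinitesimal strain tensor (2-D)
print(strain)   # Matrix([[a, c], [c, b]])
```

For this field γxy = 2c, so the tensor components εxy = εyx = c, consistent with the factor 1/2 appearing in the strain tensor given above.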
Similarly, for the yz- and xz-planes, we have γ y z = γ z y = ∂ u y ∂ z + ∂ u z ∂ y , γ z x = γ x z = ∂ u z ∂ x + ∂ u x ∂ z {\displaystyle \gamma _{yz}=\gamma _{zy}={\frac {\partial u_{y}}{\partial z}}+{\frac {\partial u_{z}}{\partial y}}\quad ,\qquad \gamma _{zx}=\gamma _{xz}={\frac {\partial u_{z}}{\partial x}}+{\frac {\partial u_{x}}{\partial z}}} === Volume strain === == Metric tensor == A strain field associated with a displacement is defined, at any point, by the change in length of the tangent vectors representing the speeds of arbitrarily parametrized curves passing through that point. A basic geometric result, due to Fréchet, von Neumann and Jordan, states that, if the lengths of the tangent vectors fulfil the axioms of a norm and the parallelogram law, then the length of a vector is the square root of the value of the quadratic form associated, by the polarization formula, with a positive definite bilinear map called the metric tensor. == See also == Stress measures Strain rate Strain tensor == References ==
Wikipedia/Strain_(physics)
Atmospheric science is the study of the Earth's atmosphere and its various inner-working physical processes. Meteorology includes atmospheric chemistry and atmospheric physics with a major focus on weather forecasting. Climatology is the study of atmospheric conditions over timescales longer than those of weather, focusing on average climate conditions and their variability over time. Aeronomy is the study of the upper layers of the atmosphere, where dissociation and ionization are important. Atmospheric science has been extended to the field of planetary science and the study of the atmospheres of the planets and natural satellites of the Solar System. Experimental instruments used in atmospheric science include satellites, rocketsondes, radiosondes, weather balloons, radars, and lasers. The term aerology (from Greek ἀήρ, aēr, "air"; and -λογία, -logia) is sometimes used as an alternative term for the study of Earth's atmosphere; in other definitions, aerology is restricted to the free atmosphere, the region above the planetary boundary layer. Early pioneers in the field include Léon Teisserenc de Bort and Richard Assmann. == Atmospheric chemistry == Atmospheric chemistry is a branch of atmospheric science in which the chemistry of the Earth's atmosphere and that of other planets is studied. It is a multidisciplinary field of research and draws on environmental chemistry, physics, meteorology, computer modeling, oceanography, geology and volcanology and other disciplines. Research is increasingly connected with other areas of study such as climatology. The composition and chemistry of the atmosphere is of importance for several reasons, but primarily because of the interactions between the atmosphere and living organisms. The composition of the Earth's atmosphere has been changed by human activity and some of these changes are harmful to human health, crops and ecosystems. Examples of problems which have been addressed by atmospheric chemistry include acid rain, photochemical smog and global warming. Atmospheric chemistry seeks to understand the causes of these problems, and by obtaining a theoretical understanding of them, allow possible solutions to be tested and the effects of changes in government policy evaluated. Atmospheric chemistry plays a major role in understanding the concentration of gases in our atmosphere that contribute to climate change. More specifically, when combined with atmospheric physics and biogeochemistry, it is useful for studying the influence of greenhouse gases like CO2, N2O, and CH4 on Earth's radiative balance. According to UNEP, global greenhouse gas emissions increased to a new record of 57.1 GtCO2e, up 1.3% from the previous year. Previous GHG emissions growth from 2010-2019 averaged only +0.8% yearly, illustrating the dramatic increase in global emissions. The Global Nitrous Oxide Budget cited that atmospheric N2O has increased by roughly 25% between 1750 and 2022, with the fastest annual growth rates occurring in 2020 and 2021. Atmospheric chemistry is critical in understanding what contributes to our changing climate. The climate continues to warm as long as net CO2 emissions remain above zero; to stop temperatures from rising, net CO2 emissions must be brought to zero. By understanding the chemical composition and emission rates in our atmosphere alongside economic factors, researchers are able to trace emissions back to their sources.
About 26% of 2023 emissions came from the power sector, 15% from transportation, 11% from industry, and 11% from agriculture, among other sources. In order to successfully reverse the human-driven damage contributing to global climate change, emissions cuts of nearly 42% are needed by 2030 and must be implemented through government intervention. This is one example of how atmospheric chemistry goes hand-in-hand with social and political policy, biogeochemistry, and economic factors. == Atmospheric dynamics == Atmospheric dynamics is the study of motion systems of meteorological importance, integrating observations at multiple locations and times with theory. Common topics studied include diverse phenomena such as thunderstorms, tornadoes, gravity waves, tropical cyclones, extratropical cyclones, jet streams, and global-scale circulations. The goal of dynamical studies is to explain the observed circulations on the basis of fundamental principles from physics. The objectives of such studies include improving weather forecasting, developing methods for predicting seasonal and interannual climate fluctuations, and understanding the implications of human-induced perturbations (e.g., increased carbon dioxide concentrations or depletion of the ozone layer) on the global climate. == Atmospheric physics == Atmospheric physics is the application of physics to the study of the atmosphere. Atmospheric physicists attempt to model Earth's atmosphere and the atmospheres of the other planets using fluid flow equations, chemical models, radiation balancing, and energy transfer processes in the atmosphere and underlying oceans and land. In order to model weather systems, atmospheric physicists employ elements of scattering theory, wave propagation models, cloud physics, statistical mechanics and spatial statistics, each of which incorporate high levels of mathematics and physics. Atmospheric physics has close links to meteorology and climatology and also covers the design and construction of instruments for studying the atmosphere and the interpretation of the data they provide, including remote sensing instruments. In the United Kingdom, atmospheric studies are underpinned by the Meteorological Office. Divisions of the U.S. National Oceanic and Atmospheric Administration (NOAA) oversee research projects and weather modeling involving atmospheric physics. The U.S. National Astronomy and Ionosphere Center also carries out studies of the high atmosphere. The Earth's magnetic field and the solar wind interact with the atmosphere, creating the ionosphere, Van Allen radiation belts, telluric currents, and radiant energy. Recent studies in atmospheric physics often rely on satellite-based observation. One example includes CALIPSO, or the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation. The CALIPSO mission, engineered by NASA and the Centre National d'Etudes Spatiales/CNES, studies how clouds and airborne particles play a role in regulating the weather, climate, and quality of Earth's atmosphere. According to NASA, this mission uses methods like retrieval algorithm development, climatology development, spectroscopy, weather and climate model evaluation, and cloudy radiative transfer models in addition to atmospheric physics concepts to understand the physics involved in Earth atmospheric regulation. == Climatology == Climatology is a science that derives knowledge and practices from the more specialized disciplines of meteorology, oceanography, geology, biology, and astronomy to study climate.
In contrast to meteorology, which studies short-term weather systems lasting up to a few weeks, climatology studies the frequency and trends of those systems. It studies the periodicity of weather events over timescales ranging from years to millennia, as well as changes in long-term average weather patterns. Climatologists, those who practice climatology, study both the nature of climates – local, regional or global – and the natural or human-induced factors that cause climate variability and current ongoing global warming. Additionally, the occurrence of past climates on Earth, such as those arising from glacial periods and interglacials, can be used to predict future changes in climate. Oftentimes, climatology is studied in conjunction with another specialized discipline. One recent scientific study that utilizes topics in climatology, oceanology, and even economics is entitled "Concerns about El Niño-Southern Oscillation and the Atlantic Meridional Overturning Circulation with an Increasingly Warm Ocean." Scientists under New Insights in Climate Science found that Earth is at risk of El Niño events of greater extremes and overall climate instability given new information regarding the El Niño-Southern Oscillation (ENSO) and the Atlantic Meridional Overturning Circulation (AMOC). ENSO describes a recurring climate pattern in which the temperature of waters in the central and eastern tropical Pacific Ocean changes periodically. AMOC is best described by NOAA as "a system of ocean currents that circulates water within the Atlantic Ocean, bringing warm water north and cold water south". This research has revealed that the collapse of the AMOC appears to be occurring sooner than earlier models had predicted. It also expands on the fact that our economic and social systems are more vulnerable to El Niño impacts than previously thought. The study of climatology is vital in understanding current climate risks. Research is necessary to monitor our ever-evolving climate and to gauge the mitigation efforts put forth in response to it. Strengthening our knowledge within the realm of climatology allows us to better prepare for the impacts of extreme El Niño events, such as amplified droughts, floods, and heat extremes. Phenomena of climatological interest include the atmospheric boundary layer, circulation patterns, heat transfer (radiative, convective and latent), interactions between the atmosphere, the oceans and land surface (particularly vegetation, land use and topography), as well as the chemical and physical composition of the atmosphere. Related disciplines include astrophysics, atmospheric physics, chemistry, ecology, physical geography, geology, geophysics, glaciology, hydrology, oceanography, and volcanology. == Aeronomy == Aeronomy is the scientific study of the upper atmosphere of the Earth — the atmospheric layers above the stratopause — and corresponding regions of the atmospheres of other planets, where the entire atmosphere may correspond to the Earth's upper atmosphere or a portion of it. A branch of both atmospheric chemistry and atmospheric physics, aeronomy contrasts with meteorology, which focuses on the layers of the atmosphere below the stratopause. In atmospheric regions studied by aeronomers, chemical dissociation and ionization are important phenomena. == Atmospheres on other celestial bodies == All of the Solar System's planets have atmospheres. This is because their gravity is strong enough to keep gaseous particles close to the surface.
Larger gas giants are massive enough to keep large amounts of the light gases hydrogen and helium close by, while the smaller planets lose these gases into space. The composition of the Earth's atmosphere is different from the other planets because the various life processes that have transpired on the planet have introduced free molecular oxygen. Much of Mercury's atmosphere has been blasted away by the solar wind. The only moon that has retained a dense atmosphere is Titan. There is a thin atmosphere on Triton, and a trace of an atmosphere on the Moon. Planetary atmospheres are affected by the varying degrees of energy received from either the Sun or their interiors, leading to the formation of dynamic weather systems such as hurricanes (on Earth), planet-wide dust storms (on Mars), an Earth-sized anticyclone on Jupiter (called the Great Red Spot), and holes in the atmosphere (on Neptune). At least one extrasolar planet, HD 189733 b, has been claimed to possess such a weather system, similar to the Great Red Spot but twice as large. Hot Jupiters have been shown to be losing their atmospheres into space due to stellar radiation, much like the tails of comets. These planets may have vast differences in temperature between their day and night sides which produce supersonic winds, although the day and night sides of HD 189733b appear to have very similar temperatures, indicating that planet's atmosphere effectively redistributes the star's energy around the planet. == See also == Air pollution == References == == External links == Atmospheric fluid dynamics applied to weather maps – Principles such as Advection, Deformation and Vorticity National Center for Atmospheric Research (NCAR) Archives, documents the history of the atmospheric sciences
Wikipedia/Atmospheric_science
Force control is the control of the force with which a machine or the manipulator of a robot acts on an object or its environment. By controlling the contact force, damage to the machine as well as to the objects to be processed and injuries when handling people can be prevented. In manufacturing tasks, it can compensate for errors and reduce wear by maintaining a uniform contact force. Force control achieves more consistent results than position control, which is also used in machine control. Force control can be used as an alternative to the usual motion control, but is usually used in a complementary way, in the form of hybrid control concepts. The acting force for control is usually measured via force transducers or estimated via the motor current. Force control has been the subject of research for almost three decades and is increasingly opening up further areas of application thanks to advances in sensor and actuator technology and new control concepts. Force control is particularly suitable for contact tasks that serve to mechanically process workpieces, but it is also used in telemedicine, service robotics and the scanning of surfaces. For force measurement, force sensors exist that can measure forces and torques in all three spatial directions. Alternatively, the forces can also be estimated without sensors, e.g. on the basis of the motor currents. Indirect force control, in which the robot is modeled as a mechanical resistance (impedance), and direct force control, in parallel or hybrid schemes, are used as control concepts. Adaptive approaches, fuzzy controllers and machine learning for force control are currently the subject of research. == General == Controlling the contact force between a manipulator and its environment is an increasingly important task in mechanical manufacturing as well as in industrial and service robotics. One motivation for the use of force control is safety for man and machine. For various reasons, movements of the robot or machine parts may be blocked by obstacles while the program is running. In service robotics these can be moving objects or people; in industrial robotics, problems can occur with cooperating robots, changing work environments or an inaccurate environmental model. If the trajectory is misaligned in classical motion control and thus it is not possible to approach the programmed robot pose(s), the motion control will increase the manipulated variable - usually the motor current - in order to correct the position error. The increase of the manipulated variable can have the following effects: The obstacle is removed or damaged/destroyed. The machine is damaged or destroyed. The manipulated variable limits are exceeded and the robot controller switches off. A force control system can prevent this by regulating the maximum force of the machine in these cases, thus avoiding damage or making collisions detectable at an early stage. In mechanical manufacturing tasks, unevenness of the workpiece often leads to problems with motion control. As can be seen in the adjacent figure, surface unevenness causes the tool to penetrate too far into the surface during position control (red) P 1 ′ {\displaystyle P'_{1}} or lose contact with the workpiece during position control (red) P 2 ′ {\displaystyle P'_{2}} . This results, for example, in an alternating force effect on the workpiece and tool during grinding and polishing. Force control (green) is useful here, as it ensures uniform material removal through constant contact with the workpiece.
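The difference sketched above can be illustrated numerically. In this minimal Python sketch the contact is modeled as a linear spring; the stiffness value, the infeed, the desired force and the surface profile are all made-up assumptions. Pure position control at a fixed infeed produces a contact force that fluctuates with the surface unevenness, whereas force control adjusts the infeed so that the contact force stays constant.

```python
import numpy as np

k = 1.0e4                                                  # assumed contact stiffness, N/m
surface = 0.001 * np.sin(np.linspace(0.0, 2*np.pi, 50))    # workpiece unevenness, +-1 mm

# Position control: tool commanded to a fixed nominal penetration of 2 mm
penetration_cmd = 0.002
force_position_ctrl = k * (penetration_cmd + surface)      # force varies with the surface

# Force control: penetration adjusted so that the contact force stays at 20 N
force_des = 20.0
penetration_force_ctrl = force_des / k - surface           # infeed tracks the surface
force_force_ctrl = k * (penetration_force_ctrl + surface)

print(force_position_ctrl.min(), force_position_ctrl.max())  # roughly 10 N ... 30 N
print(np.allclose(force_force_ctrl, force_des))              # True: constant 20 N contact force
```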
== Application == In force control, a basic distinction can be made between applications with pronounced contact and applications with potential contact. We speak of pronounced contact when the contact of the machine with the environment or the workpiece is a central component of the task and is explicitly controlled. This includes, above all, tasks of mechanical deformation and surface machining. In tasks with potential contact, the process function variable is the positioning of the machine or its parts. Larger contact forces between machine and environment occur due to dynamic environment or inaccurate environment model. In this case, the machine should yield to the environment and avoid large contact forces. The main applications of force control today are mechanical manufacturing operations. This means in particular manufacturing tasks such as grinding, polishing and deburring as well as force-controlled processes such as controlled joining, bending and pressing of bolts into prefabricated bores. Another common use of force control is scanning unknown surfaces. Here, force control is used to set a constant contact pressure in the normal direction of the surface and the scanning head is moved in the surface direction via position control. The surface can then be described in Cartesian coordinates via direct kinematics. Other applications of force control with potential contact can be found in medical technology and cooperating robots. Robots used in telemedicine, i.e. robot-assisted medical operations, can avoid injuries more effectively via force control. In addition, direct feedback of the measured contact forces to the operator by means of a force feedback control device is of great interest here. Possible applications for this extend to internet-based teleoperations. In principle, force control is also useful wherever machines and robots cooperate with each other or with humans, as well as in environments where the environment is not described exactly or is dynamic and cannot be described exactly. Here, force control helps to deal with obstacles and deviations in the environmental model and to avoid damage. == History == The first important work on force control was published in 1980 by John Kenneth Salisbury at Stanford University. In it, he describes a method for active stiffness control, a simple form of impedance control. However, the method does not yet allow a combination with motion control, but here force control is performed in all spatial directions. The position of the surface must therefore be known. Because of the lower performance of robot controllers of that time, force control could only be performed on mainframe computers. Thus, a controller cycle of ≈100 ms was achieved. In 1981, Raibert and Craig presented a paper on hybrid force/position control which is still important today. In this paper, they describe a method in which a matrix (separation matrix) is used to explicitly specify for all spatial directions whether motion or force control is to be used. Raibert and Craig merely sketch the controller concepts and assume them to be feasible. In 1989, Koivo presented an extended exposition of the concepts of Raibert and Craig. Precise knowledge of the surface position is still necessary here, which still does not allow for the typical tasks of force control today, such as scanning surfaces. Force control has been the subject of intense research over the past two decades and has made great strides with the advancement of sensor technology and control algorithms. 
For some years now, the major automation technology manufacturers have been offering software and hardware packages for their controllers to allow force control. Modern machine controllers are capable of force control in one spatial direction in real time with a cycle time of less than 10 ms. == Force measurement == To close the force control loop in the sense of a closed-loop control, the instantaneous value of the contact force must be known. The contact force can either be measured directly or estimated. === Direct force measurement === The trivial approach to force control is the direct measurement of the occurring contact forces via force/torque sensors at the end effector of the machine or at the wrist of the industrial robot. Force/torque sensors measure the occurring forces by measuring the deformation at the sensor. The most common way to measure deformation is by means of strain gauges. In addition to the widely used strain gauges made of variable electrical resistances, there are also other versions that use piezoelectric, optical or capacitive principles for measurement. In practice, however, they are only used for special applications. Capacitive strain gauges, for example, can also be used in the high-temperature range above 1000 °C. Strain gauges are designed to have as linear a relationship as possible between strain and electrical resistance within the working range. In addition, several possibilities exist to reduce measurement errors and interference. To exclude temperature influences and increase measurement reliability, two strain gauges can be arranged in a complementary manner. Modern force/torque sensors measure both forces and torques in all three spatial directions and are available with almost any value range. The accuracy is usually in the per mil range of the maximum measured value. The sampling rates of the sensors are in the range of about 1 kHz. An extension of the 6-axis force/torque sensors is provided by 12- and 18-axis sensors which, in addition to the six force or torque components, are also capable of measuring six velocity and acceleration components each. === Six-axis force/torque sensor === In modern applications, so-called six-axis force/torque sensors are frequently used. These are mounted between the robot hand and the end effector and can record both forces and torques in all three spatial directions. For this purpose, they are equipped with six or more strain gauges (possibly strain measurement bridges) that record deformations in the micrometer range. These deformations are converted into three force and torque components each via a calibration matrix. Force/torque sensors contain a digital signal processor that continuously acquires and filters the sensor data (strain) in parallel, calculates the measurement data (forces/torques) and makes it available via the sensor's communication interface. The measured values correspond to the forces at the sensor and usually still have to be converted into the forces and torques at the end effector or tool via a suitable transformation. Since force/torque sensors are still relatively expensive (between €4,000 and €15,000) and very sensitive to overloads and disturbances, they - and thus force control - have been adopted only reluctantly in industry. Indirect force measurement or estimation is one solution, allowing force control without costly and disturbance-prone force sensors.
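Before turning to estimation, a rough sketch of the calibration step described above: the raw strain-bridge signals are mapped to a force/torque vector by a single matrix multiplication. All numbers below are hypothetical; a real sensor ships an individually determined calibration matrix, which is generally not diagonal and often has more than six gauge channels.

```python
import numpy as np

# Hypothetical calibration matrix mapping raw bridge signals to [Fx, Fy, Fz, Mx, My, Mz]
C = np.diag([50.0, 50.0, 80.0, 2.0, 2.0, 1.5])               # N or N*m per unit bridge signal
bridges = np.array([0.10, -0.02, 0.25, 0.00, 0.01, -0.03])   # raw strain-bridge readings

wrench = C @ bridges    # forces and torques expressed in the sensor frame
print(wrench)           # these still have to be transformed to the tool/end-effector frame
```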
=== Force estimation === A cost-saving alternative to direct force measurement is force estimation (also known as "indirect force measurement"). This makes it possible to dispense with the use of force/torque sensors. In addition to cost savings, dispensing with these sensors has other advantages: Force sensors are usually the weakest link in the mechanical chain of the machine or robot system, so dispensing with them brings greater stability and less susceptibility to mechanical faults. In addition, dispensing with force/torque sensors brings greater safety, since there is no need for sensor cables to be routed out and protected directly at the manipulator's wrist. A common method for indirect force measurement or force estimation is the measurement of the motor currents applied for motion control. With some restrictions, these are proportional to the torque applied to the driven robot axis. Adjusted for gravitational, inertial and frictional effects, the motor currents are largely linear to the torques of the individual axes. The contact force at the end effector can be determined via the torques thus known. === Separation of dynamic and static forces === During force measurement and force estimation, filtering of the sensor signals may be necessary. Numerous side effects and secondary forces can occur which do not correspond to the measurement of the contact force. This is especially true if a larger load mass is mounted on the manipulator. This interferes with the force measurement when the manipulator moves with high accelerations. To be able to adjust the measurement for side effects, both an accurate dynamic model of the machine and a model or estimate of the load must be available. This estimate can be determined via reference movements (free movement without object contact). After estimating the load, the measurement or estimate of the forces can be adjusted for Coriolis, centripetal and centrifugal forces, gravitational and frictional effects, and inertia. Adaptive approaches can also be used here to continuously adjust the estimate of the load. == Control concepts == Various control concepts are used for force control. Depending on the desired behavior of the system, a distinction is made between the concepts of direct force control and indirect control via specification of compliance or mechanical impedance. As a rule, force control is combined with motion control. Concepts for force control have to consider the problem of coupling between force and position: If the manipulator is in contact with the environment, a change of the position also means a change of the contact force. === Impedance control === Impedance control, or compliance control, regulates the compliance of the system, i.e., the link between force and position upon object contact. Compliance is defined in the literature as a "measure of the robot's ability to counteract contact forces." There are passive and active approaches to this. Here, the compliance of the robot system is modeled as mechanical impedance, which describes the relationship between applied force and resulting velocity. Here, the robot's machine or manipulator is considered as a mechanical resistance with positional constraints imposed by the environment. Accordingly, the causality of mechanical impedance describes that a movement of the robot results in a force. In mechanical admittance, on the other hand, a force applied to the robot results in a resulting motion. 
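As a concrete illustration of this idea, the following minimal 1-D Python sketch imposes a target spring-damper-mass behaviour on the contact between a tool and a stiff environment. The target stiffness, damping and virtual mass, the environment stiffness and the reference position are made-up values chosen only for illustration.

```python
import numpy as np

# Target impedance parameters (illustrative): stiffness, damping, virtual mass
c_t, d_t, m_t = 500.0, 40.0, 2.0     # N/m, N*s/m, kg
k_env = 2000.0                       # assumed environment stiffness, N/m
x_ref = 0.01                         # commanded position, 10 mm past first contact

x, v, dt = 0.0, 0.0, 1e-3
for _ in range(2000):
    f_env = -k_env * x                               # reaction force from the environment
    a = (f_env + c_t * (x_ref - x) - d_t * v) / m_t  # impedance law drives the virtual mass
    v += a * dt
    x += v * dt

print(f"steady state: x = {x*1e3:.2f} mm, contact force = {k_env*x:.2f} N")
# Equilibrium: c_t*(x_ref - x) = k_env*x, i.e. about 2 mm penetration and a 4 N contact force;
# a stiffer target impedance yields deeper penetration and a larger contact force.
```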
==== Passive impedance control ==== Passive compliance control (also known as compliance control) does not require force measurement because there is no explicit force control. Instead, the manipulator and/or end effector is flexibly designed in a way that can minimize contact forces that occur during the task to be performed. Typical applications include insertion and gripping operations. The end effector is designed in such a way that it allows translational and rotational deviations orthogonal to the gripping or insertion direction, but has high stiffness in the gripping or insertion direction. The figure opposite shows a so-called Remote Center of Compliance (RCC) that makes this possible. As an alternative to an RCC, the entire machine can also be made structurally elastic. Passive impedance control is a very good solution in terms of system dynamics, since there is no latency due to the control. However, passive compliance control is often limited by the mechanical specification of the end effector in the task and cannot be readily applied to different and changing tasks or environmental conditions. ==== Active impedance control ==== Active compliance control refers to the control of the manipulator based on a deviation of the end effector. This is particularly suitable for guiding robots by an operator, for example as part of a teach-in process. Active compliance control is based on the idea of representing the system of machine and environment as a spring-damper-mass system. The force F {\displaystyle F} and the motion (position x ( t ) {\displaystyle x(t)\!\,} , velocity x ˙ ( t ) {\displaystyle {\dot {x}}(t)} , and acceleration x ¨ ( t ) {\displaystyle {\ddot {x}}(t)} ) are directly related via the spring-damper-mass equation: F ( t ) = c ⋅ x ( t ) + d ⋅ x ˙ ( t ) + m ⋅ x ¨ ( t ) {\displaystyle F(t)=c\cdot x(t)+d\cdot {\dot {x}}(t)+m\cdot {\ddot {x}}(t)} The compliance or mechanical impedance of the system is determined by the stiffness c {\displaystyle c} , the damping d {\displaystyle d} and the inertia m {\displaystyle m} and can be influenced by these three variables. The control is given a mechanical target impedance via these three variables, which is achieved by the machine control. The figure shows the block diagram of a force-based impedance control. The impedance in the block diagram represents the stiffness, damping and inertia components just mentioned. A position-based impedance control can be designed analogously with internal position or motion control. Alternatively and analogously, the compliance (admittance) can be controlled instead of the resistance. In contrast to the impedance control, the admittance appears in the control law as the reciprocal of the impedance. === Direct force control === The above concepts are so-called indirect force control, since the contact force is not explicitly specified as a command variable, but is determined indirectly via the controller parameters damping, stiffness and (virtual) mass. Direct force control is presented below. Direct force control uses the desired force as a setpoint within a closed control loop. It is implemented as a parallel force/position control in the form of a cascade control or as a hybrid force/position control in which switching takes place between position and force control. ==== Parallel force/position control ==== One possibility for force control is parallel force/position control. The control is designed as a cascade control and has an outer force control loop and an inner position control loop.
As shown in the following figure, a corresponding infeed correction is calculated from the difference between the nominal and actual force. This infeed correction is offset against the position command values, whereby, when X s o l l {\displaystyle X_{soll}} and X k o r r {\displaystyle X_{korr}} are merged, the position command from the force control ( X k o r r {\displaystyle X_{korr}} ) has a higher priority, i.e. a position error is tolerated in favor of the correct force control. The offset value is the input variable for the inner position control loop. Analogous to an inner position control, an inner velocity control can also take place, which offers higher dynamics. In this case, the inner control loop should have a saturation in order not to generate a (theoretically) arbitrarily increasing velocity in the free movement until contact is made. ==== Hybrid force/position control ==== An improvement over the above concepts is offered by hybrid force/position control, which works with two separate control systems and can also be used with hard, inflexible contact surfaces. In hybrid force/position control, the space is divided into a constrained and an unconstrained space. The constrained space contains restrictions, for example in the form of obstacles, and does not allow free movement; the unconstrained space allows free movement. Each dimension of the space is either constrained or unconstrained. In hybrid force control, force control is used for the restricted space, and position control is used for the unrestricted space. The figure shows such a control. The matrix Σ indicates which space directions are restricted and is a diagonal matrix consisting of zeros and ones. Which spatial direction is restricted and which is unrestricted can, for example, be specified statically. Force and position control is then explicitly specified for each spatial direction; the matrix Σ is then static. Another possibility is to switch the matrix Σ dynamically on the basis of force measurement. In this way, it is possible to switch from position control to force control for individual spatial directions when contact or collision is established. In the case of contact tasks, all spatial directions would be motion-controlled in the case of free movement, and after contact is established, the contact direction would be switched to force control by selecting the appropriate matrix Σ. == Research == In recent years, the subject of research has increasingly been adaptive concepts, the use of fuzzy control systems and machine learning, and force-based whole-body control. === Adaptive force control === The previously mentioned non-adaptive concepts are based on an exact knowledge of the dynamic process parameters. These are usually determined and adjusted by experiments and calibration. Problems can arise due to measurement errors and variable loads. In adaptive force control, position-dependent and thus time-variable parts of the system are regarded as parameter fluctuations and are constantly adapted in the course of the control. Due to the changing control, no guarantee can be given for dynamic stability of the system. Adaptive control is therefore usually first used offline and the results are intensively tested in simulation before being used on the real system. === Fuzzy control and machine learning === A prerequisite for the application of classical design methods is an explicit system model.
If this is difficult or impossible to represent, fuzzy controllers or machine learning can be considered. By means of fuzzy logic, knowledge acquired by humans can be converted into a control behavior in the form of fuzzy control specifications. Explicit specification of the controller parameters is thus no longer necessary. Approaches using machine learning, moreover, no longer require humans to create the control behavior, but use machine learning as the basis for control. === Whole body control === Due to the high complexity of modern robotic systems, such as humanoid robots, a large number of actuated degrees of freedom must be controlled. In addition, such systems are increasingly used in the direct environment of humans. Accordingly, concepts from force and impedance control are specifically used in this area to increase safety, as this allows the robot to interact with the environment and humans in a compliant manner. == References == == Bibliography == Bruno Siciliano, Luigi Villani (2000), Robot Force Control, Springer, ISBN 0-7923-7733-8 Wolfgang Weber (2002), Industrieroboter. Methoden der Steuerung und Regelung, Fachbuchverlag Leipzig, ISBN 3-446-21604-9 Lorenzo Sciavicco, Bruno Siciliano (1999), Modelling and Control of Robot Manipulators, Springer, ISBN 1-85233-221-2 Klaus Richter (1991), Kraftregelung elastischer Roboter, VDI-Verlag, ISBN 3-18-145908-9
Wikipedia/Force_control
In science, work is the energy transferred to or from an object via the application of force along a displacement. In its simplest form, for a constant force aligned with the direction of motion, the work equals the product of the force strength and the distance traveled. A force is said to do positive work if it has a component in the direction of the displacement of the point of application. A force does negative work if it has a component opposite to the direction of the displacement at the point of application of the force. For example, when a ball is held above the ground and then dropped, the work done by the gravitational force on the ball as it falls is positive, and is equal to the weight of the ball (a force) multiplied by the distance to the ground (a displacement). If the ball is thrown upwards, the work done by the gravitational force is negative, and is equal to the weight multiplied by the displacement in the upwards direction. Both force and displacement are vectors. The work done is given by the dot product of the two vectors, where the result is a scalar. When the force F is constant and the angle θ between the force and the displacement s is also constant, then the work done is given by: W = F s cos ⁡ θ {\displaystyle W=Fs\cos {\theta }} If the force is variable, then work is given by the line integral: W = ∫ F ⋅ d s {\displaystyle W=\int \mathbf {F} \cdot d\mathbf {s} } where d s {\displaystyle d\mathbf {s} } is the tiny change in displacement vector. Work is a scalar quantity, so it has only magnitude and no direction. Work transfers energy from one place to another, or one form to another. The SI unit of work is the joule (J), the same unit as for energy. == History == The ancient Greek understanding of physics was limited to the statics of simple machines (the balance of forces), and did not include dynamics or the concept of work. During the Renaissance the dynamics of the Mechanical Powers, as the simple machines were called, began to be studied from the standpoint of how far they could lift a load, in addition to the force they could apply, leading eventually to the new concept of mechanical work. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche (On Mechanics), in which he showed the underlying mathematical similarity of the machines as force amplifiers. He was the first to explain that simple machines do not create energy, only transform it. === Early concepts of work === Although work was not formally used until 1826, similar concepts existed before then. Early names for the same concept included moment of activity, quantity of action, latent live force, dynamic effect, efficiency, and even force. In 1637, the French philosopher René Descartes wrote: Lifting 100 lb one foot twice over is the same as lifting 200 lb one foot, or 100 lb two feet. In 1686, the German philosopher Gottfried Leibniz wrote: The same force ["work" in modern terms] is necessary to raise body A of 1 pound (libra) to a height of 4 yards (ulnae), as is necessary to raise body B of 4 pounds to a height of 1 yard. In 1759, John Smeaton described a quantity that he called "power" "to signify the exertion of strength, gravitation, impulse, or pressure, as to produce motion." Smeaton continues that this quantity can be calculated if "the weight raised is multiplied by the height to which it can be raised in a given time," making this definition remarkably similar to Coriolis's. 
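Descartes' and Leibniz's comparisons can be checked directly against the modern formula for work done against gravity, W = mgh; the unit conversions below are the only assumptions in this minimal sketch.

```python
import numpy as np

g = 9.80665                       # standard gravity, m/s^2
LB, FT = 0.45359237, 0.3048       # pound and foot expressed in SI units

def lifting_work(mass_lb, height_ft):
    """Work done against gravity lifting a mass through a height: W = m g h (joules)."""
    return mass_lb * LB * g * height_ft * FT

# Descartes (1637): 100 lb raised two feet is the same as 200 lb raised one foot
print(np.isclose(lifting_work(100, 2), lifting_work(200, 1)))   # True, about 271 J each
```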
=== Etymology and modern usage === The term work (or mechanical work), and the use of the work-energy principle in mechanics, was introduced in the late 1820s independently by French mathematician Gaspard-Gustave Coriolis and French Professor of Applied Mechanics Jean-Victor Poncelet. Both scientists were pursuing a view of mechanics suitable for studying the dynamics and power of machines, for example steam engines lifting buckets of water out of flooded ore mines. According to Rene Dugas, French engineer and historian, it is to Solomon of Caux "that we owe the term work in the sense that it is used in mechanics now". The concept of virtual work, and the use of variational methods in mechanics, preceded the introduction of "mechanical work" but was originally called "virtual moment". It was re-named once the terminology of Poncelet and Coriolis was adopted. == Units == The SI unit of work is the joule (J), named after English physicist James Prescott Joule (1818-1889). According to the International Bureau of Weights and Measures it is defined as "the work done when the point of application of 1 MKS unit of force [newton] moves a distance of 1 metre in the direction of the force." The dimensionally equivalent newton-metre (N⋅m) is sometimes used as the measuring unit for work, but this can be confused with the measurement unit of torque. Usage of N⋅m is discouraged by the SI authority, since it can lead to confusion as to whether the quantity expressed in newton-metres is a torque measurement, or a measurement of work. Another unit for work is the foot-pound, which comes from the English system of measurement. As the unit name suggests, it is the product of pounds for the unit of force and feet for the unit of displacement. One joule is approximately equal to 0.7376 ft-lbs. Non-SI units of work include the newton-metre, erg, the foot-pound, the foot-poundal, the kilowatt hour, the litre-atmosphere, and the horsepower-hour. Due to work having the same physical dimension as heat, occasionally measurement units typically reserved for heat or energy content, such as therm, BTU and calorie, are used as a measuring unit. == Work and energy == The work W done by a constant force of magnitude F on a point that moves a displacement s in a straight line in the direction of the force is the product W = F ⋅ s {\displaystyle W=\mathbf {F} \cdot \mathbf {s} } For example, if a force of 10 newtons (F = 10 N) acts along a point that travels 2 metres (s = 2 m), then W = Fs = (10 N) (2 m) = 20 J. This is approximately the work done lifting a 1 kg object from ground level to over a person's head against the force of gravity. The work is doubled either by lifting twice the weight the same distance or by lifting the same weight twice the distance. Work is closely related to energy. Energy shares the same unit of measurement with work (Joules) because the energy from the object doing work is transferred to the other objects it interacts with when work is being done. The work–energy principle states that an increase in the kinetic energy of a rigid body is caused by an equal amount of positive work done on the body by the resultant force acting on that body. Conversely, a decrease in kinetic energy is caused by an equal amount of negative work done by the resultant force. Thus, if the net work is positive, then the particle's kinetic energy increases by the amount of the work. If the net work done is negative, then the particle's kinetic energy decreases by the amount of work. 
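The constant-force example just given can be reproduced as a dot product; a second call shows the effect of the angle θ between force and displacement. The numerical values are chosen only for illustration.

```python
import numpy as np

def work(force, displacement):
    """Work done by a constant force over a straight-line displacement: W = F . s."""
    return float(np.dot(force, displacement))

print(work([10.0, 0.0, 0.0], [2.0, 0.0, 0.0]))    # 20.0 J, as in the example above

# The same 10 N force applied at 60 degrees to a 2 m displacement: W = F s cos(theta) = 10 J
theta = np.deg2rad(60.0)
print(work([10.0 * np.cos(theta), 10.0 * np.sin(theta), 0.0], [2.0, 0.0, 0.0]))   # ~10.0
```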
From Newton's second law, it can be shown that work on a free (no fields), rigid (no internal degrees of freedom) body, is equal to the change in kinetic energy Ek corresponding to the linear velocity and angular velocity of that body, W = Δ E k . {\displaystyle W=\Delta E_{\text{k}}.} The work of forces generated by a potential function is known as potential energy and the forces are said to be conservative. Therefore, work on an object that is merely displaced in a conservative force field, without change in velocity or rotation, is equal to minus the change of potential energy Ep of the object, W = − Δ E p . {\displaystyle W=-\Delta E_{\text{p}}.} These formulas show that work is the energy associated with the action of a force, so work subsequently possesses the physical dimensions, and units, of energy. The work/energy principles discussed here are identical to electric work/energy principles. == Constraint forces == Constraint forces determine the object's displacement in the system, limiting it within a range. For example, in the case of a slope plus gravity, the object is stuck to the slope and, when attached to a taut string, it cannot move in an outwards direction to make the string any 'tauter'. It eliminates all displacements in that direction, that is, the velocity in the direction of the constraint is limited to 0, so that the constraint forces do not perform work on the system. For a mechanical system, constraint forces eliminate movement in directions that characterize the constraint. Thus the virtual work done by the forces of constraint is zero, a result which is only true if friction forces are excluded. Fixed, frictionless constraint forces do not perform work on the system, as the angle between the motion and the constraint forces is always 90°. Examples of workless constraints are: rigid interconnections between particles, sliding motion on a frictionless surface, and rolling contact without slipping. For example, in a pulley system like the Atwood machine, the internal forces on the rope and at the supporting pulley do no work on the system. Therefore, work need only be computed for the gravitational forces acting on the bodies. Another example is the centripetal force exerted inwards by a string on a ball in uniform circular motion sideways constrains the ball to circular motion restricting its movement away from the centre of the circle. This force does zero work because it is perpendicular to the velocity of the ball. The magnetic force on a charged particle is F = qv × B, where q is the charge, v is the velocity of the particle, and B is the magnetic field. The result of a cross product is always perpendicular to both of the original vectors, so F ⊥ v. The dot product of two perpendicular vectors is always zero, so the work W = F ⋅ v = 0, and the magnetic force does not do work. It can change the direction of motion but never change the speed. == Mathematical calculation == For moving objects, the quantity of work/time (power) is integrated along the trajectory of the point of application of the force. Thus, at any instant, the rate of the work done by a force (measured in joules/second, or watts) is the scalar product of the force (a vector), and the velocity vector of the point of application. This scalar product of force and velocity is known as instantaneous power. 
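As an illustration of the last point, the instantaneous power is just a dot product, and for the magnetic force it vanishes identically. A minimal Python sketch with made-up numbers (not taken from the article):

```python
import numpy as np

# Instantaneous power delivered by a force: P = F . v (a scalar, in watts).
F = np.array([2.0, -1.0, 0.5])       # net force, N (illustrative)
v = np.array([3.0, 0.0, 1.0])        # velocity of the point of application, m/s
print(np.dot(F, v))                  # 6.5 W

# A magnetic force F = q v x B is always perpendicular to v, so it delivers
# zero power and does no work, whatever the field.
q = 1.6e-19                          # charge, C
v = np.array([1.0e5, 2.0e5, 0.0])    # particle velocity, m/s
B = np.array([0.0, 0.0, 0.5])        # magnetic field, T
F_mag = q * np.cross(v, B)
print(np.dot(F_mag, v))              # 0.0 (up to floating-point rounding)
```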
Just as velocities may be integrated over time to obtain a total distance, by the fundamental theorem of calculus, the total work along a path is similarly the time-integral of instantaneous power applied along the trajectory of the point of application. Work is the result of a force on a point that follows a curve X, with a velocity v, at each instant. The small amount of work δW that occurs over an instant of time dt is calculated as δ W = F ⋅ d s = F ⋅ v d t {\displaystyle \delta W=\mathbf {F} \cdot d\mathbf {s} =\mathbf {F} \cdot \mathbf {v} dt} where the F ⋅ v is the power over the instant dt. The sum of these small amounts of work over the trajectory of the point yields the work, W = ∫ t 1 t 2 F ⋅ v d t = ∫ t 1 t 2 F ⋅ d s d t d t = ∫ C F ⋅ d s , {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot \mathbf {v} \,dt=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot {\tfrac {d\mathbf {s} }{dt}}\,dt=\int _{C}\mathbf {F} \cdot d\mathbf {s} ,} where C is the trajectory from x(t1) to x(t2). This integral is computed along the trajectory of the particle, and is therefore said to be path dependent. If the force is always directed along this line, and the magnitude of the force is F, then this integral simplifies to W = ∫ C F d s {\displaystyle W=\int _{C}F\,ds} where s is displacement along the line. If F is constant, in addition to being directed along the line, then the integral simplifies further to W = ∫ C F d s = F ∫ C d s = F s {\displaystyle W=\int _{C}F\,ds=F\int _{C}ds=Fs} where s is the displacement of the point along the line. This calculation can be generalized for a constant force that is not directed along the line, followed by the particle. In this case the dot product F ⋅ ds = F cos θ ds, where θ is the angle between the force vector and the direction of movement, that is W = ∫ C F ⋅ d s = F s cos ⁡ θ . {\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {s} =Fs\cos \theta .} When a force component is perpendicular to the displacement of the object (such as when a body moves in a circular path under a central force), no work is done, since the cosine of 90° is zero. Thus, no work can be performed by gravity on a planet with a circular orbit (this is ideal, as all orbits are slightly elliptical). Also, no work is done on a body moving circularly at a constant speed while constrained by mechanical force, such as moving at constant speed in a frictionless ideal centrifuge. === Work done by a variable force === Calculating the work as "force times straight path segment" would only apply in the most simple of circumstances, as noted above. If force is changing, or if the body is moving along a curved path, possibly rotating and not necessarily rigid, then only the path of the application point of the force is relevant for the work done, and only the component of the force parallel to the application point velocity is doing work (positive work when in the same direction, and negative when in the opposite direction of the velocity). This component of force can be described by the scalar quantity called scalar tangential component (F cos(θ), where θ is the angle between the force and the velocity). And then the most general definition of work can be formulated as follows: Thus, the work done for a variable force can be expressed as a definite integral of force over displacement. If the displacement as a variable of time is given by ∆x(t), then work done by the variable force from t1 to t2 is: W = ∫ t 1 t 2 F ( t ) ⋅ v ( t ) d t = ∫ t 1 t 2 P ( t ) d t . 
{\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} (t)\cdot \mathbf {v} (t)dt=\int _{t_{1}}^{t_{2}}P(t)dt.} Thus, the work done for a variable force can be expressed as a definite integral of power over time. === Torque and rotation === A force couple results from equal and opposite forces, acting on two different points of a rigid body. The sum (resultant) of these forces may cancel, but their effect on the body is the couple or torque T. The work of the torque is calculated as δ W = T ⋅ ω d t , {\displaystyle \delta W=\mathbf {T} \cdot {\boldsymbol {\omega }}\,dt,} where the T ⋅ ω is the power over the instant dt. The sum of these small amounts of work over the trajectory of the rigid body yields the work, W = ∫ t 1 t 2 T ⋅ ω d t . {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {T} \cdot {\boldsymbol {\omega }}\,dt.} This integral is computed along the trajectory of the rigid body with an angular velocity ω that varies with time, and is therefore said to be path dependent. If the angular velocity vector maintains a constant direction, then it takes the form, ω = ϕ ˙ S , {\displaystyle {\boldsymbol {\omega }}={\dot {\phi }}\mathbf {S} ,} where ϕ {\displaystyle \phi } is the angle of rotation about the constant unit vector S. In this case, the work of the torque becomes, W = ∫ t 1 t 2 T ⋅ ω d t = ∫ t 1 t 2 T ⋅ S d ϕ d t d t = ∫ C T ⋅ S d ϕ , {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {T} \cdot {\boldsymbol {\omega }}\,dt=\int _{t_{1}}^{t_{2}}\mathbf {T} \cdot \mathbf {S} {\frac {d\phi }{dt}}dt=\int _{C}\mathbf {T} \cdot \mathbf {S} \,d\phi ,} where C is the trajectory from ϕ ( t 1 ) {\displaystyle \phi (t_{1})} to ϕ ( t 2 ) {\displaystyle \phi (t_{2})} . This integral depends on the rotational trajectory ϕ ( t ) {\displaystyle \phi (t)} , and is therefore path-dependent. If the torque τ {\displaystyle \tau } is aligned with the angular velocity vector so that, T = τ S , {\displaystyle \mathbf {T} =\tau \mathbf {S} ,} and both the torque and angular velocity are constant, then the work takes the form, W = ∫ t 1 t 2 τ ϕ ˙ d t = τ ( ϕ 2 − ϕ 1 ) . {\displaystyle W=\int _{t_{1}}^{t_{2}}\tau {\dot {\phi }}\,dt=\tau (\phi _{2}-\phi _{1}).} This result can be understood more simply by considering the torque as arising from a force of constant magnitude F, being applied perpendicularly to a lever arm at a distance r {\displaystyle r} , as shown in the figure. This force will act through the distance along the circular arc l = s = r ϕ {\displaystyle l=s=r\phi } , so the work done is W = F s = F r ϕ . {\displaystyle W=Fs=Fr\phi .} Introduce the torque τ = Fr, to obtain W = F r ϕ = τ ϕ , {\displaystyle W=Fr\phi =\tau \phi ,} as presented above. Notice that only the component of torque in the direction of the angular velocity vector contributes to the work. == Work and potential energy == The scalar product of a force F and the velocity v of its point of application defines the power input to a system at an instant of time. Integration of this power over the trajectory of the point of application, C = x(t), defines the work input to the system by the force. === Path dependence === Therefore, the work done by a force F on an object that travels along a curve C is given by the line integral: W = ∫ C F ⋅ d x = ∫ t 1 t 2 F ⋅ v d t , {\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {x} =\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot \mathbf {v} dt,} where dx(t) defines the trajectory C and v is the velocity along this trajectory. 
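The line integral above can be evaluated numerically for a concrete force field and trajectory. The sketch below (Python with NumPy; both force fields and both paths are invented for illustration) also shows why the result can depend on the path taken, which is discussed next: a uniform gravitational force gives the same work along two different paths between the same endpoints, while a rotational force field does not.

```python
import numpy as np

def path_work(force, path, t):
    """Numerically evaluate W = integral of F(x) . v dt along a parametrized path."""
    x = path(t)                                   # positions, shape (N, 3)
    v = np.gradient(x, t, axis=0)                 # velocities along the path
    P = np.einsum("ij,ij->i", force(x), v)        # instantaneous power F . v
    return np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(t))   # trapezoidal rule

t = np.linspace(0.0, 1.0, 5001)

# Two different paths from (1, 0, 0) to (0, 1, 0): a straight line and a quarter circle.
straight = lambda t: np.stack([1.0 - t, t, np.zeros_like(t)], axis=1)
arc = lambda t: np.stack([np.cos(np.pi / 2 * t), np.sin(np.pi / 2 * t),
                          np.zeros_like(t)], axis=1)

# Uniform gravity on a 1 kg mass (conservative): same work on both paths (~ -9.8 J).
gravity = lambda x: np.tile([0.0, -9.8, 0.0], (x.shape[0], 1))
print(path_work(gravity, straight, t), path_work(gravity, arc, t))

# A rotational force field (non-conservative): the two paths give different work.
swirl = lambda x: np.stack([-x[:, 1], x[:, 0], np.zeros_like(x[:, 0])], axis=1)
print(path_work(swirl, straight, t), path_work(swirl, arc, t))   # ~1.0 vs ~pi/2
```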
In general this integral requires that the path along which the velocity is defined, so the evaluation of work is said to be path dependent. The time derivative of the integral for work yields the instantaneous power, d W d t = P ( t ) = F ⋅ v . {\displaystyle {\frac {dW}{dt}}=P(t)=\mathbf {F} \cdot \mathbf {v} .} === Path independence === If the work for an applied force is independent of the path, then the work done by the force, by the gradient theorem, defines a potential function which is evaluated at the start and end of the trajectory of the point of application. This means that there is a potential function U(x), that can be evaluated at the two points x(t1) and x(t2) to obtain the work over any trajectory between these two points. It is tradition to define this function with a negative sign so that positive work is a reduction in the potential, that is W = ∫ C F ⋅ d x = ∫ x ( t 1 ) x ( t 2 ) F ⋅ d x = U ( x ( t 1 ) ) − U ( x ( t 2 ) ) . {\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {x} =\int _{\mathbf {x} (t_{1})}^{\mathbf {x} (t_{2})}\mathbf {F} \cdot d\mathbf {x} =U(\mathbf {x} (t_{1}))-U(\mathbf {x} (t_{2})).} The function U(x) is called the potential energy associated with the applied force. The force derived from such a potential function is said to be conservative. Examples of forces that have potential energies are gravity and spring forces. In this case, the gradient of work yields ∇ W = − ∇ U = − ( ∂ U ∂ x , ∂ U ∂ y , ∂ U ∂ z ) = F , {\displaystyle \nabla W=-\nabla U=-\left({\frac {\partial U}{\partial x}},{\frac {\partial U}{\partial y}},{\frac {\partial U}{\partial z}}\right)=\mathbf {F} ,} and the force F is said to be "derivable from a potential." Because the potential U defines a force F at every point x in space, the set of forces is called a force field. The power applied to a body by a force field is obtained from the gradient of the work, or potential, in the direction of the velocity V of the body, that is P ( t ) = − ∇ U ⋅ v = F ⋅ v . {\displaystyle P(t)=-\nabla U\cdot \mathbf {v} =\mathbf {F} \cdot \mathbf {v} .} === Work by gravity === In the absence of other forces, gravity results in a constant downward acceleration of every freely moving object. Near Earth's surface the acceleration due to gravity is g = 9.8 m⋅s−2 and the gravitational force on an object of mass m is Fg = mg. It is convenient to imagine this gravitational force concentrated at the center of mass of the object. If an object with weight mg is displaced upwards or downwards a vertical distance y2 − y1, the work W done on the object is: W = F g ( y 2 − y 1 ) = F g Δ y = m g Δ y {\displaystyle W=F_{g}(y_{2}-y_{1})=F_{g}\Delta y=mg\Delta y} where Fg is weight (pounds in imperial units, and newtons in SI units), and Δy is the change in height y. Notice that the work done by gravity depends only on the vertical movement of the object. The presence of friction does not affect the work done on the object by its weight. ==== Gravity in 3D space ==== The force of gravity exerted by a mass M on another mass m is given by F = − G M m r 2 r ^ = − G M m r 3 r , {\displaystyle \mathbf {F} =-{\frac {GMm}{r^{2}}}{\hat {\mathbf {r} }}=-{\frac {GMm}{r^{3}}}\mathbf {r} ,} where r is the position vector from M to m and r̂ is the unit vector in the direction of r. Let the mass m move at the velocity v; then the work of gravity on this mass as it moves from position r(t1) to r(t2) is given by W = − ∫ r ( t 1 ) r ( t 2 ) G M m r 3 r ⋅ d r = − ∫ t 1 t 2 G M m r 3 r ⋅ v d t . 
{\displaystyle W=-\int _{\mathbf {r} (t_{1})}^{\mathbf {r} (t_{2})}{\frac {GMm}{r^{3}}}\mathbf {r} \cdot d\mathbf {r} =-\int _{t_{1}}^{t_{2}}{\frac {GMm}{r^{3}}}\mathbf {r} \cdot \mathbf {v} \,dt.} Notice that the position and velocity of the mass m are given by r = r e r , v = d r d t = r ˙ e r + r θ ˙ e t , {\displaystyle \mathbf {r} =r\mathbf {e} _{r},\qquad \mathbf {v} ={\frac {d\mathbf {r} }{dt}}={\dot {r}}\mathbf {e} _{r}+r{\dot {\theta }}\mathbf {e} _{t},} where er and et are the radial and tangential unit vectors directed relative to the vector from M to m, and we use the fact that d e r / d t = θ ˙ e t . {\displaystyle d\mathbf {e} _{r}/dt={\dot {\theta }}\mathbf {e} _{t}.} Use this to simplify the formula for work of gravity to, W = − ∫ t 1 t 2 G m M r 3 ( r e r ) ⋅ ( r ˙ e r + r θ ˙ e t ) d t = − ∫ t 1 t 2 G m M r 3 r r ˙ d t = G M m r ( t 2 ) − G M m r ( t 1 ) . {\displaystyle W=-\int _{t_{1}}^{t_{2}}{\frac {GmM}{r^{3}}}(r\mathbf {e} _{r})\cdot \left({\dot {r}}\mathbf {e} _{r}+r{\dot {\theta }}\mathbf {e} _{t}\right)dt=-\int _{t_{1}}^{t_{2}}{\frac {GmM}{r^{3}}}r{\dot {r}}dt={\frac {GMm}{r(t_{2})}}-{\frac {GMm}{r(t_{1})}}.} This calculation uses the fact that d d t r − 1 = − r − 2 r ˙ = − r ˙ r 2 . {\displaystyle {\frac {d}{dt}}r^{-1}=-r^{-2}{\dot {r}}=-{\frac {\dot {r}}{r^{2}}}.} The function U = − G M m r , {\displaystyle U=-{\frac {GMm}{r}},} is the gravitational potential function, also known as gravitational potential energy. The negative sign follows the convention that work is gained from a loss of potential energy. === Work by a spring === Consider a spring that exerts a horizontal force F = (−kx, 0, 0) that is proportional to its deflection in the x direction independent of how a body moves. The work of this spring on a body moving along the space with the curve X(t) = (x(t), y(t), z(t)), is calculated using its velocity, v = (vx, vy, vz), to obtain W = ∫ 0 t F ⋅ v d t = − ∫ 0 t k x v x d t = − 1 2 k x 2 . {\displaystyle W=\int _{0}^{t}\mathbf {F} \cdot \mathbf {v} dt=-\int _{0}^{t}kxv_{x}dt=-{\frac {1}{2}}kx^{2}.} For convenience, consider contact with the spring occurs at t = 0, then the integral of the product of the distance x and the x-velocity, xvxdt, over time t is ⁠1/2⁠x2. The work is the product of the distance times the spring force, which is also dependent on distance; hence the x2 result. === Work by a gas === The work W {\displaystyle W} done by a body of gas on its surroundings is: W = ∫ a b P d V {\displaystyle W=\int _{a}^{b}P\,dV} where P is pressure, V is volume, and a and b are initial and final volumes. == Work–energy principle == The principle of work and kinetic energy (also known as the work–energy principle) states that the work done by all forces acting on a particle (the work of the resultant force) equals the change in the kinetic energy of the particle. That is, the work W done by the resultant force on a particle equals the change in the particle's kinetic energy E k {\displaystyle E_{\text{k}}} , W = Δ E k = 1 2 m v 2 2 − 1 2 m v 1 2 {\displaystyle W=\Delta E_{\text{k}}={\frac {1}{2}}mv_{2}^{2}-{\frac {1}{2}}mv_{1}^{2}} where v 1 {\displaystyle v_{1}} and v 2 {\displaystyle v_{2}} are the speeds of the particle before and after the work is done, and m is its mass. The derivation of the work–energy principle begins with Newton's second law of motion and the resultant force on a particle. Computation of the scalar product of the force with the velocity of the particle evaluates the instantaneous power added to the system. 
(Constraints define the direction of movement of the particle by ensuring there is no component of velocity in the direction of the constraint force. This also means the constraint forces do not add to the instantaneous power.) The time integral of this scalar equation yields work from the instantaneous power, and kinetic energy from the scalar product of acceleration with velocity. The fact that the work–energy principle eliminates the constraint forces underlies Lagrangian mechanics. This section focuses on the work–energy principle as it applies to particle dynamics. In more general systems work can change the potential energy of a mechanical device, the thermal energy in a thermal system, or the electrical energy in an electrical device. Work transfers energy from one place to another or one form to another. === Derivation for a particle moving along a straight line === In the case the resultant force F is constant in both magnitude and direction, and parallel to the velocity of the particle, the particle is moving with constant acceleration a along a straight line. The relation between the net force and the acceleration is given by the equation F = ma (Newton's second law), and the particle displacement s can be expressed by the equation s = v 2 2 − v 1 2 2 a {\displaystyle s={\frac {v_{2}^{2}-v_{1}^{2}}{2a}}} which follows from v 2 2 = v 1 2 + 2 a s {\displaystyle v_{2}^{2}=v_{1}^{2}+2as} (see Equations of motion). The work of the net force is calculated as the product of its magnitude and the particle displacement. Substituting the above equations, one obtains: W = F s = m a s = m a v 2 2 − v 1 2 2 a = 1 2 m v 2 2 − 1 2 m v 1 2 = Δ E k {\displaystyle W=Fs=mas=ma{\frac {v_{2}^{2}-v_{1}^{2}}{2a}}={\frac {1}{2}}mv_{2}^{2}-{\frac {1}{2}}mv_{1}^{2}=\Delta E_{\text{k}}} Other derivation: W = F s = m a s = m v 2 2 − v 1 2 2 s s = 1 2 m v 2 2 − 1 2 m v 1 2 = Δ E k {\displaystyle W=Fs=mas=m{\frac {v_{2}^{2}-v_{1}^{2}}{2s}}s={\frac {1}{2}}mv_{2}^{2}-{\frac {1}{2}}mv_{1}^{2}=\Delta E_{\text{k}}} In the general case of rectilinear motion, when the net force F is not constant in magnitude, but is constant in direction, and parallel to the velocity of the particle, the work must be integrated along the path of the particle: W = ∫ t 1 t 2 F ⋅ v d t = ∫ t 1 t 2 F v d t = ∫ t 1 t 2 m a v d t = m ∫ t 1 t 2 v d v d t d t = m ∫ v 1 v 2 v d v = 1 2 m ( v 2 2 − v 1 2 ) . {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot \mathbf {v} dt=\int _{t_{1}}^{t_{2}}F\,v\,dt=\int _{t_{1}}^{t_{2}}ma\,v\,dt=m\int _{t_{1}}^{t_{2}}v\,{\frac {dv}{dt}}\,dt=m\int _{v_{1}}^{v_{2}}v\,dv={\tfrac {1}{2}}m\left(v_{2}^{2}-v_{1}^{2}\right).} === General derivation of the work–energy principle for a particle === For any net force acting on a particle moving along any curvilinear path, it can be demonstrated that its work equals the change in the kinetic energy of the particle by a simple derivation analogous to the equation above. 
It is known as the work–energy principle: W = ∫ t 1 t 2 F ⋅ v d t = m ∫ t 1 t 2 a ⋅ v d t = m 2 ∫ t 1 t 2 d v 2 d t d t = m 2 ∫ v 1 2 v 2 2 d v 2 = m v 2 2 2 − m v 1 2 2 = Δ E k {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot \mathbf {v} dt=m\int _{t_{1}}^{t_{2}}\mathbf {a} \cdot \mathbf {v} dt={\frac {m}{2}}\int _{t_{1}}^{t_{2}}{\frac {dv^{2}}{dt}}\,dt={\frac {m}{2}}\int _{v_{1}^{2}}^{v_{2}^{2}}dv^{2}={\frac {mv_{2}^{2}}{2}}-{\frac {mv_{1}^{2}}{2}}=\Delta E_{\text{k}}} The identity a ⋅ v = 1 2 d v 2 d t {\textstyle \mathbf {a} \cdot \mathbf {v} ={\frac {1}{2}}{\frac {dv^{2}}{dt}}} requires some algebra. From the identity v 2 = v ⋅ v {\textstyle v^{2}=\mathbf {v} \cdot \mathbf {v} } and definition a = d v d t {\textstyle \mathbf {a} ={\frac {d\mathbf {v} }{dt}}} it follows d v 2 d t = d ( v ⋅ v ) d t = d v d t ⋅ v + v ⋅ d v d t = 2 d v d t ⋅ v = 2 a ⋅ v . {\displaystyle {\frac {dv^{2}}{dt}}={\frac {d(\mathbf {v} \cdot \mathbf {v} )}{dt}}={\frac {d\mathbf {v} }{dt}}\cdot \mathbf {v} +\mathbf {v} \cdot {\frac {d\mathbf {v} }{dt}}=2{\frac {d\mathbf {v} }{dt}}\cdot \mathbf {v} =2\mathbf {a} \cdot \mathbf {v} .} The remaining part of the above derivation is just simple calculus, same as in the preceding rectilinear case. === Derivation for a particle in constrained movement === In particle dynamics, a formula equating work applied to a system to its change in kinetic energy is obtained as a first integral of Newton's second law of motion. It is useful to notice that the resultant force used in Newton's laws can be separated into forces that are applied to the particle and forces imposed by constraints on the movement of the particle. Remarkably, the work of a constraint force is zero, therefore only the work of the applied forces need be considered in the work–energy principle. To see this, consider a particle P that follows the trajectory X(t) with a force F acting on it. Isolate the particle from its environment to expose constraint forces R, then Newton's Law takes the form F + R = m X ¨ , {\displaystyle \mathbf {F} +\mathbf {R} =m{\ddot {\mathbf {X} }},} where m is the mass of the particle. ==== Vector formulation ==== Note that n dots above a vector indicates its nth time derivative. The scalar product of each side of Newton's law with the velocity vector yields F ⋅ X ˙ = m X ¨ ⋅ X ˙ , {\displaystyle \mathbf {F} \cdot {\dot {\mathbf {X} }}=m{\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }},} because the constraint forces are perpendicular to the particle velocity. Integrate this equation along its trajectory from the point X(t1) to the point X(t2) to obtain ∫ t 1 t 2 F ⋅ X ˙ d t = m ∫ t 1 t 2 X ¨ ⋅ X ˙ d t . {\displaystyle \int _{t_{1}}^{t_{2}}\mathbf {F} \cdot {\dot {\mathbf {X} }}dt=m\int _{t_{1}}^{t_{2}}{\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}dt.} The left side of this equation is the work of the applied force as it acts on the particle along the trajectory from time t1 to time t2. This can also be written as W = ∫ t 1 t 2 F ⋅ X ˙ d t = ∫ X ( t 1 ) X ( t 2 ) F ⋅ d X . {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot {\dot {\mathbf {X} }}dt=\int _{\mathbf {X} (t_{1})}^{\mathbf {X} (t_{2})}\mathbf {F} \cdot d\mathbf {X} .} This integral is computed along the trajectory X(t) of the particle and is therefore path dependent. 
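A quick numerical check of the principle, with an assumed (made-up) net force: integrating F · v along a simulated trajectory reproduces the change in ½mv² to within the discretization error. This is only an illustrative sketch, not part of the derivation.

```python
import numpy as np

m = 2.0                                    # mass, kg (illustrative)
dt = 1.0e-4
steps = 20000                              # simulate 2 s of motion

x = np.zeros(3)
v = np.array([1.0, 0.0, 0.0])              # initial velocity, m/s

def net_force(x, v, t):
    # An arbitrary time- and velocity-dependent net force, made up for the test.
    return np.array([3.0 * np.cos(t), 0.5, 0.0]) - 0.2 * v

W = 0.0
ke_start = 0.5 * m * np.dot(v, v)
t = 0.0
for _ in range(steps):
    F = net_force(x, v, t)
    W += np.dot(F, v) * dt                 # accumulate the work integral of F . v dt
    v = v + (F / m) * dt                   # simple explicit update of the motion
    x = x + v * dt
    t += dt

ke_end = 0.5 * m * np.dot(v, v)
print(W, ke_end - ke_start)                # the two agree up to integration error
```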
The right side of the first integral of Newton's equations can be simplified using the following identity 1 2 d d t ( X ˙ ⋅ X ˙ ) = X ¨ ⋅ X ˙ , {\displaystyle {\frac {1}{2}}{\frac {d}{dt}}({\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }})={\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }},} (see product rule for derivation). Now it is integrated explicitly to obtain the change in kinetic energy, Δ K = m ∫ t 1 t 2 X ¨ ⋅ X ˙ d t = m 2 ∫ t 1 t 2 d d t ( X ˙ ⋅ X ˙ ) d t = m 2 X ˙ ⋅ X ˙ ( t 2 ) − m 2 X ˙ ⋅ X ˙ ( t 1 ) = 1 2 m Δ v 2 , {\displaystyle \Delta K=m\int _{t_{1}}^{t_{2}}{\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}dt={\frac {m}{2}}\int _{t_{1}}^{t_{2}}{\frac {d}{dt}}({\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }})dt={\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}(t_{2})-{\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}(t_{1})={\frac {1}{2}}m\Delta \mathbf {v} ^{2},} where the kinetic energy of the particle is defined by the scalar quantity, K = m 2 X ˙ ⋅ X ˙ = 1 2 m v 2 {\displaystyle K={\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}={\frac {1}{2}}m{\mathbf {v} ^{2}}} ==== Tangential and normal components ==== It is useful to resolve the velocity and acceleration vectors into tangential and normal components along the trajectory X(t), such that X ˙ = v T and X ¨ = v ˙ T + v 2 κ N , {\displaystyle {\dot {\mathbf {X} }}=v\mathbf {T} \quad {\text{and}}\quad {\ddot {\mathbf {X} }}={\dot {v}}\mathbf {T} +v^{2}\kappa \mathbf {N} ,} where v = | X ˙ | = X ˙ ⋅ X ˙ . {\displaystyle v=|{\dot {\mathbf {X} }}|={\sqrt {{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}}}.} Then, the scalar product of velocity with acceleration in Newton's second law takes the form Δ K = m ∫ t 1 t 2 v ˙ v d t = m 2 ∫ t 1 t 2 d d t v 2 d t = m 2 v 2 ( t 2 ) − m 2 v 2 ( t 1 ) , {\displaystyle \Delta K=m\int _{t_{1}}^{t_{2}}{\dot {v}}v\,dt={\frac {m}{2}}\int _{t_{1}}^{t_{2}}{\frac {d}{dt}}v^{2}\,dt={\frac {m}{2}}v^{2}(t_{2})-{\frac {m}{2}}v^{2}(t_{1}),} where the kinetic energy of the particle is defined by the scalar quantity, K = m 2 v 2 = m 2 X ˙ ⋅ X ˙ . {\displaystyle K={\frac {m}{2}}v^{2}={\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}.} The result is the work–energy principle for particle dynamics, W = Δ K . {\displaystyle W=\Delta K.} This derivation can be generalized to arbitrary rigid body systems. === Moving in a straight line (skid to a stop) === Consider the case of a vehicle moving along a straight horizontal trajectory under the action of a driving force and gravity that sum to F. The constraint forces between the vehicle and the road define R, and we have F + R = m X ¨ . {\displaystyle \mathbf {F} +\mathbf {R} =m{\ddot {\mathbf {X} }}.} For convenience let the trajectory be along the X-axis, so X = (d, 0) and the velocity is V = (v, 0), then R ⋅ V = 0, and F ⋅ V = Fxv, where Fx is the component of F along the X-axis, so F x v = m v ˙ v . {\displaystyle F_{x}v=m{\dot {v}}v.} Integration of both sides yields ∫ t 1 t 2 F x v d t = m 2 v 2 ( t 2 ) − m 2 v 2 ( t 1 ) . {\displaystyle \int _{t_{1}}^{t_{2}}F_{x}vdt={\frac {m}{2}}v^{2}(t_{2})-{\frac {m}{2}}v^{2}(t_{1}).} If Fx is constant along the trajectory, then the integral of velocity is distance, so F x ( d ( t 2 ) − d ( t 1 ) ) = m 2 v 2 ( t 2 ) − m 2 v 2 ( t 1 ) . {\displaystyle F_{x}(d(t_{2})-d(t_{1}))={\frac {m}{2}}v^{2}(t_{2})-{\frac {m}{2}}v^{2}(t_{1}).} As an example consider a car skidding to a stop, where k is the coefficient of friction and w is the weight of the car. 
Then the force along the trajectory is Fx = −kw. The velocity v of the car can be determined from the length s of the skid using the work–energy principle, k w s = w 2 g v 2 , or v = 2 k s g . {\displaystyle kws={\frac {w}{2g}}v^{2},\quad {\text{or}}\quad v={\sqrt {2ksg}}.} This formula uses the fact that the mass of the vehicle is m = w/g. === Coasting down an inclined surface (gravity racing) === Consider the case of a vehicle that starts at rest and coasts down an inclined surface (such as mountain road), the work–energy principle helps compute the minimum distance that the vehicle travels to reach a velocity V, of say 60 mph (88 fps). Rolling resistance and air drag will slow the vehicle down so the actual distance will be greater than if these forces are neglected. Let the trajectory of the vehicle following the road be X(t) which is a curve in three-dimensional space. The force acting on the vehicle that pushes it down the road is the constant force of gravity F = (0, 0, w), while the force of the road on the vehicle is the constraint force R. Newton's second law yields, F + R = m X ¨ . {\displaystyle \mathbf {F} +\mathbf {R} =m{\ddot {\mathbf {X} }}.} The scalar product of this equation with the velocity, V = (vx, vy, vz), yields w v z = m V ˙ V , {\displaystyle wv_{z}=m{\dot {V}}V,} where V is the magnitude of V. The constraint forces between the vehicle and the road cancel from this equation because R ⋅ V = 0, which means they do no work. Integrate both sides to obtain ∫ t 1 t 2 w v z d t = m 2 V 2 ( t 2 ) − m 2 V 2 ( t 1 ) . {\displaystyle \int _{t_{1}}^{t_{2}}wv_{z}dt={\frac {m}{2}}V^{2}(t_{2})-{\frac {m}{2}}V^{2}(t_{1}).} The weight force w is constant along the trajectory and the integral of the vertical velocity is the vertical distance, therefore, w Δ z = m 2 V 2 . {\displaystyle w\Delta z={\frac {m}{2}}V^{2}.} Recall that V(t1)=0. Notice that this result does not depend on the shape of the road followed by the vehicle. In order to determine the distance along the road assume the downgrade is 6%, which is a steep road. This means the altitude decreases 6 feet for every 100 feet traveled—for angles this small the sin and tan functions are approximately equal. Therefore, the distance s in feet down a 6% grade to reach the velocity V is at least s = Δ z 0.06 = 8.3 V 2 g , or s = 8.3 88 2 32.2 ≈ 2000 f t . {\displaystyle s={\frac {\Delta z}{0.06}}=8.3{\frac {V^{2}}{g}},\quad {\text{or}}\quad s=8.3{\frac {88^{2}}{32.2}}\approx 2000\mathrm {ft} .} This formula uses the fact that the weight of the vehicle is w = mg. == Work of forces acting on a rigid body == The work of forces acting at various points on a single rigid body can be calculated from the work of a resultant force and torque. To see this, let the forces F1, F2, ..., Fn act on the points X1, X2, ..., Xn in a rigid body. The trajectories of Xi, i = 1, ..., n are defined by the movement of the rigid body. This movement is given by the set of rotations [A(t)] and the trajectory d(t) of a reference point in the body. Let the coordinates xi i = 1, ..., n define these points in the moving rigid body's reference frame M, so that the trajectories traced in the fixed frame F are given by X i ( t ) = [ A ( t ) ] x i + d ( t ) i = 1 , … , n . 
{\displaystyle \mathbf {X} _{i}(t)=[A(t)]\mathbf {x} _{i}+\mathbf {d} (t)\quad i=1,\ldots ,n.} The velocity of the points Xi along their trajectories are V i = ω × ( X i − d ) + d ˙ , {\displaystyle \mathbf {V} _{i}={\boldsymbol {\omega }}\times (\mathbf {X} _{i}-\mathbf {d} )+{\dot {\mathbf {d} }},} where ω is the angular velocity vector obtained from the skew symmetric matrix [ Ω ] = A ˙ A T , {\displaystyle [\Omega ]={\dot {A}}A^{\mathsf {T}},} known as the angular velocity matrix. The small amount of work by the forces over the small displacements δri can be determined by approximating the displacement by δr = vδt so δ W = F 1 ⋅ V 1 δ t + F 2 ⋅ V 2 δ t + … + F n ⋅ V n δ t {\displaystyle \delta W=\mathbf {F} _{1}\cdot \mathbf {V} _{1}\delta t+\mathbf {F} _{2}\cdot \mathbf {V} _{2}\delta t+\ldots +\mathbf {F} _{n}\cdot \mathbf {V} _{n}\delta t} or δ W = ∑ i = 1 n F i ⋅ ( ω × ( X i − d ) + d ˙ ) δ t . {\displaystyle \delta W=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot ({\boldsymbol {\omega }}\times (\mathbf {X} _{i}-\mathbf {d} )+{\dot {\mathbf {d} }})\delta t.} This formula can be rewritten to obtain δ W = ( ∑ i = 1 n F i ) ⋅ d ˙ δ t + ( ∑ i = 1 n ( X i − d ) × F i ) ⋅ ω δ t = ( F ⋅ d ˙ + T ⋅ ω ) δ t , {\displaystyle \delta W=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\right)\cdot {\dot {\mathbf {d} }}\delta t+\left(\sum _{i=1}^{n}\left(\mathbf {X} _{i}-\mathbf {d} \right)\times \mathbf {F} _{i}\right)\cdot {\boldsymbol {\omega }}\delta t=\left(\mathbf {F} \cdot {\dot {\mathbf {d} }}+\mathbf {T} \cdot {\boldsymbol {\omega }}\right)\delta t,} where F and T are the resultant force and torque applied at the reference point d of the moving frame M in the rigid body. == References == == Bibliography == Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 0-534-40842-7. Tipler, Paul (1991). Physics for Scientists and Engineers: Mechanics (3rd ed., extended version ed.). W. H. Freeman. ISBN 0-87901-432-6. == External links == Work–energy principle
Wikipedia/Work_energy_theorem
Buoyancy, or upthrust, is the force exerted by a fluid opposing the weight of a partially or fully immersed object (which may also be a parcel of fluid). In a column of fluid, pressure increases with depth as a result of the weight of the overlying fluid. Thus, the pressure at the bottom of a column of fluid is greater than at the top of the column. Similarly, the pressure at the bottom of an object submerged in a fluid is greater than at the top of the object. The pressure difference results in a net upward force on the object. The magnitude of the force is proportional to the pressure difference, and (as explained by Archimedes' principle) is equivalent to the weight of the fluid that would otherwise occupy the submerged volume of the object, i.e. the displaced fluid. For this reason, an object with average density greater than the surrounding fluid tends to sink because its weight is greater than the weight of the fluid it displaces. If the object is less dense, buoyancy can keep the object afloat. This can occur only in a non-inertial reference frame, which either has a gravitational field or is accelerating due to a force other than gravity defining a "downward" direction. Buoyancy also applies to fluid mixtures, and is the most common driving force of convection currents. In these cases, the mathematical modelling is altered to apply to continua, but the principles remain the same. Examples of buoyancy driven flows include the spontaneous separation of air and water or oil and water. Buoyancy is a function of the force of gravity or other source of acceleration on objects of different densities, and for that reason is considered an apparent force, in the same way that centrifugal force is an apparent force as a function of inertia. Buoyancy can exist without gravity in the presence of an inertial reference frame, but without an apparent "downward" direction of gravity or other source of acceleration, buoyancy does not exist. The center of buoyancy of an object is the center of gravity of the displaced volume of fluid. == Archimedes' principle == Archimedes' principle is named after Archimedes of Syracuse, who first discovered this law in 212 BC. For objects, floating and sunken, and in gases as well as liquids (i.e. a fluid), Archimedes' principle may be stated thus in terms of forces: Any object, wholly or partially immersed in a fluid, is buoyed up by a force equal to the weight of the fluid displaced by the object —with the clarifications that for a sunken object the volume of displaced fluid is the volume of the object, and for a floating object on a liquid, the weight of the displaced liquid is the weight of the object. Mathematically, F B = − F g = − ρ V g {\displaystyle \mathbf {F_{B}} =-\mathbf {F_{g}} =-\rho V{\textbf {g}}} where g {\displaystyle \mathbf {g} } is the local gravitational acceleration, ρ {\displaystyle \rho } the density of the fluid, and V {\displaystyle V} the displaced volume; the negative sign arises since the buoyant force acts in the opposite direction as the object's weight. Archimedes' principle does not consider the surface tension (capillarity) acting on the body, but this additional force modifies only the amount of fluid displaced and the spatial distribution of the displacement, so the principle F B = − F g {\displaystyle \mathbf {F_{B}} =-\mathbf {F_{g}} } remains valid. Note that the density of an object is defined as its mass per unit volume: 
ρ = M V {\displaystyle \rho ={\frac {M}{V}}} If an object is fully submerged and we assume that the net force acting upon the object in the vertical direction is zero. If fully submerged the displaced volume is simply the volume of the object. F net = 0 = F B − F g = ρ fluid V g − ρ obj V g ⟹ ρ fluid = ρ obj {\displaystyle F_{\text{net}}=0=F_{B}-F_{g}=\rho _{\text{fluid}}Vg-\rho _{\text{obj}}Vg\implies \rho _{\text{fluid}}=\rho _{\text{obj}}} This implies that objects of greater density than the fluid will sink, and objects of lesser density will float. Example: If you drop wood into water, buoyancy will keep it afloat. == Applications == A common application Archimedes' principle is of hydrostatic weighing. Suppose we can measure the tension of a hanging mass by a force probe. Assuming Archimedes' principle, when the mass is submerged in the fluid and the net force is zero. F net = 0 = F B + F T − F g = ρ fluid V g + F T − m g {\displaystyle F_{\text{net}}=0=F_{B}+F_{T}-F_{g}=\rho _{\text{fluid}}Vg+F_{T}-mg} ⟹ V = m g − F T ρ fluid g {\displaystyle \implies V={\frac {mg-F_{T}}{\rho _{\text{fluid}}g}}} Recall that the definition of density states. ρ obj = m V = m g ρ fluid m g − F T {\displaystyle \rho _{\text{obj}}={\frac {m}{V}}={\frac {mg\rho _{\text{fluid}}}{mg-F_{T}}}} Thus, the density of the immersed object relative to the density of the fluid can easily be calculated without measuring any volumes. Below we can denote the ratio of densities. ρ obj ρ fluid = F g F g − F app {\displaystyle {\frac {\rho _{\text{obj}}}{\rho _{\text{fluid}}}}={\frac {F_{g}}{F_{g}-F_{\text{app}}}}\,} This formula is also used for example in describing the measuring principle of a dasymeter. == Forces and equilibrium == The equation to calculate the pressure inside a fluid in equilibrium is: f + div σ = 0 {\displaystyle \mathbf {f} +\operatorname {div} \,\sigma =0} where f is the force density exerted by some outer field on the fluid, and σ is the Cauchy stress tensor. In this case the stress tensor is proportional to the identity tensor: σ i j = − p δ i j . {\displaystyle \sigma _{ij}=-p\delta _{ij}.\,} Here δij is the Kronecker delta. Using this the above equation becomes: f = ∇ p . {\displaystyle \mathbf {f} =\nabla p.\,} Assuming the outer force field is conservative, that is it can be written as the negative gradient of some scalar valued function: f = − ∇ Φ . {\displaystyle \mathbf {f} =-\nabla \Phi .\,} Then: ∇ ( p + Φ ) = 0 ⟹ p + Φ = constant . {\displaystyle \nabla (p+\Phi )=0\Longrightarrow p+\Phi ={\text{constant}}.\,} Therefore, the shape of the open surface of a fluid equals the equipotential plane of the applied outer conservative force field. Let the z-axis point downward. In this case the field is gravity, so Φ = −ρfgz where g is the gravitational acceleration, ρf is the mass density of the fluid. Taking the pressure as zero at the surface, where z is zero, the constant will be zero, so the pressure inside the fluid, when it is subject to gravity, is p = ρ f g z . {\displaystyle p=\rho _{f}gz.\,} So pressure increases with depth below the surface of a liquid, as z denotes the distance from the surface of the liquid into it. Any object with a non-zero vertical depth will have different pressures on its top and bottom, with the pressure on the bottom being greater. This difference in pressure causes the upward buoyancy force. The buoyancy force exerted on a body can now be calculated easily, since the internal pressure of the fluid is known. 
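Before the surface-integral derivation that follows, a quick numerical illustration of how the depth dependence of pressure produces the upward force, using a small cube (the values are made up; only the top and bottom faces contribute a net vertical force, since the side forces cancel in pairs):

```python
# Hydrostatic pressure p = rho_f * g * z (z measured downward from the surface)
# acting on a submerged cube: the bottom face sees higher pressure than the top,
# and the net upward force equals rho_f * g * V. Illustrative values only.
rho_f = 1000.0      # kg/m^3, fresh water
g = 9.8             # m/s^2
a = 0.1             # cube edge length, m
z_top = 2.0         # depth of the top face, m

p_top = rho_f * g * z_top
p_bottom = rho_f * g * (z_top + a)

F_up = (p_bottom - p_top) * a**2     # net upward force from the two faces, N
print(F_up, rho_f * g * a**3)        # both equal rho_f * g * V = 9.8 N
```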
The force exerted on the body can be calculated by integrating the stress tensor over the surface of the body which is in contact with the fluid: B = ∮ σ d A . {\displaystyle \mathbf {B} =\oint \sigma \,d\mathbf {A} .} The surface integral can be transformed into a volume integral with the help of the Gauss theorem: B = ∫ div ⁡ σ d V = − ∫ f d V = − ρ f g ∫ d V = − ρ f g V {\displaystyle \mathbf {B} =\int \operatorname {div} \sigma \,dV=-\int \mathbf {f} \,dV=-\rho _{f}\mathbf {g} \int \,dV=-\rho _{f}\mathbf {g} V} where V is the measure of the volume in contact with the fluid, that is the volume of the submerged part of the body, since the fluid does not exert force on the part of the body which is outside of it. The magnitude of buoyancy force may be appreciated a bit more from the following argument. Consider any object of arbitrary shape and volume V surrounded by a liquid. The force the liquid exerts on an object within the liquid is equal to the weight of the liquid with a volume equal to that of the object. This force is applied in a direction opposite to gravitational force, that is of magnitude: B = ρ f V disp g , {\displaystyle B=\rho _{f}V_{\text{disp}}\,g,\,} where ρf is the density of the fluid, Vdisp is the volume of the displaced body of liquid, and g is the gravitational acceleration at the location in question. If this volume of liquid is replaced by a solid body of exactly the same shape, the force the liquid exerts on it must be exactly the same as above. In other words, the "buoyancy force" on a submerged body is directed in the opposite direction to gravity and is equal in magnitude to B = ρ f V g . {\displaystyle B=\rho _{f}Vg.\,} Though the above derivation of Archimedes principle is correct, a recent paper by the Brazilian physicist Fabio M. S. Lima brings a more general approach for the evaluation of the buoyant force exerted by any fluid (even non-homogeneous) on a body with arbitrary shape. Interestingly, this method leads to the prediction that the buoyant force exerted on a rectangular block touching the bottom of a container points downward! Indeed, this downward buoyant force has been confirmed experimentally. The net force on the object must be zero if it is to be a situation of fluid statics such that Archimedes principle is applicable, and is thus the sum of the buoyancy force and the object's weight F net = 0 = m g − ρ f V disp g {\displaystyle F_{\text{net}}=0=mg-\rho _{f}V_{\text{disp}}g\,} If the buoyancy of an (unrestrained and unpowered) object exceeds its weight, it tends to rise. An object whose weight exceeds its buoyancy tends to sink. Calculation of the upwards force on a submerged object during its accelerating period cannot be done by the Archimedes principle alone; it is necessary to consider dynamics of an object involving buoyancy. Once it fully sinks to the floor of the fluid or rises to the surface and settles, Archimedes principle can be applied alone. For a floating object, only the submerged volume displaces water. For a sunken object, the entire volume displaces water, and there will be an additional force of reaction from the solid floor. In order for Archimedes' principle to be used alone, the object in question must be in equilibrium (the sum of the forces on the object must be zero), therefore; m g = ρ f V disp g , {\displaystyle mg=\rho _{f}V_{\text{disp}}g,\,} and therefore m = ρ f V disp . 
{\displaystyle m=\rho _{f}V_{\text{disp}}.\,} showing that the depth to which a floating object will sink, and the volume of fluid it will displace, is independent of the gravitational field regardless of geographic location. (Note: If the fluid in question is seawater, it will not have the same density (ρ) at every location, since the density depends on temperature and salinity. For this reason, a ship may display a Plimsoll line.) It can be the case that forces other than just buoyancy and gravity come into play. This is the case if the object is restrained or if the object sinks to the solid floor. An object which tends to float requires a tension restraint force T in order to remain fully submerged. An object which tends to sink will eventually have a normal force of constraint N exerted upon it by the solid floor. The constraint force can be tension in a spring scale measuring its weight in the fluid, and is how apparent weight is defined. If the object would otherwise float, the tension to restrain it fully submerged is: T = ρ f V g − m g . {\displaystyle T=\rho _{f}Vg-mg.\,} When a sinking object settles on the solid floor, it experiences a normal force of: N = m g − ρ f V g . {\displaystyle N=mg-\rho _{f}Vg.\,} Another possible formula for calculating buoyancy of an object is by finding the apparent weight of that particular object in the air (calculated in Newtons), and apparent weight of that object in the water (in Newtons). To find the force of buoyancy acting on the object when in air, using this particular information, this formula applies: Buoyancy force = weight of object in empty space − weight of object immersed in fluid The final result would be measured in Newtons. Air's density is very small compared to most solids and liquids. For this reason, the weight of an object in air is approximately the same as its true weight in a vacuum. The buoyancy of air is neglected for most objects during a measurement in air because the error is usually insignificant (typically less than 0.1% except for objects of very low average density such as a balloon or light foam). === Simplified model === A simplified explanation for the integration of the pressure over the contact area may be stated as follows: Consider a cube immersed in a fluid with the upper surface horizontal. The sides are identical in area, and have the same depth distribution, therefore they also have the same pressure distribution, and consequently the same total force resulting from hydrostatic pressure, exerted perpendicular to the plane of the surface of each side. There are two pairs of opposing sides, therefore the resultant horizontal forces balance in both orthogonal directions, and the resultant force is zero. The upward force on the cube is the pressure on the bottom surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal bottom surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the bottom surface. Similarly, the downward force on the cube is the pressure on the top surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal top surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the top surface. 
As this is a cube, the top and bottom surfaces are identical in shape and area, and the pressure difference between the top and bottom of the cube is directly proportional to the depth difference, and the resultant force difference is exactly equal to the weight of the fluid that would occupy the volume of the cube in its absence. This means that the resultant upward force on the cube is equal to the weight of the fluid that would fit into the volume of the cube, and the downward force on the cube is its weight, in the absence of external forces. This analogy is valid for variations in the size of the cube. If two cubes are placed alongside each other with a face of each in contact, the pressures and resultant forces on the sides or parts thereof in contact are balanced and may be disregarded, as the contact surfaces are equal in shape, size and pressure distribution, therefore the buoyancy of two cubes in contact is the sum of the buoyancies of each cube. This analogy can be extended to an arbitrary number of cubes. An object of any shape can be approximated as a group of cubes in contact with each other, and as the size of the cube is decreased, the precision of the approximation increases. The limiting case for infinitely small cubes is the exact equivalence. Angled surfaces do not nullify the analogy as the resultant force can be split into orthogonal components and each dealt with in the same way. === Static stability === A floating object is stable if it tends to restore itself to an equilibrium position after a small displacement. For example, floating objects will generally have vertical stability, as if the object is pushed down slightly, this will create a greater buoyancy force, which, unbalanced by the weight force, will push the object back up. Rotational stability is of great importance to floating vessels. Given a small angular displacement, the vessel may return to its original position (stable), move away from its original position (unstable), or remain where it is (neutral). Rotational stability depends on the relative lines of action of forces on an object. The upward buoyancy force on an object acts through the center of buoyancy, being the centroid of the displaced volume of fluid. The weight force on the object acts through its center of gravity. A buoyant object will be stable if the center of gravity is beneath the center of buoyancy because any angular displacement will then produce a 'righting moment'. The stability of a buoyant object at the surface is more complex, and it may remain stable even if the center of gravity is above the center of buoyancy, provided that when disturbed from the equilibrium position, the center of buoyancy moves further to the same side that the center of gravity moves, thus providing a positive righting moment. If this occurs, the floating object is said to have a positive metacentric height. This situation is typically valid for a range of heel angles, beyond which the center of buoyancy does not move enough to provide a positive righting moment, and the object becomes unstable. It is possible to shift from positive to negative or vice versa more than once during a heeling disturbance, and many shapes are stable in more than one position. == Fluids and objects == As a submarine expels water from its buoyancy tanks, it rises because its volume is constant (the volume of water it displaces if it is fully submerged) while its mass is decreased. 
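These relations are easy to sketch numerically. The snippet below (Python, with made-up numbers) applies B = ρf V g together with the tension and normal-force expressions and the floating condition m g = ρf Vdisp g from the preceding sections; none of the values come from the article.

```python
# Buoyancy bookkeeping for a fully submerged object, using B = rho_fluid * V * g.
g = 9.8                 # m/s^2
rho_fluid = 1000.0      # kg/m^3, fresh water
V = 0.02                # submerged volume, m^3
m = 15.0                # object mass, kg

B = rho_fluid * V * g   # buoyant force, N (196 N)
W = m * g               # weight, N (147 N)

if B > W:
    # The object tends to float; tension needed to hold it fully submerged.
    T = B - W
    print(f"floats; holding tension T = {T:.1f} N")
else:
    # The object tends to sink; normal force from the floor once it settles.
    N = W - B
    print(f"sinks; normal force N = {N:.1f} N")

# For a freely floating object in equilibrium, m g = rho_fluid * V_disp * g, so
# the submerged fraction equals the density ratio rho_obj / rho_fluid.
rho_obj = m / V
print(f"submerged fraction if floating freely: {rho_obj / rho_fluid:.2f}")
```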
=== Compressible objects === As a floating object rises or falls, the forces external to it change and, as all objects are compressible to some extent or another, so does the object's volume. Buoyancy depends on volume and so an object's buoyancy reduces if it is compressed and increases if it expands. If an object at equilibrium has a compressibility less than that of the surrounding fluid, the object's equilibrium is stable and it remains at rest. If, however, its compressibility is greater, its equilibrium is then unstable, and it rises and expands on the slightest upward perturbation, or falls and compresses on the slightest downward perturbation. ==== Submarines ==== Submarines rise and dive by filling large ballast tanks with seawater. To dive, the tanks are opened to allow air to exhaust out the top of the tanks, while the water flows in from the bottom. Once the weight has been balanced so the overall density of the submarine is equal to the water around it, it has neutral buoyancy and will remain at that depth. Most military submarines operate with a slightly negative buoyancy and maintain depth by using the "lift" of the stabilizers with forward motion. ==== Balloons ==== The height to which a balloon rises tends to be stable. As a balloon rises it tends to increase in volume with reducing atmospheric pressure, but the balloon itself does not expand as much as the air on which it rides. The average density of the balloon decreases less than that of the surrounding air. The weight of the displaced air is reduced. A rising balloon stops rising when it and the displaced air are equal in weight. Similarly, a sinking balloon tends to stop sinking. ==== Divers ==== Underwater divers are a common example of the problem of unstable buoyancy due to compressibility. The diver typically wears an exposure suit which relies on gas-filled spaces for insulation, and may also wear a buoyancy compensator, which is a variable volume buoyancy bag which is inflated to increase buoyancy and deflated to decrease buoyancy. The desired condition is usually neutral buoyancy when the diver is swimming in mid-water, and this condition is unstable, so the diver is constantly making fine adjustments by control of lung volume, and has to adjust the contents of the buoyancy compensator if the depth varies. == Density == If the weight of an object is less than the weight of the displaced fluid when fully submerged, then the object has an average density that is less than the fluid and when fully submerged will experience a buoyancy force greater than its own weight. If the fluid has a surface, such as water in a lake or the sea, the object will float and settle at a level where it displaces the same weight of fluid as the weight of the object. If the object is immersed in the fluid, such as a submerged submarine or air in a balloon, it will tend to rise. If the object has exactly the same density as the fluid, then its buoyancy equals its weight. It will remain submerged in the fluid, but it will neither sink nor float, although a disturbance in either direction will cause it to drift away from its position. An object with a higher average density than the fluid will never experience more buoyancy than weight and it will sink. A ship will float even though it may be made of steel (which is much denser than water), because it encloses a volume of air (which is much less dense than water), and the resulting shape has an average density less than that of the water. 
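The density comparison above is what hydrostatic weighing, described in the Applications section, measures in practice. A minimal sketch with made-up scale readings, using the relation ρobj/ρfluid = Fg/(Fg − Fapp):

```python
# Hydrostatic weighing: infer an object's density from its weight in air and its
# apparent weight when submerged. The readings below are illustrative values.
g = 9.8                      # m/s^2
rho_fluid = 1000.0           # kg/m^3, water

F_g = 7.84                   # scale reading in air, N (about a 0.8 kg object)
F_apparent = 4.90            # scale reading with the object submerged, N

rho_obj = rho_fluid * F_g / (F_g - F_apparent)
V = (F_g - F_apparent) / (rho_fluid * g)    # displaced (object) volume, m^3

print(f"object density ~ {rho_obj:.0f} kg/m^3, volume ~ {V * 1e6:.0f} cm^3")
```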
== See also == == References == == External links == Falling in Water W. H. Besant (1889) Elementary Hydrostatics from Google Books. NASA's definition of buoyancy
Wikipedia/Buoyant_force
The moment of inertia, otherwise known as the mass moment of inertia, angular/rotational mass, second moment of mass, or most accurately, rotational inertia, of a rigid body is defined relatively to a rotational axis. It is the ratio between the torque applied and the resulting angular acceleration about that axis.: 279 : 261  It plays the same role in rotational motion as mass does in linear motion. A body's moment of inertia about a particular axis depends both on the mass and its distribution relative to the axis, increasing with mass and distance from the axis. It is an extensive (additive) property: for a point mass the moment of inertia is simply the mass times the square of the perpendicular distance to the axis of rotation. The moment of inertia of a rigid composite system is the sum of the moments of inertia of its component subsystems (all taken about the same axis). Its simplest definition is the second moment of mass with respect to distance from an axis. For bodies constrained to rotate in a plane, only their moment of inertia about an axis perpendicular to the plane, a scalar value, matters. For bodies free to rotate in three dimensions, their moments can be described by a symmetric 3-by-3 matrix, with a set of mutually perpendicular principal axes for which this matrix is diagonal and torques around the axes act independently of each other. == Introduction == When a body is free to rotate around an axis, torque must be applied to change its angular momentum. The amount of torque needed to cause any given angular acceleration (the rate of change in angular velocity) is proportional to the moment of inertia of the body. Moments of inertia may be expressed in units of kilogram metre squared (kg·m2) in SI units and pound-foot-second squared (lbf·ft·s2) in imperial or US units. The moment of inertia plays the role in rotational kinetics that mass (inertia) plays in linear kinetics—both characterize the resistance of a body to changes in its motion. The moment of inertia depends on how mass is distributed around an axis of rotation, and will vary depending on the chosen axis. For a point-like mass, the moment of inertia about some axis is given by m r 2 {\displaystyle mr^{2}} , where r {\displaystyle r} is the distance of the point from the axis, and m {\displaystyle m} is the mass. For an extended rigid body, the moment of inertia is just the sum of all the small pieces of mass multiplied by the square of their distances from the axis in rotation. For an extended body of a regular shape and uniform density, this summation sometimes produces a simple expression that depends on the dimensions, shape and total mass of the object. In 1673, Christiaan Huygens introduced this parameter in his study of the oscillation of a body hanging from a pivot, known as a compound pendulum. The term moment of inertia ("momentum inertiae" in Latin) was introduced by Leonhard Euler in his book Theoria motus corporum solidorum seu rigidorum in 1765, and it is incorporated into Euler's second law. The natural frequency of oscillation of a compound pendulum is obtained from the ratio of the torque imposed by gravity on the mass of the pendulum to the resistance to acceleration defined by the moment of inertia. Comparison of this natural frequency to that of a simple pendulum consisting of a single point of mass provides a mathematical formulation for moment of inertia of an extended body. 
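The additive, second-moment character of the definition is easy to compute directly for point masses. A minimal sketch (Python with NumPy; the masses, positions and axis are arbitrary illustrative values, not taken from the article):

```python
import numpy as np

# Moment of inertia of a collection of point masses about a given axis,
# I = sum of m_i * r_i^2, where r_i is the perpendicular distance to the axis.
masses = np.array([1.0, 2.0, 0.5])                    # kg
positions = np.array([[0.3, 0.0, 0.1],
                      [0.0, 0.4, -0.2],
                      [0.2, 0.2, 0.5]])               # m

axis_point = np.zeros(3)                              # a point on the axis
axis_dir = np.array([0.0, 0.0, 1.0])                  # unit vector along the axis

rel = positions - axis_point
# Perpendicular distance: remove the component of rel along the axis direction.
perp = rel - np.outer(rel @ axis_dir, axis_dir)
r2 = np.sum(perp**2, axis=1)

I = np.sum(masses * r2)
print(f"I = {I:.4f} kg m^2")    # 0.4500 for these values, about the z axis
```

Because the sum is over individual masses, the moment of inertia of a composite body about a common axis is simply the sum of the moments of its parts, as stated above.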
The moment of inertia also appears in momentum, kinetic energy, and in Newton's laws of motion for a rigid body as a physical parameter that combines its shape and mass. There is an interesting difference in the way moment of inertia appears in planar and spatial movement. Planar movement has a single scalar that defines the moment of inertia, while for spatial movement the same calculations yield a 3 × 3 matrix of moments of inertia, called the inertia matrix or inertia tensor. The moment of inertia of a rotating flywheel is used in a machine to resist variations in applied torque to smooth its rotational output. The moment of inertia of an airplane about its longitudinal, horizontal and vertical axes determine how steering forces on the control surfaces of its wings, elevators and rudder(s) affect the plane's motions in roll, pitch and yaw. == Definition == The moment of inertia is defined as the product of mass of section and the square of the distance between the reference axis and the centroid of the section. The moment of inertia I is also defined as the ratio of the net angular momentum L of a system to its angular velocity ω around a principal axis, that is I = L ω . {\displaystyle I={\frac {L}{\omega }}.} If the angular momentum of a system is constant, then as the moment of inertia gets smaller, the angular velocity must increase. This occurs when spinning figure skaters pull in their outstretched arms or divers curl their bodies into a tuck position during a dive, to spin faster. If the shape of the body does not change, then its moment of inertia appears in Newton's law of motion as the ratio of an applied torque τ on a body to the angular acceleration α around a principal axis, that is: 279 : 261, eq.9-19  τ = I α . {\displaystyle \tau =I\alpha .} For a simple pendulum, this definition yields a formula for the moment of inertia I in terms of the mass m of the pendulum and its distance r from the pivot point as, I = m r 2 . {\displaystyle I=mr^{2}.} Thus, the moment of inertia of the pendulum depends on both the mass m of a body and its geometry, or shape, as defined by the distance r to the axis of rotation. This simple formula generalizes to define moment of inertia for an arbitrarily shaped body as the sum of all the elemental point masses dm each multiplied by the square of its perpendicular distance r to an axis k. An arbitrary object's moment of inertia thus depends on the spatial distribution of its mass. In general, given an object of mass m, an effective radius k can be defined, dependent on a particular axis of rotation, with such a value that its moment of inertia around the axis is I = m k 2 , {\displaystyle I=mk^{2},} where k is known as the radius of gyration around the axis. == Examples == === Simple pendulum === Mathematically, the moment of inertia of a simple pendulum is the ratio of the torque due to gravity about the pivot of a pendulum to its angular acceleration about that pivot point. For a simple pendulum, this is found to be the product of the mass of the particle m {\displaystyle m} with the square of its distance r {\displaystyle r} to the pivot, that is I = m r 2 . {\displaystyle I=mr^{2}.} This can be shown as follows: The force of gravity on the mass of a simple pendulum generates a torque τ = r × F {\displaystyle {\boldsymbol {\tau }}=\mathbf {r} \times \mathbf {F} } around the axis perpendicular to the plane of the pendulum movement. 
Here r {\displaystyle \mathbf {r} } is the distance vector from the torque axis to the pendulum center of mass, and F {\displaystyle \mathbf {F} } is the net force on the mass. Associated with this torque is an angular acceleration, α {\displaystyle {\boldsymbol {\alpha }}} , of the string and mass around this axis. Since the mass is constrained to a circle the tangential acceleration of the mass is a = α × r {\displaystyle \mathbf {a} ={\boldsymbol {\alpha }}\times \mathbf {r} } . Since F = m a {\displaystyle \mathbf {F} =m\mathbf {a} } the torque equation becomes: τ = r × F = r × ( m α × r ) = m ( ( r ⋅ r ) α − ( r ⋅ α ) r ) = m r 2 α = I α k ^ , {\displaystyle {\begin{aligned}{\boldsymbol {\tau }}&=\mathbf {r} \times \mathbf {F} =\mathbf {r} \times (m{\boldsymbol {\alpha }}\times \mathbf {r} )\\&=m\left(\left(\mathbf {r} \cdot \mathbf {r} \right){\boldsymbol {\alpha }}-\left(\mathbf {r} \cdot {\boldsymbol {\alpha }}\right)\mathbf {r} \right)\\&=mr^{2}{\boldsymbol {\alpha }}=I\alpha \mathbf {\hat {k}} ,\end{aligned}}} where k ^ {\displaystyle \mathbf {\hat {k}} } is a unit vector perpendicular to the plane of the pendulum. (The second to last step uses the vector triple product expansion with the perpendicularity of α {\displaystyle {\boldsymbol {\alpha }}} and r {\displaystyle \mathbf {r} } .) The quantity I = m r 2 {\displaystyle I=mr^{2}} is the moment of inertia of this single mass around the pivot point. The quantity I = m r 2 {\displaystyle I=mr^{2}} also appears in the angular momentum of a simple pendulum, which is calculated from the velocity v = ω × r {\displaystyle \mathbf {v} ={\boldsymbol {\omega }}\times \mathbf {r} } of the pendulum mass around the pivot, where ω {\displaystyle {\boldsymbol {\omega }}} is the angular velocity of the mass about the pivot point. This angular momentum is given by L = r × p = r × ( m ω × r ) = m ( ( r ⋅ r ) ω − ( r ⋅ ω ) r ) = m r 2 ω = I ω k ^ , {\displaystyle {\begin{aligned}\mathbf {L} &=\mathbf {r} \times \mathbf {p} =\mathbf {r} \times \left(m{\boldsymbol {\omega }}\times \mathbf {r} \right)\\&=m\left(\left(\mathbf {r} \cdot \mathbf {r} \right){\boldsymbol {\omega }}-\left(\mathbf {r} \cdot {\boldsymbol {\omega }}\right)\mathbf {r} \right)\\&=mr^{2}{\boldsymbol {\omega }}=I\omega \mathbf {\hat {k}} ,\end{aligned}}} using a similar derivation to the previous equation. Similarly, the kinetic energy of the pendulum mass is defined by the velocity of the pendulum around the pivot to yield E K = 1 2 m v ⋅ v = 1 2 ( m r 2 ) ω 2 = 1 2 I ω 2 . {\displaystyle E_{\text{K}}={\frac {1}{2}}m\mathbf {v} \cdot \mathbf {v} ={\frac {1}{2}}\left(mr^{2}\right)\omega ^{2}={\frac {1}{2}}I\omega ^{2}.} This shows that the quantity I = m r 2 {\displaystyle I=mr^{2}} is how mass combines with the shape of a body to define rotational inertia. The moment of inertia of an arbitrarily shaped body is the sum of the values m r 2 {\displaystyle mr^{2}} for all of the elements of mass in the body. === Compound pendulums === A compound pendulum is a body formed from an assembly of particles of continuous shape that rotates rigidly around a pivot. 
Its moment of inertia is the sum of the moments of inertia of each of the particles that it is composed of.: 395–396 : 51–53  The natural frequency ( ω n {\displaystyle \omega _{\text{n}}} ) of a compound pendulum depends on its moment of inertia, I P {\displaystyle I_{P}} , ω n = m g r I P , {\displaystyle \omega _{\text{n}}={\sqrt {\frac {mgr}{I_{P}}}},} where m {\displaystyle m} is the mass of the object, g {\displaystyle g} is local acceleration of gravity, and r {\displaystyle r} is the distance from the pivot point to the center of mass of the object. Measuring this frequency of oscillation over small angular displacements provides an effective way of measuring moment of inertia of a body.: 516–517  Thus, to determine the moment of inertia of the body, simply suspend it from a convenient pivot point P {\displaystyle P} so that it swings freely in a plane perpendicular to the direction of the desired moment of inertia, then measure its natural frequency or period of oscillation ( t {\displaystyle t} ), to obtain I P = m g r ω n 2 = m g r t 2 4 π 2 , {\displaystyle I_{P}={\frac {mgr}{\omega _{\text{n}}^{2}}}={\frac {mgrt^{2}}{4\pi ^{2}}},} where t {\displaystyle t} is the period (duration) of oscillation (usually averaged over multiple periods). ==== Center of oscillation ==== A simple pendulum that has the same natural frequency as a compound pendulum defines the length L {\displaystyle L} from the pivot to a point called the center of oscillation of the compound pendulum. This point also corresponds to the center of percussion. The length L {\displaystyle L} is determined from the formula, ω n = g L = m g r I P , {\displaystyle \omega _{\text{n}}={\sqrt {\frac {g}{L}}}={\sqrt {\frac {mgr}{I_{P}}}},} or L = g ω n 2 = I P m r . {\displaystyle L={\frac {g}{\omega _{\text{n}}^{2}}}={\frac {I_{P}}{mr}}.} The seconds pendulum, which provides the "tick" and "tock" of a grandfather clock, takes one second to swing from side-to-side. This is a period of two seconds, or a natural frequency of π r a d / s {\displaystyle \pi \ \mathrm {rad/s} } for the pendulum. In this case, the distance to the center of oscillation, L {\displaystyle L} , can be computed to be L = g ω n 2 ≈ 9.81 m / s 2 ( 3.14 r a d / s ) 2 ≈ 0.99 m . {\displaystyle L={\frac {g}{\omega _{\text{n}}^{2}}}\approx {\frac {9.81\ \mathrm {m/s^{2}} }{(3.14\ \mathrm {rad/s} )^{2}}}\approx 0.99\ \mathrm {m} .} Notice that the distance to the center of oscillation of the seconds pendulum must be adjusted to accommodate different values for the local acceleration of gravity. Kater's pendulum is a compound pendulum that uses this property to measure the local acceleration of gravity, and is called a gravimeter. == Measuring moment of inertia == The moment of inertia of a complex system such as a vehicle or airplane around its vertical axis can be measured by suspending the system from three points to form a trifilar pendulum. A trifilar pendulum is a platform supported by three wires designed to oscillate in torsion around its vertical centroidal axis. The period of oscillation of the trifilar pendulum yields the moment of inertia of the system. == Moment of inertia of area == Moment of inertia of area is also known as the second moment of area and its physical meaning is completely different from the mass moment of inertia. These calculations are commonly used in civil engineering for structural design of beams and columns. 
Cross-sectional areas calculated for vertical moment of the x-axis I x x {\displaystyle I_{xx}} and horizontal moment of the y-axis I y y {\displaystyle I_{yy}} . Height (h) and breadth (b) are the linear measures, except for circles, which are effectively half-breadth derived, r {\displaystyle r} === Sectional areas moment calculated thus === Source: Square: I x x = I y y = b 4 12 {\displaystyle I_{xx}=I_{yy}={\frac {b^{4}}{12}}} Rectangular: I x x = b h 3 12 {\displaystyle I_{xx}={\frac {bh^{3}}{12}}} and; I y y = h b 3 12 {\displaystyle I_{yy}={\frac {hb^{3}}{12}}} Triangular: I x x = b h 3 36 {\displaystyle I_{xx}={\frac {bh^{3}}{36}}} Circular: I x x = I y y = 1 4 π r 4 = 1 64 π d 4 {\displaystyle I_{xx}=I_{yy}={\frac {1}{4}}{\pi }r^{4}={\frac {1}{64}}{\pi }d^{4}} == Motion in a fixed plane == === Point mass === The moment of inertia about an axis of a body is calculated by summing m r 2 {\displaystyle mr^{2}} for every particle in the body, where r {\displaystyle r} is the perpendicular distance to the specified axis. To see how moment of inertia arises in the study of the movement of an extended body, it is convenient to consider a rigid assembly of point masses. (This equation can be used for axes that are not principal axes provided that it is understood that this does not fully describe the moment of inertia.) Consider the kinetic energy of an assembly of N {\displaystyle N} masses m i {\displaystyle m_{i}} that lie at the distances r i {\displaystyle r_{i}} from the pivot point P {\displaystyle P} , which is the nearest point on the axis of rotation. It is the sum of the kinetic energy of the individual masses,: 516–517 : 1084–1085 : 1296–1300  E K = ∑ i = 1 N 1 2 m i v i ⋅ v i = ∑ i = 1 N 1 2 m i ( ω r i ) 2 = 1 2 ω 2 ∑ i = 1 N m i r i 2 . {\displaystyle E_{\text{K}}=\sum _{i=1}^{N}{\frac {1}{2}}\,m_{i}\mathbf {v} _{i}\cdot \mathbf {v} _{i}=\sum _{i=1}^{N}{\frac {1}{2}}\,m_{i}\left(\omega r_{i}\right)^{2}={\frac {1}{2}}\,\omega ^{2}\sum _{i=1}^{N}m_{i}r_{i}^{2}.} This shows that the moment of inertia of the body is the sum of each of the m r 2 {\displaystyle mr^{2}} terms, that is I P = ∑ i = 1 N m i r i 2 . {\displaystyle I_{P}=\sum _{i=1}^{N}m_{i}r_{i}^{2}.} Thus, moment of inertia is a physical property that combines the mass and distribution of the particles around the rotation axis. Notice that rotation about different axes of the same body yield different moments of inertia. The moment of inertia of a continuous body rotating about a specified axis is calculated in the same way, except with infinitely many point particles. Thus the limits of summation are removed, and the sum is written as follows: I P = ∑ i m i r i 2 {\displaystyle I_{P}=\sum _{i}m_{i}r_{i}^{2}} Another expression replaces the summation with an integral, I P = ∭ Q ρ ( x , y , z ) ‖ r ‖ 2 d V {\displaystyle I_{P}=\iiint _{Q}\rho (x,y,z)\left\|\mathbf {r} \right\|^{2}dV} Here, the function ρ {\displaystyle \rho } gives the mass density at each point ( x , y , z ) {\displaystyle (x,y,z)} , r {\displaystyle \mathbf {r} } is a vector perpendicular to the axis of rotation and extending from a point on the rotation axis to a point ( x , y , z ) {\displaystyle (x,y,z)} in the solid, and the integration is evaluated over the volume V {\displaystyle V} of the body Q {\displaystyle Q} . The moment of inertia of a flat surface is similar with the mass density being replaced by its areal mass density with the integral evaluated over its area. 
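As a hedged numerical sketch of the flat-surface case just mentioned, the mass moment of inertia of a uniform rectangular plate about an in-plane axis through its centre can be evaluated by integrating the areal mass density times the squared distance over the area, and compared with the closed form obtained from the tabulated rectangular result b h³/12. The plate dimensions and areal density are assumed values.

```python
import numpy as np

b, h = 0.3, 0.2        # breadth and height of the plate in m (assumed)
sigma = 12.0           # areal mass density in kg/m^2 (assumed)

# midpoint-rule grid over the plate, centred on the origin
N = 1000
x = -b / 2 + (np.arange(N) + 0.5) * b / N
y = -h / 2 + (np.arange(N) + 0.5) * h / N
X, Y = np.meshgrid(x, y)

# distance from the x-axis (lying in the plane of the plate) is |y|
I_x_numeric = sigma * np.sum(Y**2) * (b / N) * (h / N)

m = sigma * b * h
I_x_closed = m * h**2 / 12.0   # equals sigma * (b * h^3 / 12), the tabulated form

print(I_x_numeric, I_x_closed)  # the two values should agree closely
```

Multiplying the tabulated second moment of area b h³/12 by the areal density reproduces the same number, which is the sense in which the mass moment of a thin flat body reduces to its second moment of area.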
Note on second moment of area: The moment of inertia of a body moving in a plane and the second moment of area of a beam's cross-section are often confused. The moment of inertia of a body with the shape of the cross-section is the second moment of this area about the z {\displaystyle z} -axis perpendicular to the cross-section, weighted by its density. This is also called the polar moment of the area, and is the sum of the second moments about the x {\displaystyle x} - and y {\displaystyle y} -axes. The stresses in a beam are calculated using the second moment of the cross-sectional area around either the x {\displaystyle x} -axis or y {\displaystyle y} -axis depending on the load. ==== Examples ==== The moment of inertia of a compound pendulum constructed from a thin disc mounted at the end of a thin rod that oscillates around a pivot at the other end of the rod, begins with the calculation of the moment of inertia of the thin rod and thin disc about their respective centers of mass. The moment of inertia of a thin rod with constant cross-section s {\displaystyle s} and density ρ {\displaystyle \rho } and with length ℓ {\displaystyle \ell } about a perpendicular axis through its center of mass is determined by integration.: 1301  Align the x {\displaystyle x} -axis with the rod and locate the origin its center of mass at the center of the rod, then I C , rod = ∭ Q ρ x 2 d V = ∫ − ℓ 2 ℓ 2 ρ x 2 s d x = ρ s x 3 3 | − ℓ 2 ℓ 2 = ρ s 3 ( ℓ 3 8 + ℓ 3 8 ) = m ℓ 2 12 , {\displaystyle I_{C,{\text{rod}}}=\iiint _{Q}\rho \,x^{2}\,dV=\int _{-{\frac {\ell }{2}}}^{\frac {\ell }{2}}\rho \,x^{2}s\,dx=\left.\rho s{\frac {x^{3}}{3}}\right|_{-{\frac {\ell }{2}}}^{\frac {\ell }{2}}={\frac {\rho s}{3}}\left({\frac {\ell ^{3}}{8}}+{\frac {\ell ^{3}}{8}}\right)={\frac {m\ell ^{2}}{12}},} where m = ρ s ℓ {\displaystyle m=\rho s\ell } is the mass of the rod. The moment of inertia of a thin disc of constant thickness s {\displaystyle s} , radius R {\displaystyle R} , and density ρ {\displaystyle \rho } about an axis through its center and perpendicular to its face (parallel to its axis of rotational symmetry) is determined by integration.: 1301  Align the z {\displaystyle z} -axis with the axis of the disc and define a volume element as d V = s r d r d θ {\displaystyle dV=sr\,dr\,d\theta } , then I C , disc = ∭ Q ρ r 2 d V = ∫ 0 2 π ∫ 0 R ρ r 2 s r d r d θ = 2 π ρ s R 4 4 = 1 2 m R 2 , {\displaystyle I_{C,{\text{disc}}}=\iiint _{Q}\rho \,r^{2}\,dV=\int _{0}^{2\pi }\int _{0}^{R}\rho r^{2}sr\,dr\,d\theta =2\pi \rho s{\frac {R^{4}}{4}}={\frac {1}{2}}mR^{2},} where m = π R 2 ρ s {\displaystyle m=\pi R^{2}\rho s} is its mass. The moment of inertia of the compound pendulum is now obtained by adding the moment of inertia of the rod and the disc around the pivot point P {\displaystyle P} as, I P = I C , rod + M rod ( L 2 ) 2 + I C , disc + M disc ( L + R ) 2 , {\displaystyle I_{P}=I_{C,{\text{rod}}}+M_{\text{rod}}\left({\frac {L}{2}}\right)^{2}+I_{C,{\text{disc}}}+M_{\text{disc}}(L+R)^{2},} where L {\displaystyle L} is the length of the pendulum. Notice that the parallel axis theorem is used to shift the moment of inertia from the center of mass to the pivot point of the pendulum. A list of moments of inertia formulas for standard body shapes provides a way to obtain the moment of inertia of a complex body as an assembly of simpler shaped bodies. The parallel axis theorem is used to shift the reference point of the individual bodies to the reference point of the assembly. 
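The rod-plus-disc compound pendulum above can be sketched numerically: the centre-of-mass moments of inertia of the rod and disc are shifted to the pivot with the parallel axis theorem, and the total can then be fed into the compound-pendulum natural-frequency formula from the earlier section. All dimensions and masses below are assumed values.

```python
import math

L = 0.6          # rod length in m (assumed)
m_rod = 0.4      # rod mass in kg (assumed)
R = 0.1          # disc radius in m (assumed)
m_disc = 1.0     # disc mass in kg (assumed)
g = 9.81         # local gravitational acceleration, m/s^2

I_C_rod = m_rod * L**2 / 12.0      # thin rod about its centre of mass
I_C_disc = 0.5 * m_disc * R**2     # thin disc about the axis through its centre, perpendicular to its face

# parallel axis theorem: shift both moments to the pivot at the free end of the rod
I_P = (I_C_rod + m_rod * (L / 2.0)**2
       + I_C_disc + m_disc * (L + R)**2)

# distance from the pivot to the centre of mass of the whole assembly
M = m_rod + m_disc
r_cm = (m_rod * (L / 2.0) + m_disc * (L + R)) / M

# natural frequency of small oscillations about the pivot
omega_n = math.sqrt(M * g * r_cm / I_P)
print(f"I_P = {I_P:.4f} kg*m^2, omega_n = {omega_n:.3f} rad/s")
```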
As one more example, consider the moment of inertia of a solid sphere of constant density about an axis through its center of mass. This is determined by summing the moments of inertia of the thin discs that can form the sphere whose centers are along the axis chosen for consideration. If the surface of the sphere is defined by the equation: 1301  x 2 + y 2 + z 2 = R 2 , {\displaystyle x^{2}+y^{2}+z^{2}=R^{2},} then the square of the radius r {\displaystyle r} of the disc at the cross-section z {\displaystyle z} along the z {\displaystyle z} -axis is r ( z ) 2 = x 2 + y 2 = R 2 − z 2 . {\displaystyle r(z)^{2}=x^{2}+y^{2}=R^{2}-z^{2}.} Therefore, the moment of inertia of the sphere is the sum of the moments of inertia of the discs along the z {\displaystyle z} -axis, I C , sphere = ∫ − R R 1 2 π ρ r ( z ) 4 d z = ∫ − R R 1 2 π ρ ( R 2 − z 2 ) 2 d z = 1 2 π ρ [ R 4 z − 2 3 R 2 z 3 + 1 5 z 5 ] − R R = π ρ ( 1 − 2 3 + 1 5 ) R 5 = 2 5 m R 2 , {\displaystyle {\begin{aligned}I_{C,{\text{sphere}}}&=\int _{-R}^{R}{\tfrac {1}{2}}\pi \rho r(z)^{4}\,dz=\int _{-R}^{R}{\tfrac {1}{2}}\pi \rho \left(R^{2}-z^{2}\right)^{2}\,dz\\[1ex]&={\tfrac {1}{2}}\pi \rho \left[R^{4}z-{\tfrac {2}{3}}R^{2}z^{3}+{\tfrac {1}{5}}z^{5}\right]_{-R}^{R}\\[1ex]&=\pi \rho \left(1-{\tfrac {2}{3}}+{\tfrac {1}{5}}\right)R^{5}\\[1ex]&={\tfrac {2}{5}}mR^{2},\end{aligned}}} where m = 4 3 π R 3 ρ {\textstyle m={\frac {4}{3}}\pi R^{3}\rho } is the mass of the sphere. === Rigid body === If a mechanical system is constrained to move parallel to a fixed plane, then the rotation of a body in the system occurs around an axis k ^ {\displaystyle \mathbf {\hat {k}} } parallel to this plane. In this case, the moment of inertia of the mass in this system is a scalar known as the polar moment of inertia. The definition of the polar moment of inertia can be obtained by considering momentum, kinetic energy and Newton's laws for the planar movement of a rigid system of particles. If a system of n {\displaystyle n} particles, P i , i = 1 , … , n {\displaystyle P_{i},i=1,\dots ,n} , are assembled into a rigid body, then the momentum of the system can be written in terms of positions relative to a reference point R {\displaystyle \mathbf {R} } , and absolute velocities v i {\displaystyle \mathbf {v} _{i}} : Δ r i = r i − R , v i = ω × ( r i − R ) + V = ω × Δ r i + V , {\displaystyle {\begin{aligned}\Delta \mathbf {r} _{i}&=\mathbf {r} _{i}-\mathbf {R} ,\\\mathbf {v} _{i}&={\boldsymbol {\omega }}\times \left(\mathbf {r} _{i}-\mathbf {R} \right)+\mathbf {V} ={\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}+\mathbf {V} ,\end{aligned}}} where ω {\displaystyle {\boldsymbol {\omega }}} is the angular velocity of the system and V {\displaystyle \mathbf {V} } is the velocity of R {\displaystyle \mathbf {R} } . For planar movement the angular velocity vector is directed along the unit vector k {\displaystyle \mathbf {k} } which is perpendicular to the plane of movement. 
Introduce the unit vectors e i {\displaystyle \mathbf {e} _{i}} from the reference point R {\displaystyle \mathbf {R} } to a point r i {\displaystyle \mathbf {r} _{i}} , and the unit vector t ^ i = k ^ × e ^ i {\displaystyle \mathbf {\hat {t}} _{i}=\mathbf {\hat {k}} \times \mathbf {\hat {e}} _{i}} , so e ^ i = Δ r i Δ r i , k ^ = ω ω , t ^ i = k ^ × e ^ i , v i = ω × Δ r i + V = ω k ^ × Δ r i e ^ i + V = ω Δ r i t ^ i + V {\displaystyle {\begin{aligned}\mathbf {\hat {e}} _{i}&={\frac {\Delta \mathbf {r} _{i}}{\Delta r_{i}}},\quad \mathbf {\hat {k}} ={\frac {\boldsymbol {\omega }}{\omega }},\quad \mathbf {\hat {t}} _{i}=\mathbf {\hat {k}} \times \mathbf {\hat {e}} _{i},\\\mathbf {v} _{i}&={\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}+\mathbf {V} =\omega \mathbf {\hat {k}} \times \Delta r_{i}\mathbf {\hat {e}} _{i}+\mathbf {V} =\omega \,\Delta r_{i}\mathbf {\hat {t}} _{i}+\mathbf {V} \end{aligned}}} This defines the relative position vector and the velocity vector for the rigid system of the particles moving in a plane. Note on the cross product: When a body moves parallel to a ground plane, the trajectories of all the points in the body lie in planes parallel to this ground plane. This means that any rotation that the body undergoes must be around an axis perpendicular to this plane. Planar movement is often presented as projected onto this ground plane so that the axis of rotation appears as a point. In this case, the angular velocity and angular acceleration of the body are scalars and the fact that they are vectors along the rotation axis is ignored. This is usually preferred for introductions to the topic. But in the case of moment of inertia, the combination of mass and geometry benefits from the geometric properties of the cross product. For this reason, in this section on planar movement the angular velocity and accelerations of the body are vectors perpendicular to the ground plane, and the cross product operations are the same as used for the study of spatial rigid body movement. ==== Angular momentum ==== The angular momentum vector for the planar movement of a rigid system of particles is given by L = ∑ i = 1 n m i Δ r i × v i = ∑ i = 1 n m i Δ r i e ^ i × ( ω Δ r i t ^ i + V ) = ( ∑ i = 1 n m i Δ r i 2 ) ω k ^ + ( ∑ i = 1 n m i Δ r i e ^ i ) × V . {\displaystyle {\begin{aligned}\mathbf {L} &=\sum _{i=1}^{n}m_{i}\Delta \mathbf {r} _{i}\times \mathbf {v} _{i}\\&=\sum _{i=1}^{n}m_{i}\,\Delta r_{i}\mathbf {\hat {e}} _{i}\times \left(\omega \,\Delta r_{i}\mathbf {\hat {t}} _{i}+\mathbf {V} \right)\\&=\left(\sum _{i=1}^{n}m_{i}\,\Delta r_{i}^{2}\right)\omega \mathbf {\hat {k}} +\left(\sum _{i=1}^{n}m_{i}\,\Delta r_{i}\mathbf {\hat {e}} _{i}\right)\times \mathbf {V} .\end{aligned}}} Use the center of mass C {\displaystyle \mathbf {C} } as the reference point so Δ r i e ^ i = r i − C , ∑ i = 1 n m i Δ r i e ^ i = 0 , {\displaystyle {\begin{aligned}\Delta r_{i}\mathbf {\hat {e}} _{i}&=\mathbf {r} _{i}-\mathbf {C} ,\\\sum _{i=1}^{n}m_{i}\,\Delta r_{i}\mathbf {\hat {e}} _{i}&=0,\end{aligned}}} and define the moment of inertia relative to the center of mass I C {\displaystyle I_{\mathbf {C} }} as I C = ∑ i m i Δ r i 2 , {\displaystyle I_{\mathbf {C} }=\sum _{i}m_{i}\,\Delta r_{i}^{2},} then the equation for angular momentum simplifies to: 1028  L = I C ω k ^ . 
{\displaystyle \mathbf {L} =I_{\mathbf {C} }\omega \mathbf {\hat {k}} .} The moment of inertia I C {\displaystyle I_{\mathbf {C} }} about an axis perpendicular to the movement of the rigid system and through the center of mass is known as the polar moment of inertia. Specifically, it is the second moment of mass with respect to the orthogonal distance from an axis (or pole). For a given amount of angular momentum, a decrease in the moment of inertia results in an increase in the angular velocity. Figure skaters can change their moment of inertia by pulling in their arms. Thus, the angular velocity achieved by a skater with outstretched arms results in a greater angular velocity when the arms are pulled in, because of the reduced moment of inertia. A figure skater is not, however, a rigid body. ==== Kinetic energy ==== The kinetic energy of a rigid system of particles moving in the plane is given by E K = 1 2 ∑ i = 1 n m i v i ⋅ v i , = 1 2 ∑ i = 1 n m i ( ω Δ r i t ^ i + V ) ⋅ ( ω Δ r i t ^ i + V ) , = 1 2 ω 2 ( ∑ i = 1 n m i Δ r i 2 t ^ i ⋅ t ^ i ) + ω V ⋅ ( ∑ i = 1 n m i Δ r i t ^ i ) + 1 2 ( ∑ i = 1 n m i ) V ⋅ V . {\displaystyle {\begin{aligned}E_{\text{K}}&={\frac {1}{2}}\sum _{i=1}^{n}m_{i}\mathbf {v} _{i}\cdot \mathbf {v} _{i},\\&={\frac {1}{2}}\sum _{i=1}^{n}m_{i}\left(\omega \,\Delta r_{i}\mathbf {\hat {t}} _{i}+\mathbf {V} \right)\cdot \left(\omega \,\Delta r_{i}\mathbf {\hat {t}} _{i}+\mathbf {V} \right),\\&={\frac {1}{2}}\omega ^{2}\left(\sum _{i=1}^{n}m_{i}\,\Delta r_{i}^{2}\mathbf {\hat {t}} _{i}\cdot \mathbf {\hat {t}} _{i}\right)+\omega \mathbf {V} \cdot \left(\sum _{i=1}^{n}m_{i}\,\Delta r_{i}\mathbf {\hat {t}} _{i}\right)+{\frac {1}{2}}\left(\sum _{i=1}^{n}m_{i}\right)\mathbf {V} \cdot \mathbf {V} .\end{aligned}}} Let the reference point be the center of mass C {\displaystyle \mathbf {C} } of the system so the second term becomes zero, and introduce the moment of inertia I C {\displaystyle I_{\mathbf {C} }} so the kinetic energy is given by: 1084  E K = 1 2 I C ω 2 + 1 2 M V ⋅ V . {\displaystyle E_{\text{K}}={\frac {1}{2}}I_{\mathbf {C} }\omega ^{2}+{\frac {1}{2}}M\mathbf {V} \cdot \mathbf {V} .} The moment of inertia I C {\displaystyle I_{\mathbf {C} }} is the polar moment of inertia of the body. ==== Newton's laws ==== Newton's laws for a rigid system of n {\displaystyle n} particles, P i , i = 1 , … , n {\displaystyle P_{i},i=1,\dots ,n} , can be written in terms of a resultant force and torque at a reference point R {\displaystyle \mathbf {R} } , to yield F = ∑ i = 1 n m i A i , τ = ∑ i = 1 n Δ r i × m i A i , {\displaystyle {\begin{aligned}\mathbf {F} &=\sum _{i=1}^{n}m_{i}\mathbf {A} _{i},\\{\boldsymbol {\tau }}&=\sum _{i=1}^{n}\Delta \mathbf {r} _{i}\times m_{i}\mathbf {A} _{i},\end{aligned}}} where r i {\displaystyle \mathbf {r} _{i}} denotes the trajectory of each particle. The kinematics of a rigid body yields the formula for the acceleration of the particle P i {\displaystyle P_{i}} in terms of the position R {\displaystyle \mathbf {R} } and acceleration A {\displaystyle \mathbf {A} } of the reference particle as well as the angular velocity vector ω {\displaystyle {\boldsymbol {\omega }}} and angular acceleration vector α {\displaystyle {\boldsymbol {\alpha }}} of the rigid system of particles as, A i = α × Δ r i + ω × ω × Δ r i + A . 
{\displaystyle \mathbf {A} _{i}={\boldsymbol {\alpha }}\times \Delta \mathbf {r} _{i}+{\boldsymbol {\omega }}\times {\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}+\mathbf {A} .} For systems that are constrained to planar movement, the angular velocity and angular acceleration vectors are directed along k ^ {\displaystyle \mathbf {\hat {k}} } perpendicular to the plane of movement, which simplifies this acceleration equation. In this case, the acceleration vectors can be simplified by introducing the unit vectors e ^ i {\displaystyle \mathbf {\hat {e}} _{i}} from the reference point R {\displaystyle \mathbf {R} } to a point r i {\displaystyle \mathbf {r} _{i}} and the unit vectors t ^ i = k ^ × e ^ i {\displaystyle \mathbf {\hat {t}} _{i}=\mathbf {\hat {k}} \times \mathbf {\hat {e}} _{i}} , so A i = α k ^ × Δ r i e ^ i − ω k ^ × ω k ^ × Δ r i e ^ i + A = α Δ r i t ^ i − ω 2 Δ r i e ^ i + A . {\displaystyle {\begin{aligned}\mathbf {A} _{i}&=\alpha \mathbf {\hat {k}} \times \Delta r_{i}\mathbf {\hat {e}} _{i}-\omega \mathbf {\hat {k}} \times \omega \mathbf {\hat {k}} \times \Delta r_{i}\mathbf {\hat {e}} _{i}+\mathbf {A} \\&=\alpha \Delta r_{i}\mathbf {\hat {t}} _{i}-\omega ^{2}\Delta r_{i}\mathbf {\hat {e}} _{i}+\mathbf {A} .\end{aligned}}} This yields the resultant torque on the system as τ = ∑ i = 1 n m i Δ r i e ^ i × ( α Δ r i t ^ i − ω 2 Δ r i e ^ i + A ) = ( ∑ i = 1 n m i Δ r i 2 ) α k ^ + ( ∑ i = 1 n m i Δ r i e ^ i ) × A , {\displaystyle {\begin{aligned}{\boldsymbol {\tau }}&=\sum _{i=1}^{n}m_{i}\,\Delta r_{i}\mathbf {\hat {e}} _{i}\times \left(\alpha \Delta r_{i}\mathbf {\hat {t}} _{i}-\omega ^{2}\Delta r_{i}\mathbf {\hat {e}} _{i}+\mathbf {A} \right)\\&=\left(\sum _{i=1}^{n}m_{i}\,\Delta r_{i}^{2}\right)\alpha \mathbf {\hat {k}} +\left(\sum _{i=1}^{n}m_{i}\,\Delta r_{i}\mathbf {\hat {e}} _{i}\right)\times \mathbf {A} ,\end{aligned}}} where e ^ i × e ^ i = 0 {\displaystyle \mathbf {\hat {e}} _{i}\times \mathbf {\hat {e}} _{i}=\mathbf {0} } , and e ^ i × t ^ i = k ^ {\displaystyle \mathbf {\hat {e}} _{i}\times \mathbf {\hat {t}} _{i}=\mathbf {\hat {k}} } is the unit vector perpendicular to the plane for all of the particles P i {\displaystyle P_{i}} . Use the center of mass C {\displaystyle \mathbf {C} } as the reference point and define the moment of inertia relative to the center of mass I C {\displaystyle I_{\mathbf {C} }} , then the equation for the resultant torque simplifies to: 1029  τ = I C α k ^ . {\displaystyle {\boldsymbol {\tau }}=I_{\mathbf {C} }\alpha \mathbf {\hat {k}} .} == Motion in space of a rigid body, and the inertia matrix == The scalar moments of inertia appear as elements in a matrix when a system of particles is assembled into a rigid body that moves in three-dimensional space. This inertia matrix appears in the calculation of the angular momentum, kinetic energy and resultant torque of the rigid system of particles. Let the system of n {\displaystyle n} particles, P i , i = 1 , … , n {\displaystyle P_{i},i=1,\dots ,n} be located at the coordinates r i {\displaystyle \mathbf {r} _{i}} with velocities v i {\displaystyle \mathbf {v} _{i}} relative to a fixed reference frame. 
For a (possibly moving) reference point R {\displaystyle \mathbf {R} } , the relative positions are Δ r i = r i − R {\displaystyle \Delta \mathbf {r} _{i}=\mathbf {r} _{i}-\mathbf {R} } and the (absolute) velocities are v i = ω × Δ r i + V R {\displaystyle \mathbf {v} _{i}={\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}+\mathbf {V} _{\mathbf {R} }} where ω {\displaystyle {\boldsymbol {\omega }}} is the angular velocity of the system, and V R {\displaystyle \mathbf {V_{R}} } is the velocity of R {\displaystyle \mathbf {R} } . === Angular momentum === Note that the cross product can be equivalently written as matrix multiplication by combining the first operand and the operator into a skew-symmetric matrix, [ b ] {\displaystyle \left[\mathbf {b} \right]} , constructed from the components of b = ( b x , b y , b z ) {\displaystyle \mathbf {b} =(b_{x},b_{y},b_{z})} : b × y ≡ [ b ] y [ b ] ≡ [ 0 − b z b y b z 0 − b x − b y b x 0 ] . {\displaystyle {\begin{aligned}\mathbf {b} \times \mathbf {y} &\equiv \left[\mathbf {b} \right]\mathbf {y} \\\left[\mathbf {b} \right]&\equiv {\begin{bmatrix}0&-b_{z}&b_{y}\\b_{z}&0&-b_{x}\\-b_{y}&b_{x}&0\end{bmatrix}}.\end{aligned}}} The inertia matrix is constructed by considering the angular momentum, with the reference point R {\displaystyle \mathbf {R} } of the body chosen to be the center of mass C {\displaystyle \mathbf {C} } : L = ∑ i = 1 n m i Δ r i × v i = ∑ i = 1 n m i Δ r i × ( ω × Δ r i + V R ) = ( − ∑ i = 1 n m i Δ r i × ( Δ r i × ω ) ) + ( ∑ i = 1 n m i Δ r i × V R ) , {\displaystyle {\begin{aligned}\mathbf {L} &=\sum _{i=1}^{n}m_{i}\,\Delta \mathbf {r} _{i}\times \mathbf {v} _{i}\\&=\sum _{i=1}^{n}m_{i}\,\Delta \mathbf {r} _{i}\times \left({\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}+\mathbf {V} _{\mathbf {R} }\right)\\&=\left(-\sum _{i=1}^{n}m_{i}\,\Delta \mathbf {r} _{i}\times \left(\Delta \mathbf {r} _{i}\times {\boldsymbol {\omega }}\right)\right)+\left(\sum _{i=1}^{n}m_{i}\,\Delta \mathbf {r} _{i}\times \mathbf {V} _{\mathbf {R} }\right),\end{aligned}}} where the terms containing V R {\displaystyle \mathbf {V_{R}} } ( = C {\displaystyle =\mathbf {C} } ) sum to zero by the definition of center of mass. Then, the skew-symmetric matrix [ Δ r i ] {\displaystyle [\Delta \mathbf {r} _{i}]} obtained from the relative position vector Δ r i = r i − C {\displaystyle \Delta \mathbf {r} _{i}=\mathbf {r} _{i}-\mathbf {C} } , can be used to define, L = ( − ∑ i = 1 n m i [ Δ r i ] 2 ) ω = I C ω , {\displaystyle \mathbf {L} =\left(-\sum _{i=1}^{n}m_{i}\left[\Delta \mathbf {r} _{i}\right]^{2}\right){\boldsymbol {\omega }}=\mathbf {I} _{\mathbf {C} }{\boldsymbol {\omega }},} where I C {\displaystyle \mathbf {I_{C}} } defined by I C = − ∑ i = 1 n m i [ Δ r i ] 2 , {\displaystyle \mathbf {I} _{\mathbf {C} }=-\sum _{i=1}^{n}m_{i}\left[\Delta \mathbf {r} _{i}\right]^{2},} is the symmetric inertia matrix of the rigid system of particles measured relative to the center of mass C {\displaystyle \mathbf {C} } . === Kinetic energy === The kinetic energy of a rigid system of particles can be formulated in terms of the center of mass and a matrix of mass moments of inertia of the system. 
Let the system of n {\displaystyle n} particles P i , i = 1 , … , n {\displaystyle P_{i},i=1,\dots ,n} be located at the coordinates r i {\displaystyle \mathbf {r} _{i}} with velocities v i {\displaystyle \mathbf {v} _{i}} , then the kinetic energy is E K = 1 2 ∑ i = 1 n m i v i ⋅ v i = 1 2 ∑ i = 1 n m i ( ω × Δ r i + V C ) ⋅ ( ω × Δ r i + V C ) , {\displaystyle E_{\text{K}}={\frac {1}{2}}\sum _{i=1}^{n}m_{i}\mathbf {v} _{i}\cdot \mathbf {v} _{i}={\frac {1}{2}}\sum _{i=1}^{n}m_{i}\left({\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}+\mathbf {V} _{\mathbf {C} }\right)\cdot \left({\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}+\mathbf {V} _{\mathbf {C} }\right),} where Δ r i = r i − C {\displaystyle \Delta \mathbf {r} _{i}=\mathbf {r} _{i}-\mathbf {C} } is the position vector of a particle relative to the center of mass. This equation expands to yield three terms E K = 1 2 ( ∑ i = 1 n m i ( ω × Δ r i ) ⋅ ( ω × Δ r i ) ) + ( ∑ i = 1 n m i V C ⋅ ( ω × Δ r i ) ) + 1 2 ( ∑ i = 1 n m i V C ⋅ V C ) . {\displaystyle E_{\text{K}}={\frac {1}{2}}\left(\sum _{i=1}^{n}m_{i}\left({\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}\right)\cdot \left({\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}\right)\right)+\left(\sum _{i=1}^{n}m_{i}\mathbf {V} _{\mathbf {C} }\cdot \left({\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}\right)\right)+{\frac {1}{2}}\left(\sum _{i=1}^{n}m_{i}\mathbf {V} _{\mathbf {C} }\cdot \mathbf {V} _{\mathbf {C} }\right).} Since the center of mass is defined by ∑ i = 1 n m i Δ r i = 0 {\displaystyle \sum _{i=1}^{n}m_{i}\Delta \mathbf {r} _{i}=0} , the second term in this equation is zero. Introduce the skew-symmetric matrix [ Δ r i ] {\displaystyle [\Delta \mathbf {r} _{i}]} so the kinetic energy becomes E K = 1 2 ( ∑ i = 1 n m i ( [ Δ r i ] ω ) ⋅ ( [ Δ r i ] ω ) ) + 1 2 ( ∑ i = 1 n m i ) V C ⋅ V C = 1 2 ( ∑ i = 1 n m i ( ω T [ Δ r i ] T [ Δ r i ] ω ) ) + 1 2 ( ∑ i = 1 n m i ) V C ⋅ V C = 1 2 ω ⋅ ( − ∑ i = 1 n m i [ Δ r i ] 2 ) ω + 1 2 ( ∑ i = 1 n m i ) V C ⋅ V C . {\displaystyle {\begin{aligned}E_{\text{K}}&={\frac {1}{2}}\left(\sum _{i=1}^{n}m_{i}\left(\left[\Delta \mathbf {r} _{i}\right]{\boldsymbol {\omega }}\right)\cdot \left(\left[\Delta \mathbf {r} _{i}\right]{\boldsymbol {\omega }}\right)\right)+{\frac {1}{2}}\left(\sum _{i=1}^{n}m_{i}\right)\mathbf {V} _{\mathbf {C} }\cdot \mathbf {V} _{\mathbf {C} }\\&={\frac {1}{2}}\left(\sum _{i=1}^{n}m_{i}\left({\boldsymbol {\omega }}^{\mathsf {T}}\left[\Delta \mathbf {r} _{i}\right]^{\mathsf {T}}\left[\Delta \mathbf {r} _{i}\right]{\boldsymbol {\omega }}\right)\right)+{\frac {1}{2}}\left(\sum _{i=1}^{n}m_{i}\right)\mathbf {V} _{\mathbf {C} }\cdot \mathbf {V} _{\mathbf {C} }\\&={\frac {1}{2}}{\boldsymbol {\omega }}\cdot \left(-\sum _{i=1}^{n}m_{i}\left[\Delta \mathbf {r} _{i}\right]^{2}\right){\boldsymbol {\omega }}+{\frac {1}{2}}\left(\sum _{i=1}^{n}m_{i}\right)\mathbf {V} _{\mathbf {C} }\cdot \mathbf {V} _{\mathbf {C} }.\end{aligned}}} Thus, the kinetic energy of the rigid system of particles is given by E K = 1 2 ω ⋅ I C ω + 1 2 M V C 2 . {\displaystyle E_{\text{K}}={\frac {1}{2}}{\boldsymbol {\omega }}\cdot \mathbf {I} _{\mathbf {C} }{\boldsymbol {\omega }}+{\frac {1}{2}}M\mathbf {V} _{\mathbf {C} }^{2}.} where I C {\displaystyle \mathbf {I_{C}} } is the inertia matrix relative to the center of mass and M {\displaystyle M} is the total mass. === Resultant torque === The inertia matrix appears in the application of Newton's second law to a rigid assembly of particles. 
The resultant torque on this system is, τ = ∑ i = 1 n ( r i − R ) × m i a i , {\displaystyle {\boldsymbol {\tau }}=\sum _{i=1}^{n}\left(\mathbf {r_{i}} -\mathbf {R} \right)\times m_{i}\mathbf {a} _{i},} where a i {\displaystyle \mathbf {a} _{i}} is the acceleration of the particle P i {\displaystyle P_{i}} . The kinematics of a rigid body yields the formula for the acceleration of the particle P i {\displaystyle P_{i}} in terms of the position R {\displaystyle \mathbf {R} } and acceleration A R {\displaystyle \mathbf {A} _{\mathbf {R} }} of the reference point, as well as the angular velocity vector ω {\displaystyle {\boldsymbol {\omega }}} and angular acceleration vector α {\displaystyle {\boldsymbol {\alpha }}} of the rigid system as, a i = α × ( r i − R ) + ω × ( ω × ( r i − R ) ) + A R . {\displaystyle \mathbf {a} _{i}={\boldsymbol {\alpha }}\times \left(\mathbf {r} _{i}-\mathbf {R} \right)+{\boldsymbol {\omega }}\times \left({\boldsymbol {\omega }}\times \left(\mathbf {r} _{i}-\mathbf {R} \right)\right)+\mathbf {A} _{\mathbf {R} }.} Use the center of mass C {\displaystyle \mathbf {C} } as the reference point, and introduce the skew-symmetric matrix [ Δ r i ] = [ r i − C ] {\displaystyle \left[\Delta \mathbf {r} _{i}\right]=\left[\mathbf {r} _{i}-\mathbf {C} \right]} to represent the cross product ( r i − C ) × {\displaystyle (\mathbf {r} _{i}-\mathbf {C} )\times } , to obtain τ = ( − ∑ i = 1 n m i [ Δ r i ] 2 ) α + ω × ( − ∑ i = 1 n m i [ Δ r i ] 2 ) ω {\displaystyle {\boldsymbol {\tau }}=\left(-\sum _{i=1}^{n}m_{i}\left[\Delta \mathbf {r} _{i}\right]^{2}\right){\boldsymbol {\alpha }}+{\boldsymbol {\omega }}\times \left(-\sum _{i=1}^{n}m_{i}\left[\Delta \mathbf {r} _{i}\right]^{2}\right){\boldsymbol {\omega }}} The calculation uses the identity Δ r i × ( ω × ( ω × Δ r i ) ) + ω × ( ( ω × Δ r i ) × Δ r i ) = 0 , {\displaystyle \Delta \mathbf {r} _{i}\times \left({\boldsymbol {\omega }}\times \left({\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}\right)\right)+{\boldsymbol {\omega }}\times \left(\left({\boldsymbol {\omega }}\times \Delta \mathbf {r} _{i}\right)\times \Delta \mathbf {r} _{i}\right)=0,} obtained from the Jacobi identity for the triple cross product as shown in the proof below: Thus, the resultant torque on the rigid system of particles is given by τ = I C α + ω × I C ω , {\displaystyle {\boldsymbol {\tau }}=\mathbf {I} _{\mathbf {C} }{\boldsymbol {\alpha }}+{\boldsymbol {\omega }}\times \mathbf {I} _{\mathbf {C} }{\boldsymbol {\omega }},} where I C {\displaystyle \mathbf {I_{C}} } is the inertia matrix relative to the center of mass. === Parallel axis theorem === The inertia matrix of a body depends on the choice of the reference point. There is a useful relationship between the inertia matrix relative to the center of mass C {\displaystyle \mathbf {C} } and the inertia matrix relative to another point R {\displaystyle \mathbf {R} } . This relationship is called the parallel axis theorem. Consider the inertia matrix I R {\displaystyle \mathbf {I_{R}} } obtained for a rigid system of particles measured relative to a reference point R {\displaystyle \mathbf {R} } , given by I R = − ∑ i = 1 n m i [ r i − R ] 2 . 
{\displaystyle \mathbf {I} _{\mathbf {R} }=-\sum _{i=1}^{n}m_{i}\left[\mathbf {r} _{i}-\mathbf {R} \right]^{2}.} Let C {\displaystyle \mathbf {C} } be the center of mass of the rigid system, then R = ( R − C ) + C = d + C , {\displaystyle \mathbf {R} =(\mathbf {R} -\mathbf {C} )+\mathbf {C} =\mathbf {d} +\mathbf {C} ,} where d {\displaystyle \mathbf {d} } is the vector from the center of mass C {\displaystyle \mathbf {C} } to the reference point R {\displaystyle \mathbf {R} } . Use this equation to compute the inertia matrix, I R = − ∑ i = 1 n m i [ r i − ( C + d ) ] 2 = − ∑ i = 1 n m i [ ( r i − C ) − d ] 2 . {\displaystyle \mathbf {I} _{\mathbf {R} }=-\sum _{i=1}^{n}m_{i}[\mathbf {r} _{i}-\left(\mathbf {C} +\mathbf {d} \right)]^{2}=-\sum _{i=1}^{n}m_{i}[\left(\mathbf {r} _{i}-\mathbf {C} \right)-\mathbf {d} ]^{2}.} Distribute over the cross product to obtain I R = − ( ∑ i = 1 n m i [ r i − C ] 2 ) + ( ∑ i = 1 n m i [ r i − C ] ) [ d ] + [ d ] ( ∑ i = 1 n m i [ r i − C ] ) − ( ∑ i = 1 n m i ) [ d ] 2 . {\displaystyle \mathbf {I} _{\mathbf {R} }=-\left(\sum _{i=1}^{n}m_{i}[\mathbf {r} _{i}-\mathbf {C} ]^{2}\right)+\left(\sum _{i=1}^{n}m_{i}[\mathbf {r} _{i}-\mathbf {C} ]\right)[\mathbf {d} ]+[\mathbf {d} ]\left(\sum _{i=1}^{n}m_{i}[\mathbf {r} _{i}-\mathbf {C} ]\right)-\left(\sum _{i=1}^{n}m_{i}\right)[\mathbf {d} ]^{2}.} The first term is the inertia matrix I C {\displaystyle \mathbf {I_{C}} } relative to the center of mass. The second and third terms are zero by definition of the center of mass C {\displaystyle \mathbf {C} } . And the last term is the total mass of the system multiplied by the square of the skew-symmetric matrix [ d ] {\displaystyle [\mathbf {d} ]} constructed from d {\displaystyle \mathbf {d} } . The result is the parallel axis theorem, I R = I C − M [ d ] 2 , {\displaystyle \mathbf {I} _{\mathbf {R} }=\mathbf {I} _{\mathbf {C} }-M[\mathbf {d} ]^{2},} where d {\displaystyle \mathbf {d} } is the vector from the center of mass C {\displaystyle \mathbf {C} } to the reference point R {\displaystyle \mathbf {R} } . Note on the minus sign: By using the skew symmetric matrix of position vectors relative to the reference point, the inertia matrix of each particle has the form − m [ r ] 2 {\displaystyle -m\left[\mathbf {r} \right]^{2}} , which is similar to the m r 2 {\displaystyle mr^{2}} that appears in planar movement. However, to make this to work out correctly a minus sign is needed. This minus sign can be absorbed into the term m [ r ] T [ r ] {\displaystyle m\left[\mathbf {r} \right]^{\mathsf {T}}\left[\mathbf {r} \right]} , if desired, by using the skew-symmetry property of [ r ] {\displaystyle [\mathbf {r} ]} . 
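The matrix form of the parallel axis theorem can be checked numerically: build the inertia matrix about the centre of mass from the skew-symmetric position matrices, shift it with the theorem, and compare against a direct computation about the shifted reference point. The particle data and the reference point below are assumed illustrative values.

```python
import numpy as np

def skew(b):
    """Skew-symmetric matrix [b] such that skew(b) @ y == np.cross(b, y)."""
    return np.array([[0.0, -b[2], b[1]],
                     [b[2], 0.0, -b[0]],
                     [-b[1], b[0], 0.0]])

masses = np.array([1.0, 2.0, 1.5])                 # kg (assumed)
points = np.array([[0.1, 0.2, 0.0],
                   [-0.3, 0.1, 0.2],
                   [0.2, -0.2, -0.1]])             # m (assumed)
R_ref = np.array([0.5, -0.2, 0.3])                 # arbitrary reference point (assumed)

M = masses.sum()
C = (masses[:, None] * points).sum(axis=0) / M     # centre of mass
d = R_ref - C                                      # vector from C to R

# inertia matrices as -sum m [r]^2 about the centre of mass and about R
I_C = -sum(m * skew(p - C) @ skew(p - C) for m, p in zip(masses, points))
I_R_direct = -sum(m * skew(p - R_ref) @ skew(p - R_ref) for m, p in zip(masses, points))

# parallel axis theorem: I_R = I_C - M [d]^2
I_R_theorem = I_C - M * skew(d) @ skew(d)

assert np.allclose(I_R_direct, I_R_theorem)
print(I_R_theorem)
```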
=== Scalar moment of inertia in a plane === The scalar moment of inertia, I L {\displaystyle I_{L}} , of a body about a specified axis whose direction is specified by the unit vector k ^ {\displaystyle \mathbf {\hat {k}} } and passes through the body at a point R {\displaystyle \mathbf {R} } is as follows: I L = k ^ ⋅ ( − ∑ i = 1 N m i [ Δ r i ] 2 ) k ^ = k ^ ⋅ I R k ^ = k ^ T I R k ^ , {\displaystyle I_{L}=\mathbf {\hat {k}} \cdot \left(-\sum _{i=1}^{N}m_{i}\left[\Delta \mathbf {r} _{i}\right]^{2}\right)\mathbf {\hat {k}} =\mathbf {\hat {k}} \cdot \mathbf {I} _{\mathbf {R} }\mathbf {\hat {k}} =\mathbf {\hat {k}} ^{\mathsf {T}}\mathbf {I} _{\mathbf {R} }\mathbf {\hat {k}} ,} where I R {\displaystyle \mathbf {I_{R}} } is the moment of inertia matrix of the system relative to the reference point R {\displaystyle \mathbf {R} } , and [ Δ r i ] {\displaystyle [\Delta \mathbf {r} _{i}]} is the skew symmetric matrix obtained from the vector Δ r i = r i − R {\displaystyle \Delta \mathbf {r} _{i}=\mathbf {r} _{i}-\mathbf {R} } . This is derived as follows. Let a rigid assembly of n {\displaystyle n} particles, P i , i = 1 , … , n {\displaystyle P_{i},i=1,\dots ,n} , have coordinates r i {\displaystyle \mathbf {r} _{i}} . Choose R {\displaystyle \mathbf {R} } as a reference point and compute the moment of inertia around a line L defined by the unit vector k ^ {\displaystyle \mathbf {\hat {k}} } through the reference point R {\displaystyle \mathbf {R} } , L ( t ) = R + t k ^ {\displaystyle \mathbf {L} (t)=\mathbf {R} +t\mathbf {\hat {k}} } . The perpendicular vector from this line to the particle P i {\displaystyle P_{i}} is obtained from Δ r i {\displaystyle \Delta \mathbf {r} _{i}} by removing the component that projects onto k ^ {\displaystyle \mathbf {\hat {k}} } . Δ r i ⊥ = Δ r i − ( k ^ ⋅ Δ r i ) k ^ = ( E − k ^ k ^ T ) Δ r i , {\displaystyle \Delta \mathbf {r} _{i}^{\perp }=\Delta \mathbf {r} _{i}-\left(\mathbf {\hat {k}} \cdot \Delta \mathbf {r} _{i}\right)\mathbf {\hat {k}} =\left(\mathbf {E} -\mathbf {\hat {k}} \mathbf {\hat {k}} ^{\mathsf {T}}\right)\Delta \mathbf {r} _{i},} where E {\displaystyle \mathbf {E} } is the identity matrix, so as to avoid confusion with the inertia matrix, and k ^ k ^ T {\displaystyle \mathbf {\hat {k}} \mathbf {\hat {k}} ^{\mathsf {T}}} is the outer product matrix formed from the unit vector k ^ {\displaystyle \mathbf {\hat {k}} } along the line L {\displaystyle L} . To relate this scalar moment of inertia to the inertia matrix of the body, introduce the skew-symmetric matrix [ k ^ ] {\displaystyle \left[\mathbf {\hat {k}} \right]} such that [ k ^ ] y = k ^ × y {\displaystyle \left[\mathbf {\hat {k}} \right]\mathbf {y} =\mathbf {\hat {k}} \times \mathbf {y} } , then we have the identity − [ k ^ ] 2 ≡ | k ^ | 2 ( E − k ^ k ^ T ) = E − k ^ k ^ T , {\displaystyle -\left[\mathbf {\hat {k}} \right]^{2}\equiv \left|\mathbf {\hat {k}} \right|^{2}\left(\mathbf {E} -\mathbf {\hat {k}} \mathbf {\hat {k}} ^{\mathsf {T}}\right)=\mathbf {E} -\mathbf {\hat {k}} \mathbf {\hat {k}} ^{\mathsf {T}},} noting that k ^ {\displaystyle \mathbf {\hat {k}} } is a unit vector. 
The magnitude squared of the perpendicular vector is | Δ r i ⊥ | 2 = ( − [ k ^ ] 2 Δ r i ) ⋅ ( − [ k ^ ] 2 Δ r i ) = ( k ^ × ( k ^ × Δ r i ) ) ⋅ ( k ^ × ( k ^ × Δ r i ) ) {\displaystyle {\begin{aligned}\left|\Delta \mathbf {r} _{i}^{\perp }\right|^{2}&=\left(-\left[\mathbf {\hat {k}} \right]^{2}\Delta \mathbf {r} _{i}\right)\cdot \left(-\left[\mathbf {\hat {k}} \right]^{2}\Delta \mathbf {r} _{i}\right)\\&=\left(\mathbf {\hat {k}} \times \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\right)\cdot \left(\mathbf {\hat {k}} \times \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\right)\end{aligned}}} The simplification of this equation uses the triple scalar product identity ( k ^ × ( k ^ × Δ r i ) ) ⋅ ( k ^ × ( k ^ × Δ r i ) ) ≡ ( ( k ^ × ( k ^ × Δ r i ) ) × k ^ ) ⋅ ( k ^ × Δ r i ) , {\displaystyle \left(\mathbf {\hat {k}} \times \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\right)\cdot \left(\mathbf {\hat {k}} \times \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\right)\equiv \left(\left(\mathbf {\hat {k}} \times \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\right)\times \mathbf {\hat {k}} \right)\cdot \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right),} where the dot and the cross products have been interchanged. Exchanging products, and simplifying by noting that Δ r i {\displaystyle \Delta \mathbf {r} _{i}} and k ^ {\displaystyle \mathbf {\hat {k}} } are orthogonal: ( k ^ × ( k ^ × Δ r i ) ) ⋅ ( k ^ × ( k ^ × Δ r i ) ) = ( ( k ^ × ( k ^ × Δ r i ) ) × k ^ ) ⋅ ( k ^ × Δ r i ) = ( k ^ × Δ r i ) ⋅ ( − Δ r i × k ^ ) = − k ^ ⋅ ( Δ r i × Δ r i × k ^ ) = − k ^ ⋅ [ Δ r i ] 2 k ^ . {\displaystyle {\begin{aligned}&\left(\mathbf {\hat {k}} \times \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\right)\cdot \left(\mathbf {\hat {k}} \times \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\right)\\={}&\left(\left(\mathbf {\hat {k}} \times \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\right)\times \mathbf {\hat {k}} \right)\cdot \left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\\={}&\left(\mathbf {\hat {k}} \times \Delta \mathbf {r} _{i}\right)\cdot \left(-\Delta \mathbf {r} _{i}\times \mathbf {\hat {k}} \right)\\={}&-\mathbf {\hat {k}} \cdot \left(\Delta \mathbf {r} _{i}\times \Delta \mathbf {r} _{i}\times \mathbf {\hat {k}} \right)\\={}&-\mathbf {\hat {k}} \cdot \left[\Delta \mathbf {r} _{i}\right]^{2}\mathbf {\hat {k}} .\end{aligned}}} Thus, the moment of inertia around the line L {\displaystyle L} through R {\displaystyle \mathbf {R} } in the direction k ^ {\displaystyle \mathbf {\hat {k}} } is obtained from the calculation I L = ∑ i = 1 N m i | Δ r i ⊥ | 2 = − ∑ i = 1 N m i k ^ ⋅ [ Δ r i ] 2 k ^ = k ^ ⋅ ( − ∑ i = 1 N m i [ Δ r i ] 2 ) k ^ = k ^ ⋅ I R k ^ = k ^ T I R k ^ , {\displaystyle {\begin{aligned}I_{L}&=\sum _{i=1}^{N}m_{i}\left|\Delta \mathbf {r} _{i}^{\perp }\right|^{2}\\&=-\sum _{i=1}^{N}m_{i}\mathbf {\hat {k}} \cdot \left[\Delta \mathbf {r} _{i}\right]^{2}\mathbf {\hat {k}} =\mathbf {\hat {k}} \cdot \left(-\sum _{i=1}^{N}m_{i}\left[\Delta \mathbf {r} _{i}\right]^{2}\right)\mathbf {\hat {k}} \\&=\mathbf {\hat {k}} \cdot \mathbf {I} _{\mathbf {R} }\mathbf {\hat {k}} =\mathbf {\hat {k}} ^{\mathsf {T}}\mathbf {I} _{\mathbf {R} }\mathbf {\hat {k}} ,\end{aligned}}} where I R {\displaystyle \mathbf {I_{R}} } is the moment of inertia matrix of the system relative to the reference point R {\displaystyle \mathbf {R} } . 
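A hedged numerical check of the result just derived: the scalar moment about a line through the reference point equals k̂ᵀ I_R k̂, and it can be compared with the direct sum of each mass times the square of its perpendicular distance from the line. The particle data and axis direction are assumed values.

```python
import numpy as np

masses = np.array([1.0, 2.0, 1.5])                     # kg (assumed)
points = np.array([[0.1, 0.2, 0.0],
                   [-0.3, 0.1, 0.2],
                   [0.2, -0.2, -0.1]])                 # m (assumed)
R_ref = np.array([0.0, 0.0, 0.0])                      # reference point on the axis (assumed)
k_hat = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)       # unit vector along the axis (assumed)

dr = points - R_ref

# inertia matrix about R in the equivalent component form, sum m (|r|^2 E - r r^T)
I_R = sum(m * (np.dot(d, d) * np.eye(3) - np.outer(d, d)) for m, d in zip(masses, dr))
I_L = k_hat @ I_R @ k_hat

# direct sum over the perpendicular components of the relative positions
I_L_direct = sum(m * np.dot(d - np.dot(d, k_hat) * k_hat,
                            d - np.dot(d, k_hat) * k_hat)
                 for m, d in zip(masses, dr))

assert np.isclose(I_L, I_L_direct)
print(I_L)
```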
This shows that the inertia matrix can be used to calculate the moment of inertia of a body around any specified rotation axis in the body. == Inertia tensor == For the same object, different axes of rotation will have different moments of inertia about those axes. In general, the moments of inertia are not equal unless the object is symmetric about all axes. The moment of inertia tensor is a convenient way to summarize all moments of inertia of an object with one quantity. It may be calculated with respect to any point in space, although for practical purposes the center of mass is most commonly used. === Definition === For a rigid object of N {\displaystyle N} point masses m k {\displaystyle m_{k}} , the moment of inertia tensor is given by I = [ I 11 I 12 I 13 I 21 I 22 I 23 I 31 I 32 I 33 ] . {\displaystyle \mathbf {I} ={\begin{bmatrix}I_{11}&I_{12}&I_{13}\\I_{21}&I_{22}&I_{23}\\I_{31}&I_{32}&I_{33}\end{bmatrix}}.} Its components are defined as I i j = d e f ∑ k = 1 N m k ( ‖ r k ‖ 2 δ i j − x i ( k ) x j ( k ) ) {\displaystyle I_{ij}\ {\stackrel {\mathrm {def} }{=}}\ \sum _{k=1}^{N}m_{k}\left(\left\|\mathbf {r} _{k}\right\|^{2}\delta _{ij}-x_{i}^{(k)}x_{j}^{(k)}\right)} where i {\displaystyle i} , j {\displaystyle j} is equal to 1, 2 or 3 for x {\displaystyle x} , y {\displaystyle y} , and z {\displaystyle z} , respectively, r k = ( x 1 ( k ) , x 2 ( k ) , x 3 ( k ) ) {\displaystyle \mathbf {r} _{k}=\left(x_{1}^{(k)},x_{2}^{(k)},x_{3}^{(k)}\right)} is the vector to the point mass m k {\displaystyle m_{k}} from the point about which the tensor is calculated and δ i j {\displaystyle \delta _{ij}} is the Kronecker delta. Note that, by the definition, I {\displaystyle \mathbf {I} } is a symmetric tensor. The diagonal elements are more succinctly written as I x x = d e f ∑ k = 1 N m k ( y k 2 + z k 2 ) , I y y = d e f ∑ k = 1 N m k ( x k 2 + z k 2 ) , I z z = d e f ∑ k = 1 N m k ( x k 2 + y k 2 ) , {\displaystyle {\begin{aligned}I_{xx}\ &{\stackrel {\mathrm {def} }{=}}\ \sum _{k=1}^{N}m_{k}\left(y_{k}^{2}+z_{k}^{2}\right),\\I_{yy}\ &{\stackrel {\mathrm {def} }{=}}\ \sum _{k=1}^{N}m_{k}\left(x_{k}^{2}+z_{k}^{2}\right),\\I_{zz}\ &{\stackrel {\mathrm {def} }{=}}\ \sum _{k=1}^{N}m_{k}\left(x_{k}^{2}+y_{k}^{2}\right),\end{aligned}}} while the off-diagonal elements, also called the products of inertia, are I x y = I y x = d e f − ∑ k = 1 N m k x k y k , I x z = I z x = d e f − ∑ k = 1 N m k x k z k , I y z = I z y = d e f − ∑ k = 1 N m k y k z k . {\displaystyle {\begin{aligned}I_{xy}=I_{yx}\ &{\stackrel {\mathrm {def} }{=}}\ -\sum _{k=1}^{N}m_{k}x_{k}y_{k},\\I_{xz}=I_{zx}\ &{\stackrel {\mathrm {def} }{=}}\ -\sum _{k=1}^{N}m_{k}x_{k}z_{k},\\I_{yz}=I_{zy}\ &{\stackrel {\mathrm {def} }{=}}\ -\sum _{k=1}^{N}m_{k}y_{k}z_{k}.\end{aligned}}} Here I x x {\displaystyle I_{xx}} denotes the moment of inertia around the x {\displaystyle x} -axis when the objects are rotated around the x-axis, I x y {\displaystyle I_{xy}} denotes the moment of inertia around the y {\displaystyle y} -axis when the objects are rotated around the x {\displaystyle x} -axis, and so on. These quantities can be generalized to an object with distributed mass, described by a mass density function, in a similar fashion to the scalar moment of inertia. 
One then has I = ∭ V ρ ( x , y , z ) ( ‖ r ‖ 2 E 3 − r ⊗ r ) d x d y d z , {\displaystyle \mathbf {I} =\iiint _{V}\rho (x,y,z)\left(\|\mathbf {r} \|^{2}\mathbf {E} _{3}-\mathbf {r} \otimes \mathbf {r} \right)\,dx\,dy\,dz,} where r ⊗ r {\displaystyle \mathbf {r} \otimes \mathbf {r} } is their outer product, E3 is the 3×3 identity matrix, and V is a region of space completely containing the object. Alternatively it can also be written in terms of the angular momentum operator [ r ] x = r × x {\displaystyle [\mathbf {r} ]\mathbf {x} =\mathbf {r} \times \mathbf {x} } : I = ∭ V ρ ( r ) [ r ] T [ r ] d V = − ∭ Q ρ ( r ) [ r ] 2 d V {\displaystyle \mathbf {I} =\iiint _{V}\rho (\mathbf {r} )[\mathbf {r} ]^{\textsf {T}}[\mathbf {r} ]\,dV=-\iiint _{Q}\rho (\mathbf {r} )[\mathbf {r} ]^{2}\,dV} The inertia tensor can be used in the same way as the inertia matrix to compute the scalar moment of inertia about an arbitrary axis in the direction n {\displaystyle \mathbf {n} } , I n = n ⋅ I ⋅ n , {\displaystyle I_{n}=\mathbf {n} \cdot \mathbf {I} \cdot \mathbf {n} ,} where the dot product is taken with the corresponding elements in the component tensors. A product of inertia term such as I 12 {\displaystyle I_{12}} is obtained by the computation I 12 = e 1 ⋅ I ⋅ e 2 , {\displaystyle I_{12}=\mathbf {e} _{1}\cdot \mathbf {I} \cdot \mathbf {e} _{2},} and can be interpreted as the moment of inertia around the x {\displaystyle x} -axis when the object rotates around the y {\displaystyle y} -axis. The components of tensors of degree two can be assembled into a matrix. For the inertia tensor this matrix is given by, I = [ I 11 I 12 I 13 I 21 I 22 I 23 I 31 I 32 I 33 ] = [ I x x I x y I x z I y x I y y I y z I z x I z y I z z ] = ∑ k = 1 N [ m k ( y k 2 + z k 2 ) − m k x k y k − m k x k z k − m k x k y k m k ( x k 2 + z k 2 ) − m k y k z k − m k x k z k − m k y k z k m k ( x k 2 + y k 2 ) ] . {\displaystyle {\begin{aligned}\mathbf {I} &={\begin{bmatrix}I_{11}&I_{12}&I_{13}\\[1.8ex]I_{21}&I_{22}&I_{23}\\[1.8ex]I_{31}&I_{32}&I_{33}\end{bmatrix}}={\begin{bmatrix}I_{xx}&I_{xy}&I_{xz}\\[1.8ex]I_{yx}&I_{yy}&I_{yz}\\[1.8ex]I_{zx}&I_{zy}&I_{zz}\end{bmatrix}}\\[2ex]&=\sum _{k=1}^{N}{\begin{bmatrix}m_{k}\left(y_{k}^{2}+z_{k}^{2}\right)&-m_{k}x_{k}y_{k}&-m_{k}x_{k}z_{k}\\[1ex]-m_{k}x_{k}y_{k}&m_{k}\left(x_{k}^{2}+z_{k}^{2}\right)&-m_{k}y_{k}z_{k}\\[1ex]-m_{k}x_{k}z_{k}&-m_{k}y_{k}z_{k}&m_{k}\left(x_{k}^{2}+y_{k}^{2}\right)\end{bmatrix}}.\end{aligned}}} It is common in rigid body mechanics to use notation that explicitly identifies the x {\displaystyle x} , y {\displaystyle y} , and z {\displaystyle z} -axes, such as I x x {\displaystyle I_{xx}} and I x y {\displaystyle I_{xy}} , for the components of the inertia tensor. === Alternate inertia convention === There are some CAD and CAE applications such as SolidWorks, Unigraphics NX/Siemens NX and MSC Adams that use an alternate convention for the products of inertia. According to this convention, the minus sign is removed from the product of inertia formulas and instead inserted in the inertia matrix: I x y = I y x = d e f ∑ k = 1 N m k x k y k , I x z = I z x = d e f ∑ k = 1 N m k x k z k , I y z = I z y = d e f ∑ k = 1 N m k y k z k , I = [ I 11 I 12 I 13 I 21 I 22 I 23 I 31 I 32 I 33 ] = [ I x x − I x y − I x z − I y x I y y − I y z − I z x − I z y I z z ] = ∑ k = 1 N [ m k ( y k 2 + z k 2 ) − m k x k y k − m k x k z k − m k x k y k m k ( x k 2 + z k 2 ) − m k y k z k − m k x k z k − m k y k z k m k ( x k 2 + y k 2 ) ] . 
{\displaystyle {\begin{aligned}I_{xy}=I_{yx}\ &{\stackrel {\mathrm {def} }{=}}\ \sum _{k=1}^{N}m_{k}x_{k}y_{k},\\I_{xz}=I_{zx}\ &{\stackrel {\mathrm {def} }{=}}\ \sum _{k=1}^{N}m_{k}x_{k}z_{k},\\I_{yz}=I_{zy}\ &{\stackrel {\mathrm {def} }{=}}\ \sum _{k=1}^{N}m_{k}y_{k}z_{k},\\[3pt]\mathbf {I} ={\begin{bmatrix}I_{11}&I_{12}&I_{13}\\[1.8ex]I_{21}&I_{22}&I_{23}\\[1.8ex]I_{31}&I_{32}&I_{33}\end{bmatrix}}&={\begin{bmatrix}I_{xx}&-I_{xy}&-I_{xz}\\[1.8ex]-I_{yx}&I_{yy}&-I_{yz}\\[1.8ex]-I_{zx}&-I_{zy}&I_{zz}\end{bmatrix}}\\[1ex]&=\sum _{k=1}^{N}{\begin{bmatrix}m_{k}\left(y_{k}^{2}+z_{k}^{2}\right)&-m_{k}x_{k}y_{k}&-m_{k}x_{k}z_{k}\\[1ex]-m_{k}x_{k}y_{k}&m_{k}\left(x_{k}^{2}+z_{k}^{2}\right)&-m_{k}y_{k}z_{k}\\[1ex]-m_{k}x_{k}z_{k}&-m_{k}y_{k}z_{k}&m_{k}\left(x_{k}^{2}+y_{k}^{2}\right)\end{bmatrix}}.\end{aligned}}} ==== Determine inertia convention (principal axes method) ==== If one has the inertia data ( I x x , I y y , I z z , I x y , I x z , I y z ) {\displaystyle (I_{xx},I_{yy},I_{zz},I_{xy},I_{xz},I_{yz})} without knowing which inertia convention that has been used, it can be determined if one also has the principal axes. With the principal axes method, one makes inertia matrices from the following two assumptions: The standard inertia convention has been used ( I 12 = I x y , I 13 = I x z , I 23 = I y z ) {\displaystyle (I_{12}=I_{xy},I_{13}=I_{xz},I_{23}=I_{yz})} . The alternate inertia convention has been used ( I 12 = − I x y , I 13 = − I x z , I 23 = − I y z ) {\displaystyle (I_{12}=-I_{xy},I_{13}=-I_{xz},I_{23}=-I_{yz})} . Next, one calculates the eigenvectors for the two matrices. The matrix whose eigenvectors are parallel to the principal axes corresponds to the inertia convention that has been used. === Derivation of the tensor components === The distance r {\displaystyle r} of a particle at x {\displaystyle \mathbf {x} } from the axis of rotation passing through the origin in the n ^ {\displaystyle \mathbf {\hat {n}} } direction is | x − ( x ⋅ n ^ ) n ^ | {\displaystyle \left|\mathbf {x} -\left(\mathbf {x} \cdot \mathbf {\hat {n}} \right)\mathbf {\hat {n}} \right|} , where n ^ {\displaystyle \mathbf {\hat {n}} } is unit vector. The moment of inertia on the axis is I = m r 2 = m ( x − ( x ⋅ n ^ ) n ^ ) ⋅ ( x − ( x ⋅ n ^ ) n ^ ) = m ( x 2 − 2 x ( x ⋅ n ^ ) n ^ + ( x ⋅ n ^ ) 2 n ^ 2 ) = m ( x 2 − ( x ⋅ n ^ ) 2 ) . {\displaystyle I=mr^{2}=m\left(\mathbf {x} -\left(\mathbf {x} \cdot \mathbf {\hat {n}} \right)\mathbf {\hat {n}} \right)\cdot \left(\mathbf {x} -\left(\mathbf {x} \cdot \mathbf {\hat {n}} \right)\mathbf {\hat {n}} \right)=m\left(\mathbf {x} ^{2}-2\mathbf {x} \left(\mathbf {x} \cdot \mathbf {\hat {n}} \right)\mathbf {\hat {n}} +\left(\mathbf {x} \cdot \mathbf {\hat {n}} \right)^{2}\mathbf {\hat {n}} ^{2}\right)=m\left(\mathbf {x} ^{2}-\left(\mathbf {x} \cdot \mathbf {\hat {n}} \right)^{2}\right).} Rewrite the equation using matrix transpose: I = m ( x T x − n ^ T x x T n ^ ) = m ⋅ n ^ T ( x T x ⋅ E 3 − x x T ) n ^ , {\displaystyle I=m\left(\mathbf {x} ^{\textsf {T}}\mathbf {x} -\mathbf {\hat {n}} ^{\textsf {T}}\mathbf {x} \mathbf {x} ^{\textsf {T}}\mathbf {\hat {n}} \right)=m\cdot \mathbf {\hat {n}} ^{\textsf {T}}\left(\mathbf {x} ^{\textsf {T}}\mathbf {x} \cdot \mathbf {E_{3}} -\mathbf {x} \mathbf {x} ^{\textsf {T}}\right)\mathbf {\hat {n}} ,} where E3 is the 3×3 identity matrix. This leads to a tensor formula for the moment of inertia I = m [ n 1 n 2 n 3 ] [ y 2 + z 2 − x y − x z − y x x 2 + z 2 − y z − z x − z y x 2 + y 2 ] [ n 1 n 2 n 3 ] . 
{\displaystyle I=m{\begin{bmatrix}n_{1}&n_{2}&n_{3}\end{bmatrix}}{\begin{bmatrix}y^{2}+z^{2}&-xy&-xz\\[0.5ex]-yx&x^{2}+z^{2}&-yz\\[0.5ex]-zx&-zy&x^{2}+y^{2}\end{bmatrix}}{\begin{bmatrix}n_{1}\\[0.7ex]n_{2}\\[0.7ex]n_{3}\end{bmatrix}}.} For multiple particles, we need only recall that the moment of inertia is additive in order to see that this formula is correct. === Inertia tensor of translation === Let I 0 {\displaystyle \mathbf {I} _{0}} be the inertia tensor of a body calculated at its center of mass, and R {\displaystyle \mathbf {R} } be the displacement vector of the body. The inertia tensor of the translated body respect to its original center of mass is given by: I = I 0 + m [ ( R ⋅ R ) E 3 − R ⊗ R ] {\displaystyle \mathbf {I} =\mathbf {I} _{0}+m[(\mathbf {R} \cdot \mathbf {R} )\mathbf {E} _{3}-\mathbf {R} \otimes \mathbf {R} ]} where m {\displaystyle m} is the body's mass, E3 is the 3 × 3 identity matrix, and ⊗ {\displaystyle \otimes } is the outer product. === Inertia tensor of rotation === Let R {\displaystyle \mathbf {R} } be the matrix that represents a body's rotation. The inertia tensor of the rotated body is given by: I = R I 0 R T {\displaystyle \mathbf {I} =\mathbf {R} \mathbf {I_{0}} \mathbf {R} ^{\textsf {T}}} == Inertia matrix in different reference frames == The use of the inertia matrix in Newton's second law assumes its components are computed relative to axes parallel to the inertial frame and not relative to a body-fixed reference frame. This means that as the body moves the components of the inertia matrix change with time. In contrast, the components of the inertia matrix measured in a body-fixed frame are constant. === Body frame === Let the body frame inertia matrix relative to the center of mass be denoted I C B {\displaystyle \mathbf {I} _{\mathbf {C} }^{B}} , and define the orientation of the body frame relative to the inertial frame by the rotation matrix A {\displaystyle \mathbf {A} } , such that, x = A y , {\displaystyle \mathbf {x} =\mathbf {A} \mathbf {y} ,} where vectors y {\displaystyle \mathbf {y} } in the body fixed coordinate frame have coordinates x {\displaystyle \mathbf {x} } in the inertial frame. Then, the inertia matrix of the body measured in the inertial frame is given by I C = A I C B A T . {\displaystyle \mathbf {I} _{\mathbf {C} }=\mathbf {A} \mathbf {I} _{\mathbf {C} }^{B}\mathbf {A} ^{\mathsf {T}}.} Notice that A {\displaystyle \mathbf {A} } changes as the body moves, while I C B {\displaystyle \mathbf {I} _{\mathbf {C} }^{B}} remains constant. === Principal axes === Measured in the body frame, the inertia matrix is a constant real symmetric matrix. A real symmetric matrix has the eigendecomposition into the product of a rotation matrix Q {\displaystyle \mathbf {Q} } and a diagonal matrix Λ {\displaystyle {\boldsymbol {\Lambda }}} , given by I C B = Q Λ Q T , {\displaystyle \mathbf {I} _{\mathbf {C} }^{B}=\mathbf {Q} {\boldsymbol {\Lambda }}\mathbf {Q} ^{\mathsf {T}},} where Λ = [ I 1 0 0 0 I 2 0 0 0 I 3 ] . {\displaystyle {\boldsymbol {\Lambda }}={\begin{bmatrix}I_{1}&0&0\\0&I_{2}&0\\0&0&I_{3}\end{bmatrix}}.} The columns of the rotation matrix Q {\displaystyle \mathbf {Q} } define the directions of the principal axes of the body, and the constants I 1 {\displaystyle I_{1}} , I 2 {\displaystyle I_{2}} , and I 3 {\displaystyle I_{3}} are called the principal moments of inertia. This result was first shown by J. J. Sylvester (1852), and is a form of Sylvester's law of inertia. 
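The component formulas above translate directly into a few lines of linear algebra. The following NumPy sketch (illustrative only; the masses, coordinates and axis are made-up values) assembles the inertia matrix of a small set of point masses from the summation formula, evaluates the scalar moment of inertia about an arbitrary axis via I_n = n · I · n, and obtains the principal moments and principal axes by eigendecomposition of the symmetric matrix; comparing the computed eigenvectors against known principal axes is also one way to automate the convention check described in the subsection above.

```python
import numpy as np

def inertia_tensor(masses, positions):
    """Inertia matrix about the origin for a set of point masses.

    Implements I = sum_k m_k (|r_k|^2 E3 - r_k (outer) r_k), which reproduces
    the component formulas given above (e.g. I_xx = sum m (y^2 + z^2),
    I_xy = -sum m x y).
    """
    I = np.zeros((3, 3))
    for m, r in zip(masses, positions):
        r = np.asarray(r, dtype=float)
        I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))
    return I

# Made-up example data (arbitrary units).
masses = [1.0, 2.0, 1.5]
positions = [(1.0, 0.0, 0.0), (0.0, 1.0, 1.0), (-1.0, 0.5, 0.0)]

I = inertia_tensor(masses, positions)

# Scalar moment of inertia about an arbitrary axis n: I_n = n . I . n
n = np.array([0.0, 0.0, 1.0])
I_n = n @ I @ n

# Principal moments and axes: eigendecomposition of the symmetric matrix I.
# The eigenvalues are the principal moments I_1, I_2, I_3 and the columns of
# the eigenvector matrix are the directions of the principal axes.
principal_moments, principal_axes = np.linalg.eigh(I)

print(I)
print(I_n, principal_moments)
```

Because the inertia matrix is real and symmetric, `numpy.linalg.eigh` is the natural routine here: it returns real eigenvalues and an orthonormal set of eigenvectors, matching the rotation-matrix-plus-diagonal decomposition described above.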
When the body has an axis of symmetry (sometimes called the figure axis or axis of figure) then the other two moments of inertia will be identical and any axis perpendicular to the axis of symmetry will be a principal axis. A toy top is an example of a rotating rigid body, and the word top is used in the names of types of rigid bodies. When all principal moments of inertia are distinct, the principal axes through center of mass are uniquely specified and the rigid body is called an asymmetric top. If two principal moments are the same, the rigid body is called a symmetric top and there is no unique choice for the two corresponding principal axes. If all three principal moments are the same, the rigid body is called a spherical top (although it need not be spherical) and any axis can be considered a principal axis, meaning that the moment of inertia is the same about any axis. The principal axes are often aligned with the object's symmetry axes. If a rigid body has an axis of symmetry of order m {\displaystyle m} , meaning it is symmetrical under rotations of 360°/m about the given axis, that axis is a principal axis. When m > 2 {\displaystyle m>2} , the rigid body is a symmetric top. If a rigid body has at least two symmetry axes that are not parallel or perpendicular to each other, it is a spherical top, for example, a cube or any other Platonic solid. The motion of vehicles is often described in terms of yaw, pitch, and roll which usually correspond approximately to rotations about the three principal axes. If the vehicle has bilateral symmetry then one of the principal axes will correspond exactly to the transverse (pitch) axis. A practical example of this mathematical phenomenon is the routine automotive task of balancing a tire, which basically means adjusting the distribution of mass of a car wheel such that its principal axis of inertia is aligned with the axle so the wheel does not wobble. Rotating molecules are also classified as asymmetric, symmetric, or spherical tops, and the structure of their rotational spectra is different for each type. === Ellipsoid === The moment of inertia matrix in body-frame coordinates is a quadratic form that defines a surface in the body called Poinsot's ellipsoid. Let Λ {\displaystyle {\boldsymbol {\Lambda }}} be the inertia matrix relative to the center of mass aligned with the principal axes, then the surface x T Λ x = 1 , {\displaystyle \mathbf {x} ^{\mathsf {T}}{\boldsymbol {\Lambda }}\mathbf {x} =1,} or I 1 x 2 + I 2 y 2 + I 3 z 2 = 1 , {\displaystyle I_{1}x^{2}+I_{2}y^{2}+I_{3}z^{2}=1,} defines an ellipsoid in the body frame. Write this equation in the form, ( x 1 / I 1 ) 2 + ( y 1 / I 2 ) 2 + ( z 1 / I 3 ) 2 = 1 , {\displaystyle \left({\frac {x}{1/{\sqrt {I_{1}}}}}\right)^{2}+\left({\frac {y}{1/{\sqrt {I_{2}}}}}\right)^{2}+\left({\frac {z}{1/{\sqrt {I_{3}}}}}\right)^{2}=1,} to see that the semi-principal diameters of this ellipsoid are given by a = 1 I 1 , b = 1 I 2 , c = 1 I 3 . {\displaystyle a={\frac {1}{\sqrt {I_{1}}}},\quad b={\frac {1}{\sqrt {I_{2}}}},\quad c={\frac {1}{\sqrt {I_{3}}}}.} Let a point x {\displaystyle \mathbf {x} } on this ellipsoid be defined in terms of its magnitude and direction, x = ‖ x ‖ n {\displaystyle \mathbf {x} =\|\mathbf {x} \|\mathbf {n} } , where n {\displaystyle \mathbf {n} } is a unit vector. 
Then the relationship presented above, between the inertia matrix and the scalar moment of inertia I n {\displaystyle I_{\mathbf {n} }} around an axis in the direction n {\displaystyle \mathbf {n} } , yields x T Λ x = ‖ x ‖ 2 n T Λ n = ‖ x ‖ 2 I n = 1. {\displaystyle \mathbf {x} ^{\mathsf {T}}{\boldsymbol {\Lambda }}\mathbf {x} =\|\mathbf {x} \|^{2}\mathbf {n} ^{\mathsf {T}}{\boldsymbol {\Lambda }}\mathbf {n} =\|\mathbf {x} \|^{2}I_{\mathbf {n} }=1.} Thus, the magnitude of a point x {\displaystyle \mathbf {x} } in the direction n {\displaystyle \mathbf {n} } on the inertia ellipsoid is ‖ x ‖ = 1 I n . {\displaystyle \|\mathbf {x} \|={\frac {1}{\sqrt {I_{\mathbf {n} }}}}.} == See also == Central moment List of moments of inertia Moment of inertia factor Planar lamina Rotational energy == References == == External links == Angular momentum and rigid-body rotation in two and three dimensions Lecture notes on rigid-body rotation and moments of inertia The moment of inertia tensor An introductory lesson on moment of inertia: keeping a vertical pole not falling down (Java simulation) Tutorial on finding moments of inertia, with problems and solutions on various basic shapes Notes on mechanics of manipulation: the angular inertia tensor Easy to use and Free Moment of Inertia Calculator online
Wikipedia/Moment_of_inertia_tensor
The theory of impetus, developed in the Middle Ages, attempts to explain the forced motion of a body, what it is, and how it comes about or ceases. It is important to note that in ancient and medieval times, motion was always considered absolute, relative to the Earth as the center of the universe. The theory of impetus is an auxiliary or secondary theory of Aristotelian dynamics, put forth initially to explain projectile motion against gravity. Aristotelian dynamics of forced (in antiquity called “unnatural”) motion states that a body (without a moving soul) only moves when an external force is constantly driving it. The greater the force acting, the proportionally greater the speed of the body. If the force stops acting, the body immediately returns to the natural state of rest. As we know today, this idea is wrong. It also states—as clearly formulated by John of Jandun in his work Quaestiones super 8 libros Physicorum Aristotelis from 1586—that not only motion but also force is transmitted to the medium, such that this force propagates continuously from layer to layer of air, becoming weaker and weaker until it finally dies out. This is how the body finally comes to rest. Although the medieval philosophers, beginning with John Philoponus, held to the intuitive idea that only a direct application of force could cause and maintain motion, they recognized that Aristotle's explanation of unnatural motion could not be correct. They therefore developed the concept of impetus. Impetus was understood to be a force inherent in a moving body that had been transferred to it by an external force during a previous direct contact. The explanation of modern mechanics is completely different. First of all, motion is not absolute but relative, namely relative to a reference frame (observer), which in turn may itself be moving relative to another reference frame. For example, the speed of a flying bird measured relative to the earth is completely different from its speed as seen from a moving car. Second, as seen by any such observer, the speed of a body that is not subject to an external force never changes over time. The permanent state of a body is therefore uniform motion. Its continuity requires no external or internal force, but is based solely on the inertia of the body. If a force acts on a moving or stationary body, this leads to a change in the observed speed. The state of rest is merely a limiting case of motion. The term “impetus” as a force that maintains motion therefore has no equivalent in modern mechanics. At most, it comes close to the modern term “linear momentum” of a mass. This is because it is linear momentum as the product of mass and velocity that maintains motion due to the inertia of the mass (conservation of linear momentum). But momentum is not a force; rather, a force is the cause of a change in the momentum of a body, and vice versa. Impetus was introduced by John Philoponus in the 6th century and elaborated by Nur ad-Din al-Bitruji at the end of the 12th century. The theory was modified by Avicenna in the 11th century and Abu'l-Barakāt al-Baghdādī in the 12th century, before it was later established in Western scientific thought by Jean Buridan in the 14th century. It is the intellectual precursor to the concepts of inertia, momentum and acceleration in classical mechanics. == Aristotelian theory == Aristotelian physics is the form of natural philosophy described in the works of the Greek philosopher Aristotle (384–322 BC). 
In his work Physics, Aristotle intended to establish general principles of change that govern all natural bodies, both living and inanimate, celestial and terrestrial – including all motion, quantitative change, qualitative change, and substantial change. Aristotle describes two kinds of motion: "violent" or "unnatural motion", such as that of a thrown stone, in Physics (254b10), and "natural motion", such as that of a falling object, in On the Heavens (300a20). In violent motion, as soon as the agent stops causing it, the motion stops also: in other words, the natural state of an object is to be at rest, since Aristotle does not address friction. == Hipparchus' theory == In the 2nd century BC, Hipparchus assumed that the throwing force is transferred to the body at the time of the throw, and that the body dissipates it during the subsequent up-and-down motion of free fall. This is according to the Neoplatonist Simplicius of Cilicia, who quotes Hipparchus in his book Aristotelis De Caelo commentaria 264, 25 as follows: "Hipparchus says in his book On Bodies Carried Down by Their Weight that the throwing force is the cause of the upward motion of [a lump of] earth thrown upward as long as this force is stronger than that of the thrown body; the stronger the throwing force, the faster the upward motion. Then, when the force decreases, the upward motion continues at a decreased speed until the body begins to move downward under the influence of its own weight, while the throwing force still continues in some way. As this decreases, the velocity of the fall increases and reaches its highest value when this force is completely dissipated." Thus, Hipparchus does not speak of a continuous contact between the moving force and the moving body, or of the function of air as an intermediate carrier of motion, as Aristotle claims. == Philoponan theory == In the 6th century, John Philoponus partly accepted Aristotle's theory that "continuation of motion depends on continued action of a force," but modified it to include his idea that the hurled body acquires a motive power or inclination for forced movement from the agent producing the initial motion and that this power secures the continuation of such motion. However, he argued that this impressed virtue was temporary: that it was a self-expending inclination, and thus the violent motion produced comes to an end, changing back into natural motion. In his book On Aristotle Physics 641, 12; 641, 29; 642, 9 Philoponus first argues explicitly against Aristotle's explanation, maintaining that a thrown stone, after leaving the hand, cannot be propelled any further by the air behind it. Then he continues: "Instead, some immaterial kinetic force must be imparted to the projectile by the thrower. Whereby the pushed air contributes either nothing or only very little to this motion. But if moving bodies are necessarily moved in this way, it is clear that the same process will take place much more easily if an arrow or a stone is thrown necessarily and against its tendency into empty space, and that nothing is necessary for this except the thrower." This last sentence is intended to show that in empty space—which Aristotle rejects—and contrary to Aristotle's opinion, a moving body would continue to move. It should be pointed out that Philoponus in his book uses two different expressions for impetus: kinetic capacity (dynamis) and kinetic force (energeia). 
Both expressions designate in his theory a concept which is close to today's concept of energy, but they are far away from the Aristotelian conceptions of potentiality and actuality. Philoponus' theory of imparted force cannot yet be understood as a principle of inertia. For while he rightly says that the driving quality is no longer imparted externally but has become an internal property of the body, he still accepts the Aristotelian assertion that the driving quality is a force (power) that now acts internally and to which velocity is proportional. In modern physics since Newton, however, velocity is a quality that persists in the absence of forces. == Ockham's and Marchia's theory == The first to grasp that this motion persists by itself was William of Ockham. In his Commentary on the Sentences, Book 2, Question 26, M, written in 1318, he first argues: "If someone standing at point C were to fire a projectile aimed at point B, while another person standing at point F were to throw a projectile at point C, so that at some point M the two projectiles would meet, it would be necessary, according to the Aristotelian explanation, for the same portion of air at point M to be moved simultaneously in two different directions." The impossibility of this, according to Ockham, invalidates the Aristotelian explanation of the movement of projectiles. So Ockham goes on to say: "I say therefore that that which moves (ipsum movens) ... after the separation of the moving body from the original projector, is the body moved by itself (ipsum motum secundum se) and not by any power in it or relative to it (virtus absoluta in eo vel respectiva), ... ." It has been claimed by some historians that by rejecting the basic Aristotelian principle "Everything that moves is moved by something else." (Omne quod movetur ab alio movetur.), Ockham took the first step toward the principle of inertia. Around 1320, Francis de Marchia developed a detailed and elaborate theory of his virtus derelicta. Marchia described virtus derelicta as a force impressed on a projectile that gradually passes away and is consumed by the movement it generates. It is a form that is "not simply permanent, nor simply fluent, but almost medial", staying for some time in the body, but then fading away. This is different from Buridan's impetus (see below), which is a permanent state (res permanens) that is only diminished or destroyed by an opposing force—the resistance of the medium or the gravity of the projectile, which tends in a direction opposite to its motion. Buridan rightly says that without these opposing forces, the projectile would continue to move at constant speed forever. == Iranian theories == In the 11th century, Avicenna (Ibn Sīnā) discussed Philoponus' theory in The Book of Healing; in Physics IV.14 he says: When we independently verify the issue (of projectile motion), we find the most correct doctrine is the doctrine of those who think that the moved object acquires an inclination from the mover. Ibn Sīnā agreed that an impetus is imparted to a projectile by the thrower, but unlike Philoponus, who believed that it was a temporary virtue that would decline even in a vacuum, he viewed it as persistent, requiring external forces such as air resistance to dissipate it. Ibn Sina made a distinction between 'force' and 'inclination' (called "mayl"), and argued that an object gains mayl when it is in opposition to its natural motion. 
Therefore, he concluded that continuation of motion is attributed to the inclination that is transferred to the object, and that the object will be in motion until the mayl is spent. He also claimed that a projectile in a vacuum would not stop unless it is acted upon, which is consistent with Newton's concept of inertia. This idea (which dissented from the Aristotelian view) was later described as "impetus" by Jean Buridan, who may have been influenced by Ibn Sina. == Arabic theories == In the 12th century, Hibat Allah Abu'l-Barakat al-Baghdaadi adopted Philoponus' theory of impetus. In his Kitab al-Mu'tabar, Abu'l-Barakat stated that the mover imparts a violent inclination (mayl qasri) to the moved body and that this diminishes as the moving object distances itself from the mover. Like Philoponus, and unlike Ibn Sina, al-Baghdaadi believed that the mayl extinguishes itself. He also proposed an explanation of the acceleration of falling bodies where "one mayl after another" is successively applied, because it is the falling body itself which provides the mayl, as opposed to shooting a bow, where only one violent mayl is applied. According to Shlomo Pines, al-Baghdaadi's theory was the oldest negation of Aristotle's fundamental dynamic law [namely, that a constant force produces a uniform motion], [and is thus an] anticipation in a vague fashion of the fundamental law of classical mechanics [namely, that a force applied continuously produces acceleration]. Jean Buridan and Albert of Saxony later referred to Abu'l-Barakat in explaining that the acceleration of a falling body is a result of its increasing impetus. == Buridanist impetus == In the 14th century, Jean Buridan postulated the notion of motive force, which he named impetus. When a mover sets a body in motion he implants into it a certain impetus, that is, a certain force enabling a body to move in the direction in which the mover starts it, be it upwards, downwards, sidewards, or in a circle. The implanted impetus increases in the same ratio as the velocity. It is because of this impetus that a stone moves on after the thrower has ceased moving it. But because of the resistance of the air (and also because of the gravity of the stone) which strives to move it in the opposite direction to the motion caused by the impetus, the latter will weaken all the time. Therefore the motion of the stone will be gradually slower, and finally the impetus is so diminished or destroyed that the gravity of the stone prevails and moves the stone towards its natural place. In my opinion one can accept this explanation because the other explanations prove to be false whereas all phenomena agree with this one. Buridan gives his theory a mathematical value: impetus = weight × velocity. Buridan's pupil Dominicus de Clavasio put it in his 1357 De Caelo as follows: When something moves a stone by violence, in addition to imposing on it an actual force, it impresses in it a certain impetus. In the same way gravity not only gives motion itself to a moving body, but also gives it a motive power and an impetus, ... Buridan's position was that a moving object would only be arrested by the resistance of the air and the weight of the body which would oppose its impetus. Buridan also maintained that impetus was proportional to speed; thus, his initial idea of impetus was similar in many ways to the modern concept of momentum. 
Buridan saw his theory as only a modification to Aristotle's basic philosophy, maintaining many other peripatetic views, including the belief that there was still a fundamental difference between an object in motion and an object at rest. Buridan also maintained that impetus could be not only linear, but also circular in nature, causing objects (such as celestial bodies) to move in a circle. Buridan pointed out that neither Aristotle's unmoved movers nor Plato's souls are in the Bible, so he applied impetus theory to the eternal rotation of the celestial spheres by extension of a terrestrial example of its application to rotary motion in the form of a rotating millwheel that continues rotating for a long time after the originally propelling hand is withdrawn, driven by the impetus impressed within it. He wrote on the celestial impetus of the spheres as follows: God, when He created the world, moved each of the celestial orbs as He pleased, and in moving them he impressed in them impetuses which moved them without his having to move them any more...And those impetuses which he impressed in the celestial bodies were not decreased or corrupted afterwards, because there was no inclination of the celestial bodies for other movements. Nor was there resistance which would be corruptive or repressive of that impetus. However, by discounting the possibility of any resistance either due to a contrary inclination to move in any opposite direction or due to any external resistance, he concluded their impetus was therefore not corrupted by any resistance. Buridan also discounted any inherent resistance to motion in the form of an inclination to rest within the spheres themselves, such as the inertia posited by Averroes and Aquinas. For otherwise that resistance would destroy their impetus, as the anti-Duhemian historian of science Annaliese Maier maintained the Parisian impetus dynamicists were forced to conclude because of their belief in an inherent inclinatio ad quietem or inertia in all bodies. This raised the question of why the motive force of impetus does not therefore move the spheres with infinite speed. One impetus dynamics answer seemed to be that it was a secondary kind of motive force that produced uniform motion rather than infinite speed, rather than producing uniformly accelerated motion like the primary force did by producing constantly increasing amounts of impetus. However, in his Treatise on the heavens and the world in which the heavens are moved by inanimate inherent mechanical forces, Buridan's pupil Oresme offered an alternative Thomist inertial response to this problem. His response was to posit a resistance to motion inherent in the heavens (i.e. in the spheres), but which is only a resistance to acceleration beyond their natural speed, rather than to motion itself, and was thus a tendency to preserve their natural speed. Buridan's thought was followed up by his pupil Albert of Saxony (1316–1390), by writers in Poland such as John Cantius, and the Oxford Calculators. Their work in turn was elaborated by Nicole Oresme who pioneered the practice of demonstrating laws of motion in the form of graphs. == The tunnel experiment and oscillatory motion == The Buridan impetus theory developed one of the most important thought experiments in the history of science, the 'tunnel-experiment'. This experiment incorporated oscillatory and pendulum motion into dynamical analysis and the science of motion for the first time. 
It also established one of the important principles of classical mechanics. The pendulum was crucially important to the development of mechanics in the 17th century. The tunnel experiment also gave rise to the more generally important axiomatic principle of Galilean, Huygenian and Leibnizian dynamics, namely that a body rises to the same height from which it has fallen, a principle of gravitational potential energy. As Galileo Galilei expressed this fundamental principle of his dynamics in his 1632 Dialogo: The heavy falling body acquires sufficient impetus [in falling from a given height] to carry it back to an equal height. This imaginary experiment predicted that a cannonball dropped down a tunnel going straight through the Earth's centre and out the other side would pass the centre and rise on the opposite surface to the same height from which it had first fallen, driven upwards by the gravitationally created impetus it had continually accumulated in falling to the centre. This impetus would require a violent motion correspondingly rising to the same height past the centre for the now opposing force of gravity to destroy it all in the same distance which it had previously required to create it. At this turning point the ball would then descend again and oscillate back and forth between the two opposing surfaces about the centre infinitely in principle. The tunnel experiment provided the first dynamical model of oscillatory motion, specifically in terms of A-B impetus dynamics. This thought-experiment was then applied to the dynamical explanation of a real world oscillatory motion, namely that of the pendulum. The oscillating motion of the cannonball was compared to the motion of a pendulum bob by imagining it to be attached to the end of an immensely long cord suspended from the vault of the fixed stars centred on the Earth. The relatively short arc of its path through the distant Earth was practically a straight line along the tunnel. Real world pendula were then conceived of as just micro versions of this 'tunnel pendulum', but with far shorter cords and bobs oscillating above the Earth's surface in arcs corresponding to the tunnel as their oscillatory midpoint was dynamically assimilated to the tunnel's centre. Through such 'lateral thinking', the bob's lateral horizontal motion was conceived of as a case of gravitational free-fall followed by violent motion in a recurring cycle, with the bob repeatedly travelling through and beyond the motion's vertically lowest but horizontally middle point that substituted for the Earth's centre in the tunnel pendulum. The lateral motions of the bob first towards and then away from the normal in the downswing and upswing became lateral downward and upward motions in relation to the horizontal rather than to the vertical. The orthodox Aristotelians saw pendulum motion as a dynamical anomaly, as 'falling to rest with difficulty.' As Thomas Kuhn noted in his 1962 The Structure of Scientific Revolutions, on the impetus theory's novel analysis it was not falling with any dynamical difficulty at all in principle, but was rather falling in repeated and potentially endless cycles of alternating downward gravitationally natural motion and upward gravitationally violent motion. 
Galileo eventually appealed to pendulum motion to demonstrate that the speed of gravitational free-fall is the same for all unequal weights by virtue of dynamically modelling pendulum motion in this manner as a case of cyclically repeated gravitational free-fall along the horizontal in principle. The tunnel experiment was a crucial experiment in favour of impetus dynamics against both orthodox Aristotelian dynamics without any auxiliary impetus theory and Aristotelian dynamics with its H-P variant. According to the latter two theories, the bob cannot possibly pass beyond the normal. In orthodox Aristotelian dynamics there is no force to carry the bob upwards beyond the centre in violent motion against its own gravity that carries it to the centre, where it stops. When conjoined with the Philoponus auxiliary theory, in the case where the cannonball is released from rest, there is no such force because either all the initial upward force of impetus originally impressed within it to hold it in static dynamical equilibrium has been exhausted, or if any remained it would act in the opposite direction and combine with gravity to prevent motion through and beyond the centre. The cannonball being positively hurled downwards could not possibly result in an oscillatory motion either. Although it could then possibly pass beyond the centre, it could never return to pass through it and rise back up again. It would be logically possible for it to pass beyond the centre if upon reaching the centre some of the constantly decaying downward impetus remained and still was sufficiently stronger than gravity to push it beyond the centre and upwards again, eventually becoming weaker than gravity. The ball would then be pulled back towards the centre by its gravity but could not then pass beyond the centre to rise up again, because it would have no force directed against gravity to overcome it. Any possibly remaining impetus would be directed 'downwards' towards the centre, in the same direction it was originally created. Thus pendulum motion was dynamically impossible for both orthodox Aristotelian dynamics and also for H-P impetus dynamics on this 'tunnel model' analogical reasoning. It was predicted by the impetus theory's tunnel prediction because that theory posited that a continually accumulating downwards force of impetus directed towards the centre is acquired in natural motion, sufficient to then carry it upwards beyond the centre against gravity, and rather than only having an initially upwards force of impetus away from the centre as in the theory of natural motion. So the tunnel experiment constituted a crucial experiment between three alternative theories of natural motion. Impetus dynamics was to be preferred if the Aristotelian science of motion was to incorporate a dynamical explanation of pendulum motion. It was also to be preferred more generally if it was to explain other oscillatory motions, such as the to and fro vibrations around the normal of musical strings in tension, such as those of a guitar. The analogy made with the gravitational tunnel experiment was that the tension in the string pulling it towards the normal played the role of gravity, and thus when plucked (i.e. pulled away from the normal) and then released, it was the equivalent of pulling the cannonball to the Earth's surface and then releasing it. 
Thus the musical string vibrated in a continual cycle of the alternating creation of impetus towards the normal and its destruction after passing through the normal until this process starts again with the creation of fresh 'downward' impetus once all the 'upward' impetus has been destroyed. This positing of a dynamical family resemblance of the motions of pendula and vibrating strings with the paradigmatic tunnel-experiment, the origin of all oscillations in the history of dynamics, was one of the greatest imaginative developments of medieval Aristotelian dynamics in its increasing repertoire of dynamical models of different kinds of motion. Shortly before Galileo's theory of impetus, Giambattista Benedetti modified the growing theory of impetus to involve linear motion alone: ... [Any] portion of corporeal matter which moves by itself when an impetus has been impressed on it by any external motive force has a natural tendency to move on a rectilinear, not a curved, path. Benedetti cites the motion of a rock in a sling as an example of the inherent linear motion of objects, forced into circular motion. == See also == Conatus Physics in the medieval Islamic world History of science == References and footnotes == == Bibliography == Clagett, Marshall (1959). Science of Mechanics in the Middle Ages. University of Wisconsin Press. Crombie, Alistair Cameron (1959). The History of Science From Augustine to Galileo. Dover Publications. ISBN 9780486288505. Duhem, Pierre. [1906–13]: Etudes sur Leonard de Vinci Duhem, Pierre, History of Physics, Section IX, XVI and XVII in The Catholic Encyclopedia Drake, Stillman; Drabkin, I. E. (1969). Mechanics in Sixteenth Century Italy. University of Wisconsin Press. ISBN 9781101203736. Galilei, Galileo (1590). De Motu. translated in On Motion and on Mechanics. Drabkin & Drake. Galilei, Galileo (1953). Dialogo. Translated by Stillman Drake. University of California Press. Galilei, Galileo (1974). Discorsi. Translated by Stillman Drake. Grant, Edward (1996). The Foundations of Modern Science in the Middle Ages. Cambridge University Press. ISBN 0-521-56137-X. Hentschel, Klaus (2009). "Zur Begriffs- und Problemgeschichte von 'Impetus'". In Yousefi, Hamid Reza; Dick, Christiane (eds.). Das Wagnis des Neuen. Kontexte und Restriktionen der Wissenschaft. Nordhausen: Bautz. pp. 479–499. ISBN 978-3-88309-507-3. Koyré, Alexandre. Galilean Studies. Kuhn, Thomas (1957). The Copernican Revolution. Kuhn, Thomas (1970) [1962]. The Structure of Scientific Revolutions. Moody, E. A. (1966). "Galileo and his precursors". In Golino (ed.). Galileo Reappraised. University of California Press. Moody, E. A. (1951). "Galileo and Avempace: The Dynamics of the Leaning Tower Experiment". Journal of the History of Ideas. 12 (2): 163–193. doi:10.2307/2707514. JSTOR 2707514.
Wikipedia/Impetus_theory
Variational Bayesian methods are a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning. They are typically used in complex statistical models consisting of observed variables (usually termed "data") as well as unknown parameters and latent variables, with various sorts of relationships among the three types of random variables, as might be described by a graphical model. As typical in Bayesian inference, the parameters and latent variables are grouped together as "unobserved variables". Variational Bayesian methods are primarily used for two purposes: To provide an analytical approximation to the posterior probability of the unobserved variables, in order to do statistical inference over these variables. To derive a lower bound for the marginal likelihood (sometimes called the evidence) of the observed data (i.e. the marginal probability of the data given the model, with marginalization performed over unobserved variables). This is typically used for performing model selection, the general idea being that a higher marginal likelihood for a given model indicates a better fit of the data by that model and hence a greater probability that the model in question was the one that generated the data. (See also the Bayes factor article.) In the former purpose (that of approximating a posterior probability), variational Bayes is an alternative to Monte Carlo sampling methods—particularly, Markov chain Monte Carlo methods such as Gibbs sampling—for taking a fully Bayesian approach to statistical inference over complex distributions that are difficult to evaluate directly or sample. In particular, whereas Monte Carlo techniques provide a numerical approximation to the exact posterior using a set of samples, variational Bayes provides a locally-optimal, exact analytical solution to an approximation of the posterior. Variational Bayes can be seen as an extension of the expectation–maximization (EM) algorithm from maximum likelihood (ML) or maximum a posteriori (MAP) estimation of the single most probable value of each parameter to fully Bayesian estimation which computes (an approximation to) the entire posterior distribution of the parameters and latent variables. As in EM, it finds a set of optimal parameter values, and it has the same alternating structure as does EM, based on a set of interlocked (mutually dependent) equations that cannot be solved analytically. For many applications, variational Bayes produces solutions of comparable accuracy to Gibbs sampling at greater speed. However, deriving the set of equations used to update the parameters iteratively often requires a large amount of work compared with deriving the comparable Gibbs sampling equations. This is the case even for many models that are conceptually quite simple, as is demonstrated below in the case of a basic non-hierarchical model with only two parameters and no latent variables. == Mathematical derivation == === Problem === In variational inference, the posterior distribution over a set of unobserved variables Z = { Z 1 … Z n } {\displaystyle \mathbf {Z} =\{Z_{1}\dots Z_{n}\}} given some data X {\displaystyle \mathbf {X} } is approximated by a so-called variational distribution, Q ( Z ) : {\displaystyle Q(\mathbf {Z} ):} P ( Z ∣ X ) ≈ Q ( Z ) . 
{\displaystyle P(\mathbf {Z} \mid \mathbf {X} )\approx Q(\mathbf {Z} ).} The distribution Q ( Z ) {\displaystyle Q(\mathbf {Z} )} is restricted to belong to a family of distributions of simpler form than P ( Z ∣ X ) {\displaystyle P(\mathbf {Z} \mid \mathbf {X} )} (e.g. a family of Gaussian distributions), selected with the intention of making Q ( Z ) {\displaystyle Q(\mathbf {Z} )} similar to the true posterior, P ( Z ∣ X ) {\displaystyle P(\mathbf {Z} \mid \mathbf {X} )} . The similarity (or dissimilarity) is measured in terms of a dissimilarity function d ( Q ; P ) {\displaystyle d(Q;P)} and hence inference is performed by selecting the distribution Q ( Z ) {\displaystyle Q(\mathbf {Z} )} that minimizes d ( Q ; P ) {\displaystyle d(Q;P)} . === KL divergence === The most common type of variational Bayes uses the Kullback–Leibler divergence (KL-divergence) of Q from P as the choice of dissimilarity function. This choice makes this minimization tractable. The KL-divergence is defined as D K L ( Q ∥ P ) ≜ ∑ Z Q ( Z ) log ⁡ Q ( Z ) P ( Z ∣ X ) . {\displaystyle D_{\mathrm {KL} }(Q\parallel P)\triangleq \sum _{\mathbf {Z} }Q(\mathbf {Z} )\log {\frac {Q(\mathbf {Z} )}{P(\mathbf {Z} \mid \mathbf {X} )}}.} Note that Q and P are reversed from what one might expect. This use of reversed KL-divergence is conceptually similar to the expectation–maximization algorithm. (Using the KL-divergence in the other way produces the expectation propagation algorithm.) === Intractability === Variational techniques are typically used to form an approximation for: P ( Z ∣ X ) = P ( X ∣ Z ) P ( Z ) P ( X ) = P ( X ∣ Z ) P ( Z ) ∫ Z P ( X , Z ′ ) d Z ′ {\displaystyle P(\mathbf {Z} \mid \mathbf {X} )={\frac {P(\mathbf {X} \mid \mathbf {Z} )P(\mathbf {Z} )}{P(\mathbf {X} )}}={\frac {P(\mathbf {X} \mid \mathbf {Z} )P(\mathbf {Z} )}{\int _{\mathbf {Z} }P(\mathbf {X} ,\mathbf {Z} ')\,d\mathbf {Z} '}}} The marginalization over Z {\displaystyle \mathbf {Z} } to calculate P ( X ) {\displaystyle P(\mathbf {X} )} in the denominator is typically intractable, because, for example, the search space of Z {\displaystyle \mathbf {Z} } is combinatorially large. Therefore, we seek an approximation, using Q ( Z ) ≈ P ( Z ∣ X ) {\displaystyle Q(\mathbf {Z} )\approx P(\mathbf {Z} \mid \mathbf {X} )} . 
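To make the optimization concrete, the following sketch works through a deliberately tiny discrete example (all probability values are made up) in which the normalizing sum over Z is still feasible, so the true posterior is available for comparison. It restricts Q(Z) to a simpler family, here a product of two independent Bernoulli distributions (anticipating the factorized families used later in the article), and selects the member of that family minimizing the reverse KL divergence D_KL(Q∥P) by brute-force grid search; in realistic models this search is replaced by the analytical updates developed in the sections below.

```python
import numpy as np
from itertools import product

# Hypothetical unnormalized joint P(X, Z) over two binary latents Z = (z1, z2);
# the values are arbitrary and stand in for P(X | Z) P(Z) at the observed X.
joint = {(0, 0): 0.30, (0, 1): 0.15, (1, 0): 0.05, (1, 1): 0.20}

# "Evidence" P(X): normalizing constant obtained by summing over Z
# (tractable only because this example is tiny).
p_x = sum(joint.values())
posterior = {z: v / p_x for z, v in joint.items()}  # true P(Z | X)

def kl_q_p(q1, q2):
    """Reverse KL divergence D_KL(Q || P) for a factorized Q(z1, z2) = q(z1) q(z2)."""
    kl = 0.0
    for z1, z2 in product([0, 1], repeat=2):
        q = (q1 if z1 else 1 - q1) * (q2 if z2 else 1 - q2)
        if q > 0:
            kl += q * np.log(q / posterior[(z1, z2)])
    return kl

# Select the best Q in the restricted family by brute-force grid search.
grid = np.linspace(0.01, 0.99, 99)
best = min(((kl_q_p(a, b), a, b) for a in grid for b in grid), key=lambda t: t[0])
print("best factorized Q:", best[1:], "KL =", best[0])
```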
=== Evidence lower bound === Given that P ( Z ∣ X ) = P ( X , Z ) P ( X ) {\displaystyle P(\mathbf {Z} \mid \mathbf {X} )={\frac {P(\mathbf {X} ,\mathbf {Z} )}{P(\mathbf {X} )}}} , the KL-divergence above can also be written as D K L ( Q ∥ P ) = ∑ Z Q ( Z ) [ log ⁡ Q ( Z ) P ( Z , X ) + log ⁡ P ( X ) ] = ∑ Z Q ( Z ) [ log ⁡ Q ( Z ) − log ⁡ P ( Z , X ) ] + ∑ Z Q ( Z ) [ log ⁡ P ( X ) ] {\displaystyle {\begin{array}{rl}D_{\mathrm {KL} }(Q\parallel P)&=\sum _{\mathbf {Z} }Q(\mathbf {Z} )\left[\log {\frac {Q(\mathbf {Z} )}{P(\mathbf {Z} ,\mathbf {X} )}}+\log P(\mathbf {X} )\right]\\&=\sum _{\mathbf {Z} }Q(\mathbf {Z} )\left[\log Q(\mathbf {Z} )-\log P(\mathbf {Z} ,\mathbf {X} )\right]+\sum _{\mathbf {Z} }Q(\mathbf {Z} )\left[\log P(\mathbf {X} )\right]\end{array}}} Because P ( X ) {\displaystyle P(\mathbf {X} )} is a constant with respect to Z {\displaystyle \mathbf {Z} } and ∑ Z Q ( Z ) = 1 {\displaystyle \sum _{\mathbf {Z} }Q(\mathbf {Z} )=1} because Q ( Z ) {\displaystyle Q(\mathbf {Z} )} is a distribution, we have D K L ( Q ∥ P ) = ∑ Z Q ( Z ) [ log ⁡ Q ( Z ) − log ⁡ P ( Z , X ) ] + log ⁡ P ( X ) {\displaystyle D_{\mathrm {KL} }(Q\parallel P)=\sum _{\mathbf {Z} }Q(\mathbf {Z} )\left[\log Q(\mathbf {Z} )-\log P(\mathbf {Z} ,\mathbf {X} )\right]+\log P(\mathbf {X} )} which, according to the definition of expected value (for a discrete random variable), can be written as follows D K L ( Q ∥ P ) = E Q [ log ⁡ Q ( Z ) − log ⁡ P ( Z , X ) ] + log ⁡ P ( X ) {\displaystyle D_{\mathrm {KL} }(Q\parallel P)=\mathbb {E} _{\mathbf {Q} }\left[\log Q(\mathbf {Z} )-\log P(\mathbf {Z} ,\mathbf {X} )\right]+\log P(\mathbf {X} )} which can be rearranged to become log ⁡ P ( X ) = D K L ( Q ∥ P ) − E Q [ log ⁡ Q ( Z ) − log ⁡ P ( Z , X ) ] = D K L ( Q ∥ P ) + L ( Q ) {\displaystyle {\begin{array}{rl}\log P(\mathbf {X} )&=D_{\mathrm {KL} }(Q\parallel P)-\mathbb {E} _{\mathbf {Q} }\left[\log Q(\mathbf {Z} )-\log P(\mathbf {Z} ,\mathbf {X} )\right]\\&=D_{\mathrm {KL} }(Q\parallel P)+{\mathcal {L}}(Q)\end{array}}} As the log-evidence log ⁡ P ( X ) {\displaystyle \log P(\mathbf {X} )} is fixed with respect to Q {\displaystyle Q} , maximizing the final term L ( Q ) {\displaystyle {\mathcal {L}}(Q)} minimizes the KL divergence of Q {\displaystyle Q} from P {\displaystyle P} . By appropriate choice of Q {\displaystyle Q} , L ( Q ) {\displaystyle {\mathcal {L}}(Q)} becomes tractable to compute and to maximize. Hence we have both an analytical approximation Q {\displaystyle Q} for the posterior P ( Z ∣ X ) {\displaystyle P(\mathbf {Z} \mid \mathbf {X} )} , and a lower bound L ( Q ) {\displaystyle {\mathcal {L}}(Q)} for the log-evidence log ⁡ P ( X ) {\displaystyle \log P(\mathbf {X} )} (since the KL-divergence is non-negative). The lower bound L ( Q ) {\displaystyle {\mathcal {L}}(Q)} is known as the (negative) variational free energy in analogy with thermodynamic free energy because it can also be expressed as a negative energy E Q ⁡ [ log ⁡ P ( Z , X ) ] {\displaystyle \operatorname {E} _{Q}[\log P(\mathbf {Z} ,\mathbf {X} )]} plus the entropy of Q {\displaystyle Q} . The term L ( Q ) {\displaystyle {\mathcal {L}}(Q)} is also known as Evidence Lower Bound, abbreviated as ELBO, to emphasize that it is a lower (worst-case) bound on the log-evidence of the data. 
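The decomposition just derived is easy to check numerically. Reusing the made-up joint values from the previous sketch, the code below evaluates L(Q) = E_Q[log P(Z,X) − log Q(Z)] and D_KL(Q∥P) for a few candidate distributions Q and confirms that their sum always equals the fixed log-evidence, so the ELBO is indeed a lower bound that becomes tight exactly when Q equals the true posterior.

```python
import numpy as np

# Hypothetical unnormalized joint P(Z, X) at the observed X, over 4 latent states.
p_joint = np.array([0.30, 0.15, 0.05, 0.20])
log_evidence = np.log(p_joint.sum())          # log P(X)
posterior = p_joint / p_joint.sum()           # P(Z | X)

def elbo(q):
    """L(Q) = E_Q[log P(Z, X) - log Q(Z)]."""
    return float(np.sum(q * (np.log(p_joint) - np.log(q))))

def kl(q):
    """D_KL(Q || P(Z | X))."""
    return float(np.sum(q * (np.log(q) - np.log(posterior))))

for q in [np.array([0.25, 0.25, 0.25, 0.25]),   # uniform guess
          np.array([0.40, 0.20, 0.10, 0.30]),   # arbitrary guess
          posterior]:                            # the exact posterior
    # The identity log P(X) = D_KL(Q || P) + L(Q) holds for every Q, and the
    # ELBO reaches the log-evidence exactly when Q equals the posterior.
    print(elbo(q), kl(q), elbo(q) + kl(q), log_evidence)
```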
=== Proofs === By the generalized Pythagorean theorem of Bregman divergence, of which KL-divergence is a special case, it can be shown that: D K L ( Q ∥ P ) ≥ D K L ( Q ∥ Q ∗ ) + D K L ( Q ∗ ∥ P ) , ∀ Q ∗ ∈ C {\displaystyle D_{\mathrm {KL} }(Q\parallel P)\geq D_{\mathrm {KL} }(Q\parallel Q^{*})+D_{\mathrm {KL} }(Q^{*}\parallel P),\forall Q^{*}\in {\mathcal {C}}} where C {\displaystyle {\mathcal {C}}} is a convex set and the equality holds if: Q = Q ∗ ≜ arg ⁡ min Q ∈ C D K L ( Q ∥ P ) . {\displaystyle Q=Q^{*}\triangleq \arg \min _{Q\in {\mathcal {C}}}D_{\mathrm {KL} }(Q\parallel P).} In this case, the global minimizer Q ∗ ( Z ) = q ∗ ( Z 1 ∣ Z 2 ) q ∗ ( Z 2 ) = q ∗ ( Z 2 ∣ Z 1 ) q ∗ ( Z 1 ) , {\displaystyle Q^{*}(\mathbf {Z} )=q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})q^{*}(\mathbf {Z} _{2})=q^{*}(\mathbf {Z} _{2}\mid \mathbf {Z} _{1})q^{*}(\mathbf {Z} _{1}),} with Z = { Z 1 , Z 2 } , {\displaystyle \mathbf {Z} =\{\mathbf {Z_{1}} ,\mathbf {Z_{2}} \},} can be found as follows: q ∗ ( Z 2 ) = P ( X ) ζ ( X ) P ( Z 2 ∣ X ) exp ⁡ ( D K L ( q ∗ ( Z 1 ∣ Z 2 ) ∥ P ( Z 1 ∣ Z 2 , X ) ) ) = 1 ζ ( X ) exp ⁡ E q ∗ ( Z 1 ∣ Z 2 ) ( log ⁡ P ( Z , X ) q ∗ ( Z 1 ∣ Z 2 ) ) , {\displaystyle {\begin{array}{rl}q^{*}(\mathbf {Z} _{2})&={\frac {P(\mathbf {X} )}{\zeta (\mathbf {X} )}}{\frac {P(\mathbf {Z} _{2}\mid \mathbf {X} )}{\exp(D_{\mathrm {KL} }(q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})\parallel P(\mathbf {Z} _{1}\mid \mathbf {Z} _{2},\mathbf {X} )))}}\\&={\frac {1}{\zeta (\mathbf {X} )}}\exp \mathbb {E} _{q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})}\left(\log {\frac {P(\mathbf {Z} ,\mathbf {X} )}{q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})}}\right),\end{array}}} in which the normalizing constant is: ζ ( X ) = P ( X ) ∫ Z 2 P ( Z 2 ∣ X ) exp ⁡ ( D K L ( q ∗ ( Z 1 ∣ Z 2 ) ∥ P ( Z 1 ∣ Z 2 , X ) ) ) = ∫ Z 2 exp ⁡ E q ∗ ( Z 1 ∣ Z 2 ) ( log ⁡ P ( Z , X ) q ∗ ( Z 1 ∣ Z 2 ) ) . {\displaystyle {\begin{array}{rl}\zeta (\mathbf {X} )&=P(\mathbf {X} )\int _{\mathbf {Z} _{2}}{\frac {P(\mathbf {Z} _{2}\mid \mathbf {X} )}{\exp(D_{\mathrm {KL} }(q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})\parallel P(\mathbf {Z} _{1}\mid \mathbf {Z} _{2},\mathbf {X} )))}}\\&=\int _{\mathbf {Z} _{2}}\exp \mathbb {E} _{q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})}\left(\log {\frac {P(\mathbf {Z} ,\mathbf {X} )}{q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})}}\right).\end{array}}} The term ζ ( X ) {\displaystyle \zeta (\mathbf {X} )} is often called the evidence lower bound (ELBO) in practice, since P ( X ) ≥ ζ ( X ) = exp ⁡ ( L ( Q ∗ ) ) {\displaystyle P(\mathbf {X} )\geq \zeta (\mathbf {X} )=\exp({\mathcal {L}}(Q^{*}))} , as shown above. By interchanging the roles of Z 1 {\displaystyle \mathbf {Z} _{1}} and Z 2 , {\displaystyle \mathbf {Z} _{2},} we can iteratively compute the approximated q ∗ ( Z 1 ) {\displaystyle q^{*}(\mathbf {Z} _{1})} and q ∗ ( Z 2 ) {\displaystyle q^{*}(\mathbf {Z} _{2})} of the true model's marginals P ( Z 1 ∣ X ) {\displaystyle P(\mathbf {Z} _{1}\mid \mathbf {X} )} and P ( Z 2 ∣ X ) , {\displaystyle P(\mathbf {Z} _{2}\mid \mathbf {X} ),} respectively. Although this iterative scheme is guaranteed to converge monotonically, the converged Q ∗ {\displaystyle Q^{*}} is only a local minimizer of D K L ( Q ∥ P ) {\displaystyle D_{\mathrm {KL} }(Q\parallel P)} . If the constrained space C {\displaystyle {\mathcal {C}}} is confined within independent space, i.e. 
q ∗ ( Z 1 ∣ Z 2 ) = q ∗ ( Z 1 ) , {\displaystyle q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})=q^{*}(\mathbf {Z_{1}} ),} the above iterative scheme will become the so-called mean field approximation Q ∗ ( Z ) = q ∗ ( Z 1 ) q ∗ ( Z 2 ) , {\displaystyle Q^{*}(\mathbf {Z} )=q^{*}(\mathbf {Z} _{1})q^{*}(\mathbf {Z} _{2}),} as shown below. == Mean field approximation == The variational distribution Q ( Z ) {\displaystyle Q(\mathbf {Z} )} is usually assumed to factorize over some partition of the latent variables, i.e. for some partition of the latent variables Z {\displaystyle \mathbf {Z} } into Z 1 … Z M {\displaystyle \mathbf {Z} _{1}\dots \mathbf {Z} _{M}} , Q ( Z ) = ∏ i = 1 M q i ( Z i ∣ X ) {\displaystyle Q(\mathbf {Z} )=\prod _{i=1}^{M}q_{i}(\mathbf {Z} _{i}\mid \mathbf {X} )} It can be shown using the calculus of variations (hence the name "variational Bayes") that the "best" distribution q j ∗ {\displaystyle q_{j}^{*}} for each of the factors q j {\displaystyle q_{j}} (in terms of the distribution minimizing the KL divergence, as described above) satisfies: q j ∗ ( Z j ∣ X ) = e E q − j ∗ ⁡ [ ln ⁡ p ( Z , X ) ] ∫ e E q − j ∗ ⁡ [ ln ⁡ p ( Z , X ) ] d Z j {\displaystyle q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )={\frac {e^{\operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]}}{\int e^{\operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]}\,d\mathbf {Z} _{j}}}} where E q − j ∗ ⁡ [ ln ⁡ p ( Z , X ) ] {\displaystyle \operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]} is the expectation of the logarithm of the joint probability of the data and latent variables, taken with respect to q ∗ {\displaystyle q^{*}} over all variables not in the partition: refer to Lemma 4.1 of for a derivation of the distribution q j ∗ ( Z j ∣ X ) {\displaystyle q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )} . In practice, we usually work in terms of logarithms, i.e.: ln ⁡ q j ∗ ( Z j ∣ X ) = E q − j ∗ ⁡ [ ln ⁡ p ( Z , X ) ] + constant {\displaystyle \ln q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )=\operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]+{\text{constant}}} The constant in the above expression is related to the normalizing constant (the denominator in the expression above for q j ∗ {\displaystyle q_{j}^{*}} ) and is usually reinstated by inspection, as the rest of the expression can usually be recognized as being a known type of distribution (e.g. Gaussian, gamma, etc.). Using the properties of expectations, the expression E q − j ∗ ⁡ [ ln ⁡ p ( Z , X ) ] {\displaystyle \operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]} can usually be simplified into a function of the fixed hyperparameters of the prior distributions over the latent variables and of expectations (and sometimes higher moments such as the variance) of latent variables not in the current partition (i.e. latent variables not included in Z j {\displaystyle \mathbf {Z} _{j}} ). This creates circular dependencies between the parameters of the distributions over variables in one partition and the expectations of variables in the other partitions. 
This naturally suggests an iterative algorithm, much like EM (the expectation–maximization algorithm), in which the expectations (and possibly higher moments) of the latent variables are initialized in some fashion (perhaps randomly), and then the parameters of each distribution are computed in turn using the current values of the expectations, after which the expectation of the newly computed distribution is set appropriately according to the computed parameters. An algorithm of this sort is guaranteed to converge. In other words, for each of the partitions of variables, by simplifying the expression for the distribution over the partition's variables and examining the distribution's functional dependency on the variables in question, the family of the distribution can usually be determined (which in turn determines the value of the constant). The formula for the distribution's parameters will be expressed in terms of the prior distributions' hyperparameters (which are known constants), but also in terms of expectations of functions of variables in other partitions. Usually these expectations can be simplified into functions of expectations of the variables themselves (i.e. the means); sometimes expectations of squared variables (which can be related to the variance of the variables), or expectations of higher powers (i.e. higher moments) also appear. In most cases, the other variables' distributions will be from known families, and the formulas for the relevant expectations can be looked up. However, those formulas depend on those distributions' parameters, which depend in turn on the expectations about other variables. The result is that the formulas for the parameters of each variable's distributions can be expressed as a series of equations with mutual, nonlinear dependencies among the variables. Usually, it is not possible to solve this system of equations directly. However, as described above, the dependencies suggest a simple iterative algorithm, which in most cases is guaranteed to converge. An example will make this process clearer. == A duality formula for variational inference == The following theorem is referred to as a duality formula for variational inference. It explains some important properties of the variational distributions used in variational Bayes methods. Theorem Consider two probability spaces ( Θ , F , P ) {\displaystyle (\Theta ,{\mathcal {F}},P)} and ( Θ , F , Q ) {\displaystyle (\Theta ,{\mathcal {F}},Q)} with Q ≪ P {\displaystyle Q\ll P} . Assume that there is a common dominating probability measure λ {\displaystyle \lambda } such that P ≪ λ {\displaystyle P\ll \lambda } and Q ≪ λ {\displaystyle Q\ll \lambda } . Let h {\displaystyle h} denote any real-valued random variable on ( Θ , F , P ) {\displaystyle (\Theta ,{\mathcal {F}},P)} that satisfies h ∈ L 1 ( P ) {\displaystyle h\in L_{1}(P)} . Then the following equality holds log ⁡ E P [ exp ⁡ h ] = sup Q ≪ P { E Q [ h ] − D KL ( Q ∥ P ) } . 
{\displaystyle \log E_{P}[\exp h]={\text{sup}}_{Q\ll P}\{E_{Q}[h]-D_{\text{KL}}(Q\parallel P)\}.} Further, the supremum on the right-hand side is attained if and only if it holds q ( θ ) p ( θ ) = exp ⁡ h ( θ ) E P [ exp ⁡ h ] , {\displaystyle {\frac {q(\theta )}{p(\theta )}}={\frac {\exp h(\theta )}{E_{P}[\exp h]}},} almost surely with respect to probability measure Q {\displaystyle Q} , where p ( θ ) = d P / d λ {\displaystyle p(\theta )=dP/d\lambda } and q ( θ ) = d Q / d λ {\displaystyle q(\theta )=dQ/d\lambda } denote the Radon–Nikodym derivatives of the probability measures P {\displaystyle P} and Q {\displaystyle Q} with respect to λ {\displaystyle \lambda } , respectively. == A basic example == Consider a simple non-hierarchical Bayesian model consisting of a set of i.i.d. observations from a Gaussian distribution, with unknown mean and variance. In the following, we work through this model in great detail to illustrate the workings of the variational Bayes method. For mathematical convenience, in the following example we work in terms of the precision — i.e. the reciprocal of the variance (or in a multivariate Gaussian, the inverse of the covariance matrix) — rather than the variance itself. (From a theoretical standpoint, precision and variance are equivalent since there is a one-to-one correspondence between the two.) === The mathematical model === We place conjugate prior distributions on the unknown mean μ {\displaystyle \mu } and precision τ {\displaystyle \tau } , i.e. the mean also follows a Gaussian distribution while the precision follows a gamma distribution. In other words: τ ∼ Gamma ⁡ ( a 0 , b 0 ) μ | τ ∼ N ( μ 0 , ( λ 0 τ ) − 1 ) { x 1 , … , x N } ∼ N ( μ , τ − 1 ) N = number of data points {\displaystyle {\begin{aligned}\tau &\sim \operatorname {Gamma} (a_{0},b_{0})\\\mu |\tau &\sim {\mathcal {N}}(\mu _{0},(\lambda _{0}\tau )^{-1})\\\{x_{1},\dots ,x_{N}\}&\sim {\mathcal {N}}(\mu ,\tau ^{-1})\\N&={\text{number of data points}}\end{aligned}}} The hyperparameters μ 0 , λ 0 , a 0 {\displaystyle \mu _{0},\lambda _{0},a_{0}} and b 0 {\displaystyle b_{0}} in the prior distributions are fixed, given values. They can be set to small positive numbers to give broad prior distributions indicating ignorance about the prior distributions of μ {\displaystyle \mu } and τ {\displaystyle \tau } . We are given N {\displaystyle N} data points X = { x 1 , … , x N } {\displaystyle \mathbf {X} =\{x_{1},\ldots ,x_{N}\}} and our goal is to infer the posterior distribution q ( μ , τ ) = p ( μ , τ ∣ x 1 , … , x N ) {\displaystyle q(\mu ,\tau )=p(\mu ,\tau \mid x_{1},\ldots ,x_{N})} of the parameters μ {\displaystyle \mu } and τ . 
{\displaystyle \tau .} === The joint probability === The joint probability of all variables can be rewritten as p ( X , μ , τ ) = p ( X ∣ μ , τ ) p ( μ ∣ τ ) p ( τ ) {\displaystyle p(\mathbf {X} ,\mu ,\tau )=p(\mathbf {X} \mid \mu ,\tau )p(\mu \mid \tau )p(\tau )} where the individual factors are p ( X ∣ μ , τ ) = ∏ n = 1 N N ( x n ∣ μ , τ − 1 ) p ( μ ∣ τ ) = N ( μ ∣ μ 0 , ( λ 0 τ ) − 1 ) p ( τ ) = Gamma ⁡ ( τ ∣ a 0 , b 0 ) {\displaystyle {\begin{aligned}p(\mathbf {X} \mid \mu ,\tau )&=\prod _{n=1}^{N}{\mathcal {N}}(x_{n}\mid \mu ,\tau ^{-1})\\p(\mu \mid \tau )&={\mathcal {N}}\left(\mu \mid \mu _{0},(\lambda _{0}\tau )^{-1}\right)\\p(\tau )&=\operatorname {Gamma} (\tau \mid a_{0},b_{0})\end{aligned}}} where N ( x ∣ μ , σ 2 ) = 1 2 π σ 2 e − ( x − μ ) 2 2 σ 2 Gamma ⁡ ( τ ∣ a , b ) = 1 Γ ( a ) b a τ a − 1 e − b τ {\displaystyle {\begin{aligned}{\mathcal {N}}(x\mid \mu ,\sigma ^{2})&={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{\frac {-(x-\mu )^{2}}{2\sigma ^{2}}}\\\operatorname {Gamma} (\tau \mid a,b)&={\frac {1}{\Gamma (a)}}b^{a}\tau ^{a-1}e^{-b\tau }\end{aligned}}} === Factorized approximation === Assume that q ( μ , τ ) = q ( μ ) q ( τ ) {\displaystyle q(\mu ,\tau )=q(\mu )q(\tau )} , i.e. that the posterior distribution factorizes into independent factors for μ {\displaystyle \mu } and τ {\displaystyle \tau } . This type of assumption underlies the variational Bayesian method. The true posterior distribution does not in fact factor this way (in fact, in this simple case, it is known to be a Gaussian-gamma distribution), and hence the result we obtain will be an approximation. === Derivation of q(μ) === Then ln ⁡ q μ ∗ ( μ ) = E τ ⁡ [ ln ⁡ p ( X ∣ μ , τ ) + ln ⁡ p ( μ ∣ τ ) + ln ⁡ p ( τ ) ] + C = E τ ⁡ [ ln ⁡ p ( X ∣ μ , τ ) ] + E τ ⁡ [ ln ⁡ p ( μ ∣ τ ) ] + E τ ⁡ [ ln ⁡ p ( τ ) ] + C = E τ ⁡ [ ln ⁡ ∏ n = 1 N N ( x n ∣ μ , τ − 1 ) ] + E τ ⁡ [ ln ⁡ N ( μ ∣ μ 0 , ( λ 0 τ ) − 1 ) ] + C 2 = E τ ⁡ [ ln ⁡ ∏ n = 1 N τ 2 π e − ( x n − μ ) 2 τ 2 ] + E τ ⁡ [ ln ⁡ λ 0 τ 2 π e − ( μ − μ 0 ) 2 λ 0 τ 2 ] + C 2 = E τ ⁡ [ ∑ n = 1 N ( 1 2 ( ln ⁡ τ − ln ⁡ 2 π ) − ( x n − μ ) 2 τ 2 ) ] + E τ ⁡ [ 1 2 ( ln ⁡ λ 0 + ln ⁡ τ − ln ⁡ 2 π ) − ( μ − μ 0 ) 2 λ 0 τ 2 ] + C 2 = E τ ⁡ [ ∑ n = 1 N − ( x n − μ ) 2 τ 2 ] + E τ ⁡ [ − ( μ − μ 0 ) 2 λ 0 τ 2 ] + E τ ⁡ [ ∑ n = 1 N 1 2 ( ln ⁡ τ − ln ⁡ 2 π ) ] + E τ ⁡ [ 1 2 ( ln ⁡ λ 0 + ln ⁡ τ − ln ⁡ 2 π ) ] + C 2 = E τ ⁡ [ ∑ n = 1 N − ( x n − μ ) 2 τ 2 ] + E τ ⁡ [ − ( μ − μ 0 ) 2 λ 0 τ 2 ] + C 3 = − E τ ⁡ [ τ ] 2 { ∑ n = 1 N ( x n − μ ) 2 + λ 0 ( μ − μ 0 ) 2 } + C 3 {\displaystyle {\begin{aligned}\ln q_{\mu }^{*}(\mu )&=\operatorname {E} _{\tau }\left[\ln p(\mathbf {X} \mid \mu ,\tau )+\ln p(\mu \mid \tau )+\ln p(\tau )\right]+C\\&=\operatorname {E} _{\tau }\left[\ln p(\mathbf {X} \mid \mu ,\tau )\right]+\operatorname {E} _{\tau }\left[\ln p(\mu \mid \tau )\right]+\operatorname {E} _{\tau }\left[\ln p(\tau )\right]+C\\&=\operatorname {E} _{\tau }\left[\ln \prod _{n=1}^{N}{\mathcal {N}}\left(x_{n}\mid \mu ,\tau ^{-1}\right)\right]+\operatorname {E} _{\tau }\left[\ln {\mathcal {N}}\left(\mu \mid \mu _{0},(\lambda _{0}\tau )^{-1}\right)\right]+C_{2}\\&=\operatorname {E} _{\tau }\left[\ln \prod _{n=1}^{N}{\sqrt {\frac {\tau }{2\pi }}}e^{-{\frac {(x_{n}-\mu )^{2}\tau }{2}}}\right]+\operatorname {E} _{\tau }\left[\ln {\sqrt {\frac {\lambda _{0}\tau }{2\pi }}}e^{-{\frac {(\mu -\mu _{0})^{2}\lambda _{0}\tau }{2}}}\right]+C_{2}\\&=\operatorname {E} _{\tau }\left[\sum _{n=1}^{N}\left({\frac {1}{2}}(\ln \tau -\ln 2\pi )-{\frac {(x_{n}-\mu )^{2}\tau }{2}}\right)\right]+\operatorname {E} _{\tau 
}\left[{\frac {1}{2}}(\ln \lambda _{0}+\ln \tau -\ln 2\pi )-{\frac {(\mu -\mu _{0})^{2}\lambda _{0}\tau }{2}}\right]+C_{2}\\&=\operatorname {E} _{\tau }\left[\sum _{n=1}^{N}-{\frac {(x_{n}-\mu )^{2}\tau }{2}}\right]+\operatorname {E} _{\tau }\left[-{\frac {(\mu -\mu _{0})^{2}\lambda _{0}\tau }{2}}\right]+\operatorname {E} _{\tau }\left[\sum _{n=1}^{N}{\frac {1}{2}}(\ln \tau -\ln 2\pi )\right]+\operatorname {E} _{\tau }\left[{\frac {1}{2}}(\ln \lambda _{0}+\ln \tau -\ln 2\pi )\right]+C_{2}\\&=\operatorname {E} _{\tau }\left[\sum _{n=1}^{N}-{\frac {(x_{n}-\mu )^{2}\tau }{2}}\right]+\operatorname {E} _{\tau }\left[-{\frac {(\mu -\mu _{0})^{2}\lambda _{0}\tau }{2}}\right]+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right\}+C_{3}\end{aligned}}} In the above derivation, C {\displaystyle C} , C 2 {\displaystyle C_{2}} and C 3 {\displaystyle C_{3}} refer to values that are constant with respect to μ {\displaystyle \mu } . Note that the term E τ ⁡ [ ln ⁡ p ( τ ) ] {\displaystyle \operatorname {E} _{\tau }[\ln p(\tau )]} is not a function of μ {\displaystyle \mu } and will have the same value regardless of the value of μ {\displaystyle \mu } . Hence in line 3 we can absorb it into the constant term at the end. We do the same thing in line 7. The last line is simply a quadratic polynomial in μ {\displaystyle \mu } . Since this is the logarithm of q μ ∗ ( μ ) {\displaystyle q_{\mu }^{*}(\mu )} , we can see that q μ ∗ ( μ ) {\displaystyle q_{\mu }^{*}(\mu )} itself is a Gaussian distribution. With a certain amount of tedious math (expanding the squares inside of the braces, separating out and grouping the terms involving μ {\displaystyle \mu } and μ 2 {\displaystyle \mu ^{2}} and completing the square over μ {\displaystyle \mu } ), we can derive the parameters of the Gaussian distribution: ln ⁡ q μ ∗ ( μ ) = − E τ ⁡ [ τ ] 2 { ∑ n = 1 N ( x n − μ ) 2 + λ 0 ( μ − μ 0 ) 2 } + C 3 = − E τ ⁡ [ τ ] 2 { ∑ n = 1 N ( x n 2 − 2 x n μ + μ 2 ) + λ 0 ( μ 2 − 2 μ 0 μ + μ 0 2 ) } + C 3 = − E τ ⁡ [ τ ] 2 { ( ∑ n = 1 N x n 2 ) − 2 ( ∑ n = 1 N x n ) μ + ( ∑ n = 1 N μ 2 ) + λ 0 μ 2 − 2 λ 0 μ 0 μ + λ 0 μ 0 2 } + C 3 = − E τ ⁡ [ τ ] 2 { ( λ 0 + N ) μ 2 − 2 ( λ 0 μ 0 + ∑ n = 1 N x n ) μ + ( ∑ n = 1 N x n 2 ) + λ 0 μ 0 2 } + C 3 = − E τ ⁡ [ τ ] 2 { ( λ 0 + N ) μ 2 − 2 ( λ 0 μ 0 + ∑ n = 1 N x n ) μ } + C 4 = − E τ ⁡ [ τ ] 2 { ( λ 0 + N ) μ 2 − 2 ( λ 0 μ 0 + ∑ n = 1 N x n λ 0 + N ) ( λ 0 + N ) μ } + C 4 = − E τ ⁡ [ τ ] 2 { ( λ 0 + N ) ( μ 2 − 2 ( λ 0 μ 0 + ∑ n = 1 N x n λ 0 + N ) μ ) } + C 4 = − E τ ⁡ [ τ ] 2 { ( λ 0 + N ) ( μ 2 − 2 ( λ 0 μ 0 + ∑ n = 1 N x n λ 0 + N ) μ + ( λ 0 μ 0 + ∑ n = 1 N x n λ 0 + N ) 2 − ( λ 0 μ 0 + ∑ n = 1 N x n λ 0 + N ) 2 ) } + C 4 = − E τ ⁡ [ τ ] 2 { ( λ 0 + N ) ( μ 2 − 2 ( λ 0 μ 0 + ∑ n = 1 N x n λ 0 + N ) μ + ( λ 0 μ 0 + ∑ n = 1 N x n λ 0 + N ) 2 ) } + C 5 = − E τ ⁡ [ τ ] 2 { ( λ 0 + N ) ( μ − λ 0 μ 0 + ∑ n = 1 N x n λ 0 + N ) 2 } + C 5 = − 1 2 ( λ 0 + N ) E τ ⁡ [ τ ] ( μ − λ 0 μ 0 + ∑ n = 1 N x n λ 0 + N ) 2 + C 5 {\displaystyle {\begin{aligned}\ln q_{\mu }^{*}(\mu )&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right\}+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{\sum _{n=1}^{N}(x_{n}^{2}-2x_{n}\mu +\mu ^{2})+\lambda _{0}(\mu ^{2}-2\mu _{0}\mu +\mu _{0}^{2})\right\}+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{\left(\sum _{n=1}^{N}x_{n}^{2}\right)-2\left(\sum _{n=1}^{N}x_{n}\right)\mu +\left(\sum 
_{n=1}^{N}\mu ^{2}\right)+\lambda _{0}\mu ^{2}-2\lambda _{0}\mu _{0}\mu +\lambda _{0}\mu _{0}^{2}\right\}+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\mu ^{2}-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu +\left(\sum _{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu _{0}^{2}\right\}+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\mu ^{2}-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu \right\}+C_{4}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\mu ^{2}-2\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)(\lambda _{0}+N)\mu \right\}+C_{4}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\left(\mu ^{2}-2\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)\mu \right)\right\}+C_{4}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\left(\mu ^{2}-2\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)\mu +\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}-\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}\right)\right\}+C_{4}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\left(\mu ^{2}-2\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)\mu +\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}\right)\right\}+C_{5}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\left(\mu -{\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}\right\}+C_{5}\\&=-{\frac {1}{2}}(\lambda _{0}+N)\operatorname {E} _{\tau }[\tau ]\left(\mu -{\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}+C_{5}\end{aligned}}} Note that all of the above steps can be shortened by using the formula for the sum of two quadratics. In other words: q μ ∗ ( μ ) ∼ N ( μ ∣ μ N , λ N − 1 ) μ N = λ 0 μ 0 + N x ¯ λ 0 + N λ N = ( λ 0 + N ) E τ ⁡ [ τ ] x ¯ = 1 N ∑ n = 1 N x n {\displaystyle {\begin{aligned}q_{\mu }^{*}(\mu )&\sim {\mathcal {N}}(\mu \mid \mu _{N},\lambda _{N}^{-1})\\\mu _{N}&={\frac {\lambda _{0}\mu _{0}+N{\bar {x}}}{\lambda _{0}+N}}\\\lambda _{N}&=(\lambda _{0}+N)\operatorname {E} _{\tau }[\tau ]\\{\bar {x}}&={\frac {1}{N}}\sum _{n=1}^{N}x_{n}\end{aligned}}} === Derivation of q(τ) === The derivation of q τ ∗ ( τ ) {\displaystyle q_{\tau }^{*}(\tau )} is similar to above, although we omit some of the details for the sake of brevity. ln ⁡ q τ ∗ ( τ ) = E μ ⁡ [ ln ⁡ p ( X ∣ μ , τ ) + ln ⁡ p ( μ ∣ τ ) ] + ln ⁡ p ( τ ) + constant = ( a 0 − 1 ) ln ⁡ τ − b 0 τ + 1 2 ln ⁡ τ + N 2 ln ⁡ τ − τ 2 E μ ⁡ [ ∑ n = 1 N ( x n − μ ) 2 + λ 0 ( μ − μ 0 ) 2 ] + constant {\displaystyle {\begin{aligned}\ln q_{\tau }^{*}(\tau )&=\operatorname {E} _{\mu }[\ln p(\mathbf {X} \mid \mu ,\tau )+\ln p(\mu \mid \tau )]+\ln p(\tau )+{\text{constant}}\\&=(a_{0}-1)\ln \tau -b_{0}\tau +{\frac {1}{2}}\ln \tau +{\frac {N}{2}}\ln \tau -{\frac {\tau }{2}}\operatorname {E} _{\mu }\left[\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right]+{\text{constant}}\end{aligned}}} Exponentiating both sides, we can see that q τ ∗ ( τ ) {\displaystyle q_{\tau }^{*}(\tau )} is a gamma distribution. 
Specifically: q τ ∗ ( τ ) ∼ Gamma ⁡ ( τ ∣ a N , b N ) a N = a 0 + N + 1 2 b N = b 0 + 1 2 E μ ⁡ [ ∑ n = 1 N ( x n − μ ) 2 + λ 0 ( μ − μ 0 ) 2 ] {\displaystyle {\begin{aligned}q_{\tau }^{*}(\tau )&\sim \operatorname {Gamma} (\tau \mid a_{N},b_{N})\\a_{N}&=a_{0}+{\frac {N+1}{2}}\\b_{N}&=b_{0}+{\frac {1}{2}}\operatorname {E} _{\mu }\left[\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right]\end{aligned}}} === Algorithm for computing the parameters === Let us recap the conclusions from the previous sections: q μ ∗ ( μ ) ∼ N ( μ ∣ μ N , λ N − 1 ) μ N = λ 0 μ 0 + N x ¯ λ 0 + N λ N = ( λ 0 + N ) E τ ⁡ [ τ ] x ¯ = 1 N ∑ n = 1 N x n {\displaystyle {\begin{aligned}q_{\mu }^{*}(\mu )&\sim {\mathcal {N}}(\mu \mid \mu _{N},\lambda _{N}^{-1})\\\mu _{N}&={\frac {\lambda _{0}\mu _{0}+N{\bar {x}}}{\lambda _{0}+N}}\\\lambda _{N}&=(\lambda _{0}+N)\operatorname {E} _{\tau }[\tau ]\\{\bar {x}}&={\frac {1}{N}}\sum _{n=1}^{N}x_{n}\end{aligned}}} and q τ ∗ ( τ ) ∼ Gamma ⁡ ( τ ∣ a N , b N ) a N = a 0 + N + 1 2 b N = b 0 + 1 2 E μ ⁡ [ ∑ n = 1 N ( x n − μ ) 2 + λ 0 ( μ − μ 0 ) 2 ] {\displaystyle {\begin{aligned}q_{\tau }^{*}(\tau )&\sim \operatorname {Gamma} (\tau \mid a_{N},b_{N})\\a_{N}&=a_{0}+{\frac {N+1}{2}}\\b_{N}&=b_{0}+{\frac {1}{2}}\operatorname {E} _{\mu }\left[\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right]\end{aligned}}} In each case, the parameters for the distribution over one of the variables depend on expectations taken with respect to the other variable. We can expand the expectations, using the standard formulas for the expectations of moments of the Gaussian and gamma distributions: E ⁡ [ τ ∣ a N , b N ] = a N b N E ⁡ [ μ ∣ μ N , λ N − 1 ] = μ N E ⁡ [ X 2 ] = Var ⁡ ( X ) + ( E ⁡ [ X ] ) 2 E ⁡ [ μ 2 ∣ μ N , λ N − 1 ] = λ N − 1 + μ N 2 {\displaystyle {\begin{aligned}\operatorname {E} [\tau \mid a_{N},b_{N}]&={\frac {a_{N}}{b_{N}}}\\\operatorname {E} \left[\mu \mid \mu _{N},\lambda _{N}^{-1}\right]&=\mu _{N}\\\operatorname {E} \left[X^{2}\right]&=\operatorname {Var} (X)+(\operatorname {E} [X])^{2}\\\operatorname {E} \left[\mu ^{2}\mid \mu _{N},\lambda _{N}^{-1}\right]&=\lambda _{N}^{-1}+\mu _{N}^{2}\end{aligned}}} Applying these formulas to the above equations is trivial in most cases, but the equation for b N {\displaystyle b_{N}} takes more work: b N = b 0 + 1 2 E μ ⁡ [ ∑ n = 1 N ( x n − μ ) 2 + λ 0 ( μ − μ 0 ) 2 ] = b 0 + 1 2 E μ ⁡ [ ( λ 0 + N ) μ 2 − 2 ( λ 0 μ 0 + ∑ n = 1 N x n ) μ + ( ∑ n = 1 N x n 2 ) + λ 0 μ 0 2 ] = b 0 + 1 2 [ ( λ 0 + N ) E μ ⁡ [ μ 2 ] − 2 ( λ 0 μ 0 + ∑ n = 1 N x n ) E μ ⁡ [ μ ] + ( ∑ n = 1 N x n 2 ) + λ 0 μ 0 2 ] = b 0 + 1 2 [ ( λ 0 + N ) ( λ N − 1 + μ N 2 ) − 2 ( λ 0 μ 0 + ∑ n = 1 N x n ) μ N + ( ∑ n = 1 N x n 2 ) + λ 0 μ 0 2 ] {\displaystyle {\begin{aligned}b_{N}&=b_{0}+{\frac {1}{2}}\operatorname {E} _{\mu }\left[\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right]\\&=b_{0}+{\frac {1}{2}}\operatorname {E} _{\mu }\left[(\lambda _{0}+N)\mu ^{2}-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu +\left(\sum _{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu _{0}^{2}\right]\\&=b_{0}+{\frac {1}{2}}\left[(\lambda _{0}+N)\operatorname {E} _{\mu }[\mu ^{2}]-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\operatorname {E} _{\mu }[\mu ]+\left(\sum _{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu _{0}^{2}\right]\\&=b_{0}+{\frac {1}{2}}\left[(\lambda _{0}+N)\left(\lambda _{N}^{-1}+\mu _{N}^{2}\right)-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu _{N}+\left(\sum _{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu 
_{0}^{2}\right]\\\end{aligned}}} We can then write the parameter equations as follows, without any expectations: μ N = λ 0 μ 0 + N x ¯ λ 0 + N λ N = ( λ 0 + N ) a N b N x ¯ = 1 N ∑ n = 1 N x n a N = a 0 + N + 1 2 b N = b 0 + 1 2 [ ( λ 0 + N ) ( λ N − 1 + μ N 2 ) − 2 ( λ 0 μ 0 + ∑ n = 1 N x n ) μ N + ( ∑ n = 1 N x n 2 ) + λ 0 μ 0 2 ] {\displaystyle {\begin{aligned}\mu _{N}&={\frac {\lambda _{0}\mu _{0}+N{\bar {x}}}{\lambda _{0}+N}}\\\lambda _{N}&=(\lambda _{0}+N){\frac {a_{N}}{b_{N}}}\\{\bar {x}}&={\frac {1}{N}}\sum _{n=1}^{N}x_{n}\\a_{N}&=a_{0}+{\frac {N+1}{2}}\\b_{N}&=b_{0}+{\frac {1}{2}}\left[(\lambda _{0}+N)\left(\lambda _{N}^{-1}+\mu _{N}^{2}\right)-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu _{N}+\left(\sum _{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu _{0}^{2}\right]\end{aligned}}} Note that there are circular dependencies among the formulas for λ N {\displaystyle \lambda _{N}} and b N {\displaystyle b_{N}} . This naturally suggests an EM-like algorithm: Compute ∑ n = 1 N x n {\displaystyle \sum _{n=1}^{N}x_{n}} and ∑ n = 1 N x n 2 . {\displaystyle \sum _{n=1}^{N}x_{n}^{2}.} Use these values to compute μ N {\displaystyle \mu _{N}} and a N . {\displaystyle a_{N}.} Initialize λ N {\displaystyle \lambda _{N}} to some arbitrary value. Use the current value of λ N , {\displaystyle \lambda _{N},} along with the known values of the other parameters, to compute b N {\displaystyle b_{N}} . Use the current value of b N , {\displaystyle b_{N},} along with the known values of the other parameters, to compute λ N {\displaystyle \lambda _{N}} . Repeat the last two steps until convergence (i.e. until neither value has changed more than some small amount). We then have values for the hyperparameters of the approximating distributions of the posterior parameters, which we can use to compute any properties we want of the posterior — e.g. its mean and variance, a 95% highest-density region (the smallest interval that includes 95% of the total probability), etc. It can be shown that this algorithm is guaranteed to converge to a local maximum. Note also that the posterior distributions have the same form as the corresponding prior distributions. We did not assume this; the only assumption we made was that the distributions factorize, and the form of the distributions followed naturally. It turns out (see below) that the fact that the posterior distributions have the same form as the prior distributions is not a coincidence, but a general result whenever the prior distributions are members of the exponential family, which is the case for most of the standard distributions. == Further discussion == === Step-by-step recipe === The above example shows the method by which the variational-Bayesian approximation to a posterior probability density in a given Bayesian network is derived: Describe the network with a graphical model, identifying the observed variables (data) X {\displaystyle \mathbf {X} } and unobserved variables (parameters Θ {\displaystyle {\boldsymbol {\Theta }}} and latent variables Z {\displaystyle \mathbf {Z} } ) and their conditional probability distributions. Variational Bayes will then construct an approximation to the posterior probability p ( Z , Θ ∣ X ) {\displaystyle p(\mathbf {Z} ,{\boldsymbol {\Theta }}\mid \mathbf {X} )} . The approximation has the basic property that it is a factorized distribution, i.e. a product of two or more independent distributions over disjoint subsets of the unobserved variables. 
Partition the unobserved variables into two or more subsets, over which the independent factors will be derived. There is no universal procedure for doing this; creating too many subsets yields a poor approximation, while creating too few makes the entire variational Bayes procedure intractable. Typically, the first split is to separate the parameters and latent variables; often, this is enough by itself to produce a tractable result. Assume that the partitions are called Z 1 , … , Z M {\displaystyle \mathbf {Z} _{1},\ldots ,\mathbf {Z} _{M}} . For a given partition Z j {\displaystyle \mathbf {Z} _{j}} , write down the formula for the best approximating distribution q j ∗ ( Z j ∣ X ) {\displaystyle q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )} using the basic equation ln ⁡ q j ∗ ( Z j ∣ X ) = E i ≠ j ⁡ [ ln ⁡ p ( Z , X ) ] + constant {\displaystyle \ln q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )=\operatorname {E} _{i\neq j}[\ln p(\mathbf {Z} ,\mathbf {X} )]+{\text{constant}}} . Fill in the formula for the joint probability distribution using the graphical model. Any component conditional distributions that don't involve any of the variables in Z j {\displaystyle \mathbf {Z} _{j}} can be ignored; they will be folded into the constant term. Simplify the formula and apply the expectation operator, following the above example. Ideally, this should simplify into expectations of basic functions of variables not in Z j {\displaystyle \mathbf {Z} _{j}} (e.g. first or second raw moments, expectation of a logarithm, etc.). In order for the variational Bayes procedure to work well, these expectations should generally be expressible analytically as functions of the parameters and/or hyperparameters of the distributions of these variables. In all cases, these expectation terms are constants with respect to the variables in the current partition. The functional form of the formula with respect to the variables in the current partition indicates the type of distribution. In particular, exponentiating the formula generates the probability density function (PDF) of the distribution (or at least, something proportional to it, with unknown normalization constant). In order for the overall method to be tractable, it should be possible to recognize the functional form as belonging to a known distribution. Significant mathematical manipulation may be required to convert the formula into a form that matches the PDF of a known distribution. When this can be done, the normalization constant can be reinstated by definition, and equations for the parameters of the known distribution can be derived by extracting the appropriate parts of the formula. When all expectations can be replaced analytically with functions of variables not in the current partition, and the PDF put into a form that allows identification with a known distribution, the result is a set of equations expressing the values of the optimum parameters as functions of the parameters of variables in other partitions. When this procedure can be applied to all partitions, the result is a set of mutually linked equations specifying the optimum values of all parameters. An expectation–maximization (EM) type procedure is then applied, picking an initial value for each parameter and the iterating through a series of steps, where at each step we cycle through the equations, updating each parameter in turn. This is guaranteed to converge. 
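As a concrete illustration of this recipe, the following is a minimal sketch, in Python with NumPy, of the iterative scheme derived above for the Gaussian model with unknown mean and precision. The function name, default hyperparameter values and convergence tolerance are illustrative choices, not part of the derivation; the update formulas are exactly those listed in the algorithm above.

```python
import numpy as np

def variational_bayes_gaussian(x, mu0=0.0, lambda0=1.0, a0=1e-3, b0=1e-3,
                               tol=1e-10, max_iter=1000):
    """Coordinate-ascent variational Bayes for i.i.d. Gaussian data with
    unknown mean (Gaussian prior) and precision (gamma prior).

    Returns (mu_N, lambda_N, a_N, b_N), the parameters of
    q(mu) = N(mu | mu_N, 1/lambda_N) and q(tau) = Gamma(tau | a_N, b_N).
    """
    x = np.asarray(x, dtype=float)
    N = x.size
    s1 = x.sum()               # sum of x_n
    s2 = (x ** 2).sum()        # sum of x_n^2
    xbar = s1 / N

    # These two quantities do not change during the iteration.
    mu_N = (lambda0 * mu0 + N * xbar) / (lambda0 + N)
    a_N = a0 + (N + 1) / 2.0

    # Initialize lambda_N arbitrarily, then alternate the coupled updates.
    lambda_N = 1.0
    for _ in range(max_iter):
        b_N = b0 + 0.5 * ((lambda0 + N) * (1.0 / lambda_N + mu_N ** 2)
                          - 2.0 * (lambda0 * mu0 + s1) * mu_N
                          + s2 + lambda0 * mu0 ** 2)
        lambda_N_new = (lambda0 + N) * a_N / b_N   # uses E[tau] = a_N / b_N
        if abs(lambda_N_new - lambda_N) < tol:
            lambda_N = lambda_N_new
            break
        lambda_N = lambda_N_new
    return mu_N, lambda_N, a_N, b_N

# Example usage on synthetic data:
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.5, size=200)
print(variational_bayes_gaussian(data))
```

With broad priors and a reasonable amount of data, the resulting mu_N is close to the sample mean and a_N/b_N is close to the reciprocal of the sample variance, which is the behaviour one would expect of the approximate posterior.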
=== Most important points === Due to all of the mathematical manipulations involved, it is easy to lose track of the big picture. The important things are: The idea of variational Bayes is to construct an analytical approximation to the posterior probability of the set of unobserved variables (parameters and latent variables), given the data. This means that the form of the solution is similar to other Bayesian inference methods, such as Gibbs sampling — i.e. a distribution that seeks to describe everything that is known about the variables. As in other Bayesian methods — but unlike e.g. in expectation–maximization (EM) or other maximum likelihood methods — both types of unobserved variables (i.e. parameters and latent variables) are treated the same, i.e. as random variables. Estimates for the variables can then be derived in the standard Bayesian ways, e.g. calculating the mean of the distribution to get a single point estimate or deriving a credible interval, highest density region, etc. "Analytical approximation" means that a formula can be written down for the posterior distribution. The formula generally consists of a product of well-known probability distributions, each of which factorizes over a set of unobserved variables (i.e. it is conditionally independent of the other variables, given the observed data). This formula is not the true posterior distribution, but an approximation to it; in particular, it will generally agree fairly closely in the lowest moments of the unobserved variables, e.g. the mean and variance. The result of all of the mathematical manipulations is (1) the identity of the probability distributions making up the factors, and (2) mutually dependent formulas for the parameters of these distributions. The actual values of these parameters are computed numerically, through an alternating iterative procedure much like EM. === Compared with expectation–maximization (EM) === Variational Bayes (VB) is often compared with expectation–maximization (EM). The actual numerical procedure is quite similar, in that both are alternating iterative procedures that successively converge on optimum parameter values. The initial steps to derive the respective procedures are also vaguely similar, both starting out with formulas for probability densities and both involving significant amounts of mathematical manipulations. However, there are a number of differences. Most important is what is being computed. EM computes point estimates of posterior distribution of those random variables that can be categorized as "parameters", but only estimates of the actual posterior distributions of the latent variables (at least in "soft EM", and often only when the latent variables are discrete). The point estimates computed are the modes of these parameters; no other information is available. VB, on the other hand, computes estimates of the actual posterior distribution of all variables, both parameters and latent variables. When point estimates need to be derived, generally the mean is used rather than the mode, as is normal in Bayesian inference. Concomitant with this, the parameters computed in VB do not have the same significance as those in EM. EM computes optimum values of the parameters of the Bayes network itself. VB computes optimum values of the parameters of the distributions used to approximate the parameters and latent variables of the Bayes network. For example, a typical Gaussian mixture model will have parameters for the mean and variance of each of the mixture components. 
EM would directly estimate optimum values for these parameters. VB, however, would first fit a distribution to these parameters — typically in the form of a prior distribution, e.g. a normal-scaled inverse gamma distribution — and would then compute values for the parameters of this prior distribution, i.e. essentially hyperparameters. In this case, VB would compute optimum estimates of the four parameters of the normal-scaled inverse gamma distribution that describes the joint distribution of the mean and variance of the component. == A more complex example == Imagine a Bayesian Gaussian mixture model described as follows: π ∼ SymDir ⁡ ( K , α 0 ) Λ i = 1 … K ∼ W ( W 0 , ν 0 ) μ i = 1 … K ∼ N ( μ 0 , ( β 0 Λ i ) − 1 ) z [ i = 1 … N ] ∼ Mult ⁡ ( 1 , π ) x i = 1 … N ∼ N ( μ z i , Λ z i − 1 ) K = number of mixing components N = number of data points {\displaystyle {\begin{aligned}\mathbf {\pi } &\sim \operatorname {SymDir} (K,\alpha _{0})\\\mathbf {\Lambda } _{i=1\dots K}&\sim {\mathcal {W}}(\mathbf {W} _{0},\nu _{0})\\\mathbf {\mu } _{i=1\dots K}&\sim {\mathcal {N}}(\mathbf {\mu } _{0},(\beta _{0}\mathbf {\Lambda } _{i})^{-1})\\\mathbf {z} [i=1\dots N]&\sim \operatorname {Mult} (1,\mathbf {\pi } )\\\mathbf {x} _{i=1\dots N}&\sim {\mathcal {N}}(\mathbf {\mu } _{z_{i}},{\mathbf {\Lambda } _{z_{i}}}^{-1})\\K&={\text{number of mixing components}}\\N&={\text{number of data points}}\end{aligned}}} Note: SymDir() is the symmetric Dirichlet distribution of dimension K {\displaystyle K} , with the hyperparameter for each component set to α 0 {\displaystyle \alpha _{0}} . The Dirichlet distribution is the conjugate prior of the categorical distribution or multinomial distribution. W ( ) {\displaystyle {\mathcal {W}}()} is the Wishart distribution, which is the conjugate prior of the precision matrix (inverse covariance matrix) for a multivariate Gaussian distribution. Mult() is a multinomial distribution over a single observation (equivalent to a categorical distribution). The state space is a "one-of-K" representation, i.e., a K {\displaystyle K} -dimensional vector in which one of the elements is 1 (specifying the identity of the observation) and all other elements are 0. N ( ) {\displaystyle {\mathcal {N}}()} is the Gaussian distribution, in this case specifically the multivariate Gaussian distribution. The interpretation of the above variables is as follows: X = { x 1 , … , x N } {\displaystyle \mathbf {X} =\{\mathbf {x} _{1},\dots ,\mathbf {x} _{N}\}} is the set of N {\displaystyle N} data points, each of which is a D {\displaystyle D} -dimensional vector distributed according to a multivariate Gaussian distribution. Z = { z 1 , … , z N } {\displaystyle \mathbf {Z} =\{\mathbf {z} _{1},\dots ,\mathbf {z} _{N}\}} is a set of latent variables, one per data point, specifying which mixture component the corresponding data point belongs to, using a "one-of-K" vector representation with components z n k {\displaystyle z_{nk}} for k = 1 … K {\displaystyle k=1\dots K} , as described above. π {\displaystyle \mathbf {\pi } } is the mixing proportions for the K {\displaystyle K} mixture components. μ i = 1 … K {\displaystyle \mathbf {\mu } _{i=1\dots K}} and Λ i = 1 … K {\displaystyle \mathbf {\Lambda } _{i=1\dots K}} specify the parameters (mean and precision) associated with each mixture component. 
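Before deriving the variational approximation, it can be helpful to see the generative process written out explicitly. The following is a minimal sketch of ancestral sampling from the model just described, assuming NumPy and SciPy are available; the particular hyperparameter values, and the choice of the identity matrix for the Wishart scale, are illustrative only.

```python
import numpy as np
from scipy.stats import wishart

def sample_bayesian_gmm(N=500, K=3, D=2, alpha0=1.0, beta0=1.0, nu0=2, seed=0):
    """Draw one dataset from the Bayesian Gaussian mixture model:
    pi ~ SymDir(K, alpha0);  Lambda_k ~ Wishart(W0, nu0);
    mu_k ~ N(mu0, (beta0 * Lambda_k)^-1);  z_n ~ Mult(1, pi);
    x_n ~ N(mu_{z_n}, Lambda_{z_n}^-1)."""
    rng = np.random.default_rng(seed)
    W0 = np.eye(D)                 # Wishart scale matrix (illustrative)
    mu0 = np.zeros(D)              # prior mean (illustrative)

    pi = rng.dirichlet(np.full(K, alpha0))          # mixing proportions
    Lambdas, mus = [], []
    for _ in range(K):
        Lam = wishart(df=nu0, scale=W0).rvs(random_state=rng)
        mu = rng.multivariate_normal(mu0, np.linalg.inv(beta0 * Lam))
        Lambdas.append(Lam)
        mus.append(mu)

    z = rng.choice(K, size=N, p=pi)                 # component assignments
    X = np.array([rng.multivariate_normal(mus[k], np.linalg.inv(Lambdas[k]))
                  for k in z])
    return X, z, pi, np.array(mus), np.array(Lambdas)

X, z, pi, mus, Lambdas = sample_bayesian_gmm()
print(X.shape, np.bincount(z, minlength=3))
```

The variational updates derived below operate only on a dataset X of this form; z, pi, mu and Lambda are never observed, and they are exactly the unobserved variables whose posterior the factorized approximation targets.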
The joint probability of all variables can be rewritten as p ( X , Z , π , μ , Λ ) = p ( X ∣ Z , μ , Λ ) p ( Z ∣ π ) p ( π ) p ( μ ∣ Λ ) p ( Λ ) {\displaystyle p(\mathbf {X} ,\mathbf {Z} ,\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )=p(\mathbf {X} \mid \mathbf {Z} ,\mathbf {\mu } ,\mathbf {\Lambda } )p(\mathbf {Z} \mid \mathbf {\pi } )p(\mathbf {\pi } )p(\mathbf {\mu } \mid \mathbf {\Lambda } )p(\mathbf {\Lambda } )} where the individual factors are p ( X ∣ Z , μ , Λ ) = ∏ n = 1 N ∏ k = 1 K N ( x n ∣ μ k , Λ k − 1 ) z n k p ( Z ∣ π ) = ∏ n = 1 N ∏ k = 1 K π k z n k p ( π ) = Γ ( K α 0 ) Γ ( α 0 ) K ∏ k = 1 K π k α 0 − 1 p ( μ ∣ Λ ) = ∏ k = 1 K N ( μ k ∣ μ 0 , ( β 0 Λ k ) − 1 ) p ( Λ ) = ∏ k = 1 K W ( Λ k ∣ W 0 , ν 0 ) {\displaystyle {\begin{aligned}p(\mathbf {X} \mid \mathbf {Z} ,\mathbf {\mu } ,\mathbf {\Lambda } )&=\prod _{n=1}^{N}\prod _{k=1}^{K}{\mathcal {N}}(\mathbf {x} _{n}\mid \mathbf {\mu } _{k},\mathbf {\Lambda } _{k}^{-1})^{z_{nk}}\\p(\mathbf {Z} \mid \mathbf {\pi } )&=\prod _{n=1}^{N}\prod _{k=1}^{K}\pi _{k}^{z_{nk}}\\p(\mathbf {\pi } )&={\frac {\Gamma (K\alpha _{0})}{\Gamma (\alpha _{0})^{K}}}\prod _{k=1}^{K}\pi _{k}^{\alpha _{0}-1}\\p(\mathbf {\mu } \mid \mathbf {\Lambda } )&=\prod _{k=1}^{K}{\mathcal {N}}(\mathbf {\mu } _{k}\mid \mathbf {\mu } _{0},(\beta _{0}\mathbf {\Lambda } _{k})^{-1})\\p(\mathbf {\Lambda } )&=\prod _{k=1}^{K}{\mathcal {W}}(\mathbf {\Lambda } _{k}\mid \mathbf {W} _{0},\nu _{0})\end{aligned}}} where N ( x ∣ μ , Σ ) = 1 ( 2 π ) D / 2 1 | Σ | 1 / 2 exp ⁡ { − 1 2 ( x − μ ) T Σ − 1 ( x − μ ) } W ( Λ ∣ W , ν ) = B ( W , ν ) | Λ | ( ν − D − 1 ) / 2 exp ⁡ ( − 1 2 Tr ⁡ ( W − 1 Λ ) ) B ( W , ν ) = | W | − ν / 2 { 2 ν D / 2 π D ( D − 1 ) / 4 ∏ i = 1 D Γ ( ν + 1 − i 2 ) } − 1 D = dimensionality of each data point {\displaystyle {\begin{aligned}{\mathcal {N}}(\mathbf {x} \mid \mathbf {\mu } ,\mathbf {\Sigma } )&={\frac {1}{(2\pi )^{D/2}}}{\frac {1}{|\mathbf {\Sigma } |^{1/2}}}\exp \left\{-{\frac {1}{2}}(\mathbf {x} -\mathbf {\mu } )^{\rm {T}}\mathbf {\Sigma } ^{-1}(\mathbf {x} -\mathbf {\mu } )\right\}\\{\mathcal {W}}(\mathbf {\Lambda } \mid \mathbf {W} ,\nu )&=B(\mathbf {W} ,\nu )|\mathbf {\Lambda } |^{(\nu -D-1)/2}\exp \left(-{\frac {1}{2}}\operatorname {Tr} (\mathbf {W} ^{-1}\mathbf {\Lambda } )\right)\\B(\mathbf {W} ,\nu )&=|\mathbf {W} |^{-\nu /2}\left\{2^{\nu D/2}\pi ^{D(D-1)/4}\prod _{i=1}^{D}\Gamma \left({\frac {\nu +1-i}{2}}\right)\right\}^{-1}\\D&={\text{dimensionality of each data point}}\end{aligned}}} Assume that q ( Z , π , μ , Λ ) = q ( Z ) q ( π , μ , Λ ) {\displaystyle q(\mathbf {Z} ,\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )=q(\mathbf {Z} )q(\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )} . 
Then ln ⁡ q ∗ ( Z ) = E π , μ , Λ ⁡ [ ln ⁡ p ( X , Z , π , μ , Λ ) ] + constant = E π ⁡ [ ln ⁡ p ( Z ∣ π ) ] + E μ , Λ ⁡ [ ln ⁡ p ( X ∣ Z , μ , Λ ) ] + constant = ∑ n = 1 N ∑ k = 1 K z n k ln ⁡ ρ n k + constant {\displaystyle {\begin{aligned}\ln q^{*}(\mathbf {Z} )&=\operatorname {E} _{\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } }[\ln p(\mathbf {X} ,\mathbf {Z} ,\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )]+{\text{constant}}\\&=\operatorname {E} _{\mathbf {\pi } }[\ln p(\mathbf {Z} \mid \mathbf {\pi } )]+\operatorname {E} _{\mathbf {\mu } ,\mathbf {\Lambda } }[\ln p(\mathbf {X} \mid \mathbf {Z} ,\mathbf {\mu } ,\mathbf {\Lambda } )]+{\text{constant}}\\&=\sum _{n=1}^{N}\sum _{k=1}^{K}z_{nk}\ln \rho _{nk}+{\text{constant}}\end{aligned}}} where we have defined ln ⁡ ρ n k = E ⁡ [ ln ⁡ π k ] + 1 2 E ⁡ [ ln ⁡ | Λ k | ] − D 2 ln ⁡ ( 2 π ) − 1 2 E μ k , Λ k ⁡ [ ( x n − μ k ) T Λ k ( x n − μ k ) ] {\displaystyle \ln \rho _{nk}=\operatorname {E} [\ln \pi _{k}]+{\frac {1}{2}}\operatorname {E} [\ln |\mathbf {\Lambda } _{k}|]-{\frac {D}{2}}\ln(2\pi )-{\frac {1}{2}}\operatorname {E} _{\mathbf {\mu } _{k},\mathbf {\Lambda } _{k}}[(\mathbf {x} _{n}-\mathbf {\mu } _{k})^{\rm {T}}\mathbf {\Lambda } _{k}(\mathbf {x} _{n}-\mathbf {\mu } _{k})]} Exponentiating both sides of the formula for ln ⁡ q ∗ ( Z ) {\displaystyle \ln q^{*}(\mathbf {Z} )} yields q ∗ ( Z ) ∝ ∏ n = 1 N ∏ k = 1 K ρ n k z n k {\displaystyle q^{*}(\mathbf {Z} )\propto \prod _{n=1}^{N}\prod _{k=1}^{K}\rho _{nk}^{z_{nk}}} Requiring that this be normalized ends up requiring that the ρ n k {\displaystyle \rho _{nk}} sum to 1 over all values of k {\displaystyle k} , yielding q ∗ ( Z ) = ∏ n = 1 N ∏ k = 1 K r n k z n k {\displaystyle q^{*}(\mathbf {Z} )=\prod _{n=1}^{N}\prod _{k=1}^{K}r_{nk}^{z_{nk}}} where r n k = ρ n k ∑ j = 1 K ρ n j {\displaystyle r_{nk}={\frac {\rho _{nk}}{\sum _{j=1}^{K}\rho _{nj}}}} In other words, q ∗ ( Z ) {\displaystyle q^{*}(\mathbf {Z} )} is a product of single-observation multinomial distributions, and factors over each individual z n {\displaystyle \mathbf {z} _{n}} , which is distributed as a single-observation multinomial distribution with parameters r n k {\displaystyle r_{nk}} for k = 1 … K {\displaystyle k=1\dots K} . Furthermore, we note that E ⁡ [ z n k ] = r n k {\displaystyle \operatorname {E} [z_{nk}]=r_{nk}\,} which is a standard result for categorical distributions. Now, considering the factor q ( π , μ , Λ ) {\displaystyle q(\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )} , note that it automatically factors into q ( π ) ∏ k = 1 K q ( μ k , Λ k ) {\displaystyle q(\mathbf {\pi } )\prod _{k=1}^{K}q(\mathbf {\mu } _{k},\mathbf {\Lambda } _{k})} due to the structure of the graphical model defining our Gaussian mixture model, which is specified above. 
Then, ln ⁡ q ∗ ( π ) = ln ⁡ p ( π ) + E Z ⁡ [ ln ⁡ p ( Z ∣ π ) ] + constant = ( α 0 − 1 ) ∑ k = 1 K ln ⁡ π k + ∑ n = 1 N ∑ k = 1 K r n k ln ⁡ π k + constant {\displaystyle {\begin{aligned}\ln q^{*}(\mathbf {\pi } )&=\ln p(\mathbf {\pi } )+\operatorname {E} _{\mathbf {Z} }[\ln p(\mathbf {Z} \mid \mathbf {\pi } )]+{\text{constant}}\\&=(\alpha _{0}-1)\sum _{k=1}^{K}\ln \pi _{k}+\sum _{n=1}^{N}\sum _{k=1}^{K}r_{nk}\ln \pi _{k}+{\text{constant}}\end{aligned}}} Taking the exponential of both sides, we recognize q ∗ ( π ) {\displaystyle q^{*}(\mathbf {\pi } )} as a Dirichlet distribution q ∗ ( π ) ∼ Dir ⁡ ( α ) {\displaystyle q^{*}(\mathbf {\pi } )\sim \operatorname {Dir} (\mathbf {\alpha } )\,} where α k = α 0 + N k {\displaystyle \alpha _{k}=\alpha _{0}+N_{k}\,} where N k = ∑ n = 1 N r n k {\displaystyle N_{k}=\sum _{n=1}^{N}r_{nk}\,} Finally ln ⁡ q ∗ ( μ k , Λ k ) = ln ⁡ p ( μ k , Λ k ) + ∑ n = 1 N E ⁡ [ z n k ] ln ⁡ N ( x n ∣ μ k , Λ k − 1 ) + constant {\displaystyle \ln q^{*}(\mathbf {\mu } _{k},\mathbf {\Lambda } _{k})=\ln p(\mathbf {\mu } _{k},\mathbf {\Lambda } _{k})+\sum _{n=1}^{N}\operatorname {E} [z_{nk}]\ln {\mathcal {N}}(\mathbf {x} _{n}\mid \mathbf {\mu } _{k},\mathbf {\Lambda } _{k}^{-1})+{\text{constant}}} Grouping and reading off terms involving μ k {\displaystyle \mathbf {\mu } _{k}} and Λ k {\displaystyle \mathbf {\Lambda } _{k}} , the result is a Gaussian-Wishart distribution given by q ∗ ( μ k , Λ k ) = N ( μ k ∣ m k , ( β k Λ k ) − 1 ) W ( Λ k ∣ W k , ν k ) {\displaystyle q^{*}(\mathbf {\mu } _{k},\mathbf {\Lambda } _{k})={\mathcal {N}}(\mathbf {\mu } _{k}\mid \mathbf {m} _{k},(\beta _{k}\mathbf {\Lambda } _{k})^{-1}){\mathcal {W}}(\mathbf {\Lambda } _{k}\mid \mathbf {W} _{k},\nu _{k})} given the definitions β k = β 0 + N k m k = 1 β k ( β 0 μ 0 + N k x ¯ k ) W k − 1 = W 0 − 1 + N k S k + β 0 N k β 0 + N k ( x ¯ k − μ 0 ) ( x ¯ k − μ 0 ) T ν k = ν 0 + N k N k = ∑ n = 1 N r n k x ¯ k = 1 N k ∑ n = 1 N r n k x n S k = 1 N k ∑ n = 1 N r n k ( x n − x ¯ k ) ( x n − x ¯ k ) T {\displaystyle {\begin{aligned}\beta _{k}&=\beta _{0}+N_{k}\\\mathbf {m} _{k}&={\frac {1}{\beta _{k}}}(\beta _{0}\mathbf {\mu } _{0}+N_{k}{\bar {\mathbf {x} }}_{k})\\\mathbf {W} _{k}^{-1}&=\mathbf {W} _{0}^{-1}+N_{k}\mathbf {S} _{k}+{\frac {\beta _{0}N_{k}}{\beta _{0}+N_{k}}}({\bar {\mathbf {x} }}_{k}-\mathbf {\mu } _{0})({\bar {\mathbf {x} }}_{k}-\mathbf {\mu } _{0})^{\rm {T}}\\\nu _{k}&=\nu _{0}+N_{k}\\N_{k}&=\sum _{n=1}^{N}r_{nk}\\{\bar {\mathbf {x} }}_{k}&={\frac {1}{N_{k}}}\sum _{n=1}^{N}r_{nk}\mathbf {x} _{n}\\\mathbf {S} _{k}&={\frac {1}{N_{k}}}\sum _{n=1}^{N}r_{nk}(\mathbf {x} _{n}-{\bar {\mathbf {x} }}_{k})(\mathbf {x} _{n}-{\bar {\mathbf {x} }}_{k})^{\rm {T}}\end{aligned}}} Finally, notice that these functions require the values of r n k {\displaystyle r_{nk}} , which make use of ρ n k {\displaystyle \rho _{nk}} , which is defined in turn based on E ⁡ [ ln ⁡ π k ] {\displaystyle \operatorname {E} [\ln \pi _{k}]} , E ⁡ [ ln ⁡ | Λ k | ] {\displaystyle \operatorname {E} [\ln |\mathbf {\Lambda } _{k}|]} , and E μ k , Λ k ⁡ [ ( x n − μ k ) T Λ k ( x n − μ k ) ] {\displaystyle \operatorname {E} _{\mathbf {\mu } _{k},\mathbf {\Lambda } _{k}}[(\mathbf {x} _{n}-\mathbf {\mu } _{k})^{\rm {T}}\mathbf {\Lambda } _{k}(\mathbf {x} _{n}-\mathbf {\mu } _{k})]} . 
Now that we have determined the distributions over which these expectations are taken, we can derive formulas for them: E μ k , Λ k ⁡ [ ( x n − μ k ) T Λ k ( x n − μ k ) ] = D β k − 1 + ν k ( x n − m k ) T W k ( x n − m k ) ln ⁡ Λ ~ k ≡ E ⁡ [ ln ⁡ | Λ k | ] = ∑ i = 1 D ψ ( ν k + 1 − i 2 ) + D ln ⁡ 2 + ln ⁡ | W k | ln ⁡ π ~ k ≡ E ⁡ [ ln ⁡ | π k | ] = ψ ( α k ) − ψ ( ∑ i = 1 K α i ) {\displaystyle {\begin{aligned}\operatorname {E} _{\mathbf {\mu } _{k},\mathbf {\Lambda } _{k}}[(\mathbf {x} _{n}-\mathbf {\mu } _{k})^{\rm {T}}\mathbf {\Lambda } _{k}(\mathbf {x} _{n}-\mathbf {\mu } _{k})]&=D\beta _{k}^{-1}+\nu _{k}(\mathbf {x} _{n}-\mathbf {m} _{k})^{\rm {T}}\mathbf {W} _{k}(\mathbf {x} _{n}-\mathbf {m} _{k})\\\ln {\widetilde {\Lambda }}_{k}&\equiv \operatorname {E} [\ln |\mathbf {\Lambda } _{k}|]=\sum _{i=1}^{D}\psi \left({\frac {\nu _{k}+1-i}{2}}\right)+D\ln 2+\ln |\mathbf {W} _{k}|\\\ln {\widetilde {\pi }}_{k}&\equiv \operatorname {E} \left[\ln |\pi _{k}|\right]=\psi (\alpha _{k})-\psi \left(\sum _{i=1}^{K}\alpha _{i}\right)\end{aligned}}} These results lead to r n k ∝ π ~ k Λ ~ k 1 / 2 exp ⁡ { − D 2 β k − ν k 2 ( x n − m k ) T W k ( x n − m k ) } {\displaystyle r_{nk}\propto {\widetilde {\pi }}_{k}{\widetilde {\Lambda }}_{k}^{1/2}\exp \left\{-{\frac {D}{2\beta _{k}}}-{\frac {\nu _{k}}{2}}(\mathbf {x} _{n}-\mathbf {m} _{k})^{\rm {T}}\mathbf {W} _{k}(\mathbf {x} _{n}-\mathbf {m} _{k})\right\}} These can be converted from proportional to absolute values by normalizing over k {\displaystyle k} so that the corresponding values sum to 1. Note that: The update equations for the parameters β k {\displaystyle \beta _{k}} , m k {\displaystyle \mathbf {m} _{k}} , W k {\displaystyle \mathbf {W} _{k}} and ν k {\displaystyle \nu _{k}} of the variables μ k {\displaystyle \mathbf {\mu } _{k}} and Λ k {\displaystyle \mathbf {\Lambda } _{k}} depend on the statistics N k {\displaystyle N_{k}} , x ¯ k {\displaystyle {\bar {\mathbf {x} }}_{k}} , and S k {\displaystyle \mathbf {S} _{k}} , and these statistics in turn depend on r n k {\displaystyle r_{nk}} . The update equations for the parameters α 1 … K {\displaystyle \alpha _{1\dots K}} of the variable π {\displaystyle \mathbf {\pi } } depend on the statistic N k {\displaystyle N_{k}} , which depends in turn on r n k {\displaystyle r_{nk}} . The update equation for r n k {\displaystyle r_{nk}} has a direct circular dependence on β k {\displaystyle \beta _{k}} , m k {\displaystyle \mathbf {m} _{k}} , W k {\displaystyle \mathbf {W} _{k}} and ν k {\displaystyle \nu _{k}} as well as an indirect circular dependence on W k {\displaystyle \mathbf {W} _{k}} , ν k {\displaystyle \nu _{k}} and α 1 … K {\displaystyle \alpha _{1\dots K}} through π ~ k {\displaystyle {\widetilde {\pi }}_{k}} and Λ ~ k {\displaystyle {\widetilde {\Lambda }}_{k}} . This suggests an iterative procedure that alternates between two steps: An E-step that computes the value of r n k {\displaystyle r_{nk}} using the current values of all the other parameters. An M-step that uses the new value of r n k {\displaystyle r_{nk}} to compute new values of all the other parameters. Note that these steps correspond closely with the standard EM algorithm to derive a maximum likelihood or maximum a posteriori (MAP) solution for the parameters of a Gaussian mixture model. The responsibilities r n k {\displaystyle r_{nk}} in the E step correspond closely to the posterior probabilities of the latent variables given the data, i.e. 
p ( Z ∣ X ) {\displaystyle p(\mathbf {Z} \mid \mathbf {X} )} ; the computation of the statistics N k {\displaystyle N_{k}} , x ¯ k {\displaystyle {\bar {\mathbf {x} }}_{k}} , and S k {\displaystyle \mathbf {S} _{k}} corresponds closely to the computation of corresponding "soft-count" statistics over the data; and the use of those statistics to compute new values of the parameters corresponds closely to the use of soft counts to compute new parameter values in normal EM over a Gaussian mixture model. == Exponential-family distributions == Note that in the previous example, once the distribution over unobserved variables was assumed to factorize into distributions over the "parameters" and distributions over the "latent data", the derived "best" distribution for each variable was in the same family as the corresponding prior distribution over the variable. This is a general result that holds true for all prior distributions derived from the exponential family. == See also == Variational message passing: a modular algorithm for variational Bayesian inference. Variational autoencoder: an artificial neural network belonging to the families of probabilistic graphical models and Variational Bayesian methods. Expectation–maximization algorithm: a related approach which corresponds to a special case of variational Bayesian inference. Generalized filtering: a variational filtering scheme for nonlinear state space models. Calculus of variations: the field of mathematical analysis that deals with maximizing or minimizing functionals. Maximum entropy discrimination: This is a variational inference framework that allows for introducing and accounting for additional large-margin constraints == References == == External links == The on-line textbook: Information Theory, Inference, and Learning Algorithms, by David J.C. MacKay provides an introduction to variational methods (p. 422). A Tutorial on Variational Bayes. Fox, C. and Roberts, S. 2012. Artificial Intelligence Review, doi:10.1007/s10462-011-9236-8. Variational-Bayes Repository A repository of research papers, software, and links related to the use of variational methods for approximate Bayesian learning up to 2003. Variational Algorithms for Approximate Bayesian Inference, by M. J. Beal includes comparisons of EM to Variational Bayesian EM and derivations of several models including Variational Bayesian HMMs. High-Level Explanation of Variational Inference by Jason Eisner may be worth reading before a more mathematically detailed treatment. Copula Variational Bayes inference via information geometry (pdf) by Tran, V.H. 2018. This paper is primarily written for students. Via Bregman divergence, the paper shows that Variational Bayes is simply a generalized Pythagorean projection of true model onto an arbitrarily correlated (copula) distributional space, of which the independent space is merely a special case. An in depth introduction to Variational Bayes note. Nguyen, D. 2023
Wikipedia/Variational_Bayesian_methods
In mathematics, the direct method in the calculus of variations is a general method for constructing a proof of the existence of a minimizer for a given functional, introduced by Stanisław Zaremba and David Hilbert around 1900. The method relies on methods of functional analysis and topology. As well as being used to prove the existence of a solution, direct methods may be used to compute the solution to desired accuracy. == The method == The calculus of variations deals with functionals J : V → R ¯ {\displaystyle J:V\to {\bar {\mathbb {R} }}} , where V {\displaystyle V} is some function space and R ¯ = R ∪ { ∞ } {\displaystyle {\bar {\mathbb {R} }}=\mathbb {R} \cup \{\infty \}} . The main interest of the subject is to find minimizers for such functionals, that is, functions v ∈ V {\displaystyle v\in V} such that J ( v ) ≤ J ( u ) {\displaystyle J(v)\leq J(u)} for all u ∈ V {\displaystyle u\in V} . The standard tool for obtaining necessary conditions for a function to be a minimizer is the Euler–Lagrange equation. But seeking a minimizer amongst functions satisfying these may lead to false conclusions if the existence of a minimizer is not established beforehand. The functional J {\displaystyle J} must be bounded from below to have a minimizer. This means inf { J ( u ) | u ∈ V } > − ∞ . {\displaystyle \inf\{J(u)|u\in V\}>-\infty .\,} This condition is not enough to know that a minimizer exists, but it shows the existence of a minimizing sequence, that is, a sequence ( u n ) {\displaystyle (u_{n})} in V {\displaystyle V} such that J ( u n ) → inf { J ( u ) | u ∈ V } . {\displaystyle J(u_{n})\to \inf\{J(u)|u\in V\}.} The direct method may be broken into the following steps Take a minimizing sequence ( u n ) {\displaystyle (u_{n})} for J {\displaystyle J} . Show that ( u n ) {\displaystyle (u_{n})} admits some subsequence ( u n k ) {\displaystyle (u_{n_{k}})} , that converges to a u 0 ∈ V {\displaystyle u_{0}\in V} with respect to a topology τ {\displaystyle \tau } on V {\displaystyle V} . Show that J {\displaystyle J} is sequentially lower semi-continuous with respect to the topology τ {\displaystyle \tau } . To see that this shows the existence of a minimizer, consider the following characterization of sequentially lower-semicontinuous functions. The function J {\displaystyle J} is sequentially lower-semicontinuous if lim inf n → ∞ J ( u n ) ≥ J ( u 0 ) {\displaystyle \liminf _{n\to \infty }J(u_{n})\geq J(u_{0})} for any convergent sequence u n → u 0 {\displaystyle u_{n}\to u_{0}} in V {\displaystyle V} . The conclusions follows from inf { J ( u ) | u ∈ V } = lim n → ∞ J ( u n ) = lim k → ∞ J ( u n k ) ≥ J ( u 0 ) ≥ inf { J ( u ) | u ∈ V } {\displaystyle \inf\{J(u)|u\in V\}=\lim _{n\to \infty }J(u_{n})=\lim _{k\to \infty }J(u_{n_{k}})\geq J(u_{0})\geq \inf\{J(u)|u\in V\}} , in other words J ( u 0 ) = inf { J ( u ) | u ∈ V } {\displaystyle J(u_{0})=\inf\{J(u)|u\in V\}} . == Details == === Banach spaces === The direct method may often be applied with success when the space V {\displaystyle V} is a subset of a separable reflexive Banach space W {\displaystyle W} . In this case the sequential Banach–Alaoglu theorem implies that any bounded sequence ( u n ) {\displaystyle (u_{n})} in V {\displaystyle V} has a subsequence that converges to some u 0 {\displaystyle u_{0}} in W {\displaystyle W} with respect to the weak topology. 
If V {\displaystyle V} is sequentially closed in W {\displaystyle W} , so that u 0 {\displaystyle u_{0}} is in V {\displaystyle V} , the direct method may be applied to a functional J : V → R ¯ {\displaystyle J:V\to {\bar {\mathbb {R} }}} by showing J {\displaystyle J} is bounded from below, any minimizing sequence for J {\displaystyle J} is bounded, and J {\displaystyle J} is weakly sequentially lower semi-continuous, i.e., for any weakly convergent sequence u n → u 0 {\displaystyle u_{n}\to u_{0}} it holds that lim inf n → ∞ J ( u n ) ≥ J ( u 0 ) {\displaystyle \liminf _{n\to \infty }J(u_{n})\geq J(u_{0})} . The second part is usually accomplished by showing that J {\displaystyle J} admits some growth condition. An example is J ( x ) ≥ α ‖ x ‖ q − β {\displaystyle J(x)\geq \alpha \lVert x\rVert ^{q}-\beta } for some α > 0 {\displaystyle \alpha >0} , q ≥ 1 {\displaystyle q\geq 1} and β ≥ 0 {\displaystyle \beta \geq 0} . A functional with this property is sometimes called coercive. Showing sequential lower semi-continuity is usually the most difficult part when applying the direct method. See below for some theorems for a general class of functionals. === Sobolev spaces === The typical functional in the calculus of variations is an integral of the form J ( u ) = ∫ Ω F ( x , u ( x ) , ∇ u ( x ) ) d x {\displaystyle J(u)=\int _{\Omega }F(x,u(x),\nabla u(x))dx} where Ω {\displaystyle \Omega } is a subset of R n {\displaystyle \mathbb {R} ^{n}} and F {\displaystyle F} is a real-valued function on Ω × R m × R m n {\displaystyle \Omega \times \mathbb {R} ^{m}\times \mathbb {R} ^{mn}} . The argument of J {\displaystyle J} is a differentiable function u : Ω → R m {\displaystyle u:\Omega \to \mathbb {R} ^{m}} , and its Jacobian ∇ u ( x ) {\displaystyle \nabla u(x)} is identified with a m n {\displaystyle mn} -vector. When deriving the Euler–Lagrange equation, the common approach is to assume Ω {\displaystyle \Omega } has a C 2 {\displaystyle C^{2}} boundary and let the domain of definition for J {\displaystyle J} be C 2 ( Ω , R m ) {\displaystyle C^{2}(\Omega ,\mathbb {R} ^{m})} . This space is a Banach space when endowed with the supremum norm, but it is not reflexive. When applying the direct method, the functional is usually defined on a Sobolev space W 1 , p ( Ω , R m ) {\displaystyle W^{1,p}(\Omega ,\mathbb {R} ^{m})} with p > 1 {\displaystyle p>1} , which is a reflexive Banach space. The derivatives of u {\displaystyle u} in the formula for J {\displaystyle J} must then be taken as weak derivatives. Another common function space is W g 1 , p ( Ω , R m ) {\displaystyle W_{g}^{1,p}(\Omega ,\mathbb {R} ^{m})} which is the affine sub space of W 1 , p ( Ω , R m ) {\displaystyle W^{1,p}(\Omega ,\mathbb {R} ^{m})} of functions whose trace is some fixed function g {\displaystyle g} in the image of the trace operator. This restriction allows finding minimizers of the functional J {\displaystyle J} that satisfy some desired boundary conditions. This is similar to solving the Euler–Lagrange equation with Dirichlet boundary conditions. Additionally there are settings in which there are minimizers in W g 1 , p ( Ω , R m ) {\displaystyle W_{g}^{1,p}(\Omega ,\mathbb {R} ^{m})} but not in W 1 , p ( Ω , R m ) {\displaystyle W^{1,p}(\Omega ,\mathbb {R} ^{m})} . The idea of solving minimization problems while restricting the values on the boundary can be further generalized by looking on function spaces where the trace is fixed only on a part of the boundary, and can be arbitrary on the rest. 
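As a concrete illustration of this setting, consider the Dirichlet energy with a linear source term; this is a sketch, assuming Ω ⊂ R^n is bounded with Lipschitz boundary, f ∈ L²(Ω), and g is an admissible boundary datum.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Illustrative functional for the direct method on W^{1,2}_g(\Omega):
\[
  J(u) \;=\; \int_{\Omega} \Big( \tfrac{1}{2}\,\lvert \nabla u(x)\rvert^{2}
             \;-\; f(x)\,u(x) \Big)\, dx ,
  \qquad u \in W^{1,2}_{g}(\Omega).
\]
% The integrand is convex in the gradient variable, and
\[
  J(u) \;\ge\; \tfrac{1}{2}\,\lVert \nabla u\rVert_{L^{2}(\Omega)}^{2}
  \;-\; \lVert f\rVert_{L^{2}(\Omega)}\,\lVert u\rVert_{L^{2}(\Omega)} ,
\]
% which, combined with the Poincare inequality, gives a coercivity
% (growth) estimate of the type described above.
\end{document}
```

Under these assumptions J is bounded from below and its minimizing sequences are bounded in W^{1,2}(Ω); the quadratic term is convex in ∇u and hence weakly sequentially lower semi-continuous, while the source term is a continuous linear functional of u and hence weakly continuous. The direct method therefore yields a minimizer in W_g^{1,2}(Ω), and such a minimizer is a weak solution of the Poisson equation −Δu = f with boundary values g.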
The next section presents theorems regarding weak sequential lower semi-continuity of functionals of the above type. == Sequential lower semi-continuity of integrals == As many functionals in the calculus of variations are of the form J ( u ) = ∫ Ω F ( x , u ( x ) , ∇ u ( x ) ) d x {\displaystyle J(u)=\int _{\Omega }F(x,u(x),\nabla u(x))dx} , where Ω ⊆ R n {\displaystyle \Omega \subseteq \mathbb {R} ^{n}} is open, theorems characterizing functions F {\displaystyle F} for which J {\displaystyle J} is weakly sequentially lower-semicontinuous in W 1 , p ( Ω , R m ) {\displaystyle W^{1,p}(\Omega ,\mathbb {R} ^{m})} with p ≥ 1 {\displaystyle p\geq 1} is of great importance. In general one has the following: Assume that F {\displaystyle F} is a function that has the following properties: The function F {\displaystyle F} is a Carathéodory function. There exist a ∈ L q ( Ω , R m n ) {\displaystyle a\in L^{q}(\Omega ,\mathbb {R} ^{mn})} with Hölder conjugate q = p p − 1 {\displaystyle q={\tfrac {p}{p-1}}} and b ∈ L 1 ( Ω ) {\displaystyle b\in L^{1}(\Omega )} such that the following inequality holds true for almost every x ∈ Ω {\displaystyle x\in \Omega } and every ( y , A ) ∈ R m × R m n {\displaystyle (y,A)\in \mathbb {R} ^{m}\times \mathbb {R} ^{mn}} : F ( x , y , A ) ≥ ⟨ a ( x ) , A ⟩ + b ( x ) {\displaystyle F(x,y,A)\geq \langle a(x),A\rangle +b(x)} . Here, ⟨ a ( x ) , A ⟩ {\displaystyle \langle a(x),A\rangle } denotes the Frobenius inner product of a ( x ) {\displaystyle a(x)} and A {\displaystyle A} in R m n {\displaystyle \mathbb {R} ^{mn}} ). If the function A ↦ F ( x , y , A ) {\displaystyle A\mapsto F(x,y,A)} is convex for almost every x ∈ Ω {\displaystyle x\in \Omega } and every y ∈ R m {\displaystyle y\in \mathbb {R} ^{m}} , then J {\displaystyle J} is sequentially weakly lower semi-continuous. When n = 1 {\displaystyle n=1} or m = 1 {\displaystyle m=1} the following converse-like theorem holds Assume that F {\displaystyle F} is continuous and satisfies | F ( x , y , A ) | ≤ a ( x , | y | , | A | ) {\displaystyle |F(x,y,A)|\leq a(x,|y|,|A|)} for every ( x , y , A ) {\displaystyle (x,y,A)} , and a fixed function a ( x , | y | , | A | ) {\displaystyle a(x,|y|,|A|)} increasing in | y | {\displaystyle |y|} and | A | {\displaystyle |A|} , and locally integrable in x {\displaystyle x} . If J {\displaystyle J} is sequentially weakly lower semi-continuous, then for any given ( x , y ) ∈ Ω × R m {\displaystyle (x,y)\in \Omega \times \mathbb {R} ^{m}} the function A ↦ F ( x , y , A ) {\displaystyle A\mapsto F(x,y,A)} is convex. In conclusion, when m = 1 {\displaystyle m=1} or n = 1 {\displaystyle n=1} , the functional J {\displaystyle J} , assuming reasonable growth and boundedness on F {\displaystyle F} , is weakly sequentially lower semi-continuous if, and only if the function A ↦ F ( x , y , A ) {\displaystyle A\mapsto F(x,y,A)} is convex. However, there are many interesting cases where one cannot assume that F {\displaystyle F} is convex. The following theorem proves sequential lower semi-continuity using a weaker notion of convexity: Assume that F : Ω × R m × R m n → [ 0 , ∞ ) {\displaystyle F:\Omega \times \mathbb {R} ^{m}\times \mathbb {R} ^{mn}\to [0,\infty )} is a function that has the following properties: The function F {\displaystyle F} is a Carathéodory function. 
The function F {\displaystyle F} has p {\displaystyle p} -growth for some p > 1 {\displaystyle p>1} : There exists a constant C {\displaystyle C} such that for every y ∈ R m {\displaystyle y\in \mathbb {R} ^{m}} and for almost every x ∈ Ω {\displaystyle x\in \Omega } | F ( x , y , A ) | ≤ C ( 1 + | y | p + | A | p ) {\displaystyle |F(x,y,A)|\leq C(1+|y|^{p}+|A|^{p})} . For every y ∈ R m {\displaystyle y\in \mathbb {R} ^{m}} and for almost every x ∈ Ω {\displaystyle x\in \Omega } , the function A ↦ F ( x , y , A ) {\displaystyle A\mapsto F(x,y,A)} is quasiconvex: there exists a cube D ⊆ R n {\displaystyle D\subseteq \mathbb {R} ^{n}} such that for every A ∈ R m n , φ ∈ W 0 1 , ∞ ( Ω , R m ) {\displaystyle A\in \mathbb {R} ^{mn},\varphi \in W_{0}^{1,\infty }(\Omega ,\mathbb {R} ^{m})} it holds: F ( x , y , A ) ≤ | D | − 1 ∫ D F ( x , y , A + ∇ φ ( z ) ) d z {\displaystyle F(x,y,A)\leq |D|^{-1}\int _{D}F(x,y,A+\nabla \varphi (z))dz} where | D | {\displaystyle |D|} is the volume of D {\displaystyle D} . Then J {\displaystyle J} is sequentially weakly lower semi-continuous in W 1 , p ( Ω , R m ) {\displaystyle W^{1,p}(\Omega ,\mathbb {R} ^{m})} . A converse like theorem in this case is the following: Assume that F {\displaystyle F} is continuous and satisfies | F ( x , y , A ) | ≤ a ( x , | y | , | A | ) {\displaystyle |F(x,y,A)|\leq a(x,|y|,|A|)} for every ( x , y , A ) {\displaystyle (x,y,A)} , and a fixed function a ( x , | y | , | A | ) {\displaystyle a(x,|y|,|A|)} increasing in | y | {\displaystyle |y|} and | A | {\displaystyle |A|} , and locally integrable in x {\displaystyle x} . If J {\displaystyle J} is sequentially weakly lower semi-continuous, then for any given ( x , y ) ∈ Ω × R m {\displaystyle (x,y)\in \Omega \times \mathbb {R} ^{m}} the function A ↦ F ( x , y , A ) {\displaystyle A\mapsto F(x,y,A)} is quasiconvex. The claim is true even when both m , n {\displaystyle m,n} are bigger than 1 {\displaystyle 1} and coincides with the previous claim when m = 1 {\displaystyle m=1} or n = 1 {\displaystyle n=1} , since then quasiconvexity is equivalent to convexity. == Notes == == References and further reading == Dacorogna, Bernard (1989). Direct Methods in the Calculus of Variations. Springer-Verlag. ISBN 0-387-50491-5. Fonseca, Irene; Giovanni Leoni (2007). Modern Methods in the Calculus of Variations: L p {\displaystyle L^{p}} Spaces. Springer. ISBN 978-0-387-35784-3. Morrey, C. B., Jr.: Multiple Integrals in the Calculus of Variations. Springer, 1966 (reprinted 2008), Berlin ISBN 978-3-540-69915-6. Jindřich Nečas: Direct Methods in the Theory of Elliptic Equations. (Transl. from French original 1967 by A.Kufner and G.Tronel), Springer, 2012, ISBN 978-3-642-10455-8. T. Roubíček (2000). "Direct method for parabolic problems". Adv. Math. Sci. Appl. Vol. 10. pp. 57–65. MR 1769181. Acerbi Emilio, Fusco Nicola. "Semicontinuity problems in the calculus of variations." Archive for Rational Mechanics and Analysis 86.2 (1984): 125-145
Wikipedia/Direct_method_in_calculus_of_variations
The calculus of variations (or variational calculus) is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals: mappings from a set of functions to the real numbers. Functionals are often expressed as definite integrals involving functions and their derivatives. Functions that maximize or minimize functionals may be found using the Euler–Lagrange equation of the calculus of variations. A simple example of such a problem is to find the curve of shortest length connecting two points. If there are no constraints, the solution is a straight line between the points. However, if the curve is constrained to lie on a surface in space, then the solution is less obvious, and possibly many solutions may exist. Such solutions are known as geodesics. A related problem is posed by Fermat's principle: light follows the path of shortest optical length connecting two points, which depends upon the material of the medium. One corresponding concept in mechanics is the principle of least/stationary action. Many important problems involve functions of several variables. Solutions of boundary value problems for the Laplace equation satisfy Dirichlet's principle. Plateau's problem requires finding a surface of minimal area that spans a given contour in space: a solution can often be found by dipping a frame in soapy water. Although such experiments are relatively easy to perform, their mathematical formulation is far from simple: there may be more than one locally minimizing surface, and they may have non-trivial topology. == History == The calculus of variations began with the work of Isaac Newton, such as with Newton's minimal resistance problem, which he formulated and solved in 1685, and later published in his Principia in 1687, which was the first problem in the field to be formulated and correctly solved, and was also one of the most difficult problems tackled by variational methods prior to the twentieth century. This problem was followed by the brachistochrone curve problem raised by Johann Bernoulli (1696), which was similar to one raised by Galileo Galilei in 1638, but Galileo did not solve the problem explicitly, nor did he use methods based on calculus. Bernoulli solved the problem using the principle of least time, but not the calculus of variations, whereas Newton did when he solved the problem in 1697; as a result, Newton pioneered the field with his work on the two problems. The problem would immediately occupy the attention of Jacob Bernoulli and the Marquis de l'Hôpital, but Leonhard Euler first elaborated the subject, beginning in 1733. Joseph-Louis Lagrange was influenced by Euler's work to contribute greatly to the theory. After Euler saw the 1755 work of the 19-year-old Lagrange, Euler dropped his own partly geometric approach in favor of Lagrange's purely analytic approach and renamed the subject the calculus of variations in his 1756 lecture Elementa Calculi Variationum. Adrien-Marie Legendre (1786) laid down a method, not entirely satisfactory, for the discrimination of maxima and minima. Isaac Newton and Gottfried Leibniz also gave some early attention to the subject. To this discrimination Vincenzo Brunacci (1810), Carl Friedrich Gauss (1829), Siméon Poisson (1831), Mikhail Ostrogradsky (1834), and Carl Jacobi (1837) have been among the contributors. An important general work is that of Pierre Frédéric Sarrus (1842) which was condensed and improved by Augustin-Louis Cauchy (1844).
Other valuable treatises and memoirs have been written by Strauch (1849), John Hewitt Jellett (1850), Otto Hesse (1857), Alfred Clebsch (1858), and Lewis Buffett Carll (1885), but perhaps the most important work of the century is that of Karl Weierstrass. His celebrated course on the theory is epoch-making, and it may be asserted that he was the first to place it on a firm and unquestionable foundation. The 20th and the 23rd Hilbert problem published in 1900 encouraged further development. In the 20th century David Hilbert, Oskar Bolza, Gilbert Ames Bliss, Emmy Noether, Leonida Tonelli, Henri Lebesgue and Jacques Hadamard among others made significant contributions. Marston Morse applied calculus of variations in what is now called Morse theory. Lev Pontryagin, Ralph Rockafellar and F. H. Clarke developed new mathematical tools for the calculus of variations in optimal control theory. The dynamic programming of Richard Bellman is an alternative to the calculus of variations. == Extrema == The calculus of variations is concerned with the maxima or minima (collectively called extrema) of functionals. A functional maps functions to scalars, so functionals have been described as "functions of functions." Functionals have extrema with respect to the elements y {\displaystyle y} of a given function space defined over a given domain. A functional J [ y ] {\displaystyle J[y]} is said to have an extremum at the function f {\displaystyle f} if Δ J = J [ y ] − J [ f ] {\displaystyle \Delta J=J[y]-J[f]} has the same sign for all y {\displaystyle y} in an arbitrarily small neighborhood of f . {\displaystyle f.} The function f {\displaystyle f} is called an extremal function or extremal. The extremum J [ f ] {\displaystyle J[f]} is called a local maximum if Δ J ≤ 0 {\displaystyle \Delta J\leq 0} everywhere in an arbitrarily small neighborhood of f , {\displaystyle f,} and a local minimum if Δ J ≥ 0 {\displaystyle \Delta J\geq 0} there. For a function space of continuous functions, extrema of corresponding functionals are called strong extrema or weak extrema, depending on whether the first derivatives of the continuous functions are respectively all continuous or not. Both strong and weak extrema of functionals are for a space of continuous functions but strong extrema have the additional requirement that the first derivatives of the functions in the space be continuous. Thus a strong extremum is also a weak extremum, but the converse may not hold. Finding strong extrema is more difficult than finding weak extrema. An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation. == Euler–Lagrange equation == Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions for which the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation. 
Consider the functional J [ y ] = ∫ x 1 x 2 L ( x , y ( x ) , y ′ ( x ) ) d x , {\displaystyle J[y]=\int _{x_{1}}^{x_{2}}L\left(x,y(x),y'(x)\right)\,dx,} where x 1 , x 2 {\displaystyle x_{1},x_{2}} are constants, y ( x ) {\displaystyle y(x)} is twice continuously differentiable, y ′ ( x ) = d y d x , {\displaystyle y'(x)={\frac {dy}{dx}},} L ( x , y ( x ) , y ′ ( x ) ) {\displaystyle L\left(x,y(x),y'(x)\right)} is twice continuously differentiable with respect to its arguments x , y , {\displaystyle x,y,} and y ′ . {\displaystyle y'.} If the functional J [ y ] {\displaystyle J[y]} attains a local minimum at f , {\displaystyle f,} and η ( x ) {\displaystyle \eta (x)} is an arbitrary function that has at least one derivative and vanishes at the endpoints x 1 {\displaystyle x_{1}} and x 2 , {\displaystyle x_{2},} then for any number ε {\displaystyle \varepsilon } close to 0, J [ f ] ≤ J [ f + ε η ] . {\displaystyle J[f]\leq J[f+\varepsilon \eta ]\,.} The term ε η {\displaystyle \varepsilon \eta } is called the variation of the function f {\displaystyle f} and is denoted by δ f . {\displaystyle \delta f.} Substituting f + ε η {\displaystyle f+\varepsilon \eta } for y {\displaystyle y} in the functional J [ y ] , {\displaystyle J[y],} the result is a function of ε , {\displaystyle \varepsilon ,} Φ ( ε ) = J [ f + ε η ] . {\displaystyle \Phi (\varepsilon )=J[f+\varepsilon \eta ]\,.} Since the functional J [ y ] {\displaystyle J[y]} has a minimum for y = f {\displaystyle y=f} the function Φ ( ε ) {\displaystyle \Phi (\varepsilon )} has a minimum at ε = 0 {\displaystyle \varepsilon =0} and thus, Φ ′ ( 0 ) ≡ d Φ d ε | ε = 0 = ∫ x 1 x 2 d L d ε | ε = 0 d x = 0 . {\displaystyle \Phi '(0)\equiv \left.{\frac {d\Phi }{d\varepsilon }}\right|_{\varepsilon =0}=\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx=0\,.} Taking the total derivative of L [ x , y , y ′ ] , {\displaystyle L\left[x,y,y'\right],} where y = f + ε η {\displaystyle y=f+\varepsilon \eta } and y ′ = f ′ + ε η ′ {\displaystyle y'=f'+\varepsilon \eta '} are considered as functions of ε {\displaystyle \varepsilon } rather than x , {\displaystyle x,} yields d L d ε = ∂ L ∂ y d y d ε + ∂ L ∂ y ′ d y ′ d ε {\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}{\frac {dy}{d\varepsilon }}+{\frac {\partial L}{\partial y'}}{\frac {dy'}{d\varepsilon }}} and because d y d ε = η {\displaystyle {\frac {dy}{d\varepsilon }}=\eta } and d y ′ d ε = η ′ , {\displaystyle {\frac {dy'}{d\varepsilon }}=\eta ',} d L d ε = ∂ L ∂ y η + ∂ L ∂ y ′ η ′ . 
{\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}\eta +{\frac {\partial L}{\partial y'}}\eta '.} Therefore, ∫ x 1 x 2 d L d ε | ε = 0 d x = ∫ x 1 x 2 ( ∂ L ∂ f η + ∂ L ∂ f ′ η ′ ) d x = ∫ x 1 x 2 ∂ L ∂ f η d x + ∂ L ∂ f ′ η | x 1 x 2 − ∫ x 1 x 2 η d d x ∂ L ∂ f ′ d x = ∫ x 1 x 2 ( ∂ L ∂ f η − η d d x ∂ L ∂ f ′ ) d x {\displaystyle {\begin{aligned}\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta +{\frac {\partial L}{\partial f'}}\eta '\right)\,dx\\&=\int _{x_{1}}^{x_{2}}{\frac {\partial L}{\partial f}}\eta \,dx+\left.{\frac {\partial L}{\partial f'}}\eta \right|_{x_{1}}^{x_{2}}-\int _{x_{1}}^{x_{2}}\eta {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\,dx\\&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta -\eta {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx\\\end{aligned}}} where L [ x , y , y ′ ] → L [ x , f , f ′ ] {\displaystyle L\left[x,y,y'\right]\to L\left[x,f,f'\right]} when ε = 0 {\displaystyle \varepsilon =0} and we have used integration by parts on the second term. The second term on the second line vanishes because η = 0 {\displaystyle \eta =0} at x 1 {\displaystyle x_{1}} and x 2 {\displaystyle x_{2}} by definition. Also, as previously mentioned, the left side of the equation is zero so that ∫ x 1 x 2 η ( x ) ( ∂ L ∂ f − d d x ∂ L ∂ f ′ ) d x = 0 . {\displaystyle \int _{x_{1}}^{x_{2}}\eta (x)\left({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx=0\,.} According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e. ∂ L ∂ f − d d x ∂ L ∂ f ′ = 0 {\displaystyle {\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0} which is called the Euler–Lagrange equation. The left hand side of this equation is called the functional derivative of J [ f ] {\displaystyle J[f]} and is denoted δ J / δ f ( x ) . {\displaystyle \delta J/\delta f(x).} In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal function f ( x ) . {\displaystyle f(x).} The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum J [ f ] . {\displaystyle J[f].} A sufficient condition for a minimum is given in the section Variations and sufficient condition for a minimum. === Example === In order to illustrate this process, consider the problem of finding the extremal function y = f ( x ) , {\displaystyle y=f(x),} which is the shortest curve that connects two points ( x 1 , y 1 ) {\displaystyle \left(x_{1},y_{1}\right)} and ( x 2 , y 2 ) . {\displaystyle \left(x_{2},y_{2}\right).} The arc length of the curve is given by A [ y ] = ∫ x 1 x 2 1 + [ y ′ ( x ) ] 2 d x , {\displaystyle A[y]=\int _{x_{1}}^{x_{2}}{\sqrt {1+[y'(x)]^{2}}}\,dx\,,} with y ′ ( x ) = d y d x , y 1 = f ( x 1 ) , y 2 = f ( x 2 ) . {\displaystyle y'(x)={\frac {dy}{dx}}\,,\ \ y_{1}=f(x_{1})\,,\ \ y_{2}=f(x_{2})\,.} Note that assuming y is a function of x loses generality; ideally both should be a function of some other parameter. This approach is good solely for instructive purposes. The Euler–Lagrange equation will now be used to find the extremal function f ( x ) {\displaystyle f(x)} that minimizes the functional A [ y ] . 
{\displaystyle A[y].} ∂ L ∂ f − d d x ∂ L ∂ f ′ = 0 {\displaystyle {\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0} with L = 1 + [ f ′ ( x ) ] 2 . {\displaystyle L={\sqrt {1+[f'(x)]^{2}}}\,.} Since f {\displaystyle f} does not appear explicitly in L , {\displaystyle L,} the first term in the Euler–Lagrange equation vanishes for all f ( x ) {\displaystyle f(x)} and thus, d d x ∂ L ∂ f ′ = 0 . {\displaystyle {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0\,.} Substituting for L {\displaystyle L} and taking the derivative, d d x f ′ ( x ) 1 + [ f ′ ( x ) ] 2 = 0 . {\displaystyle {\frac {d}{dx}}\ {\frac {f'(x)}{\sqrt {1+[f'(x)]^{2}}}}\ =0\,.} Thus f ′ ( x ) 1 + [ f ′ ( x ) ] 2 = c , {\displaystyle {\frac {f'(x)}{\sqrt {1+[f'(x)]^{2}}}}=c\,,} for some constant c {\displaystyle c} . Then [ f ′ ( x ) ] 2 1 + [ f ′ ( x ) ] 2 = c 2 , {\displaystyle {\frac {[f'(x)]^{2}}{1+[f'(x)]^{2}}}=c^{2}\,,} where 0 ≤ c 2 < 1. {\displaystyle 0\leq c^{2}<1.} Solving, we get [ f ′ ( x ) ] 2 = c 2 1 − c 2 {\displaystyle [f'(x)]^{2}={\frac {c^{2}}{1-c^{2}}}} which implies that f ′ ( x ) = m {\displaystyle f'(x)=m} is a constant and therefore that the shortest curve that connects two points ( x 1 , y 1 ) {\displaystyle \left(x_{1},y_{1}\right)} and ( x 2 , y 2 ) {\displaystyle \left(x_{2},y_{2}\right)} is f ( x ) = m x + b with m = y 2 − y 1 x 2 − x 1 and b = x 2 y 1 − x 1 y 2 x 2 − x 1 {\displaystyle f(x)=mx+b\qquad {\text{with}}\ \ m={\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}\quad {\text{and}}\quad b={\frac {x_{2}y_{1}-x_{1}y_{2}}{x_{2}-x_{1}}}} and we have thus found the extremal function f ( x ) {\displaystyle f(x)} that minimizes the functional A [ y ] {\displaystyle A[y]} so that A [ f ] {\displaystyle A[f]} is a minimum. The equation for a straight line is y = m x + b . {\displaystyle y=mx+b.} In other words, the shortest distance between two points is a straight line. == Beltrami's identity == In physics problems it may be the case that ∂ L ∂ x = 0 , {\displaystyle {\frac {\partial L}{\partial x}}=0,} meaning the integrand is a function of f ( x ) {\displaystyle f(x)} and f ′ ( x ) {\displaystyle f'(x)} but x {\displaystyle x} does not appear separately. In that case, the Euler–Lagrange equation can be simplified to the Beltrami identity L − f ′ ∂ L ∂ f ′ = C , {\displaystyle L-f'{\frac {\partial L}{\partial f'}}=C\,,} where C {\displaystyle C} is a constant. The left hand side is the Legendre transformation of L {\displaystyle L} with respect to f ′ ( x ) . {\displaystyle f'(x).} The intuition behind this result is that, if the variable x {\displaystyle x} is actually time, then the statement ∂ L ∂ x = 0 {\displaystyle {\frac {\partial L}{\partial x}}=0} implies that the Lagrangian is time-independent. By Noether's theorem, there is an associated conserved quantity. In this case, this quantity is the Hamiltonian, the Legendre transform of the Lagrangian, which (often) coincides with the energy of the system. This is (minus) the constant in Beltrami's identity. == Euler–Poisson equation == If S {\displaystyle S} depends on higher-derivatives of y ( x ) {\displaystyle y(x)} , that is, if S = ∫ a b f ( x , y ( x ) , y ′ ( x ) , … , y ( n ) ( x ) ) d x , {\displaystyle S=\int _{a}^{b}f(x,y(x),y'(x),\dots ,y^{(n)}(x))dx,} then y {\displaystyle y} must satisfy the Euler–Poisson equation, ∂ f ∂ y − d d x ( ∂ f ∂ y ′ ) + ⋯ + ( − 1 ) n d n d x n [ ∂ f ∂ y ( n ) ] = 0. 
{\displaystyle {\frac {\partial f}{\partial y}}-{\frac {d}{dx}}\left({\frac {\partial f}{\partial y'}}\right)+\dots +(-1)^{n}{\frac {d^{n}}{dx^{n}}}\left[{\frac {\partial f}{\partial y^{(n)}}}\right]=0.} == Du Bois-Reymond's theorem == The discussion thus far has assumed that extremal functions possess two continuous derivatives, although the existence of the integral J {\displaystyle J} requires only first derivatives of trial functions. The condition that the first variation vanishes at an extremal may be regarded as a weak form of the Euler–Lagrange equation. The theorem of Du Bois-Reymond asserts that this weak form implies the strong form. If L {\displaystyle L} has continuous first and second derivatives with respect to all of its arguments, and if ∂ 2 L ∂ f ′ 2 ≠ 0 , {\displaystyle {\frac {\partial ^{2}L}{\partial f'^{2}}}\neq 0,} then f {\displaystyle f} has two continuous derivatives, and it satisfies the Euler–Lagrange equation. == Lavrentiev phenomenon == Hilbert was the first to give good conditions for the Euler–Lagrange equations to give a stationary solution. Within a convex area and a positive thrice differentiable Lagrangian the solutions are composed of a countable collection of sections that either go along the boundary or satisfy the Euler–Lagrange equations in the interior. However Lavrentiev in 1926 showed that there are circumstances where there is no optimum solution but one can be approached arbitrarily closely by increasing numbers of sections. The Lavrentiev Phenomenon identifies a difference in the infimum of a minimization problem across different classes of admissible functions. For instance the following problem, presented by Manià in 1934: L [ x ] = ∫ 0 1 ( x 3 − t ) 2 x ′ 6 , {\displaystyle L[x]=\int _{0}^{1}(x^{3}-t)^{2}x'^{6},} A = { x ∈ W 1 , 1 ( 0 , 1 ) : x ( 0 ) = 0 , x ( 1 ) = 1 } . {\displaystyle {A}=\{x\in W^{1,1}(0,1):x(0)=0,\ x(1)=1\}.} Clearly, x ( t ) = t 1 3 {\displaystyle x(t)=t^{\frac {1}{3}}} minimizes the functional, but we find any function x ∈ W 1 , ∞ {\displaystyle x\in W^{1,\infty }} gives a value bounded away from the infimum. Examples (in one-dimension) are traditionally manifested across W 1 , 1 {\displaystyle W^{1,1}} and W 1 , ∞ , {\displaystyle W^{1,\infty },} but Ball and Mizel procured the first functional that displayed Lavrentiev's Phenomenon across W 1 , p {\displaystyle W^{1,p}} and W 1 , q {\displaystyle W^{1,q}} for 1 ≤ p < q < ∞ . {\displaystyle 1\leq p<q<\infty .} There are several results that gives criteria under which the phenomenon does not occur - for instance 'standard growth', a Lagrangian with no dependence on the second variable, or an approximating sequence satisfying Cesari's Condition (D) - but results are often particular, and applicable to a small class of functionals. Connected with the Lavrentiev Phenomenon is the repulsion property: any functional displaying Lavrentiev's Phenomenon will display the weak repulsion property. == Functions of several variables == For example, if φ ( x , y ) {\displaystyle \varphi (x,y)} denotes the displacement of a membrane above the domain D {\displaystyle D} in the x , y {\displaystyle x,y} plane, then its potential energy is proportional to its surface area: U [ φ ] = ∬ D 1 + ∇ φ ⋅ ∇ φ d x d y . 
{\displaystyle U[\varphi ]=\iint _{D}{\sqrt {1+\nabla \varphi \cdot \nabla \varphi }}\,dx\,dy.} Plateau's problem consists of finding a function that minimizes the surface area while assuming prescribed values on the boundary of D {\displaystyle D} ; the solutions are called minimal surfaces. The Euler–Lagrange equation for this problem is nonlinear: φ x x ( 1 + φ y 2 ) + φ y y ( 1 + φ x 2 ) − 2 φ x φ y φ x y = 0. {\displaystyle \varphi _{xx}(1+\varphi _{y}^{2})+\varphi _{yy}(1+\varphi _{x}^{2})-2\varphi _{x}\varphi _{y}\varphi _{xy}=0.} See Courant (1950) for details. === Dirichlet's principle === It is often sufficient to consider only small displacements of the membrane, whose energy difference from no displacement is approximated by V [ φ ] = 1 2 ∬ D ∇ φ ⋅ ∇ φ d x d y . {\displaystyle V[\varphi ]={\frac {1}{2}}\iint _{D}\nabla \varphi \cdot \nabla \varphi \,dx\,dy.} The functional V {\displaystyle V} is to be minimized among all trial functions φ {\displaystyle \varphi } that assume prescribed values on the boundary of D {\displaystyle D} . If u {\displaystyle u} is the minimizing function and v {\displaystyle v} is an arbitrary smooth function that vanishes on the boundary of D {\displaystyle D} , then the first variation of V [ u + ε v ] {\displaystyle V[u+\varepsilon v]} must vanish: d d ε V [ u + ε v ] | ε = 0 = ∬ D ∇ u ⋅ ∇ v d x d y = 0. {\displaystyle \left.{\frac {d}{d\varepsilon }}V[u+\varepsilon v]\right|_{\varepsilon =0}=\iint _{D}\nabla u\cdot \nabla v\,dx\,dy=0.} Provided that u {\displaystyle u} has two derivatives, we may apply the divergence theorem to obtain ∬ D ∇ ⋅ ( v ∇ u ) d x d y = ∬ D ∇ u ⋅ ∇ v + v ∇ ⋅ ∇ u d x d y = ∫ C v ∂ u ∂ n d s , {\displaystyle \iint _{D}\nabla \cdot (v\nabla u)\,dx\,dy=\iint _{D}\nabla u\cdot \nabla v+v\nabla \cdot \nabla u\,dx\,dy=\int _{C}v{\frac {\partial u}{\partial n}}\,ds,} where C {\displaystyle C} is the boundary of D , {\displaystyle D,} s {\displaystyle s} is arclength along C {\displaystyle C} and ∂ u / ∂ n {\displaystyle \partial u/\partial n} is the normal derivative of u {\displaystyle u} on C . {\displaystyle C.} Since v {\displaystyle v} vanishes on C {\displaystyle C} and the first variation vanishes, the result is ∬ D v ∇ ⋅ ∇ u d x d y = 0 {\displaystyle \iint _{D}v\nabla \cdot \nabla u\,dx\,dy=0} for all smooth functions v {\displaystyle v} that vanish on the boundary of D {\displaystyle D} . The proof for the case of one dimensional integrals may be adapted to this case to show that ∇ ⋅ ∇ u = 0 {\displaystyle \nabla \cdot \nabla u=0} in D . {\displaystyle D.} The difficulty with this reasoning is the assumption that the minimizing function u {\displaystyle u} must have two derivatives. Riemann argued that the existence of a smooth minimizing function was assured by the connection with the physical problem: membranes do indeed assume configurations with minimal potential energy. Riemann named this idea the Dirichlet principle in honor of his teacher Peter Gustav Lejeune Dirichlet. However Weierstrass gave an example of a variational problem with no solution: minimize W [ φ ] = ∫ − 1 1 ( x φ ′ ) 2 d x {\displaystyle W[\varphi ]=\int _{-1}^{1}(x\varphi ')^{2}\,dx} among all functions φ {\displaystyle \varphi } that satisfy φ ( − 1 ) = − 1 {\displaystyle \varphi (-1)=-1} and φ ( 1 ) = 1. {\displaystyle \varphi (1)=1.} W {\displaystyle W} can be made arbitrarily small by choosing piecewise linear functions that make a transition between −1 and 1 in a small neighborhood of the origin. 
However, there is no function that makes W = 0. {\displaystyle W=0.} Eventually it was shown that Dirichlet's principle is valid, but it requires a sophisticated application of the regularity theory for elliptic partial differential equations; see Jost and Li–Jost (1998). === Generalization to other boundary value problems === A more general expression for the potential energy of a membrane is V [ φ ] = ∬ D [ 1 2 ∇ φ ⋅ ∇ φ + f ( x , y ) φ ] d x d y + ∫ C [ 1 2 σ ( s ) φ 2 + g ( s ) φ ] d s . {\displaystyle V[\varphi ]=\iint _{D}\left[{\frac {1}{2}}\nabla \varphi \cdot \nabla \varphi +f(x,y)\varphi \right]\,dx\,dy\,+\int _{C}\left[{\frac {1}{2}}\sigma (s)\varphi ^{2}+g(s)\varphi \right]\,ds.} This corresponds to an external force density f ( x , y ) {\displaystyle f(x,y)} in D , {\displaystyle D,} an external force g ( s ) {\displaystyle g(s)} on the boundary C , {\displaystyle C,} and elastic forces with modulus σ ( s ) {\displaystyle \sigma (s)} acting on C {\displaystyle C} . The function that minimizes the potential energy with no restriction on its boundary values will be denoted by u {\displaystyle u} . Provided that f {\displaystyle f} and g {\displaystyle g} are continuous, regularity theory implies that the minimizing function u {\displaystyle u} will have two derivatives. In taking the first variation, no boundary condition need be imposed on the increment v {\displaystyle v} . The first variation of V [ u + ε v ] {\displaystyle V[u+\varepsilon v]} is given by ∬ D [ ∇ u ⋅ ∇ v + f v ] d x d y + ∫ C [ σ u v + g v ] d s = 0. {\displaystyle \iint _{D}\left[\nabla u\cdot \nabla v+fv\right]\,dx\,dy+\int _{C}\left[\sigma uv+gv\right]\,ds=0.} If we apply the divergence theorem, the result is ∬ D [ − v ∇ ⋅ ∇ u + v f ] d x d y + ∫ C v [ ∂ u ∂ n + σ u + g ] d s = 0. {\displaystyle \iint _{D}\left[-v\nabla \cdot \nabla u+vf\right]\,dx\,dy+\int _{C}v\left[{\frac {\partial u}{\partial n}}+\sigma u+g\right]\,ds=0.} If we first set v = 0 {\displaystyle v=0} on C , {\displaystyle C,} the boundary integral vanishes, and we conclude as before that − ∇ ⋅ ∇ u + f = 0 {\displaystyle -\nabla \cdot \nabla u+f=0} in D {\displaystyle D} . Then if we allow v {\displaystyle v} to assume arbitrary boundary values, this implies that u {\displaystyle u} must satisfy the boundary condition ∂ u ∂ n + σ u + g = 0 , {\displaystyle {\frac {\partial u}{\partial n}}+\sigma u+g=0,} on C {\displaystyle C} . This boundary condition is a consequence of the minimizing property of u {\displaystyle u} : it is not imposed beforehand. Such conditions are called natural boundary conditions. The preceding reasoning is not valid if σ {\displaystyle \sigma } vanishes identically on C . {\displaystyle C.} In such a case, we could allow a trial function φ ≡ c {\displaystyle \varphi \equiv c} , where c {\displaystyle c} is a constant. For such a trial function, V [ c ] = c [ ∬ D f d x d y + ∫ C g d s ] . {\displaystyle V[c]=c\left[\iint _{D}f\,dx\,dy+\int _{C}g\,ds\right].} By appropriate choice of c {\displaystyle c} , V {\displaystyle V} can assume any value unless the quantity inside the brackets vanishes. Therefore, the variational problem is meaningless unless ∬ D f d x d y + ∫ C g d s = 0. {\displaystyle \iint _{D}f\,dx\,dy+\int _{C}g\,ds=0.} This condition implies that net external forces on the system are in equilibrium. If these forces are in equilibrium, then the variational problem has a solution, but it is not unique, since an arbitrary constant may be added. Further details and examples are in Courant and Hilbert (1953). 
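To make the preceding discussion of Dirichlet's principle concrete, the following short Python sketch minimizes a discretized Dirichlet energy on a square grid by simple relaxation; the grid size, boundary data, and number of sweeps are illustrative assumptions, not anything prescribed above. Setting the gradient of the discrete energy to zero yields the five-point Laplace equation, so repeatedly replacing each interior value by the average of its neighbours drives the energy toward its minimum.

```python
import numpy as np

# Minimal sketch (illustrative grid and boundary values): minimize the discrete
# Dirichlet energy V[phi] ~ (1/2) * sum |grad phi|^2 with phi fixed on the boundary.
n = 50
phi = np.zeros((n, n))
phi[0, :] = 1.0          # prescribed boundary values: 1 on one edge, 0 elsewhere

for _ in range(5000):    # relaxation sweeps: each interior point becomes the
    interior = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +      # average of its
                       phi[1:-1, :-2] + phi[1:-1, 2:])       # four neighbours
    phi[1:-1, 1:-1] = interior

# The discrete Laplacian of the result is nearly zero in the interior, the
# discrete analogue of the Euler-Lagrange equation div(grad u) = 0 derived above.
lap = (phi[:-2, 1:-1] + phi[2:, 1:-1] + phi[1:-1, :-2] + phi[1:-1, 2:]
       - 4.0 * phi[1:-1, 1:-1])
print(np.max(np.abs(lap)))   # small residual after enough sweeps
```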
== Eigenvalue problems == Both one-dimensional and multi-dimensional eigenvalue problems can be formulated as variational problems. === Sturm–Liouville problems === The Sturm–Liouville eigenvalue problem involves a general quadratic form Q [ y ] = ∫ x 1 x 2 [ p ( x ) y ′ ( x ) 2 + q ( x ) y ( x ) 2 ] d x , {\displaystyle Q[y]=\int _{x_{1}}^{x_{2}}\left[p(x)y'(x)^{2}+q(x)y(x)^{2}\right]\,dx,} where y {\displaystyle y} is restricted to functions that satisfy the boundary conditions y ( x 1 ) = 0 , y ( x 2 ) = 0. {\displaystyle y(x_{1})=0,\quad y(x_{2})=0.} Let R {\displaystyle R} be a normalization integral R [ y ] = ∫ x 1 x 2 r ( x ) y ( x ) 2 d x . {\displaystyle R[y]=\int _{x_{1}}^{x_{2}}r(x)y(x)^{2}\,dx.} The functions p ( x ) {\displaystyle p(x)} and r ( x ) {\displaystyle r(x)} are required to be everywhere positive and bounded away from zero. The primary variational problem is to minimize the ratio Q / R {\displaystyle Q/R} among all y {\displaystyle y} satisfying the endpoint conditions, which is equivalent to minimizing Q [ y ] {\displaystyle Q[y]} under the constraint that R [ y ] {\displaystyle R[y]} is constant. It is shown below that the Euler–Lagrange equation for the minimizing u {\displaystyle u} is − ( p u ′ ) ′ + q u − λ r u = 0 , {\displaystyle -(pu')'+qu-\lambda ru=0,} where λ {\displaystyle \lambda } is the quotient λ = Q [ u ] R [ u ] . {\displaystyle \lambda ={\frac {Q[u]}{R[u]}}.} It can be shown (see Gelfand and Fomin 1963) that the minimizing u {\displaystyle u} has two derivatives and satisfies the Euler–Lagrange equation. The associated λ {\displaystyle \lambda } will be denoted by λ 1 {\displaystyle \lambda _{1}} ; it is the lowest eigenvalue for this equation and boundary conditions. The associated minimizing function will be denoted by u 1 ( x ) {\displaystyle u_{1}(x)} . This variational characterization of eigenvalues leads to the Rayleigh–Ritz method: choose an approximating u {\displaystyle u} as a linear combination of basis functions (for example trigonometric functions) and carry out a finite-dimensional minimization among such linear combinations. This method is often surprisingly accurate. The next smallest eigenvalue and eigenfunction can be obtained by minimizing Q {\displaystyle Q} under the additional constraint ∫ x 1 x 2 r ( x ) u 1 ( x ) y ( x ) d x = 0. {\displaystyle \int _{x_{1}}^{x_{2}}r(x)u_{1}(x)y(x)\,dx=0.} This procedure can be extended to obtain the complete sequence of eigenvalues and eigenfunctions for the problem. The variational problem also applies to more general boundary conditions. Instead of requiring that y {\displaystyle y} vanish at the endpoints, we may not impose any condition at the endpoints, and set Q [ y ] = ∫ x 1 x 2 [ p ( x ) y ′ ( x ) 2 + q ( x ) y ( x ) 2 ] d x + a 1 y ( x 1 ) 2 + a 2 y ( x 2 ) 2 , {\displaystyle Q[y]=\int _{x_{1}}^{x_{2}}\left[p(x)y'(x)^{2}+q(x)y(x)^{2}\right]\,dx+a_{1}y(x_{1})^{2}+a_{2}y(x_{2})^{2},} where a 1 {\displaystyle a_{1}} and a 2 {\displaystyle a_{2}} are arbitrary. 
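Before continuing with the first variation for these more general boundary conditions, here is a small numerical sketch of the Rayleigh–Ritz idea described above. It estimates the lowest eigenvalue of −u″ = λu on (0, π) with u(0) = u(π) = 0 (that is, p = r = 1 and q = 0 in the Sturm–Liouville form), whose exact value is λ₁ = 1; the polynomial trial functions x^k(π − x) are an illustrative choice of basis, not one taken from the text.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.integrate import quad

L = np.pi
# Trial functions vanishing at both endpoints, and their derivatives.
basis = [lambda x, k=k: x**k * (L - x) for k in range(1, 5)]
dbasis = [lambda x, k=k: k * x**(k - 1) * (L - x) - x**k for k in range(1, 5)]

n = len(basis)
Q = np.zeros((n, n))   # Q_ij = integral of y_i' * y_j'   (here p = 1, q = 0)
R = np.zeros((n, n))   # R_ij = integral of y_i * y_j      (here r = 1)
for i in range(n):
    for j in range(n):
        Q[i, j] = quad(lambda x: dbasis[i](x) * dbasis[j](x), 0, L)[0]
        R[i, j] = quad(lambda x: basis[i](x) * basis[j](x), 0, L)[0]

# Minimizing Q[y]/R[y] over the span of the basis is a generalized eigenproblem.
eigenvalues = eigh(Q, R, eigvals_only=True)
print(eigenvalues[0])   # close to the exact lowest eigenvalue 1
```

Even this handful of polynomial trial functions reproduces λ₁ to several decimal places, which is the sense in which the Rayleigh–Ritz method is "often surprisingly accurate".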
If we set y = u + ε v {\displaystyle y=u+\varepsilon v} , the first variation for the ratio Q / R {\displaystyle Q/R} is V 1 = 2 R [ u ] ( ∫ x 1 x 2 [ p ( x ) u ′ ( x ) v ′ ( x ) + q ( x ) u ( x ) v ( x ) − λ r ( x ) u ( x ) v ( x ) ] d x + a 1 u ( x 1 ) v ( x 1 ) + a 2 u ( x 2 ) v ( x 2 ) ) , {\displaystyle V_{1}={\frac {2}{R[u]}}\left(\int _{x_{1}}^{x_{2}}\left[p(x)u'(x)v'(x)+q(x)u(x)v(x)-\lambda r(x)u(x)v(x)\right]\,dx+a_{1}u(x_{1})v(x_{1})+a_{2}u(x_{2})v(x_{2})\right),} where λ {\displaystyle \lambda } is given by the ratio Q [ u ] / R [ u ] {\displaystyle Q[u]/R[u]} as previously. After integration by parts, R [ u ] 2 V 1 = ∫ x 1 x 2 v ( x ) [ − ( p u ′ ) ′ + q u − λ r u ] d x + v ( x 1 ) [ − p ( x 1 ) u ′ ( x 1 ) + a 1 u ( x 1 ) ] + v ( x 2 ) [ p ( x 2 ) u ′ ( x 2 ) + a 2 u ( x 2 ) ] . {\displaystyle {\frac {R[u]}{2}}V_{1}=\int _{x_{1}}^{x_{2}}v(x)\left[-(pu')'+qu-\lambda ru\right]\,dx+v(x_{1})[-p(x_{1})u'(x_{1})+a_{1}u(x_{1})]+v(x_{2})[p(x_{2})u'(x_{2})+a_{2}u(x_{2})].} If we first require that v {\displaystyle v} vanish at the endpoints, the first variation will vanish for all such v {\displaystyle v} only if − ( p u ′ ) ′ + q u − λ r u = 0 for x 1 < x < x 2 . {\displaystyle -(pu')'+qu-\lambda ru=0\quad {\hbox{for}}\quad x_{1}<x<x_{2}.} If u {\displaystyle u} satisfies this condition, then the first variation will vanish for arbitrary v {\displaystyle v} only if − p ( x 1 ) u ′ ( x 1 ) + a 1 u ( x 1 ) = 0 , and p ( x 2 ) u ′ ( x 2 ) + a 2 u ( x 2 ) = 0. {\displaystyle -p(x_{1})u'(x_{1})+a_{1}u(x_{1})=0,\quad {\hbox{and}}\quad p(x_{2})u'(x_{2})+a_{2}u(x_{2})=0.} These latter conditions are the natural boundary conditions for this problem, since they are not imposed on trial functions for the minimization, but are instead a consequence of the minimization. === Eigenvalue problems in several dimensions === Eigenvalue problems in higher dimensions are defined in analogy with the one-dimensional case. For example, given a domain D {\displaystyle D} with boundary B {\displaystyle B} in three dimensions we may define Q [ φ ] = ∭ D p ( X ) ∇ φ ⋅ ∇ φ + q ( X ) φ 2 d x d y d z + ∬ B σ ( S ) φ 2 d S , {\displaystyle Q[\varphi ]=\iiint _{D}p(X)\nabla \varphi \cdot \nabla \varphi +q(X)\varphi ^{2}\,dx\,dy\,dz+\iint _{B}\sigma (S)\varphi ^{2}\,dS,} and R [ φ ] = ∭ D r ( X ) φ ( X ) 2 d x d y d z . {\displaystyle R[\varphi ]=\iiint _{D}r(X)\varphi (X)^{2}\,dx\,dy\,dz.} Let u {\displaystyle u} be the function that minimizes the quotient Q [ φ ] / R [ φ ] {\displaystyle Q[\varphi ]/R[\varphi ]} , with no condition prescribed on the boundary B . {\displaystyle B.} The Euler–Lagrange equation satisfied by u {\displaystyle u} is − ∇ ⋅ ( p ( X ) ∇ u ) + q ( x ) u − λ r ( x ) u = 0 , {\displaystyle -\nabla \cdot (p(X)\nabla u)+q(x)u-\lambda r(x)u=0,} where λ = Q [ u ] R [ u ] . {\displaystyle \lambda ={\frac {Q[u]}{R[u]}}.} The minimizing u {\displaystyle u} must also satisfy the natural boundary condition p ( S ) ∂ u ∂ n + σ ( S ) u = 0 , {\displaystyle p(S){\frac {\partial u}{\partial n}}+\sigma (S)u=0,} on the boundary B . {\displaystyle B.} This result depends upon the regularity theory for elliptic partial differential equations; see Jost and Li–Jost (1998) for details. Many extensions, including completeness results, asymptotic properties of the eigenvalues and results concerning the nodes of the eigenfunctions are in Courant and Hilbert (1953). == Applications == === Optics === Fermat's principle states that light takes a path that (locally) minimizes the optical length between its endpoints. 
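As a quick numerical illustration of this principle (the general derivation follows below), the sketch assumes an interface at x = 0 separating two constant refractive indices, minimizes the optical path length over the point where the ray crosses the interface, and checks that the minimizer satisfies Snell's law; the endpoints and index values are arbitrary assumed data.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Assumed setup: light travels from A = (-1, 0) to B = (1, 1); the index is
# n_minus for x < 0 and n_plus for x > 0, and the ray is straight in each
# half-plane, crossing the interface x = 0 at height y.
n_minus, n_plus = 1.0, 1.5

def optical_length(y):
    return (n_minus * np.hypot(1.0, y) +          # segment from A to (0, y)
            n_plus * np.hypot(1.0, 1.0 - y))      # segment from (0, y) to B

y_star = minimize_scalar(optical_length, bounds=(0.0, 1.0), method="bounded").x

# Snell's law: n_minus * sin(theta_in) = n_plus * sin(theta_out), where the
# sines are slope / sqrt(1 + slope^2), angles measured from the x axis (the
# normal to the interface), as in the derivation that follows.
sin_in = y_star / np.hypot(1.0, y_star)
sin_out = (1.0 - y_star) / np.hypot(1.0, 1.0 - y_star)
print(n_minus * sin_in, n_plus * sin_out)   # the two values agree at the minimum
```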
If the x {\displaystyle x} -coordinate is chosen as the parameter along the path, and y = f ( x ) {\displaystyle y=f(x)} along the path, then the optical length is given by A [ f ] = ∫ x 0 x 1 n ( x , f ( x ) ) 1 + f ′ ( x ) 2 d x , {\displaystyle A[f]=\int _{x_{0}}^{x_{1}}n(x,f(x)){\sqrt {1+f'(x)^{2}}}dx,} where the refractive index n ( x , y ) {\displaystyle n(x,y)} depends upon the material. If we try f ( x ) = f 0 ( x ) + ε f 1 ( x ) {\displaystyle f(x)=f_{0}(x)+\varepsilon f_{1}(x)} then the first variation of A {\displaystyle A} (the derivative of A {\displaystyle A} with respect to ε {\displaystyle \varepsilon } ) is δ A [ f 0 , f 1 ] = ∫ x 0 x 1 [ n ( x , f 0 ) f 0 ′ ( x ) f 1 ′ ( x ) 1 + f 0 ′ ( x ) 2 + n y ( x , f 0 ) f 1 1 + f 0 ′ ( x ) 2 ] d x . {\displaystyle \delta A[f_{0},f_{1}]=\int _{x_{0}}^{x_{1}}\left[{\frac {n(x,f_{0})f_{0}'(x)f_{1}'(x)}{\sqrt {1+f_{0}'(x)^{2}}}}+n_{y}(x,f_{0})f_{1}{\sqrt {1+f_{0}'(x)^{2}}}\right]dx.} After integration by parts of the first term within brackets, we obtain the Euler–Lagrange equation − d d x [ n ( x , f 0 ) f 0 ′ 1 + f 0 ′ 2 ] + n y ( x , f 0 ) 1 + f 0 ′ ( x ) 2 = 0. {\displaystyle -{\frac {d}{dx}}\left[{\frac {n(x,f_{0})f_{0}'}{\sqrt {1+f_{0}'^{2}}}}\right]+n_{y}(x,f_{0}){\sqrt {1+f_{0}'(x)^{2}}}=0.} The light rays may be determined by integrating this equation. This formalism is used in the context of Lagrangian optics and Hamiltonian optics. ==== Snell's law ==== There is a discontinuity of the refractive index when light enters or leaves a lens. Let n ( x , y ) = { n ( − ) if x < 0 , n ( + ) if x > 0 , {\displaystyle n(x,y)={\begin{cases}n_{(-)}&{\text{if}}\quad x<0,\\n_{(+)}&{\text{if}}\quad x>0,\end{cases}}} where n ( − ) {\displaystyle n_{(-)}} and n ( + ) {\displaystyle n_{(+)}} are constants. Then the Euler–Lagrange equation holds as before in the region where x < 0 {\displaystyle x<0} or x > 0 {\displaystyle x>0} , and in fact the path is a straight line there, since the refractive index is constant. At the x = 0 {\displaystyle x=0} , f {\displaystyle f} must be continuous, but f ′ {\displaystyle f'} may be discontinuous. After integration by parts in the separate regions and using the Euler–Lagrange equations, the first variation takes the form δ A [ f 0 , f 1 ] = f 1 ( 0 ) [ n ( − ) f 0 ′ ( 0 − ) 1 + f 0 ′ ( 0 − ) 2 − n ( + ) f 0 ′ ( 0 + ) 1 + f 0 ′ ( 0 + ) 2 ] . {\displaystyle \delta A[f_{0},f_{1}]=f_{1}(0)\left[n_{(-)}{\frac {f_{0}'(0^{-})}{\sqrt {1+f_{0}'(0^{-})^{2}}}}-n_{(+)}{\frac {f_{0}'(0^{+})}{\sqrt {1+f_{0}'(0^{+})^{2}}}}\right].} The factor multiplying n ( − ) {\displaystyle n_{(-)}} is the sine of angle of the incident ray with the x {\displaystyle x} axis, and the factor multiplying n ( + ) {\displaystyle n_{(+)}} is the sine of angle of the refracted ray with the x {\displaystyle x} axis. Snell's law for refraction requires that these terms be equal. As this calculation demonstrates, Snell's law is equivalent to vanishing of the first variation of the optical path length. ==== Fermat's principle in three dimensions ==== It is expedient to use vector notation: let X = ( x 1 , x 2 , x 3 ) , {\displaystyle X=(x_{1},x_{2},x_{3}),} let t {\displaystyle t} be a parameter, let X ( t ) {\displaystyle X(t)} be the parametric representation of a curve C , {\displaystyle C,} and let X ˙ ( t ) {\displaystyle {\dot {X}}(t)} be its tangent vector. The optical length of the curve is given by A [ C ] = ∫ t 0 t 1 n ( X ) X ˙ ⋅ X ˙ d t . 
{\displaystyle A[C]=\int _{t_{0}}^{t_{1}}n(X){\sqrt {{\dot {X}}\cdot {\dot {X}}}}\,dt.} Note that this integral is invariant with respect to changes in the parametric representation of C . {\displaystyle C.} The Euler–Lagrange equations for a minimizing curve have the symmetric form d d t P = X ˙ ⋅ X ˙ ∇ n , {\displaystyle {\frac {d}{dt}}P={\sqrt {{\dot {X}}\cdot {\dot {X}}}}\,\nabla n,} where P = n ( X ) X ˙ X ˙ ⋅ X ˙ . {\displaystyle P={\frac {n(X){\dot {X}}}{\sqrt {{\dot {X}}\cdot {\dot {X}}}}}.} It follows from the definition that P {\displaystyle P} satisfies P ⋅ P = n ( X ) 2 . {\displaystyle P\cdot P=n(X)^{2}.} Therefore, the integral may also be written as A [ C ] = ∫ t 0 t 1 P ⋅ X ˙ d t . {\displaystyle A[C]=\int _{t_{0}}^{t_{1}}P\cdot {\dot {X}}\,dt.} This form suggests that if we can find a function ψ {\displaystyle \psi } whose gradient is given by P , {\displaystyle P,} then the integral A {\displaystyle A} is given by the difference of ψ {\displaystyle \psi } at the endpoints of the interval of integration. Thus the problem of studying the curves that make the integral stationary can be related to the study of the level surfaces of ψ {\displaystyle \psi } . In order to find such a function, we turn to the wave equation, which governs the propagation of light. This formalism is used in the context of Lagrangian optics and Hamiltonian optics. ===== Connection with the wave equation ===== The wave equation for an inhomogeneous medium is u t t = c 2 ∇ ⋅ ∇ u , {\displaystyle u_{tt}=c^{2}\nabla \cdot \nabla u,} where c {\displaystyle c} is the velocity, which generally depends upon X {\displaystyle X} . Wave fronts for light are characteristic surfaces for this partial differential equation: they satisfy φ t 2 = c ( X ) 2 ∇ φ ⋅ ∇ φ . {\displaystyle \varphi _{t}^{2}=c(X)^{2}\,\nabla \varphi \cdot \nabla \varphi .} We may look for solutions in the form φ ( t , X ) = t − ψ ( X ) . {\displaystyle \varphi (t,X)=t-\psi (X).} In that case, ψ {\displaystyle \psi } satisfies ∇ ψ ⋅ ∇ ψ = n 2 , {\displaystyle \nabla \psi \cdot \nabla \psi =n^{2},} where n = 1 / c {\displaystyle n=1/c} . According to the theory of first-order partial differential equations, if P = ∇ ψ , {\displaystyle P=\nabla \psi ,} then P {\displaystyle P} satisfies d P d s = n ∇ n , {\displaystyle {\frac {dP}{ds}}=n\,\nabla n,} along a system of curves (the light rays) that are given by d X d s = P . {\displaystyle {\frac {dX}{ds}}=P.} These equations for solution of a first-order partial differential equation are identical to the Euler–Lagrange equations if we make the identification d s d t = X ˙ ⋅ X ˙ n . {\displaystyle {\frac {ds}{dt}}={\frac {\sqrt {{\dot {X}}\cdot {\dot {X}}}}{n}}.} We conclude that the function ψ {\displaystyle \psi } is the value of the minimizing integral A {\displaystyle A} as a function of the upper end point. That is, when a family of minimizing curves is constructed, the values of the optical length satisfy the characteristic equation corresponding the wave equation. Hence, solving the associated partial differential equation of first order is equivalent to finding families of solutions of the variational problem. This is the essential content of the Hamilton–Jacobi theory, which applies to more general variational problems. === Mechanics === In classical mechanics, the action, S , {\displaystyle S,} is defined as the time integral of the Lagrangian, L {\displaystyle L} . 
The Lagrangian is the difference of energies, L = T − U , {\displaystyle L=T-U,} where T {\displaystyle T} is the kinetic energy of a mechanical system and U {\displaystyle U} its potential energy. Hamilton's principle (or the action principle) states that the motion of a conservative holonomic (integrable constraints) mechanical system is such that the action integral S = ∫ t 0 t 1 L ( x , x ˙ , t ) d t {\displaystyle S=\int _{t_{0}}^{t_{1}}L(x,{\dot {x}},t)\,dt} is stationary with respect to variations in the path x ( t ) {\displaystyle x(t)} . The Euler–Lagrange equations for this system are known as Lagrange's equations: d d t ∂ L ∂ x ˙ = ∂ L ∂ x , {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {x}}}}={\frac {\partial L}{\partial x}},} and they are equivalent to Newton's equations of motion (for such systems); a brief symbolic illustration is given after the list of further applications below. The conjugate momenta p {\displaystyle p} are defined by p = ∂ L ∂ x ˙ . {\displaystyle p={\frac {\partial L}{\partial {\dot {x}}}}.} For example, if T = 1 2 m x ˙ 2 , {\displaystyle T={\frac {1}{2}}m{\dot {x}}^{2},} then p = m x ˙ . {\displaystyle p=m{\dot {x}}.} Hamiltonian mechanics results if the conjugate momenta are introduced in place of x ˙ {\displaystyle {\dot {x}}} by a Legendre transformation of the Lagrangian L {\displaystyle L} into the Hamiltonian H {\displaystyle H} defined by H ( x , p , t ) = p x ˙ − L ( x , x ˙ , t ) . {\displaystyle H(x,p,t)=p\,{\dot {x}}-L(x,{\dot {x}},t).} The Hamiltonian is the total energy of the system: H = T + U {\displaystyle H=T+U} . Analogy with Fermat's principle suggests that solutions of Lagrange's equations (the particle trajectories) may be described in terms of level surfaces of some function of X {\displaystyle X} . This function is a solution of the Hamilton–Jacobi equation: ∂ ψ ∂ t + H ( x , ∂ ψ ∂ x , t ) = 0. {\displaystyle {\frac {\partial \psi }{\partial t}}+H\left(x,{\frac {\partial \psi }{\partial x}},t\right)=0.} === Further applications === Further applications of the calculus of variations include the following: the derivation of the catenary shape; the solution to Newton's minimal resistance problem; the solution to the brachistochrone problem; the solution to the tautochrone problem; the solution to isoperimetric problems; calculating geodesics; finding minimal surfaces and solving Plateau's problem; optimal control; analytical mechanics, or reformulations of Newton's laws of motion, most notably Lagrangian and Hamiltonian mechanics; geometric optics, especially Lagrangian and Hamiltonian optics; the variational method (quantum mechanics), one way of finding approximations to the lowest energy eigenstate or ground state, and some excited states; variational Bayesian methods, a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning; variational methods in general relativity, a family of techniques using calculus of variations to solve problems in Einstein's general theory of relativity; the finite element method, a variational method for finding numerical solutions to boundary-value problems in differential equations; and total variation denoising, an image processing method for filtering high variance or noisy signals. == Variations and sufficient condition for a minimum == Calculus of variations is concerned with variations of functionals, which are small changes in the functional's value due to small changes in the function that is its argument. 
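Before the formal definitions of the first and second variation below, here is the symbolic sketch of the mechanics application promised above: Hamilton's principle applied to a one-dimensional harmonic oscillator, an illustrative choice of Lagrangian rather than one taken from the text, recovering Newton's equation of motion, the conjugate momentum, and the Hamiltonian.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

# Illustrative Lagrangian: L = T - U = (1/2) m xdot^2 - (1/2) k x^2.
t, m, k = sp.symbols('t m k', positive=True)
x = sp.Function('x')(t)
L = sp.Rational(1, 2) * m * sp.diff(x, t)**2 - sp.Rational(1, 2) * k * x**2

# Lagrange's equation d/dt (dL/dxdot) = dL/dx, i.e. m x'' + k x = 0:
print(euler_equations(L, x, t))

# Conjugate momentum and Hamiltonian via the Legendre transformation,
# treating the velocity as an independent symbol:
xdot = sp.Symbol('xdot')
L_v = L.subs(sp.diff(x, t), xdot)
p = sp.diff(L_v, xdot)              # p = m * xdot
H = sp.simplify(p * xdot - L_v)     # H = (1/2) m xdot^2 + (1/2) k x^2 = T + U
print(p, H)
```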
The first variation is defined as the linear part of the change in the functional, and the second variation is defined as the quadratic part. For example, if J [ y ] {\displaystyle J[y]} is a functional with the function y = y ( x ) {\displaystyle y=y(x)} as its argument, and there is a small change in its argument from y {\displaystyle y} to y + h , {\displaystyle y+h,} where h = h ( x ) {\displaystyle h=h(x)} is a function in the same function space as y {\displaystyle y} , then the corresponding change in the functional is Δ J [ h ] = J [ y + h ] − J [ y ] . {\displaystyle \Delta J[h]=J[y+h]-J[y].} The functional J [ y ] {\displaystyle J[y]} is said to be differentiable if Δ J [ h ] = φ [ h ] + ε ‖ h ‖ , {\displaystyle \Delta J[h]=\varphi [h]+\varepsilon \|h\|,} where φ [ h ] {\displaystyle \varphi [h]} is a linear functional, ‖ h ‖ {\displaystyle \|h\|} is the norm of h , {\displaystyle h,} and ε → 0 {\displaystyle \varepsilon \to 0} as ‖ h ‖ → 0. {\displaystyle \|h\|\to 0.} The linear functional φ [ h ] {\displaystyle \varphi [h]} is the first variation of J [ y ] {\displaystyle J[y]} and is denoted by, δ J [ h ] = φ [ h ] . {\displaystyle \delta J[h]=\varphi [h].} The functional J [ y ] {\displaystyle J[y]} is said to be twice differentiable if Δ J [ h ] = φ 1 [ h ] + φ 2 [ h ] + ε ‖ h ‖ 2 , {\displaystyle \Delta J[h]=\varphi _{1}[h]+\varphi _{2}[h]+\varepsilon \|h\|^{2},} where φ 1 [ h ] {\displaystyle \varphi _{1}[h]} is a linear functional (the first variation), φ 2 [ h ] {\displaystyle \varphi _{2}[h]} is a quadratic functional, and ε → 0 {\displaystyle \varepsilon \to 0} as ‖ h ‖ → 0. {\displaystyle \|h\|\to 0.} The quadratic functional φ 2 [ h ] {\displaystyle \varphi _{2}[h]} is the second variation of J [ y ] {\displaystyle J[y]} and is denoted by, δ 2 J [ h ] = φ 2 [ h ] . {\displaystyle \delta ^{2}J[h]=\varphi _{2}[h].} The second variation δ 2 J [ h ] {\displaystyle \delta ^{2}J[h]} is said to be strongly positive if δ 2 J [ h ] ≥ k ‖ h ‖ 2 , {\displaystyle \delta ^{2}J[h]\geq k\|h\|^{2},} for all h {\displaystyle h} and for some constant k > 0 {\displaystyle k>0} . Using the above definitions, especially the definitions of first variation, second variation, and strongly positive, the following sufficient condition for a minimum of a functional can be stated. == See also == == Notes == == References == == Further reading == Benesova, B. and Kruzik, M.: "Weak Lower Semicontinuity of Integral Functionals and Applications". SIAM Review 59(4) (2017), 703–766. Bolza, O.: Lectures on the Calculus of Variations. Chelsea Publishing Company, 1904, available on Digital Mathematics library. 2nd edition republished in 1961, paperback in 2005, ISBN 978-1-4181-8201-4. Cassel, Kevin W.: Variational Methods with Applications in Science and Engineering, Cambridge University Press, 2013. Clegg, J.C.: Calculus of Variations, Interscience Publishers Inc., 1968. Courant, R.: Dirichlet's principle, conformal mapping and minimal surfaces. Interscience, 1950. Dacorogna, Bernard: "Introduction" Introduction to the Calculus of Variations, 3rd edition. 2014, World Scientific Publishing, ISBN 978-1-78326-551-0. Elsgolc, L.E.: Calculus of Variations, Pergamon Press Ltd., 1962. Forsyth, A.R.: Calculus of Variations, Dover, 1960. Fox, Charles: An Introduction to the Calculus of Variations, Dover Publ., 1987. Giaquinta, Mariano; Hildebrandt, Stefan: Calculus of Variations I and II, Springer-Verlag, ISBN 978-3-662-03278-7 and ISBN 978-3-662-06201-2 Jost, J. and X. Li-Jost: Calculus of Variations. 
Cambridge University Press, 1998. Lebedev, L.P. and Cloud, M.J.: The Calculus of Variations and Functional Analysis with Optimal Control and Applications in Mechanics, World Scientific, 2003, pages 1–98. Logan, J. David: Applied Mathematics, 3rd edition. Wiley-Interscience, 2006 Pike, Ralph W. "Chapter 8: Calculus of Variations". Optimization for Engineering Systems. Louisiana State University. Archived from the original on 2007-07-05. Roubicek, T.: "Calculus of variations". Chap.17 in: Mathematical Tools for Physicists. (Ed. M. Grinfeld) J. Wiley, Weinheim, 2014, ISBN 978-3-527-41188-7, pp. 551–588. Sagan, Hans: Introduction to the Calculus of Variations, Dover, 1992. Weinstock, Robert: Calculus of Variations with Applications to Physics and Engineering, Dover, 1974 (reprint of 1952 ed.). == External links == Variational calculus. Encyclopedia of Mathematics. calculus of variations. PlanetMath. Calculus of Variations. MathWorld. Calculus of variations. Example problems. Mathematics - Calculus of Variations and Integral Equations. Lectures on YouTube. Selected papers on Geodesic Fields. Part I, Part II.
Wikipedia/Variational_method
In linear algebra, an eigenvector ( EYE-gən-) or characteristic vector is a vector that has its direction unchanged (or reversed) by a given linear transformation. More precisely, an eigenvector v {\displaystyle \mathbf {v} } of a linear transformation T {\displaystyle T} is scaled by a constant factor λ {\displaystyle \lambda } when the linear transformation is applied to it: T v = λ v {\displaystyle T\mathbf {v} =\lambda \mathbf {v} } . The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor λ {\displaystyle \lambda } (possibly negative). Geometrically, vectors are multi-dimensional quantities with magnitude and direction, often pictured as arrows. A linear transformation rotates, stretches, or shears the vectors upon which it acts. A linear transformation's eigenvectors are those vectors that are only stretched or shrunk, with neither rotation nor shear. The corresponding eigenvalue is the factor by which an eigenvector is stretched or shrunk. If the eigenvalue is negative, the eigenvector's direction is reversed. The eigenvectors and eigenvalues of a linear transformation serve to characterize it, and so they play important roles in all areas where linear algebra is applied, from geology to quantum mechanics. In particular, it is often the case that a system is represented by a linear transformation whose outputs are fed as inputs to the same transformation (feedback). In such an application, the largest eigenvalue is of particular importance, because it governs the long-term behavior of the system after many applications of the linear transformation, and the associated eigenvector is the steady state of the system. == Matrices == For an n × n {\displaystyle n{\times }n} matrix A and a nonzero vector v {\displaystyle \mathbf {v} } of length n {\displaystyle n} , if multiplying A by v {\displaystyle \mathbf {v} } (denoted A v {\displaystyle A\mathbf {v} } ) simply scales v {\displaystyle \mathbf {v} } by a factor λ, where λ is a scalar, then v {\displaystyle \mathbf {v} } is called an eigenvector of A, and λ is the corresponding eigenvalue. This relationship can be expressed as: A v = λ v {\displaystyle A\mathbf {v} =\lambda \mathbf {v} } . Given an n-dimensional vector space and a choice of basis, there is a direct correspondence between linear transformations from the vector space into itself and n-by-n square matrices. Hence, in a finite-dimensional vector space, it is equivalent to define eigenvalues and eigenvectors using either the language of linear transformations, or the language of matrices. == Overview == Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen (cognate with the English word own) for 'proper', 'characteristic', 'own'. Originally used to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization. In essence, an eigenvector v of a linear transformation T is a nonzero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue. This condition can be written as the equation T ( v ) = λ v , {\displaystyle T(\mathbf {v} )=\lambda \mathbf {v} ,} referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar. 
For example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex. The example here, based on the Mona Lisa, provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping. Points in the top half are moved to the right, and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left, and made longer or shorter by the transformation. Points along the horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either. Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be a differential operator like d d x {\displaystyle {\tfrac {d}{dx}}} , in which case the eigenvectors are functions called eigenfunctions that are scaled by that differential operator, such as d d x e λ x = λ e λ x . {\displaystyle {\frac {d}{dx}}e^{\lambda x}=\lambda e^{\lambda x}.} Alternatively, the linear transformation could take the form of an n by n matrix, in which case the eigenvectors are n by 1 matrices. If the linear transformation is expressed in the form of an n by n matrix A, then the eigenvalue equation for a linear transformation above can be rewritten as the matrix multiplication A v = λ v , {\displaystyle A\mathbf {v} =\lambda \mathbf {v} ,} where the eigenvector v is an n by 1 matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix—for example by diagonalizing it. Eigenvalues and eigenvectors give rise to many closely related mathematical concepts, and the prefix eigen- is applied liberally when naming them: The set of all eigenvectors of a linear transformation, each paired with its corresponding eigenvalue, is called the eigensystem of that transformation. The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace, or the characteristic space of T associated with that eigenvalue. If a set of eigenvectors of T forms a basis of the domain of T, then this basis is called an eigenbasis. == History == Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations. In the 18th century, Leonhard Euler studied the rotational motion of a rigid body, and discovered the importance of the principal axes. Joseph-Louis Lagrange realized that the principal axes are the eigenvectors of the inertia matrix. In the early 19th century, Augustin-Louis Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions. Cauchy also coined the term racine caractéristique (characteristic root), for what is now called eigenvalue; his term survives in characteristic equation. 
Later, Joseph Fourier used the work of Lagrange and Pierre-Simon Laplace to solve the heat equation by separation of variables in his 1822 treatise The Analytic Theory of Heat (Théorie analytique de la chaleur). Charles-François Sturm elaborated on Fourier's ideas further, and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues. This was extended by Charles Hermite in 1855 to what are now called Hermitian matrices. Around the same time, Francesco Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle, and Alfred Clebsch found the corresponding result for skew-symmetric matrices. Finally, Karl Weierstrass clarified an important aspect in the stability theory started by Laplace, by realizing that defective matrices can cause instability. In the meantime, Joseph Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory. Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later. At the start of the 20th century, David Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices. He was the first to use the German word eigen, which means "own", to denote eigenvalues and eigenvectors in 1904, though he may have been following a related usage by Hermann von Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is the standard today. The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Richard von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G. F. Francis and Vera Kublanovskaya in 1961. == Eigenvalues and eigenvectors of matrices == Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices. Furthermore, linear transformations over a finite-dimensional vector space can be represented using matrices, which is especially common in numerical and computational applications. Consider n-dimensional vectors that are formed as a list of n scalars, such as the three-dimensional vectors x = [ 1 − 3 4 ] and y = [ − 20 60 − 80 ] . {\displaystyle \mathbf {x} ={\begin{bmatrix}1\\-3\\4\end{bmatrix}}\quad {\mbox{and}}\quad \mathbf {y} ={\begin{bmatrix}-20\\60\\-80\end{bmatrix}}.} These vectors are said to be scalar multiples of each other, or parallel or collinear, if there is a scalar λ such that x = λ y . {\displaystyle \mathbf {x} =\lambda \mathbf {y} .} In this case, λ = − 1 20 {\displaystyle \lambda =-{\frac {1}{20}}} . Now consider the linear transformation of n-dimensional vectors defined by an n by n matrix A, A v = w , {\displaystyle A\mathbf {v} =\mathbf {w} ,} or [ A 11 A 12 ⋯ A 1 n A 21 A 22 ⋯ A 2 n ⋮ ⋮ ⋱ ⋮ A n 1 A n 2 ⋯ A n n ] [ v 1 v 2 ⋮ v n ] = [ w 1 w 2 ⋮ w n ] {\displaystyle {\begin{bmatrix}A_{11}&A_{12}&\cdots &A_{1n}\\A_{21}&A_{22}&\cdots &A_{2n}\\\vdots &\vdots &\ddots &\vdots \\A_{n1}&A_{n2}&\cdots &A_{nn}\\\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\\\vdots \\v_{n}\end{bmatrix}}={\begin{bmatrix}w_{1}\\w_{2}\\\vdots \\w_{n}\end{bmatrix}}} where, for each row, w i = A i 1 v 1 + A i 2 v 2 + ⋯ + A i n v n = ∑ j = 1 n A i j v j . 
{\displaystyle w_{i}=A_{i1}v_{1}+A_{i2}v_{2}+\cdots +A_{in}v_{n}=\sum _{j=1}^{n}A_{ij}v_{j}.} If it occurs that v and w are scalar multiples, that is if A v = w = λ v , {\displaystyle A\mathbf {v} =\mathbf {w} =\lambda \mathbf {v} ,} (1) then v is an eigenvector of the linear transformation A and the scale factor λ is the eigenvalue corresponding to that eigenvector. Equation (1) is the eigenvalue equation for the matrix A. Equation (1) can be stated equivalently as ( A − λ I ) v = 0 , {\displaystyle \left(A-\lambda I\right)\mathbf {v} =\mathbf {0} ,} (2) where I is the n by n identity matrix and 0 is the zero vector. === Eigenvalues and the characteristic polynomial === Equation (2) has a nonzero solution v if and only if the determinant of the matrix (A − λI) is zero. Therefore, the eigenvalues of A are values of λ that satisfy the equation det ( A − λ I ) = 0. {\displaystyle \det(A-\lambda I)=0.} (3) Using the Leibniz formula for determinants, the left-hand side of equation (3) is a polynomial function of the variable λ and the degree of this polynomial is n, the order of the matrix A. Its coefficients depend on the entries of A, except that its term of degree n is always (−1)^n λ^n. This polynomial is called the characteristic polynomial of A. Equation (3) is called the characteristic equation or the secular equation of A. The fundamental theorem of algebra implies that the characteristic polynomial of an n-by-n matrix A, being a polynomial of degree n, can be factored into the product of n linear terms, det ( A − λ I ) = ( λ 1 − λ ) ( λ 2 − λ ) ⋯ ( λ n − λ ) , {\displaystyle \det(A-\lambda I)=(\lambda _{1}-\lambda )(\lambda _{2}-\lambda )\cdots (\lambda _{n}-\lambda ),} (4) where each λi may be real but in general is a complex number. The numbers λ1, λ2, ..., λn, which may not all have distinct values, are roots of the polynomial and are the eigenvalues of A. As a brief example, which is described in more detail in the examples section later, consider the matrix A = [ 2 1 1 2 ] . {\displaystyle A={\begin{bmatrix}2&1\\1&2\end{bmatrix}}.} Taking the determinant of (A − λI), the characteristic polynomial of A is det ( A − λ I ) = | 2 − λ 1 1 2 − λ | = 3 − 4 λ + λ 2 . {\displaystyle \det(A-\lambda I)={\begin{vmatrix}2-\lambda &1\\1&2-\lambda \end{vmatrix}}=3-4\lambda +\lambda ^{2}.} Setting the characteristic polynomial equal to zero, it has roots at λ=1 and λ=3, which are the two eigenvalues of A. The eigenvectors corresponding to each eigenvalue can be found by solving for the components of v in the equation ( A − λ I ) v = 0 {\displaystyle \left(A-\lambda I\right)\mathbf {v} =\mathbf {0} } . In this example, the eigenvectors are any nonzero scalar multiples of v λ = 1 = [ 1 − 1 ] , v λ = 3 = [ 1 1 ] . {\displaystyle \mathbf {v} _{\lambda =1}={\begin{bmatrix}1\\-1\end{bmatrix}},\quad \mathbf {v} _{\lambda =3}={\begin{bmatrix}1\\1\end{bmatrix}}.} If the entries of the matrix A are all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have nonzero imaginary parts. The entries of the corresponding eigenvectors therefore may also have nonzero imaginary parts. Similarly, the eigenvalues may be irrational numbers even if all the entries of A are rational numbers or even if they are all integers. However, if the entries of A are all algebraic numbers, which include the rationals, the eigenvalues must also be algebraic numbers. The non-real roots of a real polynomial with real coefficients can be grouped into pairs of complex conjugates, namely with the two members of each pair having imaginary parts that differ only in sign and the same real part. If the degree is odd, then by the intermediate value theorem at least one of the roots is real. Therefore, any real matrix with odd order has at least one real eigenvalue, whereas a real matrix with even order may not have any real eigenvalues. 
The eigenvectors associated with these complex eigenvalues are also complex and also appear in complex conjugate pairs. === Spectrum of a matrix === The spectrum of a matrix is the list of eigenvalues, repeated according to multiplicity; in an alternative notation the set of eigenvalues with their multiplicities. An important quantity associated with the spectrum is the maximum absolute value of any eigenvalue. This is known as the spectral radius of the matrix. === Algebraic multiplicity === Let λi be an eigenvalue of an n by n matrix A. The algebraic multiplicity μA(λi) of the eigenvalue is its multiplicity as a root of the characteristic polynomial, that is, the largest integer k such that (λ − λi)k divides evenly that polynomial. Suppose a matrix A has dimension n and d ≤ n distinct eigenvalues. Whereas equation (4) factors the characteristic polynomial of A into the product of n linear terms with some terms potentially repeating, the characteristic polynomial can also be written as the product of d terms each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity, det ( A − λ I ) = ( λ 1 − λ ) μ A ( λ 1 ) ( λ 2 − λ ) μ A ( λ 2 ) ⋯ ( λ d − λ ) μ A ( λ d ) . {\displaystyle \det(A-\lambda I)=(\lambda _{1}-\lambda )^{\mu _{A}(\lambda _{1})}(\lambda _{2}-\lambda )^{\mu _{A}(\lambda _{2})}\cdots (\lambda _{d}-\lambda )^{\mu _{A}(\lambda _{d})}.} If d = n then the right-hand side is the product of n linear terms and this is the same as equation (4). The size of each eigenvalue's algebraic multiplicity is related to the dimension n as 1 ≤ μ A ( λ i ) ≤ n , μ A = ∑ i = 1 d μ A ( λ i ) = n . {\displaystyle {\begin{aligned}1&\leq \mu _{A}(\lambda _{i})\leq n,\\\mu _{A}&=\sum _{i=1}^{d}\mu _{A}\left(\lambda _{i}\right)=n.\end{aligned}}} If μA(λi) = 1, then λi is said to be a simple eigenvalue. If μA(λi) equals the geometric multiplicity of λi, γA(λi), defined in the next section, then λi is said to be a semisimple eigenvalue. === Eigenspaces, geometric multiplicity, and the eigenbasis for matrices === Given a particular eigenvalue λ of the n by n matrix A, define the set E to be all vectors v that satisfy equation (2), E = { v : ( A − λ I ) v = 0 } . {\displaystyle E=\left\{\mathbf {v} :\left(A-\lambda I\right)\mathbf {v} =\mathbf {0} \right\}.} On one hand, this set is precisely the kernel or nullspace of the matrix (A − λI). On the other hand, by definition, any nonzero vector that satisfies this condition is an eigenvector of A associated with λ. So, the set E is the union of the zero vector with the set of all eigenvectors of A associated with λ, and E equals the nullspace of (A − λI). E is called the eigenspace or characteristic space of A associated with λ. In general λ is a complex number and the eigenvectors are complex n by 1 matrices. A property of the nullspace is that it is a linear subspace, so E is a linear subspace of C n {\displaystyle \mathbb {C} ^{n}} . Because the eigenspace E is a linear subspace, it is closed under addition. That is, if two vectors u and v belong to the set E, written u, v ∈ E, then (u + v) ∈ E or equivalently A(u + v) = λ(u + v). This can be checked using the distributive property of matrix multiplication. Similarly, because E is a linear subspace, it is closed under scalar multiplication. That is, if v ∈ E and α is a complex number, (αv) ∈ E or equivalently A(αv) = λ(αv). This can be checked by noting that multiplication of complex matrices by complex numbers is commutative. 
As long as u + v and αv are not zero, they are also eigenvectors of A associated with λ. The dimension of the eigenspace E associated with λ, or equivalently the maximum number of linearly independent eigenvectors associated with λ, is referred to as the eigenvalue's geometric multiplicity γ A ( λ ) {\displaystyle \gamma _{A}(\lambda )} . Because E is also the nullspace of (A − λI), the geometric multiplicity of λ is the dimension of the nullspace of (A − λI), also called the nullity of (A − λI), which relates to the dimension and rank of (A − λI) as γ A ( λ ) = n − rank ⁡ ( A − λ I ) . {\displaystyle \gamma _{A}(\lambda )=n-\operatorname {rank} (A-\lambda I).} Because of the definition of eigenvalues and eigenvectors, an eigenvalue's geometric multiplicity must be at least one, that is, each eigenvalue has at least one associated eigenvector. Furthermore, an eigenvalue's geometric multiplicity cannot exceed its algebraic multiplicity. Additionally, recall that an eigenvalue's algebraic multiplicity cannot exceed n. 1 ≤ γ A ( λ ) ≤ μ A ( λ ) ≤ n {\displaystyle 1\leq \gamma _{A}(\lambda )\leq \mu _{A}(\lambda )\leq n} To prove the inequality γ A ( λ ) ≤ μ A ( λ ) {\displaystyle \gamma _{A}(\lambda )\leq \mu _{A}(\lambda )} , consider how the definition of geometric multiplicity implies the existence of γ A ( λ ) {\displaystyle \gamma _{A}(\lambda )} orthonormal eigenvectors v 1 , … , v γ A ( λ ) {\displaystyle {\boldsymbol {v}}_{1},\,\ldots ,\,{\boldsymbol {v}}_{\gamma _{A}(\lambda )}} , such that A v k = λ v k {\displaystyle A{\boldsymbol {v}}_{k}=\lambda {\boldsymbol {v}}_{k}} . We can therefore find a (unitary) matrix V whose first γ A ( λ ) {\displaystyle \gamma _{A}(\lambda )} columns are these eigenvectors, and whose remaining columns can be any orthonormal set of n − γ A ( λ ) {\displaystyle n-\gamma _{A}(\lambda )} vectors orthogonal to these eigenvectors of A. Then V has full rank and is therefore invertible. Evaluating D := V T A V {\displaystyle D:=V^{T}AV} , we get a matrix whose top left block is the diagonal matrix λ I γ A ( λ ) {\displaystyle \lambda I_{\gamma _{A}(\lambda )}} . This can be seen by evaluating what the left-hand side does to the first column basis vectors. By reorganizing and adding − ξ V {\displaystyle -\xi V} on both sides, we get ( A − ξ I ) V = V ( D − ξ I ) {\displaystyle (A-\xi I)V=V(D-\xi I)} since I commutes with V. In other words, A − ξ I {\displaystyle A-\xi I} is similar to D − ξ I {\displaystyle D-\xi I} , and det ( A − ξ I ) = det ( D − ξ I ) {\displaystyle \det(A-\xi I)=\det(D-\xi I)} . But from the definition of D, we know that det ( D − ξ I ) {\displaystyle \det(D-\xi I)} contains a factor ( ξ − λ ) γ A ( λ ) {\displaystyle (\xi -\lambda )^{\gamma _{A}(\lambda )}} , which means that the algebraic multiplicity of λ {\displaystyle \lambda } must satisfy μ A ( λ ) ≥ γ A ( λ ) {\displaystyle \mu _{A}(\lambda )\geq \gamma _{A}(\lambda )} . Suppose A has d ≤ n {\displaystyle d\leq n} distinct eigenvalues λ 1 , … , λ d {\displaystyle \lambda _{1},\ldots ,\lambda _{d}} , where the geometric multiplicity of λ i {\displaystyle \lambda _{i}} is γ A ( λ i ) {\displaystyle \gamma _{A}(\lambda _{i})} . The total geometric multiplicity of A, γ A = ∑ i = 1 d γ A ( λ i ) , d ≤ γ A ≤ n , {\displaystyle {\begin{aligned}\gamma _{A}&=\sum _{i=1}^{d}\gamma _{A}(\lambda _{i}),\\d&\leq \gamma _{A}\leq n,\end{aligned}}} is the dimension of the sum of all the eigenspaces of A's eigenvalues, or equivalently the maximum number of linearly independent eigenvectors of A. 
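The relation γ A ( λ ) = n − rank(A − λI) and the bound γ A ( λ ) ≤ μ A ( λ ) can be checked numerically. The following Python sketch is an editorial illustration; the 2 × 2 defective matrix, the NumPy rank computation and its default tolerance are assumptions made for the example, not part of the surrounding text.

import numpy as np

# Illustrative matrix: the eigenvalue 2 is a double root of the characteristic
# polynomial, but its eigenspace is only one-dimensional (a defective matrix).
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
n = A.shape[0]
lam = 2.0

# Geometric multiplicity: nullity of (A - lam*I), computed as n minus the numerical rank.
gamma = n - np.linalg.matrix_rank(A - lam * np.eye(n))

# Algebraic multiplicity: how many eigenvalues (with repetition) equal lam.
mu = int(np.sum(np.isclose(np.linalg.eigvals(A), lam)))

print(gamma, mu)  # prints 1 2, consistent with 1 <= gamma <= mu <= n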
If γ A = n {\displaystyle \gamma _{A}=n} , then: the direct sum of the eigenspaces of all of A's eigenvalues is the entire vector space C n {\displaystyle \mathbb {C} ^{n}} ; a basis of C n {\displaystyle \mathbb {C} ^{n}} can be formed from n linearly independent eigenvectors of A, and such a basis is called an eigenbasis; and any vector in C n {\displaystyle \mathbb {C} ^{n}} can be written as a linear combination of eigenvectors of A. === Additional properties === Let A {\displaystyle A} be an arbitrary n × n {\displaystyle n\times n} matrix of complex numbers with eigenvalues λ 1 , … , λ n {\displaystyle \lambda _{1},\ldots ,\lambda _{n}} . Each eigenvalue appears μ A ( λ i ) {\displaystyle \mu _{A}(\lambda _{i})} times in this list, where μ A ( λ i ) {\displaystyle \mu _{A}(\lambda _{i})} is the eigenvalue's algebraic multiplicity. The following are properties of this matrix and its eigenvalues: The trace of A {\displaystyle A} , defined as the sum of its diagonal elements, is also the sum of all eigenvalues, tr ⁡ ( A ) = ∑ i = 1 n a i i = ∑ i = 1 n λ i = λ 1 + λ 2 + ⋯ + λ n . {\displaystyle \operatorname {tr} (A)=\sum _{i=1}^{n}a_{ii}=\sum _{i=1}^{n}\lambda _{i}=\lambda _{1}+\lambda _{2}+\cdots +\lambda _{n}.} The determinant of A {\displaystyle A} is the product of all its eigenvalues, det ( A ) = ∏ i = 1 n λ i = λ 1 λ 2 ⋯ λ n . {\displaystyle \det(A)=\prod _{i=1}^{n}\lambda _{i}=\lambda _{1}\lambda _{2}\cdots \lambda _{n}.} The eigenvalues of the k {\displaystyle k} th power of A {\displaystyle A} ; i.e., the eigenvalues of A k {\displaystyle A^{k}} , for any positive integer k {\displaystyle k} , are λ 1 k , … , λ n k {\displaystyle \lambda _{1}^{k},\ldots ,\lambda _{n}^{k}} . The matrix A {\displaystyle A} is invertible if and only if every eigenvalue is nonzero. If A {\displaystyle A} is invertible, then the eigenvalues of A − 1 {\displaystyle A^{-1}} are 1 λ 1 , … , 1 λ n {\textstyle {\frac {1}{\lambda _{1}}},\ldots ,{\frac {1}{\lambda _{n}}}} and each eigenvalue's geometric multiplicity coincides with that of the corresponding eigenvalue of A {\displaystyle A} . Moreover, since the characteristic polynomial of the inverse is the reciprocal polynomial of the original, the eigenvalues share the same algebraic multiplicity. If A {\displaystyle A} is equal to its conjugate transpose A ∗ {\displaystyle A^{*}} , or equivalently if A {\displaystyle A} is Hermitian, then every eigenvalue is real. The same is true of any symmetric real matrix. If A {\displaystyle A} is not only Hermitian but also positive-definite, positive-semidefinite, negative-definite, or negative-semidefinite, then every eigenvalue is positive, non-negative, negative, or non-positive, respectively. If A {\displaystyle A} is unitary, every eigenvalue has absolute value | λ i | = 1 {\displaystyle |\lambda _{i}|=1} . If A {\displaystyle A} is an n × n {\displaystyle n\times n} matrix and { λ 1 , … , λ k } {\displaystyle \{\lambda _{1},\ldots ,\lambda _{k}\}} are its eigenvalues, then the eigenvalues of matrix I + A {\displaystyle I+A} (where I {\displaystyle I} is the identity matrix) are { λ 1 + 1 , … , λ k + 1 } {\displaystyle \{\lambda _{1}+1,\ldots ,\lambda _{k}+1\}} . Moreover, if α ∈ C {\displaystyle \alpha \in \mathbb {C} } , the eigenvalues of α I + A {\displaystyle \alpha I+A} are { λ 1 + α , … , λ k + α } {\displaystyle \{\lambda _{1}+\alpha ,\ldots ,\lambda _{k}+\alpha \}} . More generally, for a polynomial P {\displaystyle P} the eigenvalues of matrix P ( A ) {\displaystyle P(A)} are { P ( λ 1 ) , … , P ( λ k ) } {\displaystyle \{P(\lambda _{1}),\ldots ,P(\lambda _{k})\}} .
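Several of the properties just listed can be verified numerically. The sketch below is an editorial illustration; the random test matrix, the seed, and the use of NumPy's eigvals are assumptions of the example. It checks that the trace is the sum of the eigenvalues, the determinant is their product, and that adding αI shifts every eigenvalue by α.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))        # arbitrary real 4x4 test matrix
lam = np.linalg.eigvals(A)             # eigenvalues, listed with algebraic multiplicity

# Trace = sum of eigenvalues; determinant = product of eigenvalues.
print(np.isclose(np.trace(A), lam.sum()))
print(np.isclose(np.linalg.det(A), lam.prod()))

# Eigenvalues of A + alpha*I are the eigenvalues of A shifted by alpha.
alpha = 0.7
shifted = np.linalg.eigvals(A + alpha * np.eye(4))
print(np.allclose(np.sort_complex(shifted), np.sort_complex(lam + alpha)))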
=== Left and right eigenvectors === Many disciplines traditionally represent vectors as matrices with a single column rather than as matrices with a single row. For that reason, the word "eigenvector" in the context of matrices almost always refers to a right eigenvector, namely a column vector that right multiplies the n × n {\displaystyle n\times n} matrix A {\displaystyle A} in the defining equation, equation (1), A v = λ v . {\displaystyle A\mathbf {v} =\lambda \mathbf {v} .} The eigenvalue and eigenvector problem can also be defined for row vectors that left multiply matrix A {\displaystyle A} . In this formulation, the defining equation is u A = κ u , {\displaystyle \mathbf {u} A=\kappa \mathbf {u} ,} where κ {\displaystyle \kappa } is a scalar and u {\displaystyle u} is a 1 × n {\displaystyle 1\times n} matrix. Any row vector u {\displaystyle u} satisfying this equation is called a left eigenvector of A {\displaystyle A} and κ {\displaystyle \kappa } is its associated eigenvalue. Taking the transpose of this equation, A T u T = κ u T . {\displaystyle A^{\textsf {T}}\mathbf {u} ^{\textsf {T}}=\kappa \mathbf {u} ^{\textsf {T}}.} Comparing this equation to equation (1), it follows immediately that a left eigenvector of A {\displaystyle A} is the same as the transpose of a right eigenvector of A T {\displaystyle A^{\textsf {T}}} , with the same eigenvalue. Furthermore, since the characteristic polynomial of A T {\displaystyle A^{\textsf {T}}} is the same as the characteristic polynomial of A {\displaystyle A} , the left and right eigenvectors of A {\displaystyle A} are associated with the same eigenvalues. === Diagonalization and the eigendecomposition === Suppose the eigenvectors of A form a basis, or equivalently A has n linearly independent eigenvectors v1, v2, ..., vn with associated eigenvalues λ1, λ2, ..., λn. The eigenvalues need not be distinct. Define a square matrix Q whose columns are the n linearly independent eigenvectors of A, Q = [ v 1 v 2 ⋯ v n ] . {\displaystyle Q={\begin{bmatrix}\mathbf {v} _{1}&\mathbf {v} _{2}&\cdots &\mathbf {v} _{n}\end{bmatrix}}.} Since each column of Q is an eigenvector of A, right multiplying A by Q scales each column of Q by its associated eigenvalue, A Q = [ λ 1 v 1 λ 2 v 2 ⋯ λ n v n ] . {\displaystyle AQ={\begin{bmatrix}\lambda _{1}\mathbf {v} _{1}&\lambda _{2}\mathbf {v} _{2}&\cdots &\lambda _{n}\mathbf {v} _{n}\end{bmatrix}}.} With this in mind, define a diagonal matrix Λ where each diagonal element Λii is the eigenvalue associated with the ith column of Q. Then A Q = Q Λ . {\displaystyle AQ=Q\Lambda .} Because the columns of Q are linearly independent, Q is invertible. Right multiplying both sides of the equation by Q−1, A = Q Λ Q − 1 , {\displaystyle A=Q\Lambda Q^{-1},} or by instead left multiplying both sides by Q−1, Q − 1 A Q = Λ . {\displaystyle Q^{-1}AQ=\Lambda .} A can therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. This is called the eigendecomposition and it is a similarity transformation. Such a matrix A is said to be similar to the diagonal matrix Λ or diagonalizable. The matrix Q is the change of basis matrix of the similarity transformation. Essentially, the matrices A and Λ represent the same linear transformation expressed in two different bases. The eigenvectors are used as the basis when representing the linear transformation as Λ. Conversely, suppose a matrix A is diagonalizable. 
Let P be a non-singular square matrix such that P − 1 A P {\displaystyle P^{-1}AP} is some diagonal matrix D. Left multiplying both sides by P, AP = PD. Each column of P must therefore be an eigenvector of A whose eigenvalue is the corresponding diagonal element of D. Since the columns of P must be linearly independent for P to be invertible, there exist n linearly independent eigenvectors of A. It then follows that the eigenvectors of A form a basis if and only if A is diagonalizable. A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed field, any matrix A has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces. === Variational characterization === In the Hermitian case, eigenvalues can be given a variational characterization. The largest eigenvalue of H {\displaystyle H} is the maximum value of the quadratic form x T H x / x T x {\displaystyle \mathbf {x} ^{\textsf {T}}H\mathbf {x} /\mathbf {x} ^{\textsf {T}}\mathbf {x} } . A value of x {\displaystyle \mathbf {x} } that realizes that maximum is an eigenvector. === Matrix examples === ==== Two-dimensional matrix example ==== Consider the matrix A = [ 2 1 1 2 ] . {\displaystyle A={\begin{bmatrix}2&1\\1&2\end{bmatrix}}.} The eigenvectors v of this transformation satisfy equation (1), and the values of λ for which the determinant of the matrix (A − λI) equals zero are the eigenvalues. Taking the determinant to find the characteristic polynomial of A, det ( A − λ I ) = | [ 2 1 1 2 ] − λ [ 1 0 0 1 ] | = | 2 − λ 1 1 2 − λ | = 3 − 4 λ + λ 2 = ( λ − 3 ) ( λ − 1 ) . {\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&1\\1&2\end{bmatrix}}-\lambda {\begin{bmatrix}1&0\\0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &1\\1&2-\lambda \end{vmatrix}}\\[6pt]&=3-4\lambda +\lambda ^{2}\\[6pt]&=(\lambda -3)(\lambda -1).\end{aligned}}} Setting the characteristic polynomial equal to zero, it has roots at λ=1 and λ=3, which are the two eigenvalues of A. For λ=1, equation (2) becomes ( A − I ) v λ = 1 = [ 1 1 1 1 ] [ v 1 v 2 ] = [ 0 0 ] {\displaystyle (A-I)\mathbf {v} _{\lambda =1}={\begin{bmatrix}1&1\\1&1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}} 1 v 1 + 1 v 2 = 0 {\displaystyle 1v_{1}+1v_{2}=0} Any nonzero vector with v1 = −v2 solves this equation. Therefore, v λ = 1 = [ v 1 − v 1 ] = [ 1 − 1 ] {\displaystyle \mathbf {v} _{\lambda =1}={\begin{bmatrix}v_{1}\\-v_{1}\end{bmatrix}}={\begin{bmatrix}1\\-1\end{bmatrix}}} is an eigenvector of A corresponding to λ = 1, as is any scalar multiple of this vector. For λ=3, equation (2) becomes ( A − 3 I ) v λ = 3 = [ − 1 1 1 − 1 ] [ v 1 v 2 ] = [ 0 0 ] − 1 v 1 + 1 v 2 = 0 ; 1 v 1 − 1 v 2 = 0 {\displaystyle {\begin{aligned}(A-3I)\mathbf {v} _{\lambda =3}&={\begin{bmatrix}-1&1\\1&-1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}\\-1v_{1}+1v_{2}&=0;\\1v_{1}-1v_{2}&=0\end{aligned}}} Any nonzero vector with v1 = v2 solves this equation.
Therefore, v λ = 3 = [ v 1 v 1 ] = [ 1 1 ] {\displaystyle \mathbf {v} _{\lambda =3}={\begin{bmatrix}v_{1}\\v_{1}\end{bmatrix}}={\begin{bmatrix}1\\1\end{bmatrix}}} is an eigenvector of A corresponding to λ = 3, as is any scalar multiple of this vector. Thus, the vectors vλ=1 and vλ=3 are eigenvectors of A associated with the eigenvalues λ=1 and λ=3, respectively. ==== Three-dimensional matrix example ==== Consider the matrix A = [ 2 0 0 0 3 4 0 4 9 ] . {\displaystyle A={\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}.} The characteristic polynomial of A is det ( A − λ I ) = | [ 2 0 0 0 3 4 0 4 9 ] − λ [ 1 0 0 0 1 0 0 0 1 ] | = | 2 − λ 0 0 0 3 − λ 4 0 4 9 − λ | , = ( 2 − λ ) [ ( 3 − λ ) ( 9 − λ ) − 16 ] = − λ 3 + 14 λ 2 − 35 λ + 22. {\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}-\lambda {\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &0&0\\0&3-\lambda &4\\0&4&9-\lambda \end{vmatrix}},\\[6pt]&=(2-\lambda ){\bigl [}(3-\lambda )(9-\lambda )-16{\bigr ]}=-\lambda ^{3}+14\lambda ^{2}-35\lambda +22.\end{aligned}}} The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues of A. These eigenvalues correspond to the eigenvectors [ 1 0 0 ] T {\displaystyle {\begin{bmatrix}1&0&0\end{bmatrix}}^{\textsf {T}}} , [ 0 − 2 1 ] T {\displaystyle {\begin{bmatrix}0&-2&1\end{bmatrix}}^{\textsf {T}}} , and [ 0 1 2 ] T {\displaystyle {\begin{bmatrix}0&1&2\end{bmatrix}}^{\textsf {T}}} , or any nonzero multiple thereof. ==== Three-dimensional matrix example with complex eigenvalues ==== Consider the cyclic permutation matrix A = [ 0 1 0 0 0 1 1 0 0 ] . {\displaystyle A={\begin{bmatrix}0&1&0\\0&0&1\\1&0&0\end{bmatrix}}.} This matrix shifts the coordinates of the vector up by one position and moves the first coordinate to the bottom. Its characteristic polynomial is 1 − λ3, whose roots are λ 1 = 1 λ 2 = − 1 2 + i 3 2 λ 3 = λ 2 ∗ = − 1 2 − i 3 2 {\displaystyle {\begin{aligned}\lambda _{1}&=1\\\lambda _{2}&=-{\frac {1}{2}}+i{\frac {\sqrt {3}}{2}}\\\lambda _{3}&=\lambda _{2}^{*}=-{\frac {1}{2}}-i{\frac {\sqrt {3}}{2}}\end{aligned}}} where i {\displaystyle i} is an imaginary unit with i 2 = − 1 {\displaystyle i^{2}=-1} . For the real eigenvalue λ1 = 1, any vector with three equal nonzero entries is an eigenvector. For example, A [ 5 5 5 ] = [ 5 5 5 ] = 1 ⋅ [ 5 5 5 ] . {\displaystyle A{\begin{bmatrix}5\\5\\5\end{bmatrix}}={\begin{bmatrix}5\\5\\5\end{bmatrix}}=1\cdot {\begin{bmatrix}5\\5\\5\end{bmatrix}}.} For the complex conjugate pair of imaginary eigenvalues, λ 2 λ 3 = 1 , λ 2 2 = λ 3 , λ 3 2 = λ 2 . {\displaystyle \lambda _{2}\lambda _{3}=1,\quad \lambda _{2}^{2}=\lambda _{3},\quad \lambda _{3}^{2}=\lambda _{2}.} Then A [ 1 λ 2 λ 3 ] = [ λ 2 λ 3 1 ] = λ 2 ⋅ [ 1 λ 2 λ 3 ] , {\displaystyle A{\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}}={\begin{bmatrix}\lambda _{2}\\\lambda _{3}\\1\end{bmatrix}}=\lambda _{2}\cdot {\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}},} and A [ 1 λ 3 λ 2 ] = [ λ 3 λ 2 1 ] = λ 3 ⋅ [ 1 λ 3 λ 2 ] . 
{\displaystyle A{\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}={\begin{bmatrix}\lambda _{3}\\\lambda _{2}\\1\end{bmatrix}}=\lambda _{3}\cdot {\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}.} Therefore, the other two eigenvectors of A are complex and are v λ 2 = [ 1 λ 2 λ 3 ] T {\displaystyle \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}1&\lambda _{2}&\lambda _{3}\end{bmatrix}}^{\textsf {T}}} and v λ 3 = [ 1 λ 3 λ 2 ] T {\displaystyle \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}1&\lambda _{3}&\lambda _{2}\end{bmatrix}}^{\textsf {T}}} with eigenvalues λ2 and λ3, respectively. The two complex eigenvectors also appear in a complex conjugate pair, v λ 2 = v λ 3 ∗ . {\displaystyle \mathbf {v} _{\lambda _{2}}=\mathbf {v} _{\lambda _{3}}^{*}.} ==== Diagonal matrix example ==== Matrices with entries only along the main diagonal are called diagonal matrices. The eigenvalues of a diagonal matrix are the diagonal elements themselves. Consider the matrix A = [ 1 0 0 0 2 0 0 0 3 ] . {\displaystyle A={\begin{bmatrix}1&0&0\\0&2&0\\0&0&3\end{bmatrix}}.} The characteristic polynomial of A is det ( A − λ I ) = ( 1 − λ ) ( 2 − λ ) ( 3 − λ ) , {\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),} which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A. Each diagonal element corresponds to an eigenvector whose only nonzero component is in the same row as that diagonal element. In the example, the eigenvalues correspond to the eigenvectors, v λ 1 = [ 1 0 0 ] , v λ 2 = [ 0 1 0 ] , v λ 3 = [ 0 0 1 ] , {\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\0\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},} respectively, as well as scalar multiples of these vectors. ==== Triangular matrix example ==== A matrix whose elements above the main diagonal are all zero is called a lower triangular matrix, while a matrix whose elements below the main diagonal are all zero is called an upper triangular matrix. As with diagonal matrices, the eigenvalues of triangular matrices are the elements of the main diagonal. Consider the lower triangular matrix, A = [ 1 0 0 1 2 0 2 3 3 ] . {\displaystyle A={\begin{bmatrix}1&0&0\\1&2&0\\2&3&3\end{bmatrix}}.} The characteristic polynomial of A is det ( A − λ I ) = ( 1 − λ ) ( 2 − λ ) ( 3 − λ ) , {\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),} which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A. These eigenvalues correspond to the eigenvectors, v λ 1 = [ 1 − 1 1 2 ] , v λ 2 = [ 0 1 − 3 ] , v λ 3 = [ 0 0 1 ] , {\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\-1\\{\frac {1}{2}}\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\-3\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},} respectively, as well as scalar multiples of these vectors. ==== Matrix with repeated eigenvalues example ==== As in the previous example, the lower triangular matrix A = [ 2 0 0 0 1 2 0 0 0 1 3 0 0 0 1 3 ] , {\displaystyle A={\begin{bmatrix}2&0&0&0\\1&2&0&0\\0&1&3&0\\0&0&1&3\end{bmatrix}},} has a characteristic polynomial that is the product of its diagonal elements, det ( A − λ I ) = | 2 − λ 0 0 0 1 2 − λ 0 0 0 1 3 − λ 0 0 0 1 3 − λ | = ( 2 − λ ) 2 ( 3 − λ ) 2 . 
{\displaystyle \det(A-\lambda I)={\begin{vmatrix}2-\lambda &0&0&0\\1&2-\lambda &0&0\\0&1&3-\lambda &0\\0&0&1&3-\lambda \end{vmatrix}}=(2-\lambda )^{2}(3-\lambda )^{2}.} The roots of this polynomial, and hence the eigenvalues, are 2 and 3. The algebraic multiplicity of each eigenvalue is 2; in other words they are both double roots. The sum of the algebraic multiplicities of all distinct eigenvalues is μA = 4 = n, the order of the characteristic polynomial and the dimension of A. On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector [ 0 1 − 1 1 ] T {\displaystyle {\begin{bmatrix}0&1&-1&1\end{bmatrix}}^{\textsf {T}}} and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by just one vector [ 0 0 0 1 ] T {\displaystyle {\begin{bmatrix}0&0&0&1\end{bmatrix}}^{\textsf {T}}} . The total geometric multiplicity γA is 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities were defined in an earlier section. === Eigenvector-eigenvalue identity === For a Hermitian matrix, the norm squared of the jth component of a normalized eigenvector can be calculated using only the matrix eigenvalues and the eigenvalues of the corresponding minor matrix, | v i , j | 2 = ∏ k ( λ i − λ k ( M j ) ) ∏ k ≠ i ( λ i − λ k ) , {\displaystyle |v_{i,j}|^{2}={\frac {\prod _{k}{(\lambda _{i}-\lambda _{k}(M_{j}))}}{\prod _{k\neq i}{(\lambda _{i}-\lambda _{k})}}},} where M j {\textstyle M_{j}} is the submatrix formed by removing the jth row and column from the original matrix. This identity also extends to diagonalizable matrices, and has been rediscovered many times in the literature. == Eigenvalues and eigenfunctions of differential operators == The definitions of eigenvalue and eigenvector of a linear transformation T remain valid even if the underlying vector space is an infinite-dimensional Hilbert or Banach space. A widely used class of linear transformations acting on infinite-dimensional spaces are the differential operators on function spaces. Let D be a linear differential operator on the space C∞ of infinitely differentiable real functions of a real argument t. The eigenvalue equation for D is the differential equation D f ( t ) = λ f ( t ) {\displaystyle Df(t)=\lambda f(t)} The functions that satisfy this equation are eigenvectors of D and are commonly called eigenfunctions. === Derivative operator example === Consider the derivative operator d d t {\displaystyle {\tfrac {d}{dt}}} with eigenvalue equation d d t f ( t ) = λ f ( t ) . {\displaystyle {\frac {d}{dt}}f(t)=\lambda f(t).} This differential equation can be solved by multiplying both sides by dt/f(t) and integrating. Its solution, the exponential function f ( t ) = f ( 0 ) e λ t , {\displaystyle f(t)=f(0)e^{\lambda t},} is an eigenfunction of the derivative operator. In this case the eigenfunction is itself a function of its associated eigenvalue. In particular, for λ = 0 the eigenfunction f(t) is a constant. The main eigenfunction article gives other examples. == General definition == The concept of eigenvalues and eigenvectors extends naturally to arbitrary linear transformations on arbitrary vector spaces. Let V be any vector space over some field K of scalars, and let T be a linear transformation mapping V into V, T : V → V .
{\displaystyle T:V\to V.} We say that a nonzero vector v ∈ V is an eigenvector of T if and only if there exists a scalar λ ∈ K such that T ( v ) = λ v . {\displaystyle T(\mathbf {v} )=\lambda \mathbf {v} .} This equation is called the eigenvalue equation for T, and the scalar λ is the eigenvalue of T corresponding to the eigenvector v. T(v) is the result of applying the transformation T to the vector v, while λv is the product of the scalar λ with v. === Eigenspaces, geometric multiplicity, and the eigenbasis === Given an eigenvalue λ, consider the set E = { v : T ( v ) = λ v } , {\displaystyle E=\left\{\mathbf {v} :T(\mathbf {v} )=\lambda \mathbf {v} \right\},} which is the union of the zero vector with the set of all eigenvectors associated with λ. E is called the eigenspace or characteristic space of T associated with λ. By definition of a linear transformation, T ( x + y ) = T ( x ) + T ( y ) , T ( α x ) = α T ( x ) , {\displaystyle {\begin{aligned}T(\mathbf {x} +\mathbf {y} )&=T(\mathbf {x} )+T(\mathbf {y} ),\\T(\alpha \mathbf {x} )&=\alpha T(\mathbf {x} ),\end{aligned}}} for x, y ∈ V and α ∈ K. Therefore, if u and v are eigenvectors of T associated with eigenvalue λ, namely u, v ∈ E, then T ( u + v ) = λ ( u + v ) , T ( α v ) = λ ( α v ) . {\displaystyle {\begin{aligned}T(\mathbf {u} +\mathbf {v} )&=\lambda (\mathbf {u} +\mathbf {v} ),\\T(\alpha \mathbf {v} )&=\lambda (\alpha \mathbf {v} ).\end{aligned}}} So, both u + v and αv are either zero or eigenvectors of T associated with λ, namely u + v, αv ∈ E, and E is closed under addition and scalar multiplication. The eigenspace E associated with λ is therefore a linear subspace of V. If that subspace has dimension 1, it is sometimes called an eigenline. The geometric multiplicity γT(λ) of an eigenvalue λ is the dimension of the eigenspace associated with λ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue. By the definition of eigenvalues and eigenvectors, γT(λ) ≥ 1 because every eigenvalue has at least one eigenvector. The eigenspaces of T always form a direct sum. As a consequence, eigenvectors of different eigenvalues are always linearly independent. Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the vector space on which T operates, and there cannot be more than n distinct eigenvalues. Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable. Moreover, if the entire vector space V can be spanned by the eigenvectors of T, or equivalently if the direct sum of the eigenspaces associated with all the eigenvalues of T is the entire vector space V, then a basis of V called an eigenbasis can be formed from linearly independent eigenvectors of T. When T admits an eigenbasis, T is diagonalizable. === Spectral theory === If λ is an eigenvalue of T, then the operator (T − λI) is not one-to-one, and therefore its inverse ( T − λ I ) − 1 {\displaystyle (T-\lambda I)^{-1}} does not exist. The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional vector spaces. In general, the operator (T − λI) may not have an inverse even if λ is not an eigenvalue. For this reason, in functional analysis eigenvalues can be generalized to the spectrum of a linear operator T as the set of all scalars λ for which the operator (T − λI) has no bounded inverse. The spectrum of an operator always contains all its eigenvalues but is not limited to them.
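A small finite-dimensional illustration of the general definition (an editorial sketch; the choice of polynomial space, the monomial basis, and the NumPy calls are assumptions of the example, not part of the article): on the full space C∞ the derivative operator has every scalar λ as an eigenvalue, with eigenfunction e^(λt), but its restriction to polynomials of degree at most 3 is a linear transformation of a four-dimensional space whose only eigenvalue is 0, with the constant polynomials as eigenspace. For such a finite-dimensional transformation the spectrum consists exactly of its eigenvalues.

import numpy as np

# Matrix of d/dt acting on polynomials a0 + a1*t + a2*t^2 + a3*t^3,
# written in the monomial basis {1, t, t^2, t^3}: d/dt maps t^k to k*t^(k-1).
D = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0, 0.0]])

eigenvalues = np.linalg.eigvals(D)
print(np.allclose(eigenvalues, 0.0))   # True: the only eigenvalue is 0

# The eigenspace for 0 is the kernel of D, spanned by (1, 0, 0, 0) -- the constants;
# its dimension (the geometric multiplicity) is 4 - rank(D) = 1.
print(4 - np.linalg.matrix_rank(D))    # 1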
=== Associative algebras and representation theory === One can generalize the algebraic object that is acting on the vector space, replacing a single operator acting on a vector space with an algebra representation – an associative algebra acting on a module. The study of such actions is the field of representation theory. The representation-theoretical concept of weight is an analog of eigenvalues, while weight vectors and weight spaces are the analogs of eigenvectors and eigenspaces, respectively. Hecke eigensheaf is a tensor-multiple of itself and is considered in Langlands correspondence. == Dynamic equations == The simplest difference equations have the form x t = a 1 x t − 1 + a 2 x t − 2 + ⋯ + a k x t − k . {\displaystyle x_{t}=a_{1}x_{t-1}+a_{2}x_{t-2}+\cdots +a_{k}x_{t-k}.} The solution of this equation for x in terms of t is found by using its characteristic equation λ k − a 1 λ k − 1 − a 2 λ k − 2 − ⋯ − a k − 1 λ − a k = 0 , {\displaystyle \lambda ^{k}-a_{1}\lambda ^{k-1}-a_{2}\lambda ^{k-2}-\cdots -a_{k-1}\lambda -a_{k}=0,} which can be found by stacking into matrix form a set of equations consisting of the above difference equation and the k – 1 equations x t − 1 = x t − 1 , … , x t − k + 1 = x t − k + 1 , {\displaystyle x_{t-1}=x_{t-1},\ \dots ,\ x_{t-k+1}=x_{t-k+1},} giving a k-dimensional system of the first order in the stacked variable vector [ x t ⋯ x t − k + 1 ] {\displaystyle {\begin{bmatrix}x_{t}&\cdots &x_{t-k+1}\end{bmatrix}}} in terms of its once-lagged value, and taking the characteristic equation of this system's matrix. This equation gives k characteristic roots λ 1 , … , λ k , {\displaystyle \lambda _{1},\,\ldots ,\,\lambda _{k},} for use in the solution equation x t = c 1 λ 1 t + ⋯ + c k λ k t . {\displaystyle x_{t}=c_{1}\lambda _{1}^{t}+\cdots +c_{k}\lambda _{k}^{t}.} A similar procedure is used for solving a differential equation of the form d k x d t k + a k − 1 d k − 1 x d t k − 1 + ⋯ + a 1 d x d t + a 0 x = 0. {\displaystyle {\frac {d^{k}x}{dt^{k}}}+a_{k-1}{\frac {d^{k-1}x}{dt^{k-1}}}+\cdots +a_{1}{\frac {dx}{dt}}+a_{0}x=0.} == Calculation == The calculation of eigenvalues and eigenvectors is a topic where theory, as presented in elementary linear algebra textbooks, is often very far from practice. === Classical method === The classical method is to first find the eigenvalues, and then calculate the eigenvectors for each eigenvalue. It is in several ways poorly suited for non-exact arithmetics such as floating-point. ==== Eigenvalues ==== The eigenvalues of a matrix A {\displaystyle A} can be determined by finding the roots of the characteristic polynomial. This is easy for 2 × 2 {\displaystyle 2\times 2} matrices, but the difficulty increases rapidly with the size of the matrix. In theory, the coefficients of the characteristic polynomial can be computed exactly, since they are sums of products of matrix elements; and there are algorithms that can find all the roots of a polynomial of arbitrary degree to any required accuracy. However, this approach is not viable in practice because the coefficients would be contaminated by unavoidable round-off errors, and the roots of a polynomial can be an extremely sensitive function of the coefficients (as exemplified by Wilkinson's polynomial). Even for matrices whose elements are integers the calculation becomes nontrivial, because the sums are very long; the constant term is the determinant, which for an n × n {\displaystyle n\times n} matrix is a sum of n ! {\displaystyle n!} different products. 
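To make the two routes concrete, the sketch below is an illustrative comparison (np.poly and np.roots stand in for "form the characteristic polynomial, then find its roots", and the tiny 2 × 2 matrix is an assumption of the example). For a matrix this small both routes agree; the round-off problems described above become visible only for larger or more delicate cases such as Wilkinson's polynomial.

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Classical route: coefficients of the characteristic polynomial, then its roots.
coeffs = np.poly(A)           # array([ 1., -4.,  3.]), i.e. lambda^2 - 4*lambda + 3
print(np.roots(coeffs))       # roots 3 and 1

# Direct route: a dedicated eigenvalue solver, preferred in floating-point arithmetic.
print(np.linalg.eigvals(A))   # eigenvalues 3 and 1 (ordering may differ)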
Explicit algebraic formulas for the roots of a polynomial exist only if the degree n {\displaystyle n} is 4 or less. According to the Abel–Ruffini theorem there is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more. (Generality matters because any polynomial with degree n {\displaystyle n} is the characteristic polynomial of some companion matrix of order n {\displaystyle n} .) Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximate numerical methods. Even the exact formula for the roots of a degree 3 polynomial is numerically impractical. ==== Eigenvectors ==== Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors can be found by finding nonzero solutions of the eigenvalue equation, that becomes a system of linear equations with known coefficients. For example, once it is known that 6 is an eigenvalue of the matrix A = [ 4 1 6 3 ] {\displaystyle A={\begin{bmatrix}4&1\\6&3\end{bmatrix}}} we can find its eigenvectors by solving the equation A v = 6 v {\displaystyle Av=6v} , that is [ 4 1 6 3 ] [ x y ] = 6 ⋅ [ x y ] {\displaystyle {\begin{bmatrix}4&1\\6&3\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}=6\cdot {\begin{bmatrix}x\\y\end{bmatrix}}} This matrix equation is equivalent to two linear equations { 4 x + y = 6 x 6 x + 3 y = 6 y {\displaystyle \left\{{\begin{aligned}4x+y&=6x\\6x+3y&=6y\end{aligned}}\right.} that is { − 2 x + y = 0 6 x − 3 y = 0 {\displaystyle \left\{{\begin{aligned}-2x+y&=0\\6x-3y&=0\end{aligned}}\right.} Both equations reduce to the single linear equation y = 2 x {\displaystyle y=2x} . Therefore, any vector of the form [ a 2 a ] T {\displaystyle {\begin{bmatrix}a&2a\end{bmatrix}}^{\textsf {T}}} , for any nonzero real number a {\displaystyle a} , is an eigenvector of A {\displaystyle A} with eigenvalue λ = 6 {\displaystyle \lambda =6} . The matrix A {\displaystyle A} above has another eigenvalue λ = 1 {\displaystyle \lambda =1} . A similar calculation shows that the corresponding eigenvectors are the nonzero solutions of 3 x + y = 0 {\displaystyle 3x+y=0} , that is, any vector of the form [ b − 3 b ] T {\displaystyle {\begin{bmatrix}b&-3b\end{bmatrix}}^{\textsf {T}}} , for any nonzero real number b {\displaystyle b} . === Simple iterative methods === The converse approach, of first seeking the eigenvectors and then determining each eigenvalue from its eigenvector, turns out to be far more tractable for computers. The easiest algorithm here consists of picking an arbitrary starting vector and then repeatedly multiplying it with the matrix (optionally normalizing the vector to keep its elements of reasonable size); this makes the vector converge towards an eigenvector. A variation is to instead multiply the vector by ( A − μ I ) − 1 {\displaystyle (A-\mu I)^{-1}} ; this causes it to converge to an eigenvector of the eigenvalue closest to μ ∈ C {\displaystyle \mu \in \mathbb {C} } . If v {\displaystyle \mathbf {v} } is (a good approximation of) an eigenvector of A {\displaystyle A} , then the corresponding eigenvalue can be computed as λ = v ∗ A v v ∗ v {\displaystyle \lambda ={\frac {\mathbf {v} ^{*}A\mathbf {v} }{\mathbf {v} ^{*}\mathbf {v} }}} where v ∗ {\displaystyle \mathbf {v} ^{*}} denotes the conjugate transpose of v {\displaystyle \mathbf {v} } . 
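A minimal sketch of the iterative method just described (an editorial illustration; the starting vector, the fixed iteration count, and the reuse of the 2 × 2 matrix from the example above are assumptions): repeated multiplication by A with normalization drives the vector toward the dominant eigenvector, and the Rayleigh quotient v*Av / v*v then recovers the corresponding eigenvalue.

import numpy as np

A = np.array([[4.0, 1.0],
              [6.0, 3.0]])    # the matrix from the example above; eigenvalues 6 and 1

rng = np.random.default_rng(1)
v = rng.standard_normal(2)    # arbitrary starting vector

for _ in range(50):           # repeatedly multiply by A, normalizing to keep entries bounded
    v = A @ v
    v = v / np.linalg.norm(v)

lam = (v.conj() @ A @ v) / (v.conj() @ v)   # Rayleigh quotient for the converged vector
print(lam)                                   # approximately 6, the dominant eigenvalue
print(v)                                     # approximately proportional to (1, 2)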
=== Modern methods === Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the QR algorithm was designed in 1961. Combining the Householder transformation with the LU decomposition results in an algorithm with better convergence than the QR algorithm. For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities. Most numeric methods that compute the eigenvalues of a matrix also determine a set of corresponding eigenvectors as a by-product of the computation, although sometimes implementors choose to discard the eigenvector information as soon as it is no longer needed. == Applications == === Geometric transformations === Eigenvectors and eigenvalues can be useful for understanding linear transformations of geometric shapes. The following table presents some example transformations in the plane along with their 2×2 matrices, eigenvalues, and eigenvectors. The characteristic equation for a rotation is a quadratic equation with discriminant D = − 4 ( sin ⁡ θ ) 2 {\displaystyle D=-4(\sin \theta )^{2}} , which is a negative number whenever θ is not an integer multiple of 180°. Therefore, except for these special cases, the two eigenvalues are complex numbers, cos ⁡ θ ± i sin ⁡ θ {\displaystyle \cos \theta \pm i\sin \theta } ; and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane. A linear transformation that takes a square to a rectangle of the same area (a squeeze mapping) has reciprocal eigenvalues. === Principal component analysis === The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal component analysis (PCA) in statistics. PCA studies linear relations among variables. PCA is performed on the covariance matrix or the correlation matrix (in which each variable is scaled to have its sample variance equal to one). For the covariance or correlation matrix, the eigenvectors correspond to principal components and the eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthogonal basis for the space of the observed data: In this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data. Principal component analysis is used as a means of dimensionality reduction in the study of large data sets, such as those encountered in bioinformatics. In Q methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing; cf. criteria for determining the number of factors). More generally, principal component analysis can be used as a method of factor analysis in structural equation modeling. 
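A compact sketch of PCA as an eigendecomposition of the sample covariance matrix (an editorial illustration; the synthetic data, the random seed, and the mixing matrix are assumptions of the example; np.linalg.eigh is used because a covariance matrix is symmetric positive semidefinite):

import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: 200 observations of 3 correlated variables.
X = rng.standard_normal((200, 3)) @ np.array([[2.0, 0.0, 0.0],
                                              [1.0, 1.0, 0.0],
                                              [0.5, 0.2, 0.3]])
X = X - X.mean(axis=0)                 # center each variable

C = np.cov(X, rowvar=False)            # sample covariance matrix (symmetric PSD)
evals, evecs = np.linalg.eigh(C)       # eigh returns eigenvalues in ascending order

order = np.argsort(evals)[::-1]        # principal components, largest variance first
evals, evecs = evals[order], evecs[:, order]

print(evals / evals.sum())             # fraction of total variance explained by each component
scores = X @ evecs                     # data expressed in the principal-component basis

Replacing the covariance matrix by the correlation matrix corresponds to rescaling each variable to unit sample variance before the decomposition, as described above.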
=== Graphs === In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A {\displaystyle A} , or (increasingly) of the graph's Laplacian matrix due to its discrete Laplace operator, which is either D − A {\displaystyle D-A} (sometimes called the combinatorial Laplacian) or I − D − 1 / 2 A D − 1 / 2 {\displaystyle I-D^{-1/2}AD^{-1/2}} (sometimes called the normalized Laplacian), where D {\displaystyle D} is a diagonal matrix with D i i {\displaystyle D_{ii}} equal to the degree of vertex v i {\displaystyle v_{i}} , and in D − 1 / 2 {\displaystyle D^{-1/2}} , the i {\displaystyle i} th diagonal entry is 1 / deg ⁡ ( v i ) {\textstyle 1/{\sqrt {\deg(v_{i})}}} . The k {\displaystyle k} th principal eigenvector of a graph is defined as either the eigenvector corresponding to the k {\displaystyle k} th largest or k {\displaystyle k} th smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector. The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second smallest eigenvector can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering. === Markov chains === A Markov chain is represented by a matrix whose entries are the transition probabilities between states of a system. In particular the entries are non-negative, and every row of the matrix sums to one, being the sum of probabilities of transitions from one state to some other state of the system. The Perron–Frobenius theorem gives sufficient conditions for a Markov chain to have a unique dominant eigenvalue, which governs the convergence of the system to a steady state. === Vibration analysis === Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors are the shapes of these vibrational modes. In particular, undamped vibration is governed by m x ¨ + k x = 0 {\displaystyle m{\ddot {x}}+kx=0} or m x ¨ = − k x {\displaystyle m{\ddot {x}}=-kx} That is, acceleration is proportional to position (i.e., we expect x {\displaystyle x} to be sinusoidal in time). In n {\displaystyle n} dimensions, m {\displaystyle m} becomes a mass matrix and k {\displaystyle k} a stiffness matrix. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem k x = ω 2 m x {\displaystyle kx=\omega ^{2}mx} where ω 2 {\displaystyle \omega ^{2}} is the eigenvalue and ω {\displaystyle \omega } is the (imaginary) angular frequency. The principal vibration modes are different from the principal compliance modes, which are the eigenvectors of k {\displaystyle k} alone. Furthermore, damped vibration, governed by m x ¨ + c x ˙ + k x = 0 {\displaystyle m{\ddot {x}}+c{\dot {x}}+kx=0} leads to a so-called quadratic eigenvalue problem, ( ω 2 m + ω c + k ) x = 0. 
{\displaystyle \left(\omega ^{2}m+\omega c+k\right)x=0.} This can be reduced to a generalized eigenvalue problem by algebraic manipulation at the cost of solving a larger system. The orthogonality properties of the eigenvectors allow decoupling of the differential equations so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis, which neatly generalizes the solution of scalar-valued vibration problems. === Tensor of moment of inertia === In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. The tensor of moment of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass. === Stress tensor === In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no shear components; the components it does have are the principal components. === Schrödinger equation === An example of an eigenvalue equation where the transformation T {\displaystyle T} is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics: H ψ E = E ψ E {\displaystyle H\psi _{E}=E\psi _{E}\,} where H {\displaystyle H} , the Hamiltonian, is a second-order differential operator and ψ E {\displaystyle \psi _{E}} , the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue E {\displaystyle E} , interpreted as its energy. However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for ψ E {\displaystyle \psi _{E}} within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which ψ E {\displaystyle \psi _{E}} and H {\displaystyle H} can be represented as a one-dimensional array (i.e., a vector) and a matrix respectively. This allows one to represent the Schrödinger equation in a matrix form. The bra–ket notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by | Ψ E ⟩ {\displaystyle |\Psi _{E}\rangle } . In this notation, the Schrödinger equation is: H | Ψ E ⟩ = E | Ψ E ⟩ {\displaystyle H|\Psi _{E}\rangle =E|\Psi _{E}\rangle } where | Ψ E ⟩ {\displaystyle |\Psi _{E}\rangle } is an eigenstate of H {\displaystyle H} and E {\displaystyle E} represents the eigenvalue. H {\displaystyle H} is an observable self-adjoint operator, the infinite-dimensional analog of Hermitian matrices. As in the matrix case, in the equation above H | Ψ E ⟩ {\displaystyle H|\Psi _{E}\rangle } is understood to be the vector obtained by application of the transformation H {\displaystyle H} to | Ψ E ⟩ {\displaystyle |\Psi _{E}\rangle } . === Wave transport === Light, acoustic waves, and microwaves are randomly scattered numerous times when traversing a static disordered system. Even though multiple scattering repeatedly randomizes the waves, ultimately coherent wave transport through the system is a deterministic process which can be described by a field transmission matrix t {\displaystyle \mathbf {t} } .
The eigenvectors of the transmission operator t † t {\displaystyle \mathbf {t} ^{\dagger }\mathbf {t} } form a set of disorder-specific input wavefronts which enable waves to couple into the disordered system's eigenchannels: the independent pathways waves can travel through the system. The eigenvalues, τ {\displaystyle \tau } , of t † t {\displaystyle \mathbf {t} ^{\dagger }\mathbf {t} } correspond to the intensity transmittance associated with each eigenchannel. One of the remarkable properties of the transmission operator of diffusive systems is their bimodal eigenvalue distribution with τ max = 1 {\displaystyle \tau _{\max }=1} and τ min = 0 {\displaystyle \tau _{\min }=0} . Furthermore, one of the striking properties of open eigenchannels, beyond the perfect transmittance, is the statistically robust spatial profile of the eigenchannels. === Molecular orbitals === In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree–Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. Thus, if one wants to underline this aspect, one speaks of nonlinear eigenvalue problems. Such equations are usually solved by an iteration procedure, called in this case self-consistent field method. In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called Roothaan equations. === Geology and glaciology === In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information of a clast's fabric can be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can be compared graphically or as a stereographic projection. Graphically, many geologists use a Tri-Plot (Sneed and Folk) diagram,. A stereographic projection projects 3-dimensional spaces onto a two-dimensional plane. A type of stereographic projection is Wulff Net, which is commonly used in crystallography to create stereograms. The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. The three eigenvectors are ordered v 1 , v 2 , v 3 {\displaystyle \mathbf {v} _{1},\mathbf {v} _{2},\mathbf {v} _{3}} by their eigenvalues E 1 ≥ E 2 ≥ E 3 {\displaystyle E_{1}\geq E_{2}\geq E_{3}} ; v 1 {\displaystyle \mathbf {v} _{1}} then is the primary orientation/dip of clast, v 2 {\displaystyle \mathbf {v} _{2}} is the secondary and v 3 {\displaystyle \mathbf {v} _{3}} is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of E 1 {\displaystyle E_{1}} , E 2 {\displaystyle E_{2}} , and E 3 {\displaystyle E_{3}} are dictated by the nature of the sediment's fabric. If E 1 = E 2 = E 3 {\displaystyle E_{1}=E_{2}=E_{3}} , the fabric is said to be isotropic. If E 1 = E 2 > E 3 {\displaystyle E_{1}=E_{2}>E_{3}} , the fabric is said to be planar. 
If E 1 > E 2 > E 3 {\displaystyle E_{1}>E_{2}>E_{3}} , the fabric is said to be linear. === Basic reproduction number === The basic reproduction number ( R 0 {\displaystyle R_{0}} ) is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, then R 0 {\displaystyle R_{0}} is the average number of people that one typical infectious person will infect. The generation time of an infection is the time, t G {\displaystyle t_{G}} , from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time t G {\displaystyle t_{G}} has passed. The value R 0 {\displaystyle R_{0}} is then the largest eigenvalue of the next generation matrix. === Eigenfaces === In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel. The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal component analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research related to eigen vision systems determining hand gestures has also been made. Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation. == See also == Antieigenvalue theory Eigenoperator Eigenplane Eigenmoments Eigenvalue algorithm Quantum states Jordan normal form List of numerical-analysis software Nonlinear eigenproblem Normal eigenvalue Quadratic eigenvalue problem Singular value Spectrum of a matrix
Wikipedia/Eigenvalue_problem
In mathematics and applied mathematics, perturbation theory comprises methods for finding an approximate solution to a problem, by starting from the exact solution of a related, simpler problem. A critical feature of the technique is a middle step that breaks the problem into "solvable" and "perturbative" parts. In regular perturbation theory, the solution is expressed as a power series in a small parameter ε {\displaystyle \varepsilon } . The first term is the known solution to the solvable problem. Successive terms in the series at higher powers of ε {\displaystyle \varepsilon } usually become smaller. An approximate 'perturbation solution' is obtained by truncating the series, often keeping only the first two terms, the solution to the known problem and the 'first order' perturbation correction. Perturbation theory is used in a wide range of fields and reaches its most sophisticated and advanced forms in quantum field theory. Perturbation theory (quantum mechanics) describes the use of this method in quantum mechanics. The field in general remains actively and heavily researched across multiple disciplines. == Description == Perturbation theory develops an expression for the desired solution in terms of a formal power series known as a perturbation series in some "small" parameter, that quantifies the deviation from the exactly solvable problem. The leading term in this power series is the solution of the exactly solvable problem, while further terms describe the deviation in the solution, due to the deviation from the initial problem. Formally, we have for the approximation to the full solution A , {\displaystyle \ A\ ,} a series in the small parameter (here called ε), like the following: A ≡ A 0 + ε 1 A 1 + ε 2 A 2 + ε 3 A 3 + ⋯ {\displaystyle A\equiv A_{0}+\varepsilon ^{1}A_{1}+\varepsilon ^{2}A_{2}+\varepsilon ^{3}A_{3}+\cdots } In this example, A 0 {\displaystyle \ A_{0}\ } would be the known solution to the exactly solvable initial problem, and the terms A 1 , A 2 , A 3 , … {\displaystyle \ A_{1},A_{2},A_{3},\ldots \ } represent the first-order, second-order, third-order, and higher-order terms, which may be found iteratively by a mechanistic but increasingly difficult procedure. For small ε {\displaystyle \ \varepsilon \ } these higher-order terms in the series generally (but not always) become successively smaller. An approximate "perturbative solution" is obtained by truncating the series, often by keeping only the first two terms, expressing the final solution as a sum of the initial (exact) solution and the "first-order" perturbative correction A → A 0 + ε A 1 f o r ε → 0 {\displaystyle A\to A_{0}+\varepsilon A_{1}\qquad {\mathsf {for}}\qquad \varepsilon \to 0} Some authors use big O notation to indicate the order of the error in the approximate solution: A = A 0 + ε A 1 + O ( ε 2 ) . {\displaystyle \;A=A_{0}+\varepsilon A_{1}+{\mathcal {O}}{\bigl (}\ \varepsilon ^{2}\ {\bigr )}~.} If the power series in ε {\displaystyle \ \varepsilon \ } converges with a nonzero radius of convergence, the perturbation problem is called a regular perturbation problem. In regular perturbation problems, the asymptotic solution smoothly approaches the exact solution. However, the perturbation series can also diverge, and the truncated series can still be a good approximation to the true solution if it is truncated at a point at which its elements are minimum. This is called an asymptotic series. 
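A tiny numerical illustration of a regular perturbation expansion (an editorial sketch; the quadratic equation below is chosen only for illustration and is not taken from the text): the root of x² + εx − 1 = 0 that tends to 1 as ε → 0 has the exact value x(ε) = −ε/2 + √(1 + ε²/4) and the expansion x(ε) = 1 − ε/2 + ε²/8 + ⋯, so the first-order truncation A0 + εA1 = 1 − ε/2 leaves an error of order ε², as the big O notation above indicates.

import numpy as np

def exact_root(eps):
    # Exact root of x**2 + eps*x - 1 = 0 that approaches 1 as eps -> 0.
    return -eps / 2.0 + np.sqrt(1.0 + eps**2 / 4.0)

for eps in [0.1, 0.01, 0.001]:
    truncated = 1.0 - eps / 2.0                  # A0 + eps*A1, the first-order solution
    error = abs(exact_root(eps) - truncated)
    print(eps, error, error / eps**2)            # error/eps**2 stays near 1/8: the error is O(eps**2)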
If the perturbation series is divergent or not a power series (for example, if the asymptotic expansion must include non-integer powers ε ( 1 / 2 ) {\displaystyle \ \varepsilon ^{\left(1/2\right)}\ } or negative powers ε − 2 {\displaystyle \ \varepsilon ^{-2}\ } ) then the perturbation problem is called a singular perturbation problem. Many special techniques in perturbation theory have been developed to analyze singular perturbation problems. == Prototypical example == The earliest use of what would now be called perturbation theory was to deal with the otherwise unsolvable mathematical problems of celestial mechanics: for example the orbit of the Moon, which moves noticeably differently from a simple Keplerian ellipse because of the competing gravitation of the Earth and the Sun. Perturbation methods start with a simplified form of the original problem, which is simple enough to be solved exactly. In celestial mechanics, this is usually a Keplerian ellipse. Under Newtonian gravity, an ellipse is exactly correct when there are only two gravitating bodies (say, the Earth and the Moon) but not quite correct when there are three or more objects (say, the Earth, Moon, Sun, and the rest of the Solar System) and not quite correct when the gravitational interaction is stated using formulations from general relativity. == Perturbative expansion == Keeping the above example in mind, one follows a general recipe to obtain the perturbation series. The perturbative expansion is created by adding successive corrections to the simplified problem. The corrections are obtained by forcing consistency between the unperturbed solution, and the equations describing the system in full. Write D {\displaystyle \ D\ } for this collection of equations; that is, let the symbol D {\displaystyle \ D\ } stand in for the problem to be solved. Quite often, these are differential equations, thus, the letter "D". The process is generally mechanical, if laborious. One begins by writing the equations D {\displaystyle \ D\ } so that they split into two parts: some collection of equations D 0 {\displaystyle \ D_{0}\ } which can be solved exactly, and some additional remaining part ε D 1 {\displaystyle \ \varepsilon D_{1}\ } for some small ε ≪ 1 . {\displaystyle \ \varepsilon \ll 1~.} The solution A 0 {\displaystyle \ A_{0}\ } (to D 0 {\displaystyle \ D_{0}\ } ) is known, and one seeks the general solution A {\displaystyle \ A\ } to D = D 0 + ε D 1 . {\displaystyle \ D=D_{0}+\varepsilon D_{1}~.} Next the approximation A ≈ A 0 + ε A 1 {\displaystyle \ A\approx A_{0}+\varepsilon A_{1}\ } is inserted into ε D 1 {\displaystyle \ \varepsilon D_{1}} . This results in an equation for A 1 , {\displaystyle \ A_{1}\ ,} which, in the general case, can be written in closed form as a sum over integrals over A 0 . {\displaystyle \ A_{0}~.} Thus, one has obtained the first-order correction A 1 {\displaystyle \ A_{1}\ } and thus A ≈ A 0 + ε A 1 {\displaystyle \ A\approx A_{0}+\varepsilon A_{1}\ } is a good approximation to A . {\displaystyle \ A~.} It is a good approximation, precisely because the parts that were ignored were of size ε 2 . {\displaystyle \ \varepsilon ^{2}~.} The process can then be repeated, to obtain corrections A 2 , {\displaystyle \ A_{2}\ ,} and so on. In practice, this process rapidly explodes into a profusion of terms, which become extremely hard to manage by hand. Isaac Newton is reported to have said, regarding the problem of the Moon's orbit, that "It causeth my head to ache." 
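The recipe above can be carried out symbolically for the same toy problem used earlier (an editorial sketch; SymPy and the three-term ansatz are assumptions of the example, not part of the text). Here D is the equation x² + εx − 1 = 0, D0 is x² − 1 = 0 with known root A0 = 1, and substituting x = A0 + εA1 + ε²A2 and matching powers of ε determines the corrections one order at a time.

import sympy as sp

eps, A1, A2 = sp.symbols('epsilon A1 A2')

# Ansatz: known zeroth-order solution A0 = 1 plus first- and second-order corrections.
x = 1 + eps * A1 + eps**2 * A2
residual = sp.expand(x**2 + eps * x - 1)     # the full problem, with the ansatz inserted

a1 = sp.solve(residual.coeff(eps, 1), A1)[0]                 # order eps:   2*A1 + 1 = 0
a2 = sp.solve(residual.coeff(eps, 2).subs(A1, a1), A2)[0]    # order eps^2, using the known A1

print(a1, a2)   # -1/2 and 1/8, so x = 1 - eps/2 + eps**2/8 + O(eps**3)

Even for this toy problem the number of terms to track grows quickly with the order of the correction.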
This unmanageability has forced perturbation theory to develop into a high art of managing and writing out these higher order terms. One of the fundamental breakthroughs in quantum mechanics for controlling the expansion are the Feynman diagrams, which allow quantum mechanical perturbation series to be represented by a sketch. == Examples == Perturbation theory has been used in a large number of different settings in physics and applied mathematics. Examples of the "collection of equations" D {\displaystyle D} include algebraic equations, differential equations (e.g., the equations of motion and commonly wave equations), thermodynamic free energy in statistical mechanics, radiative transfer, and Hamiltonian operators in quantum mechanics. Examples of the kinds of solutions that are found perturbatively include the solution of the equation of motion (e.g., the trajectory of a particle), the statistical average of some physical quantity (e.g., average magnetization), and the ground state energy of a quantum mechanical problem. Examples of exactly solvable problems that can be used as starting points include linear equations, including linear equations of motion (harmonic oscillator, linear wave equation), statistical or quantum-mechanical systems of non-interacting particles (or in general, Hamiltonians or free energies containing only terms quadratic in all degrees of freedom). Examples of systems that can be solved with perturbations include systems with nonlinear contributions to the equations of motion, interactions between particles, terms of higher powers in the Hamiltonian/free energy. For physical problems involving interactions between particles, the terms of the perturbation series may be displayed (and manipulated) using Feynman diagrams. === In chemistry === Many of the ab initio quantum chemistry methods use perturbation theory directly or are closely related methods. Implicit perturbation theory works with the complete Hamiltonian from the very beginning and never specifies a perturbation operator as such. Møller–Plesset perturbation theory uses the difference between the Hartree–Fock Hamiltonian and the exact non-relativistic Hamiltonian as the perturbation. The zero-order energy is the sum of orbital energies. The first-order energy is the Hartree–Fock energy and electron correlation is included at second-order or higher. Calculations to second, third or fourth order are very common and the code is included in most ab initio quantum chemistry programs. A related but more accurate method is the coupled cluster method. === Shell-crossing === A shell-crossing (sc) occurs in perturbation theory when matter trajectories intersect, forming a singularity. This limits the predictive power of physical simulations at small scales. == History == Perturbation theory was first devised to solve otherwise intractable problems in the calculation of the motions of planets in the solar system. For instance, Newton's law of universal gravitation explained the gravitation between two astronomical bodies, but when a third body is added, the problem was, "How does each body pull on each?" Kepler's orbital equations only solve Newton's gravitational equations when the latter are limited to just two bodies interacting. 
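For the quantum-mechanical kind of example mentioned above (the ground-state energy of a Hamiltonian operator), a small numerical sketch can make the procedure concrete. The following assumes NumPy and an arbitrary toy 3×3 Hamiltonian H = H0 + εV; it compares the second-order Rayleigh–Schrödinger estimate with exact diagonalization.

```python
import numpy as np

# Toy Hamiltonian H = H0 + eps*V with a diagonal, exactly solvable H0.
E0 = np.array([0.0, 1.0, 2.5])            # unperturbed eigenvalues
H0 = np.diag(E0)
V = np.array([[0.0, 0.3, 0.1],
              [0.3, 0.0, 0.2],
              [0.1, 0.2, 0.0]])           # symmetric perturbation
eps = 0.1

# Rayleigh–Schrödinger corrections for the ground state (n = 0):
# first order <0|V|0>, second order sum over m != 0 of |V_m0|**2 / (E0[0] - E0[m]).
first = V[0, 0]
second = sum(V[m, 0]**2 / (E0[0] - E0[m]) for m in range(3) if m != 0)
E_perturbative = E0[0] + eps*first + eps**2*second

# Exact ground-state energy from full diagonalization, for comparison.
E_exact = np.linalg.eigvalsh(H0 + eps*V).min()
print(E_perturbative, E_exact)            # the two differ only at O(eps**3)
```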
The gradually increasing accuracy of astronomical observations led to increasing demands on the accuracy of solutions to Newton's gravitational equations, which led many eminent 18th and 19th century mathematicians, notably Joseph-Louis Lagrange and Pierre-Simon Laplace, to extend and generalize the methods of perturbation theory. These well-developed perturbation methods were adopted and adapted to solve new problems arising during the development of quantum mechanics in 20th century atomic and subatomic physics. Paul Dirac developed quantum perturbation theory in 1927 to evaluate when a particle would be emitted in radioactive elements. This was later named Fermi's golden rule. Perturbation theory in quantum mechanics is fairly accessible, mainly because quantum mechanics is limited to linear wave equations, but also since the quantum mechanical notation allows expressions to be written in fairly compact form, thus making them easier to comprehend. This resulted in an explosion of applications, ranging from the Zeeman effect to the hyperfine splitting in the hydrogen atom. Despite the simpler notation, perturbation theory applied to quantum field theory still easily gets out of hand. Richard Feynman developed the celebrated Feynman diagrams by observing that many terms repeat in a regular fashion. These terms can be replaced by dots, lines, squiggles and similar marks, each standing for a term, a denominator, an integral, and so on; thus complex integrals can be written as simple diagrams, with absolutely no ambiguity as to what they mean. The one-to-one correspondence between the diagrams and specific integrals is what gives them their power. Although originally developed for quantum field theory, it turns out the diagrammatic technique is broadly applicable to many other perturbative series (although not always worthwhile). In the second half of the 20th century, as chaos theory developed, it became clear that unperturbed systems were in general completely integrable systems, while the perturbed systems were not. This promptly led to the study of "nearly integrable systems", of which the KAM torus is the canonical example. At the same time, it was also discovered that many (rather special) non-linear systems, which were previously approachable only through perturbation theory, are in fact completely integrable. This discovery was quite dramatic, as it allowed exact solutions to be given. This, in turn, helped clarify the meaning of the perturbative series, as one could now compare the results of the series to the exact solutions. The improved understanding of dynamical systems coming from chaos theory helped shed light on what was termed the small denominator problem or small divisor problem. In the 19th century, Poincaré observed (as perhaps had earlier mathematicians) that sometimes 2nd and higher order terms in the perturbative series have "small denominators": That is, they have the general form ψ n V ϕ m ( ω n − ω m ) {\displaystyle \ {\frac {\ \psi _{n}V\phi _{m}\ }{\ (\omega _{n}-\omega _{m})\ }}\ } where ψ n , {\displaystyle \ \psi _{n}\ ,} V , {\displaystyle \ V\ ,} and ϕ m {\displaystyle \ \phi _{m}\ } are some complicated expressions pertinent to the problem to be solved, and ω n {\displaystyle \ \omega _{n}\ } and ω m {\displaystyle \ \omega _{m}\ } are real numbers; very often they are the energies of normal modes. 
The small divisor problem arises when the difference ω n − ω m {\displaystyle \ \omega _{n}-\omega _{m}\ } is small, causing the perturbative correction to "blow up", becoming as large as, or larger than, the zeroth order term. This situation signals a breakdown of perturbation theory: It stops working at this point, and cannot be expanded or summed any further. In formal terms, the perturbative series is an asymptotic series: a useful approximation for a few terms, but one that at some point becomes less accurate if even more terms are added. The breakthrough from chaos theory was an explanation of why this happened: The small divisors occur whenever perturbation theory is applied to a chaotic system. The one signals the presence of the other. === Beginnings in the study of planetary motion === Since the planets are very remote from each other, and since their masses are small compared to the mass of the Sun, the gravitational forces between the planets can be neglected, and the planetary motion is considered, to a first approximation, as taking place along Kepler's orbits, which are defined by the equations of the two-body problem, the two bodies being the planet and the Sun. As astronomical data came to be known with much greater accuracy, it became necessary to consider how the motion of a planet around the Sun is affected by other planets. This was the origin of the three-body problem; thus, in studying the system Moon–Earth–Sun, the mass ratio between the Moon and the Earth was chosen as the "small parameter". Lagrange and Laplace were the first to advance the view that the so-called "constants" which describe the motion of a planet around the Sun gradually change: They are "perturbed", as it were, by the motion of other planets and vary as a function of time; hence the name "perturbation theory". Perturbation theory was investigated by the classical scholars – Laplace, Siméon Denis Poisson, Carl Friedrich Gauss – as a result of which the computations could be performed with a very high accuracy. The discovery of the planet Neptune in 1846 by Urbain Le Verrier was based on the deviations in motion of the planet Uranus. Le Verrier sent the predicted coordinates to J.G. Galle, who successfully observed Neptune through his telescope – a triumph of perturbation theory. == Perturbation orders == The standard exposition of perturbation theory is given in terms of the order to which the perturbation is carried out: first-order perturbation theory or second-order perturbation theory, and whether the perturbed states are degenerate, which requires singular perturbation. In the singular case extra care must be taken, and the theory is slightly more elaborate. == See also == == References == == External links == van den Eijnden, Eric. "Introduction to regular perturbation theory" (PDF). Archived (PDF) from the original on 2004-09-20. Chow, Carson C. (23 October 2007). "Perturbation method of multiple scales". Scholarpedia. 2 (10): 1617. doi:10.4249/scholarpedia.1617. Alternative approach to quantum perturbation theory Martínez-Carranza, J.; Soto-Eguibar, F.; Moya-Cessa, H. (2012). "Alternative analysis to perturbation theory in quantum mechanics". The European Physical Journal D. 66 (1): 22. arXiv:1110.0723. Bibcode:2012EPJD...66...22M. doi:10.1140/epjd/e2011-20654-5. S2CID 117362666.
Wikipedia/Perturbation_methods
In mathematics, the inverse problem for Lagrangian mechanics is the problem of determining whether a given system of ordinary differential equations can arise as the Euler–Lagrange equations for some Lagrangian function. There has been a great deal of activity in the study of this problem since the early 20th century. A notable advance in this field was a 1941 paper by the American mathematician Jesse Douglas, in which he provided necessary and sufficient conditions for the problem to have a solution; these conditions are now known as the Helmholtz conditions, after the German physicist Hermann von Helmholtz. == Background and statement of the problem == The usual set-up of Lagrangian mechanics on n-dimensional Euclidean space Rn is as follows. Consider a differentiable path u : [0, T] → Rn. The action of the path u, denoted S(u), is given by S ( u ) = ∫ 0 T L ( t , u ( t ) , u ˙ ( t ) ) d t , {\displaystyle S(u)=\int _{0}^{T}L(t,u(t),{\dot {u}}(t))\,\mathrm {d} t,} where L is a function of time, position and velocity known as the Lagrangian. The principle of least action states that, given an initial state x0 and a final state x1 in Rn, the trajectory that the system determined by L will actually follow must be a minimizer of the action functional S satisfying the boundary conditions u(0) = x0, u(T) = x1. Furthermore, the critical points (and hence minimizers) of S must satisfy the Euler–Lagrange equations for S: d d t ∂ L ∂ u ˙ i − ∂ L ∂ u i = 0 for 1 ≤ i ≤ n , {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {u}}^{i}}}-{\frac {\partial L}{\partial u^{i}}}=0\quad {\text{for }}1\leq i\leq n,} where the upper indices i denote the components of u = (u1, ..., un). In the classical case T ( u ˙ ) = 1 2 m | u ˙ | 2 , {\displaystyle T({\dot {u}})={\frac {1}{2}}m|{\dot {u}}|^{2},} V : [ 0 , T ] × R n → R , {\displaystyle V:[0,T]\times \mathbb {R} ^{n}\to \mathbb {R} ,} L ( t , u , u ˙ ) = T ( u ˙ ) − V ( t , u ) , {\displaystyle L(t,u,{\dot {u}})=T({\dot {u}})-V(t,u),} the Euler–Lagrange equations are the second-order ordinary differential equations better known as Newton's laws of motion: m u ¨ i = − ∂ V ( t , u ) ∂ u i for 1 ≤ i ≤ n , {\displaystyle m{\ddot {u}}^{i}=-{\frac {\partial V(t,u)}{\partial u^{i}}}\quad {\text{for }}1\leq i\leq n,} i.e. m u ¨ = − ∇ u V ( t , u ) . {\displaystyle {\mbox{i.e. }}m{\ddot {u}}=-\nabla _{u}V(t,u).} The inverse problem of Lagrangian mechanics is as follows: given a system of second-order ordinary differential equations u ¨ i = f i ( u j , u ˙ j ) for 1 ≤ i , j ≤ n , (E) {\displaystyle {\ddot {u}}^{i}=f^{i}(u^{j},{\dot {u}}^{j})\quad {\text{for }}1\leq i,j\leq n,\quad {\mbox{(E)}}} that holds for times 0 ≤ t ≤ T, does there exist a Lagrangian L : [0, T] × Rn × Rn → R for which these ordinary differential equations (E) are the Euler–Lagrange equations? In general, this problem is posed not on Euclidean space Rn, but on an n-dimensional manifold M, and the Lagrangian is a function L : [0, T] × TM → R, where TM denotes the tangent bundle of M. == Douglas' theorem and the Helmholtz conditions == To simplify the notation, let v i = u ˙ i {\displaystyle v^{i}={\dot {u}}^{i}} and define a collection of n2 functions Φji by Φ j i = 1 2 d d t ∂ f i ∂ v j − ∂ f i ∂ u j − 1 4 ∂ f i ∂ v k ∂ f k ∂ v j . 
{\displaystyle \Phi _{j}^{i}={\frac {1}{2}}{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial f^{i}}{\partial v^{j}}}-{\frac {\partial f^{i}}{\partial u^{j}}}-{\frac {1}{4}}{\frac {\partial f^{i}}{\partial v^{k}}}{\frac {\partial f^{k}}{\partial v^{j}}}.} Theorem. (Douglas 1941) There exists a Lagrangian L : [0, T] × TM → R such that the equations (E) are its Euler–Lagrange equations if and only if there exists a non-singular symmetric matrix g with entries gij depending on both u and v satisfying the following three Helmholtz conditions: g Φ = ( g Φ ) ⊤ , (H1) {\displaystyle g\Phi =(g\Phi )^{\top },\quad {\mbox{(H1)}}} d g i j d t + 1 2 ∂ f k ∂ v i g k j + 1 2 ∂ f k ∂ v j g k i = 0 for 1 ≤ i , j ≤ n , (H2) {\displaystyle {\frac {\mathrm {d} g_{ij}}{\mathrm {d} t}}+{\frac {1}{2}}{\frac {\partial f^{k}}{\partial v^{i}}}g_{kj}+{\frac {1}{2}}{\frac {\partial f^{k}}{\partial v^{j}}}g_{ki}=0{\mbox{ for }}1\leq i,j\leq n,\quad {\mbox{(H2)}}} ∂ g i j ∂ v k = ∂ g i k ∂ v j for 1 ≤ i , j , k ≤ n . (H3) {\displaystyle {\frac {\partial g_{ij}}{\partial v^{k}}}={\frac {\partial g_{ik}}{\partial v^{j}}}{\mbox{ for }}1\leq i,j,k\leq n.\quad {\mbox{(H3)}}} (The Einstein summation convention is in use for the repeated indices.) === Applying Douglas' theorem === At first glance, solving the Helmholtz equations (H1)–(H3) seems to be an extremely difficult task. Condition (H1) is the easiest to solve: it is always possible to find a g that satisfies (H1), and it alone will not imply that the Lagrangian is singular. Equation (H2) is a system of ordinary differential equations: the usual theorems on the existence and uniqueness of solutions to ordinary differential equations imply that it is, in principle, possible to solve (H2). Integration does not yield additional constants but instead first integrals of the system (E), so this step becomes difficult in practice unless (E) has enough explicit first integrals. In certain well-behaved cases (e.g. the geodesic flow for the canonical connection on a Lie group), this condition is satisfied. The final and most difficult step is to solve equation (H3), called the closure conditions since (H3) is the condition that the differential 1-form gi is a closed form for each i. The reason why this is so daunting is that (H3) constitutes a large system of coupled partial differential equations: for n degrees of freedom, (H3) constitutes a system of 2 ( n + 1 3 ) {\displaystyle 2\left({\begin{matrix}n+1\\3\end{matrix}}\right)} partial differential equations in the 2n independent variables that are the components gij of g, where ( n k ) {\displaystyle \left({\begin{matrix}n\\k\end{matrix}}\right)} denotes the binomial coefficient. In order to construct the most general possible Lagrangian, one must solve this huge system! Fortunately, there are some auxiliary conditions that can be imposed in order to help in solving the Helmholtz conditions. First, (H1) is a purely algebraic condition on the unknown matrix g. Auxiliary algebraic conditions on g can be given as follows: define functions Ψjki by Ψ j k i = 1 3 ( ∂ Φ j i ∂ v k − ∂ Φ k i ∂ v j ) . {\displaystyle \Psi _{jk}^{i}={\frac {1}{3}}\left({\frac {\partial \Phi _{j}^{i}}{\partial v^{k}}}-{\frac {\partial \Phi _{k}^{i}}{\partial v^{j}}}\right).} The auxiliary condition on g is then g m i Ψ j k m + g m k Ψ i j m + g m j Ψ k i m = 0 for 1 ≤ i , j ≤ n . 
(A) {\displaystyle g_{mi}\Psi _{jk}^{m}+g_{mk}\Psi _{ij}^{m}+g_{mj}\Psi _{ki}^{m}=0{\mbox{ for }}1\leq i,j\leq n.\quad {\mbox{(A)}}} In fact, the equations (H2) and (A) are just the first in an infinite hierarchy of similar algebraic conditions. In the case of a parallel connection (such as the canonical connection on a Lie group), the higher order conditions are always satisfied, so only (H2) and (A) are of interest. Note that (A) comprises ( n 3 ) {\displaystyle \left({\begin{matrix}n\\3\end{matrix}}\right)} conditions whereas (H1) comprises ( n 2 ) {\displaystyle \left({\begin{matrix}n\\2\end{matrix}}\right)} conditions. Thus, it is possible that (H1) and (A) together imply that the Lagrangian function is singular. As of 2006, there is no general theorem to circumvent this difficulty in arbitrary dimension, although certain special cases have been resolved. A second avenue of attack is to see whether the system (E) admits a submersion onto a lower-dimensional system and to try to "lift" a Lagrangian for the lower-dimensional system up to the higher-dimensional one. This is not really an attempt to solve the Helmholtz conditions so much as it is an attempt to construct a Lagrangian and then show that its Euler–Lagrange equations are indeed the system (E). == References == Douglas, Jesse (1941). "Solution of the inverse problem in the calculus of variations". Transactions of the American Mathematical Society. 50 (1): 71–128. doi:10.2307/1989912. ISSN 0002-9947. JSTOR 1989912. PMC 1077987. PMID 16588312. Rawashdeh, M.; Thompson, G. (2006). "The inverse problem for six-dimensional codimension two nilradical Lie algebras". Journal of Mathematical Physics. 47 (11): 112901. Bibcode:2006JMP....47k2901R. doi:10.1063/1.2378620. ISSN 0022-2488.
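As a minimal sketch of how the Helmholtz conditions discussed above can be checked in a concrete case (an illustration, not part of the article), the following SymPy code treats the single-degree-of-freedom system ü = −u with the constant candidate g = 1; the total time derivative is taken along the flow of (E).

```python
import sympy as sp

u, v = sp.symbols('u v')          # position and velocity (n = 1 degree of freedom)
f = -u                            # the system (E):  u'' = f(u, v)
g = sp.Integer(1)                 # candidate non-singular g (a 1x1 "matrix")

def Dt(expr):
    """Total time derivative along the flow of (E): d/dt = v d/du + f d/dv."""
    return v*sp.diff(expr, u) + f*sp.diff(expr, v)

# Phi as defined above (n = 1, so no summation is needed).
Phi = sp.Rational(1, 2)*Dt(sp.diff(f, v)) - sp.diff(f, u) - sp.Rational(1, 4)*sp.diff(f, v)**2
print(Phi)   # 1

# (H1) and (H3) are automatic for a single degree of freedom; (H2) reads
# dg/dt + (df/dv) g = 0, which the constant g satisfies:
print(sp.simplify(Dt(g) + sp.diff(f, v)*g))   # 0

# A Lagrangian whose Euler–Lagrange equation reproduces (E): L = v**2/2 - u**2/2.
L = v**2/2 - u**2/2
print(sp.simplify(Dt(sp.diff(L, v)) - sp.diff(L, u)))   # 0 along solutions of (E)
```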
Wikipedia/Inverse_problem_for_Lagrangian_mechanics
In mathematical physics, the De Donder–Weyl theory is a generalization of the Hamiltonian formalism in the calculus of variations and classical field theory over spacetime which treats the space and time coordinates on equal footing. In this framework, the Hamiltonian formalism in mechanics is generalized to field theory in such a way that a field is represented as a system varying both in space and in time. This generalization is different from the canonical Hamiltonian formalism in field theory, which treats space and time variables differently and describes classical fields as infinite-dimensional systems evolving in time. == De Donder–Weyl formulation of field theory == The De Donder–Weyl theory is based on a change of variables known as the Legendre transformation. Let xi be spacetime coordinates, for i = 1 to n (with n = 4 representing 3 + 1 dimensions of space and time), and ya field variables, for a = 1 to m, and L the Lagrangian density L = L ( y a , ∂ i y a , x i ) {\displaystyle L=L(y^{a},\partial _{i}y^{a},x^{i})} With the polymomenta pia defined as p a i = ∂ L / ∂ ( ∂ i y a ) {\displaystyle p_{a}^{i}=\partial L/\partial (\partial _{i}y^{a})} and the De Donder–Weyl Hamiltonian function H defined as H = p a i ∂ i y a − L {\displaystyle H=p_{a}^{i}\partial _{i}y^{a}-L} the De Donder–Weyl equations are: ∂ p a i / ∂ x i = − ∂ H / ∂ y a , ∂ y a / ∂ x i = ∂ H / ∂ p a i {\displaystyle \partial p_{a}^{i}/\partial x^{i}=-\partial H/\partial y^{a}\,,\,\partial y^{a}/\partial x^{i}=\partial H/\partial p_{a}^{i}} This De Donder–Weyl Hamiltonian form of the field equations is covariant, and it is equivalent to the Euler–Lagrange equations when the Legendre transformation to the variables pia and H is not singular. The theory is a formulation of a covariant Hamiltonian field theory which is different from the canonical Hamiltonian formalism, and for n = 1 it reduces to Hamiltonian mechanics (see also action principle in the calculus of variations). In 1935, Hermann Weyl developed a Hamilton–Jacobi theory for the De Donder–Weyl theory. Just as the Hamiltonian formalism in mechanics is formulated using the symplectic geometry of phase space, the De Donder–Weyl theory can be formulated using multisymplectic geometry or polysymplectic geometry and the geometry of jet bundles. A generalization of the Poisson brackets to the De Donder–Weyl theory and the representation of the De Donder–Weyl equations in terms of generalized Poisson brackets satisfying the Gerstenhaber algebra were found by Kanatchikov in 1993. == History == The formalism, now known as De Donder–Weyl (DW) theory, was developed by Théophile De Donder and Hermann Weyl. Hermann Weyl made his proposal in 1934, inspired by the work of Constantin Carathéodory, which in turn was founded on the work of Vito Volterra. The work of De Donder, on the other hand, started from the theory of integral invariants of Élie Cartan. The De Donder–Weyl theory has been a part of the calculus of variations since the 1930s, and initially it found very few applications in physics. More recently it has been applied in theoretical physics in the context of quantum field theory and quantum gravity. In 1970, Jedrzej Śniatycki, the author of Geometric quantization and quantum mechanics, developed an invariant geometrical formulation of jet bundles, building on the work of De Donder and Weyl. In 1999, Igor Kanatchikov showed that the De Donder–Weyl covariant Hamiltonian field equations can be formulated in terms of Duffin–Kemmer–Petiau matrices. 
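A short symbolic sketch may help fix the definitions. The following SymPy code is an illustration only (not part of the original exposition): it uses assumed symbols dy0 and dy1 for the derivatives of a single scalar field in 1+1 dimensions with metric diag(+1, −1), and computes the polymomenta and the De Donder–Weyl Hamiltonian for a free scalar field.

```python
import sympy as sp

# One scalar field y in 1+1 dimensions; dy0 and dy1 stand for d_t y and d_x y.
y, m = sp.symbols('y m')
dy0, dy1 = sp.symbols('dy0 dy1')
p0, p1 = sp.symbols('p0 p1')

# Free scalar-field Lagrangian density L = 1/2 (d_t y)^2 - 1/2 (d_x y)^2 - 1/2 m^2 y^2.
L = sp.Rational(1, 2)*(dy0**2 - dy1**2) - sp.Rational(1, 2)*m**2*y**2

# Polymomenta p^i = dL/d(d_i y).
polymomenta = [sp.diff(L, dy0), sp.diff(L, dy1)]
print(polymomenta)                      # [dy0, -dy1]

# De Donder–Weyl Hamiltonian H = p^i d_i y - L, rewritten in terms of p0, p1.
H = polymomenta[0]*dy0 + polymomenta[1]*dy1 - L
H_in_p = sp.simplify(H.subs({dy0: p0, dy1: -p1}))
print(H_in_p)                           # m**2*y**2/2 + p0**2/2 - p1**2/2
```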
== See also == Hamiltonian field theory Covariant Hamiltonian field theory == Further reading == Selected papers on GEODESIC FIELDS, Translated and edited by D. H. Delphenich. Part 1 [2] Archived 2016-10-21 at the Wayback Machine, Part 2 [3] Archived 2016-10-20 at the Wayback Machine H.A. Kastrup, Canonical theories of Lagrangian dynamical systems in physics, Physics Reports, Volume 101, Issues 1–2, Pages 1-167 (1983). Mark J. Gotay, James Isenberg, Jerrold E. Marsden, Richard Montgomery: "Momentum Maps and Classical Relativistic Fields. Part I: Covariant Field Theory" arXiv:physics/9801019 Cornelius Paufler, Hartmann Römer: De Donder–Weyl equations and multisymplectic geometry Archived 2012-04-15 at the Wayback Machine, Reports on Mathematical Physics, vol. 49 (2002), no. 2–3, pp. 325–334 Krzysztof Maurin: The Riemann legacy: Riemannian ideas in mathematics and physics, Part II, Chapter 7.16 Field theories for calculus of variation for multiple integrals, Kluwer Academic Publishers, ISBN 0-7923-4636-X, 1997, p. 482 ff. == References ==
Wikipedia/De_Donder–Weyl_theory
Optimal control theory is a branch of control theory that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the Moon with minimum fuel expenditure. Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy. A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory. Optimal control is an extension of the calculus of variations, and is a mathematical optimization method for deriving control policies. The method is largely due to the work of Lev Pontryagin and Richard Bellman in the 1950s, after contributions to calculus of variations by Edward J. McShane. Optimal control can be seen as a control strategy in control theory. == General method == Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. A control problem includes a cost functional that is a function of state and control variables. An optimal control is a set of differential equations describing the paths of the control variables that minimize the cost function. The optimal control can be derived using Pontryagin's maximum principle (a necessary condition also known as Pontryagin's minimum principle or simply Pontryagin's principle), or by solving the Hamilton–Jacobi–Bellman equation (a sufficient condition). We begin with a simple example. Consider a car traveling in a straight line on a hilly road. The question is, how should the driver press the accelerator pedal in order to minimize the total traveling time? In this example, the term control law refers specifically to the way in which the driver presses the accelerator and shifts the gears. The system consists of both the car and the road, and the optimality criterion is the minimization of the total traveling time. Control problems usually include ancillary constraints. For example, the amount of available fuel might be limited, the accelerator pedal cannot be pushed through the floor of the car, speed limits, etc. A proper cost function will be a mathematical expression giving the traveling time as a function of the speed, geometrical considerations, and initial conditions of the system. Constraints are often interchangeable with the cost function. Another related optimal control problem may be to find the way to drive the car so as to minimize its fuel consumption, given that it must complete a given course in a time not exceeding some amount. Yet another related control problem may be to minimize the total monetary cost of completing the trip, given assumed monetary prices for time and fuel. A more abstract framework goes as follows. 
Minimize the continuous-time cost functional J [ x ( ⋅ ) , u ( ⋅ ) , t 0 , t f ] := E [ x ( t 0 ) , t 0 , x ( t f ) , t f ] + ∫ t 0 t f F [ x ( t ) , u ( t ) , t ] d t {\displaystyle J[{\textbf {x}}(\cdot ),{\textbf {u}}(\cdot ),t_{0},t_{f}]:=E\,[{\textbf {x}}(t_{0}),t_{0},{\textbf {x}}(t_{f}),t_{f}]+\int _{t_{0}}^{t_{f}}F\,[{\textbf {x}}(t),{\textbf {u}}(t),t]\,\mathrm {d} t} subject to the first-order dynamic constraints (the state equation) x ˙ ( t ) = f [ x ( t ) , u ( t ) , t ] , {\displaystyle {\dot {\textbf {x}}}(t)={\textbf {f}}\,[\,{\textbf {x}}(t),{\textbf {u}}(t),t],} the algebraic path constraints h [ x ( t ) , u ( t ) , t ] ≤ 0 , {\displaystyle {\textbf {h}}\,[{\textbf {x}}(t),{\textbf {u}}(t),t]\leq {\textbf {0}},} and the endpoint conditions e [ x ( t 0 ) , t 0 , x ( t f ) , t f ] = 0 {\displaystyle {\textbf {e}}[{\textbf {x}}(t_{0}),t_{0},{\textbf {x}}(t_{f}),t_{f}]=0} where x ( t ) {\displaystyle {\textbf {x}}(t)} is the state, u ( t ) {\displaystyle {\textbf {u}}(t)} is the control, t {\displaystyle t} is the independent variable (generally speaking, time), t 0 {\displaystyle t_{0}} is the initial time, and t f {\displaystyle t_{f}} is the terminal time. The terms E {\displaystyle E} and F {\displaystyle F} are called the endpoint cost and the running cost respectively. In the calculus of variations, E {\displaystyle E} and F {\displaystyle F} are referred to as the Mayer term and the Lagrangian, respectively. Furthermore, it is noted that the path constraints are in general inequality constraints and thus may not be active (i.e., equal to zero) at the optimal solution. It is also noted that the optimal control problem as stated above may have multiple solutions (i.e., the solution may not be unique). Thus, it is most often the case that any solution [ x ∗ ( t ) , u ∗ ( t ) , t 0 ∗ , t f ∗ ] {\displaystyle [{\textbf {x}}^{*}(t),{\textbf {u}}^{*}(t),t_{0}^{*},t_{f}^{*}]} to the optimal control problem is locally minimizing. == Linear quadratic control == A special case of the general nonlinear optimal control problem given in the previous section is the linear quadratic (LQ) optimal control problem. The LQ problem is stated as follows. Minimize the quadratic continuous-time cost functional J = 1 2 x T ( t f ) S f x ( t f ) + 1 2 ∫ t 0 t f [ x T ( t ) Q ( t ) x ( t ) + u T ( t ) R ( t ) u ( t ) ] d t {\displaystyle J={\tfrac {1}{2}}\mathbf {x} ^{\mathsf {T}}(t_{f})\mathbf {S} _{f}\mathbf {x} (t_{f})+{\tfrac {1}{2}}\int _{t_{0}}^{t_{f}}[\,\mathbf {x} ^{\mathsf {T}}(t)\mathbf {Q} (t)\mathbf {x} (t)+\mathbf {u} ^{\mathsf {T}}(t)\mathbf {R} (t)\mathbf {u} (t)]\,\mathrm {d} t} Subject to the linear first-order dynamic constraints x ˙ ( t ) = A ( t ) x ( t ) + B ( t ) u ( t ) , {\displaystyle {\dot {\mathbf {x} }}(t)=\mathbf {A} (t)\mathbf {x} (t)+\mathbf {B} (t)\mathbf {u} (t),} and the initial condition x ( t 0 ) = x 0 {\displaystyle \mathbf {x} (t_{0})=\mathbf {x} _{0}} A particular form of the LQ problem that arises in many control system problems is that of the linear quadratic regulator (LQR) where all of the matrices (i.e., A {\displaystyle \mathbf {A} } , B {\displaystyle \mathbf {B} } , Q {\displaystyle \mathbf {Q} } , and R {\displaystyle \mathbf {R} } ) are constant, the initial time is arbitrarily set to zero, and the terminal time is taken in the limit t f → ∞ {\displaystyle t_{f}\rightarrow \infty } (this last assumption is what is known as infinite horizon). The LQR problem is stated as follows. 
Minimize the infinite horizon quadratic continuous-time cost functional J = 1 2 ∫ 0 ∞ [ x T ( t ) Q x ( t ) + u T ( t ) R u ( t ) ] d t {\displaystyle J={\tfrac {1}{2}}\int _{0}^{\infty }[\mathbf {x} ^{\mathsf {T}}(t)\mathbf {Q} \mathbf {x} (t)+\mathbf {u} ^{\mathsf {T}}(t)\mathbf {R} \mathbf {u} (t)]\,\mathrm {d} t} Subject to the linear time-invariant first-order dynamic constraints x ˙ ( t ) = A x ( t ) + B u ( t ) , {\displaystyle {\dot {\mathbf {x} }}(t)=\mathbf {A} \mathbf {x} (t)+\mathbf {B} \mathbf {u} (t),} and the initial condition x ( t 0 ) = x 0 {\displaystyle \mathbf {x} (t_{0})=\mathbf {x} _{0}} In the finite-horizon case the matrices are restricted in that Q {\displaystyle \mathbf {Q} } and R {\displaystyle \mathbf {R} } are positive semi-definite and positive definite, respectively. In the infinite-horizon case, however, the matrices Q {\displaystyle \mathbf {Q} } and R {\displaystyle \mathbf {R} } are not only positive-semidefinite and positive-definite, respectively, but are also constant. These additional restrictions on Q {\displaystyle \mathbf {Q} } and R {\displaystyle \mathbf {R} } in the infinite-horizon case are enforced to ensure that the cost functional remains positive. Furthermore, in order to ensure that the cost function is bounded, the additional restriction is imposed that the pair ( A , B ) {\displaystyle (\mathbf {A} ,\mathbf {B} )} is controllable. Note that the LQ or LQR cost functional can be thought of physically as attempting to minimize the control energy (measured as a quadratic form). The infinite horizon problem (i.e., LQR) may seem overly restrictive and essentially useless because it assumes that the operator is driving the system to zero-state and hence driving the output of the system to zero. This is indeed correct. However the problem of driving the output to a desired nonzero level can be solved after the zero output one is. In fact, it can be proved that this secondary LQR problem can be solved in a very straightforward manner. It has been shown in classical optimal control theory that the LQ (or LQR) optimal control has the feedback form u ( t ) = − K ( t ) x ( t ) {\displaystyle \mathbf {u} (t)=-\mathbf {K} (t)\mathbf {x} (t)} where K ( t ) {\displaystyle \mathbf {K} (t)} is a properly dimensioned matrix, given as K ( t ) = R − 1 B T S ( t ) , {\displaystyle \mathbf {K} (t)=\mathbf {R} ^{-1}\mathbf {B} ^{\mathsf {T}}\mathbf {S} (t),} and S ( t ) {\displaystyle \mathbf {S} (t)} is the solution of the differential Riccati equation. 
The differential Riccati equation is given as S ˙ ( t ) = − S ( t ) A − A T S ( t ) + S ( t ) B R − 1 B T S ( t ) − Q {\displaystyle {\dot {\mathbf {S} }}(t)=-\mathbf {S} (t)\mathbf {A} -\mathbf {A} ^{\mathsf {T}}\mathbf {S} (t)+\mathbf {S} (t)\mathbf {B} \mathbf {R} ^{-1}\mathbf {B} ^{\mathsf {T}}\mathbf {S} (t)-\mathbf {Q} } For the finite horizon LQ problem, the Riccati equation is integrated backward in time using the terminal boundary condition S ( t f ) = S f {\displaystyle \mathbf {S} (t_{f})=\mathbf {S} _{f}} For the infinite horizon LQR problem, the differential Riccati equation is replaced with the algebraic Riccati equation (ARE) given as 0 = − S A − A T S + S B R − 1 B T S − Q {\displaystyle \mathbf {0} =-\mathbf {S} \mathbf {A} -\mathbf {A} ^{\mathsf {T}}\mathbf {S} +\mathbf {S} \mathbf {B} \mathbf {R} ^{-1}\mathbf {B} ^{\mathsf {T}}\mathbf {S} -\mathbf {Q} } Since the ARE arises from the infinite-horizon problem, the matrices A {\displaystyle \mathbf {A} } , B {\displaystyle \mathbf {B} } , Q {\displaystyle \mathbf {Q} } , and R {\displaystyle \mathbf {R} } are all constant. It is noted that there are in general multiple solutions to the algebraic Riccati equation and that the positive definite (or positive semi-definite) solution is the one that is used to compute the feedback gain. The LQ (LQR) problem was elegantly solved by Rudolf E. Kálmán. == Numerical methods for optimal control == Optimal control problems are generally nonlinear and therefore generally do not have analytic solutions (unlike, for example, the linear-quadratic optimal control problem). As a result, it is necessary to employ numerical methods to solve optimal control problems. In the early years of optimal control (c. 1950s to 1980s), the favored approach for solving optimal control problems was that of indirect methods. In an indirect method, the calculus of variations is employed to obtain the first-order optimality conditions. These conditions result in a two-point (or, in the case of a complex problem, a multi-point) boundary-value problem. This boundary-value problem actually has a special structure because it arises from taking the derivative of a Hamiltonian. Thus, the resulting dynamical system is a Hamiltonian system of the form x ˙ = ∂ H ∂ λ λ ˙ = − ∂ H ∂ x {\displaystyle {\begin{aligned}{\dot {\textbf {x}}}&={\frac {\partial H}{\partial {\boldsymbol {\lambda }}}}\\[1.2ex]{\dot {\boldsymbol {\lambda }}}&=-{\frac {\partial H}{\partial {\textbf {x}}}}\end{aligned}}} where H = F + λ T f − μ T h {\displaystyle H=F+{\boldsymbol {\lambda }}^{\mathsf {T}}{\textbf {f}}-{\boldsymbol {\mu }}^{\mathsf {T}}{\textbf {h}}} is the augmented Hamiltonian, and in an indirect method the boundary-value problem is solved (using the appropriate boundary or transversality conditions). The beauty of using an indirect method is that the state and adjoint (i.e., λ {\displaystyle {\boldsymbol {\lambda }}} ) are solved for and the resulting solution is readily verified to be an extremal trajectory. The disadvantage of indirect methods is that the boundary-value problem is often extremely difficult to solve (particularly for problems that span large time intervals or problems with interior point constraints). A well-known software program that implements indirect methods is BNDSCO. The approach that has risen to prominence in numerical optimal control since the 1980s is that of so-called direct methods. 
In a direct method, the state or the control, or both, are approximated using an appropriate function approximation (e.g., polynomial approximation or piecewise constant parameterization). Simultaneously, the cost functional is approximated as a cost function. Then, the coefficients of the function approximations are treated as optimization variables and the problem is "transcribed" to a nonlinear optimization problem of the form: Minimize F ( z ) {\displaystyle F(\mathbf {z} )} subject to the algebraic constraints g ( z ) = 0 h ( z ) ≤ 0 {\displaystyle {\begin{aligned}\mathbf {g} (\mathbf {z} )&=\mathbf {0} \\\mathbf {h} (\mathbf {z} )&\leq \mathbf {0} \end{aligned}}} Depending upon the type of direct method employed, the size of the nonlinear optimization problem can be quite small (e.g., as in a direct shooting or quasilinearization method), moderate (e.g., pseudospectral optimal control) or quite large (e.g., a direct collocation method). In the latter case (i.e., a collocation method), the nonlinear optimization problem may have literally thousands to tens of thousands of variables and constraints. Given the size of many NLPs arising from a direct method, it may appear somewhat counter-intuitive that solving the nonlinear optimization problem is easier than solving the boundary-value problem. It is, however, a fact that the NLP is easier to solve than the boundary-value problem. The reason for the relative ease of computation, particularly of a direct collocation method, is that the NLP is sparse and many well-known software programs exist (e.g., SNOPT) to solve large sparse NLPs. As a result, the range of problems that can be solved via direct methods (particularly direct collocation methods, which are very popular these days) is significantly larger than the range of problems that can be solved via indirect methods. In fact, direct methods have become so popular these days that many people have written elaborate software programs that employ these methods. Such programs include DIRCOL, SOCS, OTIS, GESOP/ASTOS, DITAN, and PyGMO/PyKEP. In recent years, due to the advent of the MATLAB programming language, optimal control software in MATLAB has become more common. Examples of academically developed MATLAB software tools implementing direct methods include RIOTS, DIDO, DIRECT, FALCON.m, and GPOPS, while an example of an industry-developed MATLAB tool is PROPT. These software tools have significantly increased the opportunity for people to explore complex optimal control problems, both for academic research and for industrial problems. Finally, it is noted that general-purpose MATLAB optimization environments such as TOMLAB have made coding complex optimal control problems significantly easier than was previously possible in languages such as C and FORTRAN. == Discrete-time optimal control == The examples thus far have shown continuous time systems and control solutions. In fact, as optimal control solutions are now often implemented digitally, contemporary control theory is primarily concerned with discrete time systems and solutions. The Theory of Consistent Approximations provides conditions under which solutions to a series of increasingly accurate discretized optimal control problems converge to the solution of the original, continuous-time problem. Not all discretization methods have this property, even seemingly obvious ones. 
For instance, using a variable step-size routine to integrate the problem's dynamic equations may generate a gradient which does not converge to zero (or point in the right direction) as the solution is approached. The direct method RIOTS is based on the Theory of Consistent Approximations. == Examples == A common solution strategy in many optimal control problems is to solve for the costate (sometimes called the shadow price) λ ( t ) {\displaystyle \lambda (t)} . The costate summarizes in one number the marginal value of expanding or contracting the state variable in the next turn. This marginal value comprises not only the gains accruing in the next turn but also those associated with the remaining duration of the program. It is nice when λ ( t ) {\displaystyle \lambda (t)} can be solved analytically, but usually the most one can do is describe it sufficiently well that the intuition can grasp the character of the solution and an equation solver can solve numerically for the values. Having obtained λ ( t ) {\displaystyle \lambda (t)} , the turn-t optimal value for the control can usually be solved as a differential equation conditional on knowledge of λ ( t ) {\displaystyle \lambda (t)} . Again it is infrequent, especially in continuous-time problems, that one obtains the value of the control or the state explicitly. Usually, the strategy is to solve for thresholds and regions that characterize the optimal control and use a numerical solver to isolate the actual choice values in time. === Finite time === Consider the problem of a mine owner who must decide at what rate to extract ore from their mine. They own rights to the ore from date 0 {\displaystyle 0} to date T {\displaystyle T} . At date 0 {\displaystyle 0} there is x 0 {\displaystyle x_{0}} ore in the ground, and the time-dependent amount of ore x ( t ) {\displaystyle x(t)} left in the ground declines at the rate u ( t ) {\displaystyle u(t)} at which the mine owner extracts it. The mine owner extracts ore at cost u ( t ) 2 / x ( t ) {\displaystyle u(t)^{2}/x(t)} (the cost of extraction increasing with the square of the extraction speed and the inverse of the amount of ore left) and sells ore at a constant price p {\displaystyle p} . Any ore left in the ground at time T {\displaystyle T} cannot be sold and has no value (there is no "scrap value"). The owner chooses the rate of extraction varying with time u ( t ) {\displaystyle u(t)} to maximize profits over the period of ownership with no time discounting. == See also == == References == == Further reading == Bertsekas, D. P. (1995). Dynamic Programming and Optimal Control. Belmont: Athena. ISBN 1-886529-11-6. Bryson, A. E.; Ho, Y.-C. (1975). Applied Optimal Control: Optimization, Estimation and Control (Revised ed.). New York: John Wiley and Sons. ISBN 0-470-11481-9. Fleming, W. H.; Rishel, R. W. (1975). Deterministic and Stochastic Optimal Control. New York: Springer. ISBN 0-387-90155-8. Kamien, M. I.; Schwartz, N. L. (1991). Dynamic Optimization: The Calculus of Variations and Optimal Control in Economics and Management (Second ed.). New York: Elsevier. ISBN 0-444-01609-0. Kirk, D. E. (1970). Optimal Control Theory: An Introduction. Englewood Cliffs: Prentice-Hall. ISBN 0-13-638098-0. == External links == Victor M. Becerra, ed. (2008). "Optimal control". Scholarpedia. Retrieved 31 December 2022. Computational Optimal Control Dr. Benoît CHACHUAT: Automatic Control Laboratory – Nonlinear Programming, Calculus of Variations and Optimal Control. 
DIDO - MATLAB tool for optimal control GEKKO - Python package for optimal control GESOP – Graphical Environment for Simulation and OPtimization GPOPS-II – General-Purpose MATLAB Optimal Control Software CasADi – Free and open source symbolic framework for optimal control PROPT – MATLAB Optimal Control Software OpenOCL – Open Optimal Control Library Archived 20 April 2019 at the Wayback Machine acados – open-source software framework for nonlinear optimal control Rockit (Rapid Optimal Control kit) – a software framework to quickly prototype optimal control problems Elmer G. Wiens: Optimal Control – Applications of Optimal Control Theory Using the Pontryagin Maximum Principle with interactive models. On Optimal Control by Yu-Chi Ho Pseudospectral optimal control: Part 1 Pseudospectral optimal control: Part 2 Lecture Recordings and Script by Prof. Moritz Diehl, University of Freiburg on Numerical Optimal Control
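As a closing numerical sketch of the linear quadratic regulator described in the linear quadratic control section above (an illustration, not part of the article; it assumes SciPy's solve_continuous_are and a double-integrator plant), the stabilizing solution of the algebraic Riccati equation and the feedback gain K = R⁻¹BᵀS can be computed as follows.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator x1' = x2, x2' = u with quadratic state and control costs.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)            # state weight (positive semi-definite)
R = np.array([[1.0]])    # control weight (positive definite)

# Stabilizing solution of the algebraic Riccati equation; SciPy uses the
# equivalent sign convention A^T S + S A - S B R^{-1} B^T S + Q = 0.
S = solve_continuous_are(A, B, Q, R)

# Feedback gain K = R^{-1} B^T S, giving the optimal control u = -K x.
K = np.linalg.solve(R, B.T @ S)
print(K)                                  # approximately [[1.0, 1.732]]

# The closed-loop matrix A - B K should have eigenvalues with negative real parts.
print(np.linalg.eigvals(A - B @ K))
```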
Wikipedia/Optimal_control_theory
Variational methods in general relativity refers to various mathematical techniques that employ variational calculus in Einstein's theory of general relativity. The most commonly used tools are Lagrangians and Hamiltonians, which are used to derive the Einstein field equations. == Lagrangian methods == The equations of motion in physical theories can often be derived from an object called the Lagrangian. In classical mechanics, this object is usually of the form 'kinetic energy − potential energy'. In general, the Lagrangian is the function which, when integrated over time, produces the action functional. David Hilbert gave an early and classic formulation of the equations of Einstein's general relativity. This used the functional now called the Einstein–Hilbert action. == See also == Palatini action Plebanski action MacDowell–Mansouri action Freidel–Starodubtsev action Mathematics of general relativity Fermat's and energy variation principles in field theory == References ==
Wikipedia/Variational_methods_in_general_relativity
In mathematics, specifically in the calculus of variations, a variation δf of a function f can be concentrated on an arbitrarily small interval, but not a single point. Accordingly, the necessary condition of extremum (functional derivative equal zero) appears in a weak formulation (variational form) integrated with an arbitrary function δf. The fundamental lemma of the calculus of variations is typically used to transform this weak formulation into the strong formulation (differential equation), free of the integration with arbitrary function. The proof usually exploits the possibility to choose δf concentrated on an interval on which f keeps sign (positive or negative). Several versions of the lemma are in use. Basic versions are easy to formulate and prove. More powerful versions are used when needed. == Basic version == If a continuous function f {\displaystyle f} on an open interval ( a , b ) {\displaystyle (a,b)} satisfies the equality ∫ a b f ( x ) h ( x ) d x = 0 {\displaystyle \int _{a}^{b}f(x)h(x)\,\mathrm {d} x=0} for all compactly supported smooth functions h {\displaystyle h} on ( a , b ) {\displaystyle (a,b)} , then f {\displaystyle f} is identically zero. Here "smooth" may be interpreted as "infinitely differentiable", but often is interpreted as "twice continuously differentiable" or "continuously differentiable" or even just "continuous", since these weaker statements may be strong enough for a given task. "Compactly supported" means "vanishes outside [ c , d ] {\displaystyle [c,d]} for some c {\displaystyle c} , d {\displaystyle d} such that a < c < d < b {\displaystyle a<c<d<b} "; but often a weaker statement suffices, assuming only that h {\displaystyle h} (or h {\displaystyle h} and a number of its derivatives) vanishes at the endpoints a {\displaystyle a} , b {\displaystyle b} ; in this case the closed interval [ a , b ] {\displaystyle [a,b]} is used. == Proof == Suppose f ( x ¯ ) ≠ 0 {\displaystyle f({\bar {x}})\neq 0} for some x ¯ ∈ ( a , b ) {\displaystyle {\bar {x}}\in (a,b)} . Since f {\displaystyle f} is continuous, it is nonzero with the same sign for some c , d {\displaystyle c,d} such that a < c < x ¯ < d < b {\displaystyle a<c<{\bar {x}}<d<b} . Without loss of generality, assume f ( x ¯ ) > 0 {\displaystyle f({\bar {x}})>0} . Then take an h {\displaystyle h} that is positive on ( c , d ) {\displaystyle (c,d)} and zero elsewhere, for example h ( x ) = { exp ⁡ ( − 1 ( x − c ) ( d − x ) ) , c < x < d 0 , o t h e r w i s e {\displaystyle h(x)={\begin{cases}\exp \left(-{\frac {1}{(x-c)(d-x)}}\right),&c<x<d\\0,&\mathrm {otherwise} \end{cases}}} . Note this bump function satisfies the properties in the statement, including C ∞ {\displaystyle C^{\infty }} . Since ∫ a b f ( x ) h ( x ) d x > 0 , {\displaystyle \int _{a}^{b}f(x)h(x)dx>0,} we reach a contradiction. == Version for two given functions == If a pair of continuous functions f, g on an interval (a,b) satisfies the equality ∫ a b ( f ( x ) h ( x ) + g ( x ) h ′ ( x ) ) d x = 0 {\displaystyle \int _{a}^{b}(f(x)\,h(x)+g(x)\,h'(x))\,\mathrm {d} x=0} for all compactly supported smooth functions h on (a,b), then g is differentiable, and g' = f everywhere. The special case for g = 0 is just the basic version. Here is the special case for f = 0 (often sufficient). 
If a continuous function g on an interval (a,b) satisfies the equality ∫ a b g ( x ) h ′ ( x ) d x = 0 {\displaystyle \int _{a}^{b}g(x)\,h'(x)\,\mathrm {d} x=0} for all smooth functions h on (a,b) such that h ( a ) = h ( b ) = 0 {\displaystyle h(a)=h(b)=0} , then g is constant. If, in addition, continuous differentiability of g is assumed, then integration by parts reduces both statements to the basic version; this case is attributed to Joseph-Louis Lagrange, while the proof of differentiability of g is due to Paul du Bois-Reymond. == Versions for discontinuous functions == The given functions (f, g) may be discontinuous, provided that they are locally integrable (on the given interval). In this case, Lebesgue integration is meant, the conclusions hold almost everywhere (thus, in all continuity points), and differentiability of g is interpreted as local absolute continuity (rather than continuous differentiability). Sometimes the given functions are assumed to be piecewise continuous, in which case Riemann integration suffices, and the conclusions are stated everywhere except the finite set of discontinuity points. == Higher derivatives == If a tuple of continuous functions f 0 , f 1 , … , f n {\displaystyle f_{0},f_{1},\dots ,f_{n}} on an interval (a,b) satisfies the equality ∫ a b ( f 0 ( x ) h ( x ) + f 1 ( x ) h ′ ( x ) + ⋯ + f n ( x ) h ( n ) ( x ) ) d x = 0 {\displaystyle \int _{a}^{b}(f_{0}(x)\,h(x)+f_{1}(x)\,h'(x)+\dots +f_{n}(x)\,h^{(n)}(x))\,\mathrm {d} x=0} for all compactly supported smooth functions h on (a,b), then there exist continuously differentiable functions u 0 , u 1 , … , u n − 1 {\displaystyle u_{0},u_{1},\dots ,u_{n-1}} on (a,b) such that f 0 = u 0 ′ , f 1 = u 0 + u 1 ′ , f 2 = u 1 + u 2 ′ ⋮ f n − 1 = u n − 2 + u n − 1 ′ , f n = u n − 1 {\displaystyle {\begin{aligned}f_{0}&=u'_{0},\\f_{1}&=u_{0}+u'_{1},\\f_{2}&=u_{1}+u'_{2}\\\vdots \\f_{n-1}&=u_{n-2}+u'_{n-1},\\f_{n}&=u_{n-1}\end{aligned}}} everywhere. This necessary condition is also sufficient, since the integrand becomes ( u 0 h ) ′ + ( u 1 h ′ ) ′ + ⋯ + ( u n − 1 h ( n − 1 ) ) ′ . {\displaystyle (u_{0}h)'+(u_{1}h')'+\dots +(u_{n-1}h^{(n-1)})'.} The case n = 1 is just the version for two given functions, since f = f 0 = u 0 ′ {\displaystyle f=f_{0}=u'_{0}} and f 1 = u 0 , {\displaystyle f_{1}=u_{0},} thus, f 0 − f 1 ′ = 0. {\displaystyle f_{0}-f'_{1}=0.} In contrast, the case n=2 does not lead to the relation f 0 − f 1 ′ + f 2 ″ = 0 , {\displaystyle f_{0}-f'_{1}+f''_{2}=0,} since the function f 2 = u 1 {\displaystyle f_{2}=u_{1}} need not be differentiable twice. The sufficient condition f 0 − f 1 ′ + f 2 ″ = 0 {\displaystyle f_{0}-f'_{1}+f''_{2}=0} is not necessary. Rather, the necessary and sufficient condition may be written as f 0 − ( f 1 − f 2 ′ ) ′ = 0 {\displaystyle f_{0}-(f_{1}-f'_{2})'=0} for n=2, f 0 − ( f 1 − ( f 2 − f 3 ′ ) ′ ) ′ = 0 {\displaystyle f_{0}-(f_{1}-(f_{2}-f'_{3})')'=0} for n=3, and so on; in general, the brackets cannot be opened because of non-differentiability. == Vector-valued functions == Generalization to vector-valued functions ( a , b ) → R d {\displaystyle (a,b)\to \mathbb {R} ^{d}} is straightforward; one applies the results for scalar functions to each coordinate separately, or treats the vector-valued case from the beginning. 
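The sufficiency claim in the higher-derivatives version above, namely that the integrand becomes a total derivative, can be checked symbolically for n = 2. The following SymPy sketch (an illustration, not part of the original text) verifies the identity f0·h + f1·h′ + f2·h″ = (u0·h)′ + (u1·h′)′.

```python
import sympy as sp

x = sp.Symbol('x')
u0 = sp.Function('u0')(x)
u1 = sp.Function('u1')(x)
h = sp.Function('h')(x)

# The structure required by the lemma for n = 2:
f0 = sp.diff(u0, x)
f1 = u0 + sp.diff(u1, x)
f2 = u1

integrand = f0*h + f1*sp.diff(h, x) + f2*sp.diff(h, x, 2)
total_derivative = sp.diff(u0*h, x) + sp.diff(u1*sp.diff(h, x), x)

# The integrand is an exact derivative, so its integral against any compactly
# supported h vanishes, which is the sufficiency claim.
print(sp.simplify(integrand - total_derivative))   # 0
```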
== Multivariable functions == If a continuous multivariable function f on an open set Ω ⊂ R d {\displaystyle \Omega \subset \mathbb {R} ^{d}} satisfies the equality ∫ Ω f ( x ) h ( x ) d x = 0 {\displaystyle \int _{\Omega }f(x)\,h(x)\,\mathrm {d} x=0} for all compactly supported smooth functions h on Ω, then f is identically zero. Similarly to the basic version, one may consider a continuous function f on the closure of Ω, assuming that h vanishes on the boundary of Ω (rather than compactly supported). Here is a version for discontinuous multivariable functions. Let Ω ⊂ R d {\displaystyle \Omega \subset \mathbb {R} ^{d}} be an open set, and f ∈ L 2 ( Ω ) {\displaystyle f\in L^{2}(\Omega )} satisfy the equality ∫ Ω f ( x ) h ( x ) d x = 0 {\displaystyle \int _{\Omega }f(x)\,h(x)\,\mathrm {d} x=0} for all compactly supported smooth functions h on Ω. Then f=0 (in L2, that is, almost everywhere). == Applications == This lemma is used to prove that extrema of the functional J [ y ] = ∫ x 0 x 1 L ( t , y ( t ) , y ˙ ( t ) ) d t {\displaystyle J[y]=\int _{x_{0}}^{x_{1}}L(t,y(t),{\dot {y}}(t))\,\mathrm {d} t} are weak solutions y : [ x 0 , x 1 ] → V {\displaystyle y:[x_{0},x_{1}]\to V} (for an appropriate vector space V {\displaystyle V} ) of the Euler–Lagrange equation ∂ L ( t , y ( t ) , y ˙ ( t ) ) ∂ y = d d t ∂ L ( t , y ( t ) , y ˙ ( t ) ) ∂ y ˙ . {\displaystyle {\partial L(t,y(t),{\dot {y}}(t)) \over \partial y}={\mathrm {d} \over \mathrm {d} t}{\partial L(t,y(t),{\dot {y}}(t)) \over \partial {\dot {y}}}.} The Euler–Lagrange equation plays a prominent role in classical mechanics and differential geometry. == Notes == == References == Jost, Jürgen; Li-Jost, Xianqing (1998), Calculus of variations, Cambridge University Gelfand, I.M.; Fomin, S.V. (1963), Calculus of variations, Prentice-Hall (transl. from Russian). Hestenes, Magnus R. (1966), Calculus of variations and optimal control theory, John Wiley Giaquinta, Mariano; Hildebrandt, Stefan (1996), Calculus of Variations I, Springer Liberzon, Daniel (2012), Calculus of Variations and Optimal Control Theory, Princeton University Press
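As a small illustration of the application to the Euler–Lagrange equation described above (not part of the original text), SymPy's euler_equations carries out, for a toy Lagrangian, the integration by parts that the fundamental lemma turns into the strong-form differential equation.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.Symbol('t')
y = sp.Function('y')

# A toy harmonic-oscillator Lagrangian L(t, y, y') = y'**2/2 - y**2/2.
L = sp.diff(y(t), t)**2/2 - y(t)**2/2

# euler_equations performs the integration by parts that the fundamental lemma
# turns into the strong (differential-equation) form of the extremum condition.
print(euler_equations(L, y(t), t))
# [Eq(-y(t) - Derivative(y(t), (t, 2)), 0)]   i.e.  y'' + y = 0
```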
Wikipedia/Fundamental_lemma_of_calculus_of_variations
In quantum mechanics, the variational method is one way of finding approximations to the lowest energy eigenstate or ground state, and some excited states. This allows calculating approximate wavefunctions such as molecular orbitals. The basis for this method is the variational principle. The method consists of choosing a "trial wavefunction" depending on one or more parameters, and finding the values of these parameters for which the expectation value of the energy is the lowest possible. The wavefunction obtained by fixing the parameters to such values is then an approximation to the ground state wavefunction, and the expectation value of the energy in that state is an upper bound to the ground state energy. The Hartree–Fock method, density matrix renormalization group, and Ritz method apply the variational method. == Description == Suppose we are given a Hilbert space and a Hermitian operator over it called the Hamiltonian H {\displaystyle H} . Ignoring complications about continuous spectra, we consider the discrete spectrum of H {\displaystyle H} and a basis of eigenvectors { | ψ λ ⟩ } {\displaystyle \{|\psi _{\lambda }\rangle \}} (see spectral theorem for Hermitian operators for the mathematical background): ⟨ ψ λ 1 | ψ λ 2 ⟩ = δ λ 1 λ 2 , {\displaystyle \left\langle \psi _{\lambda _{1}}|\psi _{\lambda _{2}}\right\rangle =\delta _{\lambda _{1}\lambda _{2}},} where δ i j {\displaystyle \delta _{ij}} is the Kronecker delta δ i j = { 0 if i ≠ j , 1 if i = j , {\displaystyle \delta _{ij}={\begin{cases}0&{\text{if }}i\neq j,\\1&{\text{if }}i=j,\end{cases}}} and the { | ψ λ ⟩ } {\displaystyle \{|\psi _{\lambda }\rangle \}} satisfy the eigenvalue equation H | ψ λ ⟩ = λ | ψ λ ⟩ . {\displaystyle H\left|\psi _{\lambda }\right\rangle =\lambda \left|\psi _{\lambda }\right\rangle .} Once again ignoring complications involved with a continuous spectrum of H {\displaystyle H} , suppose the spectrum of H {\displaystyle H} is bounded from below and that its greatest lower bound is E0. The expectation value of H {\displaystyle H} in a state | ψ ⟩ {\displaystyle |\psi \rangle } is then ⟨ ψ | H | ψ ⟩ = ∑ λ 1 , λ 2 ∈ S p e c ( H ) ⟨ ψ | ψ λ 1 ⟩ ⟨ ψ λ 1 | H | ψ λ 2 ⟩ ⟨ ψ λ 2 | ψ ⟩ = ∑ λ ∈ S p e c ( H ) λ | ⟨ ψ λ | ψ ⟩ | 2 ≥ ∑ λ ∈ S p e c ( H ) E 0 | ⟨ ψ λ | ψ ⟩ | 2 = E 0 ⟨ ψ | ψ ⟩ . {\displaystyle {\begin{aligned}\left\langle \psi \right|H\left|\psi \right\rangle &=\sum _{\lambda _{1},\lambda _{2}\in \mathrm {Spec} (H)}\left\langle \psi |\psi _{\lambda _{1}}\right\rangle \left\langle \psi _{\lambda _{1}}\right|H\left|\psi _{\lambda _{2}}\right\rangle \left\langle \psi _{\lambda _{2}}|\psi \right\rangle \\&=\sum _{\lambda \in \mathrm {Spec} (H)}\lambda \left|\left\langle \psi _{\lambda }|\psi \right\rangle \right|^{2}\geq \sum _{\lambda \in \mathrm {Spec} (H)}E_{0}\left|\left\langle \psi _{\lambda }|\psi \right\rangle \right|^{2}=E_{0}\langle \psi |\psi \rangle .\end{aligned}}} If we were to vary over all possible states with norm 1 trying to minimize the expectation value of H {\displaystyle H} , the lowest value would be E 0 {\displaystyle E_{0}} and the corresponding state would be the ground state, as well as an eigenstate of H {\displaystyle H} . Varying over the entire Hilbert space is usually too complicated for physical calculations, and a subspace of the entire Hilbert space is chosen, parametrized by some (real) differentiable parameters αi (i = 1, 2, ..., N). The choice of the subspace is called the ansatz. 
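As a numerical illustration of minimizing over a parametrized ansatz (a toy sketch, not part of the original text; it assumes NumPy and SciPy, a finite-difference grid, and a hypothetical quartic-oscillator Hamiltonian H = −½ d²/dx² + x⁴ with ħ = m = 1), one can restrict to a one-parameter family of Gaussians exp(−αx²) and minimize the expectation value of the energy.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Quartic oscillator H = -1/2 d^2/dx^2 + x^4 on a uniform grid (hbar = m = 1).
x = np.linspace(-5.0, 5.0, 800)
dx = x[1] - x[0]
H = (np.diag(1.0/dx**2 + x**4)
     + np.diag(-0.5/dx**2 * np.ones(len(x) - 1), 1)
     + np.diag(-0.5/dx**2 * np.ones(len(x) - 1), -1))

def energy(alpha):
    """Rayleigh quotient <psi|H|psi>/<psi|psi> for the Gaussian trial exp(-alpha x^2)."""
    psi = np.exp(-alpha * x**2)
    return psi @ H @ psi / (psi @ psi)

best = minimize_scalar(energy, bounds=(0.2, 5.0), method='bounded')
E_grid = np.linalg.eigvalsh(H).min()

print(best.x, best.fun)   # optimal alpha (about 0.91) and the variational bound (about 0.68)
print(E_grid)             # grid ground-state energy (about 0.67), never above the bound
```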
Some choices of ansatzes lead to better approximations than others, therefore the choice of ansatz is important. Let's assume there is some overlap between the ansatz and the ground state (otherwise, it's a bad ansatz). We wish to normalize the ansatz, so we have the constraints ⟨ ψ ( α ) | ψ ( α ) ⟩ = 1 {\displaystyle \left\langle \psi (\mathbf {\alpha } )|\psi (\mathbf {\alpha } )\right\rangle =1} and we wish to minimize ε ( α ) = ⟨ ψ ( α ) | H | ψ ( α ) ⟩ . {\displaystyle \varepsilon (\mathbf {\alpha } )=\left\langle \psi (\mathbf {\alpha } )\right|H\left|\psi (\mathbf {\alpha } )\right\rangle .} This, in general, is not an easy task, since we are looking for a global minimum and finding the zeroes of the partial derivatives of ε over all αi is not sufficient. If ψ(α) is expressed as a linear combination of other functions (αi being the coefficients), as in the Ritz method, there is only one minimum and the problem is straightforward. There are other, non-linear methods, however, such as the Hartree–Fock method, that are also not characterized by a multitude of minima and are therefore comfortable in calculations. There is an additional complication in the calculations described. As ε tends toward E0 in minimization calculations, there is no guarantee that the corresponding trial wavefunctions will tend to the actual wavefunction. This has been demonstrated by calculations using a modified harmonic oscillator as a model system, in which an exactly solvable system is approached using the variational method. A wavefunction different from the exact one is obtained by use of the method described above. Although usually limited to calculations of the ground state energy, this method can be applied in certain cases to calculations of excited states as well. If the ground state wavefunction is known, either by the method of variation or by direct calculation, a subset of the Hilbert space can be chosen which is orthogonal to the ground state wavefunction. | ψ ⟩ = | ψ test ⟩ − ⟨ ψ g r | ψ test ⟩ | ψ gr ⟩ {\displaystyle \left|\psi \right\rangle =\left|\psi _{\text{test}}\right\rangle -\left\langle \psi _{\mathrm {gr} }|\psi _{\text{test}}\right\rangle \left|\psi _{\text{gr}}\right\rangle } The resulting minimum is usually not as accurate as for the ground state, as any difference between the true ground state and ψ gr {\displaystyle \psi _{\text{gr}}} results in a lower excited energy. This defect is worsened with each higher excited state. In another formulation: E ground ≤ ⟨ ϕ | H | ϕ ⟩ . {\displaystyle E_{\text{ground}}\leq \left\langle \phi \right|H\left|\phi \right\rangle .} This holds for any trial φ since, by definition, the ground state wavefunction has the lowest energy, and any trial wavefunction will have energy greater than or equal to it. Proof: φ can be expanded as a linear combination of the actual eigenfunctions of the Hamiltonian (which we assume to be normalized and orthogonal): ϕ = ∑ n c n ψ n . {\displaystyle \phi =\sum _{n}c_{n}\psi _{n}.} Then, to find the expectation value of the Hamiltonian: ⟨ H ⟩ = ⟨ ϕ | H | ϕ ⟩ = ⟨ ∑ n c n ψ n | H | ∑ m c m ψ m ⟩ = ∑ n ∑ m ⟨ c n ∗ ψ n | E m | c m ψ m ⟩ = ∑ n ∑ m c n ∗ c m E m ⟨ ψ n | ψ m ⟩ = ∑ n | c n | 2 E n . 
{\displaystyle {\begin{aligned}\left\langle H\right\rangle =\left\langle \phi \right|H\left|\phi \right\rangle ={}&\left\langle \sum _{n}c_{n}\psi _{n}\right|H\left|\sum _{m}c_{m}\psi _{m}\right\rangle \\={}&\sum _{n}\sum _{m}\left\langle c_{n}^{*}\psi _{n}\right|E_{m}\left|c_{m}\psi _{m}\right\rangle \\={}&\sum _{n}\sum _{m}c_{n}^{*}c_{m}E_{m}\left\langle \psi _{n}|\psi _{m}\right\rangle \\={}&\sum _{n}|c_{n}|^{2}E_{n}.\end{aligned}}} Now, the ground state energy is the lowest energy possible, i.e., E n ≥ E ground {\displaystyle E_{n}\geq E_{\text{ground}}} . Therefore, if the guessed wave function φ is normalized: ⟨ ϕ | H | ϕ ⟩ ≥ E ground ∑ n | c n | 2 = E ground . {\displaystyle \left\langle \phi \right|H\left|\phi \right\rangle \geq E_{\text{ground}}\sum _{n}|c_{n}|^{2}=E_{\text{ground}}.} === In general === For a Hamiltonian H that describes the studied system and any normalizable function Ψ with arguments appropriate for the unknown wave function of the system, we define the functional ε [ Ψ ] = ⟨ Ψ | H ^ | Ψ ⟩ ⟨ Ψ | Ψ ⟩ . {\displaystyle \varepsilon \left[\Psi \right]={\frac {\left\langle \Psi \right|{\hat {H}}\left|\Psi \right\rangle }{\left\langle \Psi |\Psi \right\rangle }}.} The variational principle states that ε ≥ E 0 {\displaystyle \varepsilon \geq E_{0}} , where E 0 {\displaystyle E_{0}} is the energy of the lowest-lying eigenstate (the ground state) of the Hamiltonian, with ε = E 0 {\displaystyle \varepsilon =E_{0}} if and only if Ψ {\displaystyle \Psi } is exactly equal to the wave function of the ground state of the studied system. The variational principle formulated above is the basis of the variational method used in quantum mechanics and quantum chemistry to find approximations to the ground state. Another facet of variational principles in quantum mechanics is that since Ψ {\displaystyle \Psi } and Ψ † {\displaystyle \Psi ^{\dagger }} can be varied separately (a fact arising due to the complex nature of the wave function), the quantities can be varied in principle just one at a time. == Helium atom ground state == The helium atom consists of two electrons with mass m and electric charge −e, around an essentially fixed nucleus of mass M ≫ m and charge +2e. The Hamiltonian for it, neglecting the fine structure, is: H = − ℏ 2 2 m ( ∇ 1 2 + ∇ 2 2 ) − e 2 4 π ε 0 ( 2 r 1 + 2 r 2 − 1 | r 1 − r 2 | ) {\displaystyle H=-{\frac {\hbar ^{2}}{2m}}\left(\nabla _{1}^{2}+\nabla _{2}^{2}\right)-{\frac {e^{2}}{4\pi \varepsilon _{0}}}\left({\frac {2}{r_{1}}}+{\frac {2}{r_{2}}}-{\frac {1}{|\mathbf {r} _{1}-\mathbf {r} _{2}|}}\right)} where ħ is the reduced Planck constant, ε0 is the vacuum permittivity, ri (for i = 1, 2) is the distance of the i-th electron from the nucleus, and |r1 − r2| is the distance between the two electrons. If the term Vee = e2/(4πε0|r1 − r2|), representing the repulsion between the two electrons, were excluded, the Hamiltonian would become the sum of two hydrogen-like atom Hamiltonians with nuclear charge +2e. The ground state energy would then be 8E1 = −109 eV, where E1 = −13.6 eV is the ground-state energy of the hydrogen atom, and its ground state wavefunction would be the product of two wavefunctions for the ground state of hydrogen-like atoms: ψ ( r 1 , r 2 ) = Z 3 π a 0 3 e − Z ( r 1 + r 2 ) / a 0 . {\displaystyle \psi (\mathbf {r} _{1},\mathbf {r} _{2})={\frac {Z^{3}}{\pi a_{0}^{3}}}e^{-Z\left(r_{1}+r_{2}\right)/a_{0}}.} where a0 is the Bohr radius and Z = 2, helium's nuclear charge. 
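The variational optimization described in the following paragraph, which treats the nuclear charge as an adjustable screening parameter, is easy to reproduce numerically. The sketch below assumes the result quoted there, ⟨H⟩ = (−2Z² + (27/4)Z)E1 with E1 = −13.6 eV, and simply minimizes it over Z; it recovers Z = 27/16 and the upper bound of roughly −77.5 eV.

```python
from scipy.optimize import minimize_scalar

E1 = -13.6  # eV, ground-state energy of the hydrogen atom

def expectation(Z):
    """<H>(Z) for the screened product trial wavefunction, as quoted in the text."""
    return (-2.0 * Z**2 + 27.0 / 4.0 * Z) * E1

res = minimize_scalar(expectation, bounds=(1.0, 2.0), method="bounded")
print(res.x, 27.0 / 16.0)   # optimal effective charge, ~1.6875
print(res.fun)              # ~ -77.5 eV, an upper bound on the measured -78.975 eV
```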
The expectation value of the total Hamiltonian H (including the term Vee) in the state described by the product wavefunction ψ above will be an upper bound for its ground state energy. ⟨Vee⟩ is −5E1/2 = 34 eV, so ⟨H⟩ is 8E1 − 5E1/2 = −75 eV. A tighter upper bound can be found by using a better trial wavefunction with 'tunable' parameters. Each electron can be thought of as seeing the nuclear charge partially "shielded" by the other electron, so we can use a trial wavefunction of the same form as ψ above but with an "effective" nuclear charge Z < 2 treated as a variational parameter. The expectation value of H in this state is: ⟨ H ⟩ = [ − 2 Z 2 + 27 4 Z ] E 1 {\displaystyle \left\langle H\right\rangle =\left[-2Z^{2}+{\frac {27}{4}}Z\right]E_{1}} This is minimal for Z = 27/16, implying that shielding reduces the effective charge to ~1.69. Substituting this value of Z into the expression for ⟨H⟩ yields 729E1/128 = −77.5 eV, within 2% of the experimental value, −78.975 eV. Even closer estimates of this energy have been found using more complicated trial wave functions with more parameters. This is done in physical chemistry via variational Monte Carlo. == References ==
Wikipedia/Variational_method_(quantum_mechanics)
The Rayleigh–Ritz method is a direct numerical method of approximating eigenvalues, originated in the context of solving physical boundary value problems and named after Lord Rayleigh and Walther Ritz. In this method, an infinite-dimensional linear operator is approximated by a finite-dimensional compression, on which we can use an eigenvalue algorithm. It is used in all applications that involve approximating eigenvalues and eigenvectors, often under different names. In quantum mechanics, where a system of particles is described using a Hamiltonian, the Ritz method uses trial wave functions to approximate the ground state eigenfunction with the lowest energy. In the finite element method context, mathematically the same algorithm is commonly called the Ritz-Galerkin method. The Rayleigh–Ritz method or Ritz method terminology is typical in mechanical and structural engineering to approximate the eigenmodes and resonant frequencies of a structure. == Naming and attribution == The name of the method and its origin story have been debated by historians. It has been called Ritz method after Walther Ritz, since the numerical procedure has been published by Walther Ritz in 1908-1909. According to A. W. Leissa, Lord Rayleigh wrote a paper congratulating Ritz on his work in 1911, but stating that he himself had used Ritz's method in many places in his book and in another publication. This statement, although later disputed, and the fact that the method in the trivial case of a single vector results in the Rayleigh quotient make the case for the name Rayleigh–Ritz method. According to S. Ilanko, citing Richard Courant, both Lord Rayleigh and Walther Ritz independently conceived the idea of utilizing the equivalence between boundary value problems of partial differential equations on the one hand and problems of the calculus of variations on the other hand for numerical calculation of the solutions, by substituting for the variational problems simpler approximating extremum problems in which a finite number of parameters need to be determined. Ironically for the debate, the modern justification of the algorithm drops the calculus of variations in favor of the simpler and more general approach of orthogonal projection as in Galerkin method named after Boris Galerkin, thus leading also to the Ritz-Galerkin method naming. == Method == Let T {\displaystyle T} be a linear operator on a Hilbert space H {\displaystyle {\mathcal {H}}} , with inner product ( ⋅ , ⋅ ) {\displaystyle (\cdot ,\cdot )} . Now consider a finite set of functions L = { φ 1 , . . . , φ n } {\displaystyle {\mathcal {L}}=\{\varphi _{1},...,\varphi _{n}\}} . Depending on the application these functions may be: A subset of the orthonormal basis of the original operator; A space of splines (as in the Galerkin method); A set of functions which approximate the eigenfunctions of the operator. One could use the orthonormal basis generated from the eigenfunctions of the operator, which will produce diagonal approximating matrices, but in this case we would have already had to calculate the spectrum. We now approximate T {\displaystyle T} by T L {\displaystyle T_{\mathcal {L}}} , which is defined as the matrix with entries ( T L ) i , j = ( T φ i , φ j ) . {\displaystyle (T_{\mathcal {L}})_{i,j}=(T\varphi _{i},\varphi _{j}).} and solve the eigenvalue problem T L u = λ u {\displaystyle T_{\mathcal {L}}u=\lambda u} . 
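As a small illustration of forming the compression, consider the operator Tu = −u″ + xu on [0, π] with Dirichlet boundary conditions and the orthonormal sine basis φk(x) = √(2/π) sin(kx); both the operator and the basis are hypothetical choices made for this sketch, not taken from the text. The entries (Tφi, φj) are assembled by numerical quadrature and the small eigenvalue problem is then solved directly.

```python
import numpy as np
from scipy.integrate import quad

def compression(n):
    """Matrix of (T phi_i, phi_j) for T u = -u'' + x*u in the first n sine modes on [0, pi]."""
    phi = lambda k, t: np.sqrt(2.0 / np.pi) * np.sin(k * t)
    T_L = np.empty((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            # -phi_i'' = i^2 phi_i, so the kinetic part contributes i^2 on the diagonal only;
            # the potential part x*phi_i*phi_j is integrated numerically.
            pot = quad(lambda t: t * phi(i, t) * phi(j, t), 0.0, np.pi)[0]
            T_L[i - 1, j - 1] = (i**2 if i == j else 0.0) + pot
    return T_L

for n in (2, 4, 8):
    print(n, np.linalg.eigvalsh(compression(n))[:2])
# As the basis grows, the lowest Ritz values decrease monotonically toward the true eigenvalues.
```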
It can be shown that the matrix T L {\displaystyle T_{\mathcal {L}}} is the compression of T {\displaystyle T} to L {\displaystyle {\mathcal {L}}} . For differential operators (such as Sturm-Liouville operators), the inner product ( ⋅ , ⋅ ) {\displaystyle (\cdot ,\cdot )} can be replaced by the weak formulation A ( ⋅ , ⋅ ) {\displaystyle {\mathcal {A}}(\cdot ,\cdot )} . If a subset of the orthonormal basis was used to find the matrix, the eigenvectors of T L {\displaystyle T_{\mathcal {L}}} will be linear combinations of orthonormal basis functions, and as a result they will be approximations of the eigenvectors of T {\displaystyle T} . == Properties == === Spectral pollution === It is possible for the Rayleigh–Ritz method to produce values which do not converge to actual values in the spectrum of the operator as the truncation gets large. These values are known as spectral pollution. In some cases (such as for the Schrödinger equation), there is no approximation which both includes all eigenvalues of the equation, and contains no pollution. The spectrum of the compression (and thus pollution) is bounded by the numerical range of the operator; in many cases it is bounded by a subset of the numerical range known as the essential numerical range. == For matrix eigenvalue problems == In numerical linear algebra, the Rayleigh–Ritz method is commonly applied to approximate an eigenvalue problem A x = λ x {\displaystyle A\mathbf {x} =\lambda \mathbf {x} } for the matrix A ∈ C N × N {\displaystyle A\in \mathbb {C} ^{N\times N}} of size N {\displaystyle N} using a projected matrix of a smaller size m < N {\displaystyle m<N} , generated from a given matrix V ∈ C N × m {\displaystyle V\in \mathbb {C} ^{N\times m}} with orthonormal columns. The matrix version of the algorithm is the most simple: Compute the m × m {\displaystyle m\times m} matrix V ∗ A V {\displaystyle V^{*}AV} , where V ∗ {\displaystyle V^{*}} denotes the complex-conjugate transpose of V {\displaystyle V} Solve the eigenvalue problem V ∗ A V y i = μ i y i {\displaystyle V^{*}AV\mathbf {y} _{i}=\mu _{i}\mathbf {y} _{i}} Compute the Ritz vectors x ~ i = V y i {\displaystyle {\tilde {\mathbf {x} }}_{i}=V\mathbf {y} _{i}} and the Ritz value λ ~ i = μ i {\displaystyle {\tilde {\lambda }}_{i}=\mu _{i}} Output approximations ( λ ~ i , x ~ i ) {\displaystyle ({\tilde {\lambda }}_{i},{\tilde {\mathbf {x} }}_{i})} , called the Ritz pairs, to eigenvalues and eigenvectors of the original matrix A {\displaystyle A} . If the subspace with the orthonormal basis given by the columns of the matrix V ∈ C N × m {\displaystyle V\in \mathbb {C} ^{N\times m}} contains k ≤ m {\displaystyle k\leq m} vectors that are close to eigenvectors of the matrix A {\displaystyle A} , the Rayleigh–Ritz method above finds k {\displaystyle k} Ritz vectors that well approximate these eigenvectors. The easily computable quantity ‖ A x ~ i − λ ~ i x ~ i ‖ {\displaystyle \|A{\tilde {\mathbf {x} }}_{i}-{\tilde {\lambda }}_{i}{\tilde {\mathbf {x} }}_{i}\|} determines the accuracy of such an approximation for every Ritz pair. 
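For the matrix case just described, the whole procedure amounts to a few lines of linear algebra. The sketch below assumes a Hermitian A, so that numpy's eigh applies (a general matrix would need eig instead), and also reports the residual norms ‖Ax̃i − λ̃ix̃i‖ discussed above; the test matrix and the subspace are randomly generated purely for illustration.

```python
import numpy as np

def rayleigh_ritz(A, V):
    """Ritz values, Ritz vectors and residual norms of A on the span of V's orthonormal columns."""
    mu, Y = np.linalg.eigh(V.conj().T @ A @ V)          # project and solve the small problem
    X = V @ Y                                           # Ritz vectors of the original problem
    res = np.linalg.norm(A @ X - X * mu, axis=0)        # accuracy indicator for each Ritz pair
    return mu, X, res

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
A = (A + A.T) / 2.0                                     # make the test matrix symmetric
V, _ = np.linalg.qr(rng.standard_normal((100, 5)))      # orthonormal basis of a random subspace
mu, X, res = rayleigh_ritz(A, V)
print(mu)    # Ritz values; for a Hermitian A they lie between its extreme eigenvalues
print(res)   # generally nonzero, since a random subspace contains no exact eigenvectors
```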
In the easiest case m = 1 {\displaystyle m=1} , the N × m {\displaystyle N\times m} matrix V {\displaystyle V} turns into a unit column-vector v {\displaystyle v} , the m × m {\displaystyle m\times m} matrix V ∗ A V {\displaystyle V^{*}AV} is a scalar that is equal to the Rayleigh quotient ρ ( v ) = v ∗ A v / v ∗ v {\displaystyle \rho (v)=v^{*}Av/v^{*}v} , the only i = 1 {\displaystyle i=1} solution to the eigenvalue problem is y i = 1 {\displaystyle y_{i}=1} and μ i = ρ ( v ) {\displaystyle \mu _{i}=\rho (v)} , and the only one Ritz vector is v {\displaystyle v} itself. Thus, the Rayleigh–Ritz method turns into computing of the Rayleigh quotient if m = 1 {\displaystyle m=1} . Another useful connection to the Rayleigh quotient is that μ i = ρ ( v i ) {\displaystyle \mu _{i}=\rho (v_{i})} for every Ritz pair ( λ ~ i , x ~ i ) {\displaystyle ({\tilde {\lambda }}_{i},{\tilde {\mathbf {x} }}_{i})} , allowing to derive some properties of Ritz values μ i {\displaystyle \mu _{i}} from the corresponding theory for the Rayleigh quotient. For example, if A {\displaystyle A} is a Hermitian matrix, its Rayleigh quotient (and thus its every Ritz value) is real and takes values within the closed interval of the smallest and largest eigenvalues of A {\displaystyle A} . === Example === The matrix A = [ 2 0 0 0 2 1 0 1 2 ] {\displaystyle A={\begin{bmatrix}2&0&0\\0&2&1\\0&1&2\end{bmatrix}}} has eigenvalues 1 , 2 , 3 {\displaystyle 1,2,3} and the corresponding eigenvectors x λ = 1 = [ 0 1 − 1 ] , x λ = 2 = [ 1 0 0 ] , x λ = 3 = [ 0 1 1 ] . {\displaystyle \mathbf {x} _{\lambda =1}={\begin{bmatrix}0\\1\\-1\end{bmatrix}},\quad \mathbf {x} _{\lambda =2}={\begin{bmatrix}1\\0\\0\end{bmatrix}},\quad \mathbf {x} _{\lambda =3}={\begin{bmatrix}0\\1\\1\end{bmatrix}}.} Let us take V = [ 0 0 1 0 0 1 ] , {\displaystyle V={\begin{bmatrix}0&0\\1&0\\0&1\end{bmatrix}},} then V ∗ A V = [ 2 1 1 2 ] {\displaystyle V^{*}AV={\begin{bmatrix}2&1\\1&2\end{bmatrix}}} with eigenvalues 1 , 3 {\displaystyle 1,3} and the corresponding eigenvectors y μ = 1 = [ 1 − 1 ] , y μ = 3 = [ 1 1 ] , {\displaystyle \mathbf {y} _{\mu =1}={\begin{bmatrix}1\\-1\end{bmatrix}},\quad \mathbf {y} _{\mu =3}={\begin{bmatrix}1\\1\end{bmatrix}},} so that the Ritz values are 1 , 3 {\displaystyle 1,3} and the Ritz vectors are x ~ λ ~ = 1 = [ 0 1 − 1 ] , x ~ λ ~ = 3 = [ 0 1 1 ] . {\displaystyle \mathbf {\tilde {x}} _{{\tilde {\lambda }}=1}={\begin{bmatrix}0\\1\\-1\end{bmatrix}},\quad \mathbf {\tilde {x}} _{{\tilde {\lambda }}=3}={\begin{bmatrix}0\\1\\1\end{bmatrix}}.} We observe that each one of the Ritz vectors is exactly one of the eigenvectors of A {\displaystyle A} for the given V {\displaystyle V} as well as the Ritz values give exactly two of the three eigenvalues of A {\displaystyle A} . A mathematical explanation for the exact approximation is based on the fact that the column space of the matrix V {\displaystyle V} happens to be exactly the same as the subspace spanned by the two eigenvectors x λ = 1 {\displaystyle \mathbf {x} _{\lambda =1}} and x λ = 3 {\displaystyle \mathbf {x} _{\lambda =3}} in this example. == For matrix singular value problems == Truncated singular value decomposition (SVD) in numerical linear algebra can also use the Rayleigh–Ritz method to find approximations to left and right singular vectors of the matrix M ∈ C M × N {\displaystyle M\in \mathbb {C} ^{M\times N}} of size M × N {\displaystyle M\times N} in given subspaces by turning the singular value problem into an eigenvalue problem. 
=== Using the normal matrix === The definition of the singular value σ {\displaystyle \sigma } and the corresponding left and right singular vectors is M v = σ u {\displaystyle Mv=\sigma u} and M ∗ u = σ v {\displaystyle M^{*}u=\sigma v} . Having found one set (left or right) of approximate singular vectors and singular values by naively applying the Rayleigh–Ritz method to the Hermitian normal matrix M ∗ M ∈ C N × N {\displaystyle M^{*}M\in \mathbb {C} ^{N\times N}} or M M ∗ ∈ C M × M {\displaystyle MM^{*}\in \mathbb {C} ^{M\times M}} , whichever is of smaller size, one could determine the other set of left or right singular vectors simply by dividing by the singular values, i.e., u = M v / σ {\displaystyle u=Mv/\sigma } and v = M ∗ u / σ {\displaystyle v=M^{*}u/\sigma } . However, the division is unstable or fails for small or zero singular values. An alternative approach, e.g., defining the normal matrix as A = M ∗ M ∈ C N × N {\displaystyle A=M^{*}M\in \mathbb {C} ^{N\times N}} of size N × N {\displaystyle N\times N} , takes advantage of the fact that for a given N × m {\displaystyle N\times m} matrix W ∈ C N × m {\displaystyle W\in \mathbb {C} ^{N\times m}} with orthonormal columns the eigenvalue problem of the Rayleigh–Ritz method for the m × m {\displaystyle m\times m} matrix W ∗ A W = W ∗ M ∗ M W = ( M W ) ∗ M W {\displaystyle W^{*}AW=W^{*}M^{*}MW=(MW)^{*}MW} can be interpreted as a singular value problem for the M × m {\displaystyle M\times m} matrix M W {\displaystyle MW} . This interpretation allows simple simultaneous calculation of both left and right approximate singular vectors as follows. Compute the M × m {\displaystyle M\times m} matrix M W {\displaystyle MW} . Compute the thin, or economy-sized, SVD M W = U Σ V h , {\displaystyle MW=\mathbf {U} \Sigma \mathbf {V} _{h},} with M × m {\displaystyle M\times m} matrix U {\displaystyle \mathbf {U} } , m × m {\displaystyle m\times m} diagonal matrix Σ {\displaystyle \Sigma } , and m × m {\displaystyle m\times m} matrix V h {\displaystyle \mathbf {V} _{h}} . Compute the matrices of the Ritz left U = U {\displaystyle U=\mathbf {U} } and right V h = V h W ∗ {\displaystyle V_{h}=\mathbf {V} _{h}W^{*}} singular vectors. Output approximations U , Σ , V h {\displaystyle U,\Sigma ,V_{h}} , called the Ritz singular triplets, to selected singular values and the corresponding left and right singular vectors of the original matrix M {\displaystyle M} , representing an approximate truncated singular value decomposition (SVD) with right singular vectors restricted to the column-space of the matrix W {\displaystyle W} . The algorithm can be used as a post-processing step where the matrix W {\displaystyle W} is an output of an eigenvalue solver, such as LOBPCG, approximating numerically selected eigenvectors of the normal matrix A = M ∗ M {\displaystyle A=M^{*}M} . 
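A direct transcription of the three steps above, in numpy; the rectangular test matrix M and the subspace basis W are made up for illustration, so the computed triplets are only approximate. The printed residuals of the defining relations Mv = σu and M∗u = σv vanish only when the columns of W span exact right singular vectors, as in the worked example that follows.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((50, 20))                        # hypothetical M-by-N matrix (50 x 20)
W, _ = np.linalg.qr(rng.standard_normal((20, 3)))        # N x m matrix with orthonormal columns

MW = M @ W                                               # step 1: the M-by-m matrix MW
U, sigma, Vh_small = np.linalg.svd(MW, full_matrices=False)   # step 2: thin SVD of MW
Vh = Vh_small @ W.conj().T                               # step 3: approximate right singular vectors

# Ritz singular triplets are (sigma[i], U[:, i], Vh[i, :]); the SVD relations hold only approximately:
print(np.linalg.norm(M @ Vh.conj().T - U * sigma))
print(np.linalg.norm(M.conj().T @ U - Vh.conj().T * sigma))
```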
==== Example ==== The matrix M = [ 1 0 0 0 0 2 0 0 0 0 3 0 0 0 0 4 0 0 0 0 ] {\displaystyle M={\begin{bmatrix}1&0&0&0\\0&2&0&0\\0&0&3&0\\0&0&0&4\\0&0&0&0\end{bmatrix}}} has its normal matrix A = M ∗ M = [ 1 0 0 0 0 4 0 0 0 0 9 0 0 0 0 16 ] , {\displaystyle A=M^{*}M={\begin{bmatrix}1&0&0&0\\0&4&0&0\\0&0&9&0\\0&0&0&16\\\end{bmatrix}},} singular values 1 , 2 , 3 , 4 {\displaystyle 1,2,3,4} and the corresponding thin SVD M = [ 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 0 ] [ 4 0 0 0 0 3 0 0 0 0 2 0 0 0 0 1 ] [ 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 ] , {\displaystyle M={\begin{bmatrix}0&0&0&1\\0&0&1&0\\0&1&0&0\\1&0&0&0\\0&0&0&0\end{bmatrix}}{\begin{bmatrix}4&0&0&0\\0&3&0&0\\0&0&2&0\\0&0&0&1\end{bmatrix}}{\begin{bmatrix}0&0&0&1\\0&0&1&0\\0&1&0&0\\1&0&0&0\end{bmatrix}},} where the columns of the first multiplier are from the complete set of the left singular vectors of the matrix M {\displaystyle M} , the diagonal entries of the middle term are the singular values, and the columns of the last multiplier transposed (although the transposition does not change it) [ 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 ] ∗ = [ 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 ] {\displaystyle {\begin{bmatrix}0&0&0&1\\0&0&1&0\\0&1&0&0\\1&0&0&0\end{bmatrix}}^{*}\quad =\quad {\begin{bmatrix}0&0&0&1\\0&0&1&0\\0&1&0&0\\1&0&0&0\end{bmatrix}}} are the corresponding right singular vectors. Let us take W = [ 1 / 2 1 / 2 1 / 2 − 1 / 2 0 0 0 0 ] {\displaystyle W={\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\\0&0\\0&0\end{bmatrix}}} with the column-space that is spanned by the two exact right singular vectors [ 0 1 1 0 0 0 0 0 ] {\displaystyle {\begin{bmatrix}0&1\\1&0\\0&0\\0&0\end{bmatrix}}} corresponding to the singular values 1 and 2. Following the algorithm step 1, we compute M W = [ 1 / 2 1 / 2 2 − 2 0 0 0 0 0 0 ] , {\displaystyle MW={\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\{\sqrt {2}}&-{\sqrt {2}}\\0&0\\0&0\\0&0\end{bmatrix}},} and on step 2 its thin SVD M W = U Σ V h {\displaystyle MW=\mathbf {U} {\Sigma }\mathbf {V} _{h}} with U = [ 0 1 1 0 0 0 0 0 0 0 ] , Σ = [ 2 0 0 1 ] , V h = [ 1 / 2 − 1 / 2 1 / 2 1 / 2 ] . {\displaystyle \mathbf {U} ={\begin{bmatrix}0&1\\1&0\\0&0\\0&0\\0&0\end{bmatrix}},\quad \Sigma ={\begin{bmatrix}2&0\\0&1\end{bmatrix}},\quad \mathbf {V} _{h}={\begin{bmatrix}1/{\sqrt {2}}&-1/{\sqrt {2}}\\1/{\sqrt {2}}&1/{\sqrt {2}}\end{bmatrix}}.} Thus we already obtain the singular values 2 and 1 from Σ {\displaystyle \Sigma } and from U {\displaystyle \mathbf {U} } the corresponding two left singular vectors u {\displaystyle u} as [ 0 , 1 , 0 , 0 , 0 ] ∗ {\displaystyle [0,1,0,0,0]^{*}} and [ 1 , 0 , 0 , 0 , 0 ] ∗ {\displaystyle [1,0,0,0,0]^{*}} , which span the column-space of the matrix M W {\displaystyle MW} ; the approximations are exact for the given W {\displaystyle W} because the column-space of W {\displaystyle W} is spanned by exact right singular vectors. Finally, step 3 computes the matrix V h = V h W ∗ {\displaystyle V_{h}=\mathbf {V} _{h}W^{*}} V h = [ 1 / 2 − 1 / 2 1 / 2 1 / 2 ] [ 1 / 2 1 / 2 0 0 1 / 2 − 1 / 2 0 0 ] = [ 0 1 0 0 1 0 0 0 ] {\displaystyle \mathbf {V} _{h}={\begin{bmatrix}1/{\sqrt {2}}&-1/{\sqrt {2}}\\1/{\sqrt {2}}&1/{\sqrt {2}}\end{bmatrix}}\,{\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}&0&0\\1/{\sqrt {2}}&-1/{\sqrt {2}}&0&0\end{bmatrix}}={\begin{bmatrix}0&1&0&0\\1&0&0&0\end{bmatrix}}} recovering from its rows the two right singular vectors v {\displaystyle v} as [ 0 , 1 , 0 , 0 ] ∗ {\displaystyle [0,1,0,0]^{*}} and [ 1 , 0 , 0 , 0 ] ∗ {\displaystyle [1,0,0,0]^{*}} . 
We validate the first vector: M v = σ u {\displaystyle Mv=\sigma u} [ 1 0 0 0 0 2 0 0 0 0 3 0 0 0 0 4 0 0 0 0 ] [ 0 1 0 0 ] = 2 [ 0 1 0 0 0 ] {\displaystyle {\begin{bmatrix}1&0&0&0\\0&2&0&0\\0&0&3&0\\0&0&0&4\\0&0&0&0\end{bmatrix}}\,{\begin{bmatrix}0\\1\\0\\0\end{bmatrix}}=\,2\,{\begin{bmatrix}0\\1\\0\\0\\0\end{bmatrix}}} and M ∗ u = σ v {\displaystyle M^{*}u=\sigma v} [ 1 0 0 0 0 0 2 0 0 0 0 0 3 0 0 0 0 0 4 0 ] [ 0 1 0 0 0 ] = 2 [ 0 1 0 0 ] . {\displaystyle {\begin{bmatrix}1&0&0&0&0\\0&2&0&0&0\\0&0&3&0&0\\0&0&0&4&0\end{bmatrix}}\,{\begin{bmatrix}0\\1\\0\\0\\0\end{bmatrix}}=\,2\,{\begin{bmatrix}0\\1\\0\\0\end{bmatrix}}.} Thus, for the given matrix W {\displaystyle W} with its column-space that is spanned by two exact right singular vectors, we determine these right singular vectors, as well as the corresponding left singular vectors and the singular values, all exactly. For an arbitrary matrix W {\displaystyle W} , we obtain approximate singular triplets which are optimal given W {\displaystyle W} in the sense of optimality of the Rayleigh–Ritz method. == Applications and examples == === In quantum physics === In quantum physics, where the spectrum of the Hamiltonian is the set of discrete energy levels allowed by a quantum mechanical system, the Rayleigh–Ritz method is used to approximate the energy states and wavefunctions of a complicated atomic or nuclear system. In fact, for any system more complicated than a single hydrogen atom, there is no known exact solution for the spectrum of the Hamiltonian. In this case, a trial wave function, Ψ {\displaystyle \Psi } , is tested on the system. This trial function is selected to meet boundary conditions (and any other physical constraints). The exact function is not known; the trial function contains one or more adjustable parameters, which are varied to find a lowest energy configuration. It can be shown that the ground state energy, E 0 {\displaystyle E_{0}} , satisfies an inequality: E 0 ≤ ⟨ Ψ | H ^ | Ψ ⟩ ⟨ Ψ | Ψ ⟩ . {\displaystyle E_{0}\leq {\frac {\langle \Psi |{\hat {H}}|\Psi \rangle }{\langle \Psi |\Psi \rangle }}.} That is, the ground-state energy is less than this value. The trial wave-function will always give an expectation value larger than or equal to the ground-energy. If the trial wave function is known to be orthogonal to the ground state, then it will provide a boundary for the energy of some excited state. The Ritz ansatz function is a linear combination of N known basis functions { Ψ i } {\displaystyle \left\lbrace \Psi _{i}\right\rbrace } , parametrized by unknown coefficients: Ψ = ∑ i = 1 N c i Ψ i . {\displaystyle \Psi =\sum _{i=1}^{N}c_{i}\Psi _{i}.} With a known Hamiltonian, we can write its expected value as ε = ⟨ ∑ i = 1 N c i Ψ i | H ^ | ∑ i = 1 N c i Ψ i ⟩ ⟨ ∑ i = 1 N c i Ψ i | ∑ i = 1 N c i Ψ i ⟩ = ∑ i = 1 N ∑ j = 1 N c i ∗ c j H i j ∑ i = 1 N ∑ j = 1 N c i ∗ c j S i j ≡ A B . {\displaystyle \varepsilon ={\frac {\left\langle \displaystyle \sum _{i=1}^{N}c_{i}\Psi _{i}\right|{\hat {H}}\left|\displaystyle \sum _{i=1}^{N}c_{i}\Psi _{i}\right\rangle }{\left\langle \left.\displaystyle \sum _{i=1}^{N}c_{i}\Psi _{i}\right|\displaystyle \sum _{i=1}^{N}c_{i}\Psi _{i}\right\rangle }}={\frac {\displaystyle \sum _{i=1}^{N}\displaystyle \sum _{j=1}^{N}c_{i}^{*}c_{j}H_{ij}}{\displaystyle \sum _{i=1}^{N}\displaystyle \sum _{j=1}^{N}c_{i}^{*}c_{j}S_{ij}}}\equiv {\frac {A}{B}}.} The basis functions are usually not orthogonal, so that the overlap matrix S has nonzero nondiagonal elements. 
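Minimizing this quotient over the coefficients leads to the secular equations derived in the next paragraph, which in matrix form read Hc = εSc, a generalized eigenvalue problem. A minimal sketch with made-up 2×2 matrices (the numerical values are purely illustrative):

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 2x2 Hamiltonian and overlap matrices in a non-orthogonal basis {Psi_1, Psi_2};
# the numbers are made up purely to show the mechanics.
H = np.array([[-1.00, -0.45],
              [-0.45, -0.60]])
S = np.array([[ 1.00,  0.25],
              [ 0.25,  1.00]])

# det(H - eps*S) = 0 is the generalized symmetric eigenvalue problem H c = eps S c.
eps, C = eigh(H, S)
print(eps)       # eps[0] is the variational estimate of the ground-state energy
print(C[:, 0])   # coefficients c_j of the corresponding linear combination
```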
Either { c i } {\displaystyle \left\lbrace c_{i}\right\rbrace } or { c i ∗ } {\displaystyle \left\lbrace c_{i}^{*}\right\rbrace } (the conjugation of the first) can be used to minimize the expectation value. For instance, by making the partial derivatives of ε {\displaystyle \varepsilon } over { c i ∗ } {\displaystyle \left\lbrace c_{i}^{*}\right\rbrace } zero, the following equality is obtained for every k = 1, 2, ..., N: ∂ ε ∂ c k ∗ = ∑ j = 1 N c j ( H k j − ε S k j ) B = 0 , {\displaystyle {\frac {\partial \varepsilon }{\partial c_{k}^{*}}}={\frac {\displaystyle \sum _{j=1}^{N}c_{j}(H_{kj}-\varepsilon S_{kj})}{B}}=0,} which leads to a set of N secular equations: ∑ j = 1 N c j ( H k j − ε S k j ) = 0 for k = 1 , 2 , … , N . {\displaystyle \sum _{j=1}^{N}c_{j}\left(H_{kj}-\varepsilon S_{kj}\right)=0\quad {\text{for}}\quad k=1,2,\dots ,N.} In the above equations, energy ε {\displaystyle \varepsilon } and the coefficients { c j } {\displaystyle \left\lbrace c_{j}\right\rbrace } are unknown. With respect to c, this is a homogeneous set of linear equations, which has a solution when the determinant of the coefficients to these unknowns is zero: det ( H − ε S ) = 0 , {\displaystyle \det \left(H-\varepsilon S\right)=0,} which in turn is true only for N values of ε {\displaystyle \varepsilon } . Furthermore, since the Hamiltonian is a hermitian operator, the H matrix is also hermitian and the values of ε i {\displaystyle \varepsilon _{i}} will be real. The lowest value among ε i {\displaystyle \varepsilon _{i}} (i=1,2,..,N), ε 0 {\displaystyle \varepsilon _{0}} , will be the best approximation to the ground state for the basis functions used. The remaining N-1 energies are estimates of excited state energies. An approximation for the wave function of state i can be obtained by finding the coefficients { c j } {\displaystyle \left\lbrace c_{j}\right\rbrace } from the corresponding secular equation. === In mechanical engineering === The Rayleigh–Ritz method is often used in mechanical engineering for finding the approximate real resonant frequencies of multi degree of freedom systems, such as spring mass systems or flywheels on a shaft with varying cross section. It is an extension of Rayleigh's method. It can also be used for finding buckling loads and post-buckling behaviour for columns. Consider the case whereby we want to find the resonant frequency of oscillation of a system. First, write the oscillation in the form, y ( x , t ) = Y ( x ) cos ⁡ ω t {\displaystyle y(x,t)=Y(x)\cos \omega t} with an unknown mode shape Y ( x ) {\displaystyle Y(x)} . Next, find the total energy of the system, consisting of a kinetic energy term and a potential energy term. The kinetic energy term involves the square of the time derivative of y ( x , t ) {\displaystyle y(x,t)} and thus gains a factor of ω 2 {\displaystyle \omega ^{2}} . Thus, we can calculate the total energy of the system and express it in the following form: E = T + V ≡ A [ Y ( x ) ] ω 2 sin 2 ⁡ ω t + B [ Y ( x ) ] cos 2 ⁡ ω t {\displaystyle E=T+V\equiv A[Y(x)]\omega ^{2}\sin ^{2}\omega t+B[Y(x)]\cos ^{2}\omega t} By conservation of energy, the average kinetic energy must be equal to the average potential energy. Thus, ω 2 = B [ Y ( x ) ] A [ Y ( x ) ] = R [ Y ( x ) ] {\displaystyle \omega ^{2}={\frac {B[Y(x)]}{A[Y(x)]}}=R[Y(x)]} which is also known as the Rayleigh quotient. 
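For instance, for a taut string of length L, tension T and mass per unit length μ (a hypothetical example, not taken from the text), A[Y] = (1/2)μ∫Y² dx and B[Y] = (1/2)T∫Y′² dx; the assumed shape Y(x) = x(L − x) then gives ω² = 10T/(μL²), slightly above the exact fundamental value π²T/(μL²), as the sketch below confirms.

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical example: fundamental frequency of a taut string of length L, tension T and
# mass per unit length mu, using the assumed (not exact) mode shape Y(x) = x*(L - x).
L, T, mu = 1.0, 1.0, 1.0
Y  = lambda x: x * (L - x)
dY = lambda x: L - 2.0 * x

B = 0.5 * T  * quad(lambda x: dY(x)**2, 0.0, L)[0]   # coefficient of cos^2(wt): maximal potential energy
A = 0.5 * mu * quad(lambda x: Y(x)**2,  0.0, L)[0]   # coefficient of w^2 sin^2(wt): kinetic term
omega2 = B / A
print(omega2, np.pi**2 * T / (mu * L**2))   # 10.0 versus the exact pi^2 ~ 9.87 (an upper bound)
```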
Thus, if we knew the mode shape Y ( x ) {\displaystyle Y(x)} , we would be able to calculate A [ Y ( x ) ] {\displaystyle A[Y(x)]} and B [ Y ( x ) ] {\displaystyle B[Y(x)]} , and in turn get the eigenfrequency. However, we do not yet know the mode shape. In order to find this, we can approximate Y ( x ) {\displaystyle Y(x)} as a combination of a few approximating functions Y i ( x ) {\displaystyle Y_{i}(x)} Y ( x ) = ∑ i = 1 N c i Y i ( x ) {\displaystyle Y(x)=\sum _{i=1}^{N}c_{i}Y_{i}(x)} where c 1 , c 2 , ⋯ , c N {\displaystyle c_{1},c_{2},\cdots ,c_{N}} are constants to be determined. In general, if we choose a random set of c 1 , c 2 , ⋯ , c N {\displaystyle c_{1},c_{2},\cdots ,c_{N}} , it will describe a superposition of the actual eigenmodes of the system. However, if we seek c 1 , c 2 , ⋯ , c N {\displaystyle c_{1},c_{2},\cdots ,c_{N}} such that the eigenfrequency ω 2 {\displaystyle \omega ^{2}} is minimised, then the mode described by this set of c 1 , c 2 , ⋯ , c N {\displaystyle c_{1},c_{2},\cdots ,c_{N}} will be close to the lowest possible actual eigenmode of the system. Thus, this finds the lowest eigenfrequency. If we find eigenmodes orthogonal to this approximated lowest eigenmode, we can approximately find the next few eigenfrequencies as well. In general, we can express A [ Y ( x ) ] {\displaystyle A[Y(x)]} and B [ Y ( x ) ] {\displaystyle B[Y(x)]} as a collection of terms quadratic in the coefficients c i {\displaystyle c_{i}} : B [ Y ( x ) ] = ∑ i ∑ j c i c j K i j = c T K c {\displaystyle B[Y(x)]=\sum _{i}\sum _{j}c_{i}c_{j}K_{ij}=\mathbf {c} ^{\mathsf {T}}K\mathbf {c} } A [ Y ( x ) ] = ∑ i ∑ j c i c j M i j = c T M c {\displaystyle A[Y(x)]=\sum _{i}\sum _{j}c_{i}c_{j}M_{ij}=\mathbf {c} ^{\mathsf {T}}M\mathbf {c} } where K {\displaystyle K} and M {\displaystyle M} are the stiffness matrix and mass matrix of a discrete system respectively. The minimization of ω 2 {\displaystyle \omega ^{2}} becomes: ∂ ω 2 ∂ c i = ∂ ∂ c i c T K c c T M c = 0 {\displaystyle {\frac {\partial \omega ^{2}}{\partial c_{i}}}={\frac {\partial }{\partial c_{i}}}{\frac {\mathbf {c} ^{\mathsf {T}}K\mathbf {c} }{\mathbf {c} ^{\mathsf {T}}M\mathbf {c} }}=0} Solving this, c T M c ∂ c T K c ∂ c − c T K c ∂ c T M c ∂ c = 0 {\displaystyle \mathbf {c} ^{\mathsf {T}}M\mathbf {c} {\frac {\partial \mathbf {c} ^{\mathsf {T}}K\mathbf {c} }{\partial \mathbf {c} }}-\mathbf {c} ^{\mathsf {T}}K\mathbf {c} {\frac {\partial \mathbf {c} ^{\mathsf {T}}M\mathbf {c} }{\partial \mathbf {c} }}=0} K c − c T K c c T M c M c = 0 {\displaystyle K\mathbf {c} -{\frac {\mathbf {c} ^{\mathsf {T}}K\mathbf {c} }{\mathbf {c} ^{\mathsf {T}}M\mathbf {c} }}M\mathbf {c} =\mathbf {0} } K c − ω 2 M c = 0 {\displaystyle K\mathbf {c} -\omega ^{2}M\mathbf {c} =\mathbf {0} } For a non-trivial solution of c, we require determinant of the matrix coefficient of c to be zero. det ( K − ω 2 M ) = 0 {\displaystyle \det(K-\omega ^{2}M)=0} This gives a solution for the first N eigenfrequencies and eigenmodes of the system, with N being the number of approximating functions. === Simple case of double spring-mass system === The following discussion uses the simplest case, where the system has two lumped springs and two lumped masses, and only two mode shapes are assumed. Hence M = [m1, m2] and K = [k1, k2]. A mode shape is assumed for the system, with two terms, one of which is weighted by a factor B, e.g. Y = [1, 1] + B[1, −1]. 
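The optimization over B described in the following paragraphs can be sketched numerically. The masses and stiffnesses below are made up, and the energy bookkeeping follows the simplified lumped form used next, in which each mass contributes (1/2)ω²Yi²mi to the maximal kinetic energy and each spring (1/2)kiYi² to the maximal potential energy.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Made-up masses and stiffnesses for the two-mass, two-spring example.
m = np.array([1.0, 2.0])
k = np.array([3.0, 1.0])

def omega2(B):
    Y = np.array([1.0, 1.0]) + B * np.array([1.0, -1.0])   # the assumed two-term mode shape
    return np.sum(k * Y**2) / np.sum(m * Y**2)             # omega^2 = PE_max / (KE_max / omega^2)

res = minimize_scalar(omega2)
print(res.x, res.fun)   # optimal mixing factor B and the lowest attainable upper bound on omega^2
```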
Simple harmonic motion theory says that the velocity at the time when deflection is zero, is the angular frequency ω {\displaystyle \omega } times the deflection (y) at time of maximum deflection. In this example the kinetic energy (KE) for each mass is 1 2 ω 2 Y 1 2 m 1 {\textstyle {\frac {1}{2}}\omega ^{2}Y_{1}^{2}m_{1}} etc., and the potential energy (PE) for each spring is 1 2 k 1 Y 1 2 {\textstyle {\frac {1}{2}}k_{1}Y_{1}^{2}} etc. We also know that without damping, the maximal KE equals the maximal PE. Thus, ∑ i = 1 2 ( 1 2 ω 2 Y i 2 M i ) = ∑ i = 1 2 ( 1 2 K i Y i 2 ) {\displaystyle \sum _{i=1}^{2}\left({\frac {1}{2}}\omega ^{2}Y_{i}^{2}M_{i}\right)=\sum _{i=1}^{2}\left({\frac {1}{2}}K_{i}Y_{i}^{2}\right)} The overall amplitude of the mode shape cancels out from each side, always. That is, the actual size of the assumed deflection does not matter, just the mode shape. Mathematical manipulations then obtain an expression for ω {\displaystyle \omega } , in terms of B, which can be differentiated with respect to B, to find the minimum, i.e. when d ω / d B = 0 {\displaystyle d\omega /dB=0} . This gives the value of B for which ω {\displaystyle \omega } is lowest. This is an upper bound solution for ω {\displaystyle \omega } if ω {\displaystyle \omega } is hoped to be the predicted fundamental frequency of the system because the mode shape is assumed, but we have found the lowest value of that upper bound, given our assumptions, because B is used to find the optimal 'mix' of the two assumed mode shape functions. There are many tricks with this method, the most important is to try and choose realistic assumed mode shapes. For example, in the case of beam deflection problems it is wise to use a deformed shape that is analytically similar to the expected solution. A quartic may fit most of the easy problems of simply linked beams even if the order of the deformed solution may be lower. The springs and masses do not have to be discrete, they can be continuous (or a mixture), and this method can be easily used in a spreadsheet to find the natural frequencies of quite complex distributed systems, if you can describe the distributed KE and PE terms easily, or else break the continuous elements up into discrete parts. This method could be used iteratively, adding additional mode shapes to the previous best solution, or you can build up a long expression with many Bs and many mode shapes, and then differentiate them partially. === In dynamical systems === The Koopman operator allows a finite-dimensional nonlinear system to be encoded as an infinite-dimensional linear system. In general, both of these problems are difficult to solve, but for the latter we can use the Ritz-Galerkin method to approximate a solution. == The relationship with the finite element method == In the language of the finite element method, the matrix H k j {\displaystyle H_{kj}} is precisely the stiffness matrix of the Hamiltonian in the piecewise linear element space, and the matrix S k j {\displaystyle S_{kj}} is the mass matrix. In the language of linear algebra, the value ϵ {\displaystyle \epsilon } is an eigenvalue of the discretized Hamiltonian, and the vector c {\displaystyle c} is a discretized eigenvector. == See also == Rayleigh quotient Arnoldi iteration Sturm–Liouville theory Hilbert space Galerkin method == Notes and references == Ritz, Walther (1909). "Über eine neue Methode zur Lösung gewisser Variationsprobleme der mathematischen Physik". Journal für die Reine und Angewandte Mathematik. 135: 1–61. 
doi:10.1515/crll.1909.135.1. MacDonald, J. K. (1933). "Successive Approximations by the Rayleigh-Ritz Variation Method". Phys. Rev. 43 (10): 830–833. Bibcode:1933PhRv...43..830M. doi:10.1103/PhysRev.43.830. == External links == Course on Calculus of Variations, has a section on Rayleigh–Ritz method. Ritz method in the Encyclopedia of Mathematics Gander, Martin J.; Wanner, Gerhard (2012). "From Euler, Ritz, and Galerkin to Modern Computing". SIAM Review. 54 (4): 627–666. CiteSeerX 10.1.1.297.5697. doi:10.1137/100804036.
Wikipedia/Rayleigh–Ritz_method
Physics Today is the membership magazine of the American Institute of Physics. First published in May 1948, it is issued on a monthly schedule, and is provided to the members of ten physics societies, including the American Physical Society. It is also available to non-members as a paid annual subscription. The magazine informs readers about important developments in overview articles written by experts, shorter review articles written internally by staff, and also discusses issues and events of importance to the science community in politics, education, and other fields. The magazine provides a historical resource of events associated with physics. For example it discussed debunking the physics of the Star Wars program of the 1980s, and the state of physics in China and the Soviet Union during the 1950s and 1970s. According to the Journal Citation Reports, the journal has a 2017 impact factor of 4.370. == References == == External links == Official website === Archival collections === AIP Physics Today Division miscellaneous publications, 1955–2006, Niels Bohr Library & Archives AIP Physics Today Division records of Irwin Goodwin, 1983–1993, Niels Bohr Library & Archives AIP Physics Today Division records, 1948–1971, Niels Bohr Library & Archives AIP Physics Today division Bertram Schwarzschild Nobel Prize files, 1954–2013, Niels Bohr Library & Archives
Wikipedia/Physics_Today
Contemporary Physics is a peer-reviewed scientific journal publishing introductory articles on important recent developments in physics. Editorial screening and peer review is carried out by members of the editorial board. == Overview == Contemporary Physics has been published by Taylor & Francis since 1959 and publishes four issues per year. The subjects covered by this journal are: astrophysics, atomic and nuclear physics, chemical physics, computational physics, condensed matter physics, environmental physics, experimental physics, general physics, particle & high energy physics, plasma physics, space science, and theoretical physics. == Aims == The journal publishes introductory review articles on a range of recent developments in physics and intends to be of particular use to undergraduates, teachers and lecturers, and those starting postgraduate studies. Contemporary Physics also contains a major section devoted to standard book reviews and essay reviews which review books in the context of the general aspects of a field which have a wide appeal. == Abstracting and indexing == According to the Thomson Reuters Journal Citation Reports, the journal has a 2020 impact factor of 5.185. Contemporary Physics is abstracted and indexed in Inspec, EBSCO Publishing, Current Contents, Current Mathematical Publications, Mathematical Reviews, SciSearch, and SciBase. == Notable authors == Contemporary Physics has attracted articles from a number of prominent scientists, including: Jim Al-Khalili, OBE Subrahmanyan Chandrasekhar, FRS Leon Cooper Otto Robert Frisch, FRS Vitaly Ginzburg, FRS Stephen Hawking, CBE, FRS Sir Peter Knight, FRS (current Editor-in-Chief) Sir Anthony James Leggett, KBE, FRS Sir Nevill Francis Mott, FRS Sir John Pendry, FRS FInstP Abdus Salam, KBE Arthur Leonard Schawlow == References == == External links == Official website
Wikipedia/Contemporary_Physics
The Heaviside step function, or the unit step function, usually denoted by H or θ (but sometimes u, 1 or 𝟙), is a step function named after Oliver Heaviside, the value of which is zero for negative arguments and one for positive arguments. Different conventions concerning the value H(0) are in use. It is an example of the general class of step functions, all of which can be represented as linear combinations of translations of this one. The function was originally developed in operational calculus for the solution of differential equations, where it represents a signal that switches on at a specified time and stays switched on indefinitely. Heaviside developed the operational calculus as a tool in the analysis of telegraphic communications and represented the function as 1. == Formulation == Taking the convention that H(0) = 1, the Heaviside function may be defined as: a piecewise function: H ( x ) := { 1 , x ≥ 0 0 , x < 0 {\displaystyle H(x):={\begin{cases}1,&x\geq 0\\0,&x<0\end{cases}}} using the Iverson bracket notation: H ( x ) := [ x ≥ 0 ] {\displaystyle H(x):=[x\geq 0]} an indicator function: H ( x ) := 1 x ≥ 0 = 1 R + ( x ) {\displaystyle H(x):=\mathbf {1} _{x\geq 0}=\mathbf {1} _{\mathbb {R} _{+}}(x)} For the alternative convention that H(0) = ⁠1/2⁠, it may be expressed as: a piecewise function: H ( x ) := { 1 , x > 0 1 2 , x = 0 0 , x < 0 {\displaystyle H(x):={\begin{cases}1,&x>0\\{\frac {1}{2}},&x=0\\0,&x<0\end{cases}}} a linear transformation of the sign function, H ( x ) := 1 2 ( sgn x + 1 ) {\displaystyle H(x):={\frac {1}{2}}\left({\mbox{sgn}}\,x+1\right)} the arithmetic mean of two Iverson brackets, H ( x ) := [ x ≥ 0 ] + [ x > 0 ] 2 {\displaystyle H(x):={\frac {[x\geq 0]+[x>0]}{2}}} a one-sided limit of the two-argument arctangent H ( x ) =: lim ϵ → 0 + atan2 ( ϵ , − x ) π {\displaystyle H(x)=:\lim _{\epsilon \to 0^{+}}{\frac {{\mbox{atan2}}(\epsilon ,-x)}{\pi }}} a hyperfunction H ( x ) =: ( 1 − 1 2 π i log ⁡ z , − 1 2 π i log ⁡ z ) {\displaystyle H(x)=:\left(1-{\frac {1}{2\pi i}}\log z,\ -{\frac {1}{2\pi i}}\log z\right)} or equivalently H ( x ) =: ( − log − z 2 π i , − log − z 2 π i ) {\displaystyle H(x)=:\left(-{\frac {\log -z}{2\pi i}},-{\frac {\log -z}{2\pi i}}\right)} where log z is the principal value of the complex logarithm of z Other definitions which are undefined at H(0) include: a piecewise function: H ( x ) := { 1 , x > 0 0 , x < 0 {\displaystyle H(x):={\begin{cases}1,&x>0\\0,&x<0\end{cases}}} the derivative of the ramp function: H ( x ) := d d x max { x , 0 } for x ≠ 0 {\displaystyle H(x):={\frac {d}{dx}}\max\{x,0\}\quad {\mbox{for }}x\neq 0} in terms of the absolute value function as H ( x ) = x + | x | 2 x {\displaystyle H(x)={\frac {x+|x|}{2x}}} == Relationship with Dirac delta == The Dirac delta function is the weak derivative of the Heaviside function: δ ( x ) = d d x H ( x ) . {\displaystyle \delta (x)={\frac {d}{dx}}H(x).} Hence the Heaviside function can be considered to be the integral of the Dirac delta function. This is sometimes written as H ( x ) := ∫ − ∞ x δ ( s ) d s {\displaystyle H(x):=\int _{-\infty }^{x}\delta (s)\,ds} although this expansion may not hold (or even make sense) for x = 0, depending on which formalism one uses to give meaning to integrals involving δ. In this context, the Heaviside function is the cumulative distribution function of a random variable which is almost surely 0. (See Constant random variable.) 
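In numerical work the choice of H(0) has to be made explicitly. For example, numpy's built-in step function takes the value at zero as a second argument, so the conventions above map directly onto it (a small sketch):

```python
import numpy as np

x = np.array([-2.0, 0.0, 2.0])

# numpy's built-in step function takes the value at zero as its second argument,
# so the conventions discussed above map onto it directly:
print(np.heaviside(x, 0.0))   # [0., 0., 1.]  left-continuous convention, H(0) = 0
print(np.heaviside(x, 0.5))   # [0., 0.5, 1.] half-maximum convention, H(0) = 1/2
print(np.heaviside(x, 1.0))   # [0., 1., 1.]  right-continuous convention, H(0) = 1

# Equivalent piecewise definition with H(0) = 1, as in the first formulation above:
H = lambda t: np.where(t >= 0, 1.0, 0.0)
print(H(x))                   # [0., 1., 1.]
```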
== Analytic approximations == Approximations to the Heaviside step function are of use in biochemistry and neuroscience, where logistic approximations of step functions (such as the Hill and the Michaelis–Menten equations) may be used to approximate binary cellular switches in response to chemical signals. For a smooth approximation to the step function, one can use the logistic function H ( x ) ≈ 1 2 + 1 2 tanh ⁡ k x = 1 1 + e − 2 k x , {\displaystyle H(x)\approx {\tfrac {1}{2}}+{\tfrac {1}{2}}\tanh kx={\frac {1}{1+e^{-2kx}}},} where a larger k corresponds to a sharper transition at x = 0. If we take H(0) = ⁠1/2⁠, equality holds in the limit: H ( x ) = lim k → ∞ 1 2 ( 1 + tanh ⁡ k x ) = lim k → ∞ 1 1 + e − 2 k x . {\displaystyle H(x)=\lim _{k\to \infty }{\tfrac {1}{2}}(1+\tanh kx)=\lim _{k\to \infty }{\frac {1}{1+e^{-2kx}}}.} There are many other smooth, analytic approximations to the step function. Among the possibilities are: H ( x ) = lim k → ∞ ( 1 2 + 1 π arctan ⁡ k x ) H ( x ) = lim k → ∞ ( 1 2 + 1 2 erf ⁡ k x ) {\displaystyle {\begin{aligned}H(x)&=\lim _{k\to \infty }\left({\tfrac {1}{2}}+{\tfrac {1}{\pi }}\arctan kx\right)\\H(x)&=\lim _{k\to \infty }\left({\tfrac {1}{2}}+{\tfrac {1}{2}}\operatorname {erf} kx\right)\end{aligned}}} These limits hold pointwise and in the sense of distributions. In general, however, pointwise convergence need not imply distributional convergence, and vice versa distributional convergence need not imply pointwise convergence. (However, if all members of a pointwise convergent sequence of functions are uniformly bounded by some "nice" function, then convergence holds in the sense of distributions too.) In general, any cumulative distribution function of a continuous probability distribution that is peaked around zero and has a parameter that controls for variance can serve as an approximation, in the limit as the variance approaches zero. For example, all three of the above approximations are cumulative distribution functions of common probability distributions: the logistic, Cauchy and normal distributions, respectively. == Non-Analytic approximations == Approximations to the Heaviside step function could be made through Smooth transition function like 1 ≤ m → ∞ {\displaystyle 1\leq m\to \infty } : f ( x ) = { 1 2 ( 1 + tanh ⁡ ( m 2 x 1 − x 2 ) ) , | x | < 1 1 , x ≥ 1 0 , x ≤ − 1 {\displaystyle {\begin{aligned}f(x)&={\begin{cases}{\displaystyle {\frac {1}{2}}\left(1+\tanh \left(m{\frac {2x}{1-x^{2}}}\right)\right)},&|x|<1\\\\1,&x\geq 1\\0,&x\leq -1\end{cases}}\end{aligned}}} == Integral representations == Often an integral representation of the Heaviside step function is useful: H ( x ) = lim ε → 0 + − 1 2 π i ∫ − ∞ ∞ 1 τ + i ε e − i x τ d τ = lim ε → 0 + 1 2 π i ∫ − ∞ ∞ 1 τ − i ε e i x τ d τ . {\displaystyle {\begin{aligned}H(x)&=\lim _{\varepsilon \to 0^{+}}-{\frac {1}{2\pi i}}\int _{-\infty }^{\infty }{\frac {1}{\tau +i\varepsilon }}e^{-ix\tau }d\tau \\&=\lim _{\varepsilon \to 0^{+}}{\frac {1}{2\pi i}}\int _{-\infty }^{\infty }{\frac {1}{\tau -i\varepsilon }}e^{ix\tau }d\tau .\end{aligned}}} where the second representation is easy to deduce from the first, given that the step function is real and thus is its own complex conjugate. == Zero argument == Since H is usually used in integration, and the value of a function at a single point does not affect its integral, it rarely matters what particular value is chosen of H(0). 
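The smooth approximations listed in the section above all take the value 1/2 at x = 0, which is consistent with the half-maximum convention; a quick numerical check (using scipy for the error function):

```python
import numpy as np
from scipy.special import erf

# The three smooth approximations from the section above; each equals exactly 1/2 at x = 0
# and sharpens toward the step as k grows.
logistic = lambda x, k: 1.0 / (1.0 + np.exp(-2.0 * k * x))
arctan_a = lambda x, k: 0.5 + np.arctan(k * x) / np.pi
erf_a    = lambda x, k: 0.5 + 0.5 * erf(k * x)

x = np.array([-0.5, 0.0, 0.5])
for k in (1, 10, 100):
    print(k, logistic(x, k), arctan_a(x, k), erf_a(x, k))
# For x != 0 the values approach 0 and 1 as k grows; at x = 0 every approximation stays at 0.5.
```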
Indeed when H is considered as a distribution or an element of L∞ (see Lp space) it does not even make sense to talk of a value at zero, since such objects are only defined almost everywhere. If using some analytic approximation (as in the examples above) then often whatever happens to be the relevant limit at zero is used. There exist various reasons for choosing a particular value. H(0) = ⁠1/2⁠ is often used since the graph then has rotational symmetry; put another way, H − ⁠1/2⁠ is then an odd function. In this case the following relation with the sign function holds for all x: H ( x ) = 1 2 ( 1 + sgn ⁡ x ) . {\displaystyle H(x)={\tfrac {1}{2}}(1+\operatorname {sgn} x).} Also, H(x) + H(-x) = 1 for all x. H(0) = 1 is used when H needs to be right-continuous. For instance cumulative distribution functions are usually taken to be right continuous, as are functions integrated against in Lebesgue–Stieltjes integration. In this case H is the indicator function of a closed semi-infinite interval: H ( x ) = 1 [ 0 , ∞ ) ( x ) . {\displaystyle H(x)=\mathbf {1} _{[0,\infty )}(x).} The corresponding probability distribution is the degenerate distribution. H(0) = 0 is used when H needs to be left-continuous. In this case H is an indicator function of an open semi-infinite interval: H ( x ) = 1 ( 0 , ∞ ) ( x ) . {\displaystyle H(x)=\mathbf {1} _{(0,\infty )}(x).} In functional-analysis contexts from optimization and game theory, it is often useful to define the Heaviside function as a set-valued function to preserve the continuity of the limiting functions and ensure the existence of certain solutions. In these cases, the Heaviside function returns a whole interval of possible solutions, H(0) = [0,1]. == Discrete form == An alternative form of the unit step, defined instead as a function H : Z → R {\displaystyle H:\mathbb {Z} \rightarrow \mathbb {R} } (that is, taking in a discrete variable n), is: H [ n ] = { 0 , n < 0 , 1 , n ≥ 0 , {\displaystyle H[n]={\begin{cases}0,&n<0,\\1,&n\geq 0,\end{cases}}} or using the half-maximum convention: H [ n ] = { 0 , n < 0 , 1 2 , n = 0 , 1 , n > 0 , {\displaystyle H[n]={\begin{cases}0,&n<0,\\{\tfrac {1}{2}},&n=0,\\1,&n>0,\end{cases}}} where n is an integer. If n is an integer, then n < 0 must imply that n ≤ −1, while n > 0 must imply that the function attains unity at n = 1. Therefore the "step function" exhibits ramp-like behavior over the domain of [−1, 1], and cannot authentically be a step function, using the half-maximum convention. Unlike the continuous case, the definition of H[0] is significant. The discrete-time unit impulse is the first difference of the discrete-time step δ [ n ] = H [ n ] − H [ n − 1 ] . {\displaystyle \delta [n]=H[n]-H[n-1].} This function is the cumulative summation of the Kronecker delta: H [ n ] = ∑ k = − ∞ n δ [ k ] {\displaystyle H[n]=\sum _{k=-\infty }^{n}\delta [k]} where δ [ k ] = δ k , 0 {\displaystyle \delta [k]=\delta _{k,0}} is the discrete unit impulse function. == Antiderivative and derivative == The ramp function is an antiderivative of the Heaviside step function: ∫ − ∞ x H ( ξ ) d ξ = x H ( x ) = max { 0 , x } . {\displaystyle \int _{-\infty }^{x}H(\xi )\,d\xi =xH(x)=\max\{0,x\}\,.} The distributional derivative of the Heaviside step function is the Dirac delta function: d H ( x ) d x = δ ( x ) . {\displaystyle {\frac {dH(x)}{dx}}=\delta (x)\,.} == Fourier transform == The Fourier transform of the Heaviside step function is a distribution. 
Using one choice of constants for the definition of the Fourier transform we have H ^ ( s ) = lim N → ∞ ∫ − N N e − 2 π i x s H ( x ) d x = 1 2 ( δ ( s ) − i π p . v . ⁡ 1 s ) . {\displaystyle {\hat {H}}(s)=\lim _{N\to \infty }\int _{-N}^{N}e^{-2\pi ixs}H(x)\,dx={\frac {1}{2}}\left(\delta (s)-{\frac {i}{\pi }}\operatorname {p.v.} {\frac {1}{s}}\right).} Here p.v.⁠1/s⁠ is the distribution that takes a test function φ to the Cauchy principal value of ∫ − ∞ ∞ φ ( s ) s d s {\displaystyle \textstyle \int _{-\infty }^{\infty }{\frac {\varphi (s)}{s}}\,ds} . The limit appearing in the integral is also taken in the sense of (tempered) distributions. == Unilateral Laplace transform == The Laplace transform of the Heaviside step function is a meromorphic function. Using the unilateral Laplace transform we have: H ^ ( s ) = lim N → ∞ ∫ 0 N e − s x H ( x ) d x = lim N → ∞ ∫ 0 N e − s x d x = 1 s {\displaystyle {\begin{aligned}{\hat {H}}(s)&=\lim _{N\to \infty }\int _{0}^{N}e^{-sx}H(x)\,dx\\&=\lim _{N\to \infty }\int _{0}^{N}e^{-sx}\,dx\\&={\frac {1}{s}}\end{aligned}}} When the bilateral transform is used, the integral can be split in two parts and the result will be the same. == See also == == References == == External links == Digital Library of Mathematical Functions, NIST, [1]. Berg, Ernst Julius (1936). "Unit function". Heaviside's Operational Calculus, as applied to Engineering and Physics. McGraw-Hill Education. p. 5. Calvert, James B. (2002). "Heaviside, Laplace, and the Inversion Integral". University of Denver. Davies, Brian (2002). "Heaviside step function". Integral Transforms and their Applications (3rd ed.). Springer. p. 28. Duff, George F. D.; Naylor, D. (1966). "Heaviside unit function". Differential Equations of Applied Mathematics. John Wiley & Sons. p. 42.
Wikipedia/Heaviside_function
Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape, and algebra is the study of generalizations of arithmetic operations. Originally called infinitesimal calculus or "the calculus of infinitesimals", it has two major branches, differential calculus and integral calculus. The former concerns instantaneous rates of change, and the slopes of curves, while the latter concerns accumulation of quantities, and areas under or between curves. These two branches are related to each other by the fundamental theorem of calculus. They make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit. It is the "mathematical backbone" for dealing with problems where variables change with time or another reference variable. Infinitesimal calculus was formulated separately in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Later work, including codifying the idea of limits, put these developments on a more solid conceptual footing. The concepts and techniques found in calculus have diverse applications in science, engineering, and other branches of mathematics. == Etymology == In mathematics education, calculus is an abbreviation of both infinitesimal calculus and integral calculus, which denotes courses of elementary mathematical analysis. In Latin, the word calculus means “small pebble”, (the diminutive of calx, meaning "stone"), a meaning which still persists in medicine. Because such pebbles were used for counting out distances, tallying votes, and doing abacus arithmetic, the word came to be the Latin word for calculation. In this sense, it was used in English at least as early as 1672, several years before the publications of Leibniz and Newton, who wrote their mathematical texts in Latin. In addition to differential calculus and integral calculus, the term is also used for naming specific methods of computation or theories that imply some sort of computation. Examples of this usage include propositional calculus, Ricci calculus, calculus of variations, lambda calculus, sequent calculus, and process calculus. Furthermore, the term "calculus" has variously been applied in ethics and philosophy, for such systems as Bentham's felicific calculus, and the ethical calculus. == History == Modern calculus was developed in 17th-century Europe by Isaac Newton and Gottfried Wilhelm Leibniz (independently of each other, first publishing around the same time) but elements of it first appeared in ancient Egypt and later Greece, then in China and the Middle East, and still later again in medieval Europe and India. === Ancient precursors === ==== Egypt ==== Calculations of volume and area, one goal of integral calculus, can be found in the Egyptian Moscow papyrus (c. 1820 BC), but the formulae are simple instructions, with no indication as to how they were obtained. ==== Greece ==== Laying the foundations for integral calculus and foreshadowing the concept of the limit, ancient Greek mathematician Eudoxus of Cnidus (c. 390–337 BC) developed the method of exhaustion to prove the formulas for cone and pyramid volumes. During the Hellenistic period, this method was further developed by Archimedes (c. 287 – c. 212 BC), who combined it with a concept of the indivisibles—a precursor to infinitesimals—allowing him to solve several problems now treated by integral calculus. 
In The Method of Mechanical Theorems he describes, for example, calculating the center of gravity of a solid hemisphere, the center of gravity of a frustum of a circular paraboloid, and the area of a region bounded by a parabola and one of its secant lines. ==== China ==== The method of exhaustion was later discovered independently in China by Liu Hui in the 3rd century AD to find the area of a circle. In the 5th century AD, Zu Gengzhi, son of Zu Chongzhi, established a method that would later be called Cavalieri's principle to find the volume of a sphere. === Medieval === ==== Middle East ==== In the Middle East, Hasan Ibn al-Haytham, Latinized as Alhazen (c. 965 – c. 1040 AD) derived a formula for the sum of fourth powers. He determined the equations to calculate the area enclosed by the curve represented by y = x k {\displaystyle y=x^{k}} (which translates to the integral ∫ x k d x {\displaystyle \int x^{k}\,dx} in contemporary notation), for any given non-negative integer value of k {\displaystyle k} .He used the results to carry out what would now be called an integration of this function, where the formulae for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid. ==== India ==== Bhāskara II (c. 1114–1185) was acquainted with some ideas of differential calculus and suggested that the "differential coefficient" vanishes at an extremum value of the function. In his astronomical work, he gave a procedure that looked like a precursor to infinitesimal methods. Namely, if x ≈ y {\displaystyle x\approx y} then sin ⁡ ( y ) − sin ⁡ ( x ) ≈ ( y − x ) cos ⁡ ( y ) . {\displaystyle \sin(y)-\sin(x)\approx (y-x)\cos(y).} This can be interpreted as the discovery that cosine is the derivative of sine. In the 14th century, Indian mathematicians gave a non-rigorous method, resembling differentiation, applicable to some trigonometric functions. Madhava of Sangamagrama and the Kerala School of Astronomy and Mathematics stated components of calculus. They studied series equivalent to the Maclaurin expansions of ⁠ sin ⁡ ( x ) {\displaystyle \sin(x)} ⁠, ⁠ cos ⁡ ( x ) {\displaystyle \cos(x)} ⁠, and ⁠ arctan ⁡ ( x ) {\displaystyle \arctan(x)} ⁠ more than two hundred years before their introduction in Europe. According to Victor J. Katz they were not able to "combine many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, and turn calculus into the great problem-solving tool we have today". === Modern === Johannes Kepler's work Stereometria Doliorum (1615) formed the basis of integral calculus. Kepler developed a method to calculate the area of an ellipse by adding up the lengths of many radii drawn from a focus of the ellipse. Significant work was a treatise, the origin being Kepler's methods, written by Bonaventura Cavalieri, who argued that volumes and areas should be computed as the sums of the volumes and areas of infinitesimally thin cross-sections. The ideas were similar to Archimedes' in The Method, but this treatise is believed to have been lost in the 13th century and was only rediscovered in the early 20th century, and so would have been unknown to Cavalieri. Cavalieri's work was not well respected since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first. The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed in Europe at around the same time. 
Pierre de Fermat, claiming that he borrowed from Diophantus, introduced the concept of adequality, which represented equality up to an infinitesimal error term. The combination was achieved by John Wallis, Isaac Barrow, and James Gregory, the latter two proving predecessors to the second fundamental theorem of calculus around 1670. The product rule and chain rule, the notions of higher derivatives and Taylor series, and of analytic functions were used by Isaac Newton in an idiosyncratic notation which he applied to solve problems of mathematical physics. In his works, Newton rephrased his ideas to suit the mathematical idiom of the time, replacing calculations with infinitesimals by equivalent geometrical arguments which were considered beyond reproach. He used the methods of calculus to solve the problem of planetary motion, the shape of the surface of a rotating fluid, the oblateness of the earth, the motion of a weight sliding on a cycloid, and many other problems discussed in his Principia Mathematica (1687). In other work, he developed series expansions for functions, including fractional and irrational powers, and it was clear that he understood the principles of the Taylor series. He did not publish all these discoveries, and at this time infinitesimal methods were still considered disreputable. These ideas were arranged into a true calculus of infinitesimals by Gottfried Wilhelm Leibniz, who was originally accused of plagiarism by Newton. He is now regarded as an independent inventor of and contributor to calculus. His contribution was to provide a clear set of rules for working with infinitesimal quantities, allowing the computation of second and higher derivatives, and providing the product rule and chain rule, in their differential and integral forms. Unlike Newton, Leibniz put painstaking effort into his choices of notation. Today, Leibniz and Newton are usually both given credit for independently inventing and developing calculus. Newton was the first to apply calculus to general physics. Leibniz developed much of the notation used in calculus today.: 51–52  The basic insights that both Newton and Leibniz provided were the laws of differentiation and integration, emphasizing that differentiation and integration are inverse processes, second and higher derivatives, and the notion of an approximating polynomial series. When Newton and Leibniz first published their results, there was great controversy over which mathematician (and therefore which country) deserved credit. Newton derived his results first (later to be published in his Method of Fluxions), but Leibniz published his "Nova Methodus pro Maximis et Minimis" first. Newton claimed Leibniz stole ideas from his unpublished notes, which Newton had shared with a few members of the Royal Society. This controversy divided English-speaking mathematicians from continental European mathematicians for many years, to the detriment of English mathematics. A careful examination of the papers of Leibniz and Newton shows that they arrived at their results independently, with Leibniz starting first with integration and Newton with differentiation. It is Leibniz, however, who gave the new discipline its name. Newton called his calculus "the science of fluxions", a term that endured in English schools into the 19th century.: 100  The first complete treatise on calculus to be written in English and use the Leibniz notation was not published until 1815. 
Since the time of Leibniz and Newton, many mathematicians have contributed to the continuing development of calculus. One of the first and most complete works on both infinitesimal and integral calculus was written in 1748 by Maria Gaetana Agnesi. === Foundations === In calculus, foundations refers to the rigorous development of the subject from axioms and definitions. In early calculus, the use of infinitesimal quantities was thought unrigorous and was fiercely criticized by several authors, most notably Michel Rolle and Bishop Berkeley. Berkeley famously described infinitesimals as the ghosts of departed quantities in his book The Analyst in 1734. Working out a rigorous foundation for calculus occupied mathematicians for much of the century following Newton and Leibniz, and is still to some extent an active area of research today. Several mathematicians, including Maclaurin, tried to prove the soundness of using infinitesimals, but it would not be until 150 years later when, due to the work of Cauchy and Weierstrass, a way was finally found to avoid mere "notions" of infinitely small quantities. The foundations of differential and integral calculus had been laid. In Cauchy's Cours d'Analyse, we find a broad range of foundational approaches, including a definition of continuity in terms of infinitesimals, and a (somewhat imprecise) prototype of an (ε, δ)-definition of limit in the definition of differentiation. In his work, Weierstrass formalized the concept of limit and eliminated infinitesimals (although his definition can validate nilsquare infinitesimals). Following the work of Weierstrass, it eventually became common to base calculus on limits instead of infinitesimal quantities, though the subject is still occasionally called "infinitesimal calculus". Bernhard Riemann used these ideas to give a precise definition of the integral. It was also during this period that the ideas of calculus were generalized to the complex plane with the development of complex analysis. In modern mathematics, the foundations of calculus are included in the field of real analysis, which contains full definitions and proofs of the theorems of calculus. The reach of calculus has also been greatly extended. Henri Lebesgue invented measure theory, based on earlier developments by Émile Borel, and used it to define integrals of all but the most pathological functions. Laurent Schwartz introduced distributions, which can be used to take the derivative of any function whatsoever. Limits are not the only rigorous approach to the foundation of calculus. Another way is to use Abraham Robinson's non-standard analysis. Robinson's approach, developed in the 1960s, uses technical machinery from mathematical logic to augment the real number system with infinitesimal and infinite numbers, as in the original Newton-Leibniz conception. The resulting numbers are called hyperreal numbers, and they can be used to give a Leibniz-like development of the usual rules of calculus. There is also smooth infinitesimal analysis, which differs from non-standard analysis in that it mandates neglecting higher-power infinitesimals during derivations. Based on the ideas of F. W. Lawvere and employing the methods of category theory, smooth infinitesimal analysis views all functions as being continuous and incapable of being expressed in terms of discrete entities. One aspect of this formulation is that the law of excluded middle does not hold. 
The law of excluded middle is also rejected in constructive mathematics, a branch of mathematics that insists that proofs of the existence of a number, function, or other mathematical object should give a construction of the object. Reformulations of calculus in a constructive framework are generally part of the subject of constructive analysis. === Significance === While many of the ideas of calculus had been developed earlier in Greece, China, India, Iraq, Persia, and Japan, the use of calculus began in Europe, during the 17th century, when Newton and Leibniz built on the work of earlier mathematicians to introduce its basic principles. The Hungarian polymath John von Neumann wrote of this work, The calculus was the first achievement of modern mathematics and it is difficult to overestimate its importance. I think it defines more unequivocally than anything else the inception of modern mathematics, and the system of mathematical analysis, which is its logical development, still constitutes the greatest technical advance in exact thinking. Applications of differential calculus include computations involving velocity and acceleration, the slope of a curve, and optimization.: 341–453  Applications of integral calculus include computations involving area, volume, arc length, center of mass, work, and pressure.: 685–700  More advanced applications include power series and Fourier series. Calculus is also used to gain a more precise understanding of the nature of space, time, and motion. For centuries, mathematicians and philosophers wrestled with paradoxes involving division by zero or sums of infinitely many numbers. These questions arise in the study of motion and area. The ancient Greek philosopher Zeno of Elea gave several famous examples of such paradoxes. Calculus provides tools, especially the limit and the infinite series, that resolve the paradoxes. == Principles == === Limits and infinitesimals === Calculus is usually developed by working with very small quantities. Historically, the first method of doing so was by infinitesimals. These are objects which can be treated like real numbers but which are, in some sense, "infinitely small". For example, an infinitesimal number could be greater than 0, but less than any number in the sequence 1, 1/2, 1/3, ... and thus less than any positive real number. From this point of view, calculus is a collection of techniques for manipulating infinitesimals. The symbols d x {\displaystyle dx} and d y {\displaystyle dy} were taken to be infinitesimal, and the derivative d y / d x {\displaystyle dy/dx} was their ratio. The infinitesimal approach fell out of favor in the 19th century because it was difficult to make the notion of an infinitesimal precise. In the late 19th century, infinitesimals were replaced within academia by the epsilon, delta approach to limits. Limits describe the behavior of a function at a certain input in terms of its values at nearby inputs. They capture small-scale behavior using the intrinsic structure of the real number system (as a metric space with the least-upper-bound property). In this treatment, calculus is a collection of techniques for manipulating certain limits. Infinitesimals get replaced by sequences of smaller and smaller numbers, and the infinitely small behavior of a function is found by taking the limiting behavior for these sequences. Limits were thought to provide a more rigorous foundation for calculus, and for this reason, they became the standard approach during the 20th century. 
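For reference, the (ε, δ)-definition of the limit alluded to above can be stated compactly; the following is a common textbook formulation (the symbols f, a, and L are generic placeholders, not taken from this article), written as a LaTeX display:

```latex
% A standard (epsilon, delta)-definition of the limit of f at a point a.
\[
  \lim_{x \to a} f(x) = L
  \quad\Longleftrightarrow\quad
  \forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x :\;
  0 < |x - a| < \delta \;\Longrightarrow\; |f(x) - L| < \varepsilon .
\]
```

Stated this way, a limit refers only to ordinary real numbers ε and δ, with no appeal to infinitely small quantities.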
However, the infinitesimal concept was revived in the 20th century with the introduction of non-standard analysis and smooth infinitesimal analysis, which provided solid foundations for the manipulation of infinitesimals. === Differential calculus === Differential calculus is the study of the definition, properties, and applications of the derivative of a function. The process of finding the derivative is called differentiation. Given a function and a point in the domain, the derivative at that point is a way of encoding the small-scale behavior of the function near that point. By finding the derivative of a function at every point in its domain, it is possible to produce a new function, called the derivative function or just the derivative of the original function. In formal terms, the derivative is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The derivative, however, can take the squaring function as an input. This means that the derivative takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function. The function produced by differentiating the squaring function turns out to be the doubling function.: 32  In more explicit terms the "doubling function" may be denoted by g(x) = 2x and the "squaring function" by f(x) = x2. The "derivative" now takes the function f(x), defined by the expression "x2", as an input, that is all the information—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to output another function, the function g(x) = 2x, as will turn out. In Lagrange's notation, the symbol for a derivative is an apostrophe-like mark called a prime. Thus, the derivative of a function called f is denoted by f′, pronounced "f prime" or "f dash". For instance, if f(x) = x2 is the squaring function, then f′(x) = 2x is its derivative (the doubling function g from above). If the input of the function represents time, then the derivative represents change concerning time. For example, if f is a function that takes time as input and gives the position of a ball at that time as output, then the derivative of f is how the position is changing in time, that is, it is the velocity of the ball.: 18–20  If a function is linear (that is if the graph of the function is a straight line), then the function can be written as y = mx + b, where x is the independent variable, y is the dependent variable, b is the y-intercept, and: m = rise run = change in y change in x = Δ y Δ x . {\displaystyle m={\frac {\text{rise}}{\text{run}}}={\frac {{\text{change in }}y}{{\text{change in }}x}}={\frac {\Delta y}{\Delta x}}.} This gives an exact value for the slope of a straight line.: 6  If the graph of the function is not a straight line, however, then the change in y divided by the change in x varies. Derivatives give an exact meaning to the notion of change in output concerning change in input. To be concrete, let f be a function, and fix a point a in the domain of f. (a, f(a)) is a point on the graph of the function. 
If h is a number close to zero, then a + h is a number close to a. Therefore, (a + h, f(a + h)) is close to (a, f(a)). The slope between these two points is m = f ( a + h ) − f ( a ) ( a + h ) − a = f ( a + h ) − f ( a ) h . {\displaystyle m={\frac {f(a+h)-f(a)}{(a+h)-a}}={\frac {f(a+h)-f(a)}{h}}.} This expression is called a difference quotient. A line through two points on a curve is called a secant line, so m is the slope of the secant line between (a, f(a)) and (a + h, f(a + h)). The second line is only an approximation to the behavior of the function at the point a because it does not account for what happens between a and a + h. It is not possible to discover the behavior at a by setting h to zero because this would require dividing by zero, which is undefined. The derivative is defined by taking the limit as h tends to zero, meaning that it considers the behavior of f for all small values of h and extracts a consistent value for the case when h equals zero: lim h → 0 f ( a + h ) − f ( a ) h . {\displaystyle \lim _{h\to 0}{f(a+h)-f(a) \over {h}}.} Geometrically, the derivative is the slope of the tangent line to the graph of f at a. The tangent line is a limit of secant lines just as the derivative is a limit of difference quotients. For this reason, the derivative is sometimes called the slope of the function f.: 61–63  Here is a particular example, the derivative of the squaring function at the input 3. Let f(x) = x2 be the squaring function. f ′ ( 3 ) = lim h → 0 ( 3 + h ) 2 − 3 2 h = lim h → 0 9 + 6 h + h 2 − 9 h = lim h → 0 6 h + h 2 h = lim h → 0 ( 6 + h ) = 6 {\displaystyle {\begin{aligned}f'(3)&=\lim _{h\to 0}{(3+h)^{2}-3^{2} \over {h}}\\&=\lim _{h\to 0}{9+6h+h^{2}-9 \over {h}}\\&=\lim _{h\to 0}{6h+h^{2} \over {h}}\\&=\lim _{h\to 0}(6+h)\\&=6\end{aligned}}} The slope of the tangent line to the squaring function at the point (3, 9) is 6, that is to say, it is going up six times as fast as it is going to the right. The limit process just described can be performed for any point in the domain of the squaring function. This defines the derivative function of the squaring function or just the derivative of the squaring function for short. A computation similar to the one above shows that the derivative of the squaring function is the doubling function.: 63  === Leibniz notation === A common notation, introduced by Leibniz, for the derivative in the example above is y = x 2 d y d x = 2 x . {\displaystyle {\begin{aligned}y&=x^{2}\\{\frac {dy}{dx}}&=2x.\end{aligned}}} In an approach based on limits, the symbol ⁠dy/ dx⁠ is to be interpreted not as the quotient of two numbers but as a shorthand for the limit computed above.: 74  Leibniz, however, did intend it to represent the quotient of two infinitesimally small numbers, dy being the infinitesimally small change in y caused by an infinitesimally small change dx applied to x. We can also think of ⁠d/ dx⁠ as a differentiation operator, which takes a function as an input and gives another function, the derivative, as the output. For example: d d x ( x 2 ) = 2 x . 
{\displaystyle {\frac {d}{dx}}(x^{2})=2x.} In this usage, the dx in the denominator is read as "with respect to x".: 79  Another example of correct notation could be: g ( t ) = t 2 + 2 t + 4 d d t g ( t ) = 2 t + 2 {\displaystyle {\begin{aligned}g(t)&=t^{2}+2t+4\\{d \over dt}g(t)&=2t+2\end{aligned}}} Even when calculus is developed using limits rather than infinitesimals, it is common to manipulate symbols like dx and dy as if they were real numbers; although it is possible to avoid such manipulations, they are sometimes notationally convenient in expressing operations such as the total derivative. === Integral calculus === Integral calculus is the study of the definitions, properties, and applications of two related concepts, the indefinite integral and the definite integral. The process of finding the value of an integral is called integration.: 508  The indefinite integral, also known as the antiderivative, is the inverse operation to the derivative.: 163–165  F is an indefinite integral of f when f is a derivative of F. (This use of lower- and upper-case letters for a function and its indefinite integral is common in calculus.) The definite integral inputs a function and outputs a number, which gives the algebraic sum of areas between the graph of the input and the x-axis. The technical definition of the definite integral involves the limit of a sum of areas of rectangles, called a Riemann sum.: 282  A motivating example is the distance traveled in a given time.: 153  If the speed is constant, only multiplication is needed: D i s t a n c e = S p e e d ⋅ T i m e {\displaystyle \mathrm {Distance} =\mathrm {Speed} \cdot \mathrm {Time} } But if the speed changes, a more powerful method of finding the distance is necessary. One such method is to approximate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the approximate distance traveled in each interval. The basic idea is that if only a short time elapses, then the speed will stay more or less the same. However, a Riemann sum only gives an approximation of the distance traveled. We must take the limit of all such Riemann sums to find the exact distance traveled. When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, traveling a steady 50 mph for 3 hours results in a total distance of 150 miles. Plotting the velocity as a function of time yields a rectangle with a height equal to the velocity and a width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve.: 535  This connection between the area under a curve and the distance traveled can be extended to any irregularly shaped region exhibiting a fluctuating velocity over a given period. If f(x) represents speed as it varies over time, the distance traveled between the times represented by a and b is the area of the region between f(x) and the x-axis, between x = a and x = b. To approximate that area, an intuitive method would be to divide up the distance between a and b into several equal segments, the length of each segment represented by the symbol Δx. For each small segment, we can choose one value of the function f(x). Call that value h. 
Then the area of the rectangle with base Δx and height h gives the distance (time Δx multiplied by speed h) traveled in that segment. Associated with each segment is the average value of the function above it, f(x) = h. The sum of all such rectangles gives an approximation of the area between the axis and the curve, which is an approximation of the total distance traveled. A smaller value for Δx will give more rectangles and in most cases a better approximation, but for an exact answer, we need to take a limit as Δx approaches zero.: 512–522  The symbol of integration is ∫ {\displaystyle \int } , an elongated S chosen to suggest summation.: 529  The definite integral is written as: ∫ a b f ( x ) d x {\displaystyle \int _{a}^{b}f(x)\,dx} and is read "the integral from a to b of f-of-x with respect to x." The Leibniz notation dx is intended to suggest dividing the area under the curve into an infinite number of rectangles so that their width Δx becomes the infinitesimally small dx.: 44  The indefinite integral, or antiderivative, is written: ∫ f ( x ) d x . {\displaystyle \int f(x)\,dx.} Functions differing by only a constant have the same derivative, and it can be shown that the antiderivative of a given function is a family of functions differing only by a constant.: 326  Since the derivative of the function y = x2 + C, where C is any constant, is y′ = 2x, the antiderivative of the latter is given by: ∫ 2 x d x = x 2 + C . {\displaystyle \int 2x\,dx=x^{2}+C.} The unspecified constant C present in the indefinite integral or antiderivative is known as the constant of integration.: 135  === Fundamental theorem === The fundamental theorem of calculus states that differentiation and integration are inverse operations.: 290  More precisely, it relates the values of antiderivatives to definite integrals. Because it is usually easier to compute an antiderivative than to apply the definition of a definite integral, the fundamental theorem of calculus provides a practical way of computing definite integrals. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration. The fundamental theorem of calculus states: If a function f is continuous on the interval [a, b] and if F is a function whose derivative is f on the interval (a, b), then ∫ a b f ( x ) d x = F ( b ) − F ( a ) . {\displaystyle \int _{a}^{b}f(x)\,dx=F(b)-F(a).} Furthermore, for every x in the interval (a, b), d d x ∫ a x f ( t ) d t = f ( x ) . {\displaystyle {\frac {d}{dx}}\int _{a}^{x}f(t)\,dt=f(x).} This realization, made by both Newton and Leibniz, was key to the proliferation of analytic results after their work became known. (The extent to which Newton and Leibniz were influenced by immediate predecessors, and particularly what Leibniz may have learned from the work of Isaac Barrow, is difficult to determine because of the priority dispute between them.) The fundamental theorem provides an algebraic method of computing many definite integrals—without performing limit processes—by finding formulae for antiderivatives. It is also a prototype solution of a differential equation. Differential equations relate an unknown function to its derivatives and are ubiquitous in the sciences.: 351–352  == Applications == Calculus is used in every branch of the physical sciences,: 1  actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and in other fields wherever a problem can be mathematically modeled and an optimal solution is desired. 
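The limit of difference quotients, the Riemann-sum approximation, and the fundamental theorem described above can be checked numerically. The sketch below is plain Python written for this discussion (the helper names are illustrative, not from any standard library); it estimates the derivative of the squaring function at 3 and the definite integral of its derivative 2x over [0, 3]:

```python
def difference_quotient(f, a, h):
    """Slope of the secant line of f between a and a + h."""
    return (f(a + h) - f(a)) / h

def left_riemann_sum(f, a, b, n):
    """Left Riemann sum of f on [a, b] using n equal subintervals."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

square = lambda x: x ** 2   # f(x) = x^2
double = lambda x: 2 * x    # its derivative f'(x) = 2x

# Difference quotients approach f'(3) = 6 as h shrinks.
for h in (0.1, 0.01, 0.001):
    print(h, difference_quotient(square, 3, h))   # ~6.1, ~6.01, ~6.001

# Riemann sums of f'(x) = 2x on [0, 3] approach f(3) - f(0) = 9,
# as the fundamental theorem of calculus predicts.
for n in (10, 100, 1000):
    print(n, left_riemann_sum(double, 0, 3, n))   # 8.1, 8.91, 8.991
```

Shrinking h and increasing n play the role of the limit processes in the definitions above; calculus supplies the exact values 6 and 9 directly.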
It allows one to go from (non-constant) rates of change to the total change or vice versa, and many times in studying a problem we know one and are trying to find the other. Calculus can be used in conjunction with other mathematical disciplines. For example, it can be used with linear algebra to find the "best fit" linear approximation for a set of points in a domain. Or, it can be used in probability theory to determine the expectation value of a continuous random variable given a probability density function.: 37  In analytic geometry, the study of graphs of functions, calculus is used to find high points and low points (maxima and minima), slope, concavity and inflection points. Calculus is also used to find approximate solutions to equations; in practice, it is the standard way to solve differential equations and do root finding in most applications. Examples are methods such as Newton's method, fixed point iteration, and linear approximation. For instance, spacecraft use a variation of the Euler method to approximate curved courses within zero-gravity environments. Physics makes particular use of calculus; all concepts in classical mechanics and electromagnetism are related through calculus. The mass of an object of known density, the moment of inertia of objects, and the potential energies due to gravitational and electromagnetic forces can all be found by the use of calculus. An example of the use of calculus in mechanics is Newton's second law of motion, which states that the derivative of an object's momentum concerning time equals the net force upon it. Alternatively, Newton's second law can be expressed by saying that the net force equals the object's mass times its acceleration, which is the time derivative of velocity and thus the second time derivative of spatial position. Starting from knowing how an object is accelerating, we use calculus to derive its path. Maxwell's theory of electromagnetism and Einstein's theory of general relativity are also expressed in the language of differential calculus.: 52–55  Chemistry also uses calculus in determining reaction rates: 599  and in studying radioactive decay.: 814  In biology, population dynamics starts with reproduction and death rates to model population changes.: 631  Green's theorem, which gives the relationship between a line integral around a simple closed curve C and a double integral over the plane region D bounded by C, is applied in an instrument known as a planimeter, which is used to calculate the area of a flat surface on a drawing. For example, it can be used to calculate the amount of area taken up by an irregularly shaped flower bed or swimming pool when designing the layout of a piece of property. In the realm of medicine, calculus can be used to find the optimal branching angle of a blood vessel to maximize flow. Calculus can be applied to understand how quickly a drug is eliminated from a body or how quickly a cancerous tumor grows. In economics, calculus allows for the determination of maximal profit by providing a way to easily calculate both marginal cost and marginal revenue.: 387  == See also == Glossary of calculus List of calculus topics List of derivatives and integrals in alternative calculi List of differentiation identities Publications in calculus Table of integrals == References == == Further reading == == External links == "Calculus", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Calculus". MathWorld. Topics on Calculus at PlanetMath. 
Calculus Made Easy (1914) by Silvanus P. Thompson Full text in PDF Calculus on In Our Time at the BBC Calculus.org: The Calculus page at University of California, Davis – contains resources and links to other sites Earliest Known Uses of Some of the Words of Mathematics: Calculus & Analysis The Role of Calculus in College Mathematics Archived 26 July 2021 at the Wayback Machine from ERICDigests.org OpenCourseWare Calculus from the Massachusetts Institute of Technology Infinitesimal Calculus – an article on its historical development, in Encyclopedia of Mathematics, ed. Michiel Hazewinkel. Daniel Kleitman, MIT. "Calculus for Beginners and Artists". Calculus training materials at imomath.com (in English and Arabic) The Excursion of Calculus, 1772
Wikipedia/Calculus_of_functions
In mathematics, Borel's lemma, named after Émile Borel, is an important result used in the theory of asymptotic expansions and partial differential equations. == Statement == Suppose U is an open set in the Euclidean space Rn, and suppose that f0, f1, ... is a sequence of smooth functions on U. If I is any open interval in R containing 0 (possibly I = R), then there exists a smooth function F(t, x) defined on I×U, such that ∂ k F ∂ t k | ( 0 , x ) = f k ( x ) , {\displaystyle \left.{\frac {\partial ^{k}F}{\partial t^{k}}}\right|_{(0,x)}=f_{k}(x),} for k ≥ 0 and x in U. == Proof == Proofs of Borel's lemma can be found in many text books on analysis, including Golubitsky & Guillemin (1974) and Hörmander (1990), from which the proof below is taken. Note that it suffices to prove the result for a small interval I = (−ε,ε), since if ψ(t) is a smooth bump function with compact support in (−ε,ε) equal identically to 1 near 0, then ψ(t) ⋅ F(t, x) gives a solution on R × U. Similarly using a smooth partition of unity on Rn subordinate to a covering by open balls with centres at δ⋅Zn, it can be assumed that all the fm have compact support in some fixed closed ball C. For each m, let F m ( t , x ) = t m m ! ⋅ ψ ( t ε m ) ⋅ f m ( x ) , {\displaystyle F_{m}(t,x)={t^{m} \over m!}\cdot \psi \left({t \over \varepsilon _{m}}\right)\cdot f_{m}(x),} where εm is chosen sufficiently small that ‖ ∂ α F m ‖ ∞ ≤ 2 − m {\displaystyle \|\partial ^{\alpha }F_{m}\|_{\infty }\leq 2^{-m}} for |α| < m. These estimates imply that each sum ∑ m ≥ 0 ∂ α F m {\displaystyle \sum _{m\geq 0}\partial ^{\alpha }F_{m}} is uniformly convergent and hence that F = ∑ m ≥ 0 F m {\displaystyle F=\sum _{m\geq 0}F_{m}} is a smooth function with ∂ α F = ∑ m ≥ 0 ∂ α F m . {\displaystyle \partial ^{\alpha }F=\sum _{m\geq 0}\partial ^{\alpha }F_{m}.} By construction ∂ t m F ( t , x ) | t = 0 = f m ( x ) . {\displaystyle \partial _{t}^{m}F(t,x)|_{t=0}=f_{m}(x).} Note: Exactly the same construction can be applied, without the auxiliary space U, to produce a smooth function on the interval I for which the derivatives at 0 form an arbitrary sequence. == See also == Non-analytic smooth function § Application to Taylor series == References == Erdélyi, A. (1956), Asymptotic expansions, Dover Publications, pp. 22–25, ISBN 0486603180{{citation}}: CS1 maint: ignored ISBN errors (link) Golubitsky, M.; Guillemin, V. (1974), Stable mappings and their singularities, Graduate Texts in Mathematics, vol. 14, Springer-Verlag, ISBN 0-387-90072-1 Hörmander, Lars (1990), The analysis of linear partial differential operators, I. Distribution theory and Fourier analysis (2nd ed.), Springer-Verlag, p. 16, ISBN 3-540-52343-X This article incorporates material from Borel lemma on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia/Borel's_lemma
In mathematics, the term "characteristic function" can refer to any of several distinct concepts: The indicator function of a subset, that is the function 1 A : X → { 0 , 1 } , {\displaystyle \mathbf {1} _{A}\colon X\to \{0,1\},} which for a given subset A of X, has value 1 at points of A and 0 at points of X − A. The characteristic function in convex analysis, closely related to the indicator function of a set: χ A ( x ) := { 0 , x ∈ A ; + ∞ , x ∉ A . {\displaystyle \chi _{A}(x):={\begin{cases}0,&x\in A;\\+\infty ,&x\not \in A.\end{cases}}} In probability theory, the characteristic function of any probability distribution on the real line is given by the following formula, where X is any random variable with the distribution in question: φ X ( t ) = E ⁡ ( e i t X ) , {\displaystyle \varphi _{X}(t)=\operatorname {E} \left(e^{itX}\right),} where E {\displaystyle \operatorname {E} } denotes expected value. For multivariate distributions, the product tX is replaced by a scalar product of vectors. The characteristic function of a cooperative game in game theory. The characteristic polynomial in linear algebra. The characteristic state function in statistical mechanics. The Euler characteristic, a topological invariant. The receiver operating characteristic in statistical decision theory. The point characteristic function in statistics. == References ==
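As a small illustration of the probability-theoretic characteristic function listed above, the sketch below (Python with NumPy; the Monte Carlo estimate and the sample size are choices made for this example, not part of the definition) compares an empirical estimate of E(e^{itX}) for a standard normal X with its known closed form e^{-t^2/2}:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal(100_000)     # draws of X ~ N(0, 1)

def empirical_cf(x, t):
    """Monte Carlo estimate of the characteristic function E[exp(i t X)]."""
    return np.mean(np.exp(1j * t * x))

for t in (0.0, 0.5, 1.0, 2.0):
    estimate = empirical_cf(samples, t)
    exact = np.exp(-t ** 2 / 2)            # characteristic function of N(0, 1)
    print(t, round(estimate.real, 4), round(float(exact), 4))
```

For the standard normal the imaginary part of the estimate is close to zero, and the real part tracks e^{-t^2/2}.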
Wikipedia/Characteristic_function
The foot-pound force (symbol: ft⋅lbf, ft⋅lbf, or ft⋅lb ) is a unit of work or energy in the engineering and gravitational systems in United States customary and imperial units of measure. It is the energy transferred upon applying a force of one pound-force (lbf) through a linear displacement of one foot. The corresponding SI unit is the joule, though in terms of energy, one joule is not equal to one foot-pound. == Usage == The term foot-pound is also used as a unit of torque (see pound-foot (torque)). In the United States this is often used to specify, for example, the tightness of a fastener (such as screws and nuts) or the output of an engine. Although they are dimensionally equivalent, energy (a scalar) and torque (a Euclidean vector) are distinct physical quantities. Both energy and torque can be expressed as a product of a force vector with a displacement vector (hence pounds and feet); energy is the scalar product of the two, and torque is the vector product. Although calling the torque unit "pound-foot" has been academically suggested, both are still commonly called "foot-pound" in colloquial usage. To avoid confusion, it is not uncommon for people to specify each as "foot-pound of energy" or "foot-pound of torque" respectively. In small arms ballistics and particularly in the United States, the foot-pound is often used to specify the muzzle energy of a bullet. == Conversion factors == === Energy === 1 foot pound-force is equivalent to: 1.355818 joules or newton-metres 13,558,180 ergs 1.285067×10−3 British thermal units 0.3240483 calories 8.462351×1018 electronvolts = 8.462351×109 gigaelectronvolts === Power === 1 foot pound-force per second is equivalent to: 1.355818 watts 1.818182×10−3 horsepower Related conversions: 1 watt ≈ 44.25373 ft⋅lbf/min ≈ 0.7375621 ft⋅lbf/s 1 horsepower (mechanical) = 33,000 ft⋅lbf/min = 550 ft⋅lbf/s == See also == Conversion of units Pound-foot (torque) Poundal Slug (unit) Units of energy == References ==
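A minimal conversion helper based on the joule factor listed above (the function names and the printed check are illustrative choices for this sketch, not a standard API):

```python
FT_LBF_IN_JOULES = 1.355818          # 1 foot pound-force, from the table above

def foot_pounds_to_joules(ft_lbf):
    """Convert an energy in foot pound-force to joules."""
    return ft_lbf * FT_LBF_IN_JOULES

def joules_to_foot_pounds(joules):
    """Convert an energy in joules to foot pound-force."""
    return joules / FT_LBF_IN_JOULES

print(foot_pounds_to_joules(1.0))    # 1.355818 J
print(joules_to_foot_pounds(1.0))    # ~0.7375621 ft*lbf, matching the per-watt figure above
```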
Wikipedia/Foot-pound_(energy)
Magnetism is the class of physical attributes that occur through a magnetic field, which allows objects to attract or repel each other. Because both electric currents and magnetic moments of elementary particles give rise to a magnetic field, magnetism is one of two aspects of electromagnetism. The most familiar effects occur in ferromagnetic materials, which are strongly attracted by magnetic fields and can be magnetized to become permanent magnets, producing magnetic fields themselves. Demagnetizing a magnet is also possible. Only a few substances are ferromagnetic; the most common ones are iron, cobalt, nickel, and their alloys. All substances exhibit some type of magnetism. Magnetic materials are classified according to their bulk susceptibility. Ferromagnetism is responsible for most of the effects of magnetism encountered in everyday life, but there are actually several types of magnetism. Paramagnetic substances, such as aluminium and oxygen, are weakly attracted to an applied magnetic field; diamagnetic substances, such as copper and carbon, are weakly repelled; while antiferromagnetic materials, such as chromium, have a more complex relationship with a magnetic field. The force of a magnet on paramagnetic, diamagnetic, and antiferromagnetic materials is usually too weak to be felt and can be detected only by laboratory instruments, so in everyday life, these substances are often described as non-magnetic. The strength of a magnetic field always decreases with distance from the magnetic source, though the exact mathematical relationship between strength and distance varies. Many factors can influence the magnetic field of an object including the magnetic moment of the material, the physical shape of the object, both the magnitude and direction of any electric current present within the object, and the temperature of the object. == History == Magnetism was first discovered in the ancient world when people noticed that lodestones, naturally magnetized pieces of the mineral magnetite, could attract iron. The word magnet comes from the Greek term μαγνῆτις λίθος magnētis lithos, "the Magnesian stone, lodestone". In ancient Greece, Aristotle attributed the first of what could be called a scientific discussion of magnetism to the philosopher Thales of Miletus, who lived from about 625 BCE to about 545 BCE. The ancient Indian medical text Sushruta Samhita describes using magnetite to remove arrows embedded in a person's body. In ancient China, the earliest literary reference to magnetism lies in a 4th-century BCE book named after its author, Guiguzi. The 2nd-century BCE annals, Lüshi Chunqiu, also notes: "The lodestone makes iron approach; some (force) is attracting it." The earliest mention of the attraction of a needle is in a 1st-century work Lunheng (Balanced Inquiries): "A lodestone attracts a needle." The 11th-century Chinese scientist Shen Kuo was the first person to write—in the Dream Pool Essays—of the magnetic needle compass and that it improved the accuracy of navigation by employing the astronomical concept of true north. By the 12th century, the Chinese were known to use the lodestone compass for navigation. They sculpted a directional spoon from lodestone in such a way that the handle of the spoon always pointed south. Alexander Neckam, by 1187, was the first in Europe to describe the compass and its use for navigation. In 1269, Peter Peregrinus de Maricourt wrote the Epistola de magnete, the first extant treatise describing the properties of magnets. 
In 1282, the properties of magnets and the dry compasses were discussed by Al-Ashraf Umar II, a Yemeni physicist, astronomer, and geographer. Leonardo Garzoni's only extant work, the Due trattati sopra la natura, e le qualità della calamita (Two treatises on the nature and qualities of the magnet), is the first known example of a modern treatment of magnetic phenomena. Written in years near 1580 and never published, the treatise had a wide diffusion. In particular, Garzoni is referred to as an expert in magnetism by Niccolò Cabeo, whose Philosophia Magnetica (1629) is just a re-adjustment of Garzoni's work. Garzoni's treatise was known also to Giovanni Battista Della Porta. In 1600, William Gilbert published his De Magnete, Magneticisque Corporibus, et de Magno Magnete Tellure (On the Magnet and Magnetic Bodies, and on the Great Magnet the Earth). In this work he describes many of his experiments with his model earth called the terrella. From his experiments, he concluded that the Earth was itself magnetic and that this was the reason compasses pointed north whereas, previously, some believed that it was the pole star Polaris or a large magnetic island on the north pole that attracted the compass. An understanding of the relationship between electricity and magnetism began in 1819 with work by Hans Christian Ørsted, a professor at the University of Copenhagen, who discovered, by the accidental twitching of a compass needle near a wire, that an electric current could create a magnetic field. This landmark experiment is known as Ørsted's Experiment. Jean-Baptiste Biot and Félix Savart, both of whom in 1820 came up with the Biot–Savart law giving an equation for the magnetic field from a current-carrying wire. Around the same time, André-Marie Ampère carried out numerous systematic experiments and discovered that the magnetic force between two DC current loops of any shape is equal to the sum of the individual forces that each current element of one circuit exerts on each other current element of the other circuit. In 1831, Michael Faraday discovered that a time-varying magnetic flux induces a voltage through a wire loop. In 1835, Carl Friedrich Gauss hypothesized, based on Ampère's force law in its original form, that all forms of magnetism arise as a result of elementary point charges moving relative to each other. Wilhelm Eduard Weber advanced Gauss's theory to Weber electrodynamics. From around 1861, James Clerk Maxwell synthesized and expanded many of these insights into Maxwell's equations, unifying electricity, magnetism, and optics into the field of electromagnetism. However, Gauss's interpretation of magnetism is not fully compatible with Maxwell's electrodynamics. In 1905, Albert Einstein used Maxwell's equations in motivating his theory of special relativity, requiring that the laws held true in all inertial reference frames. Gauss's approach of interpreting the magnetic force as a mere effect of relative velocities thus found its way back into electrodynamics to some extent. Electromagnetism has continued to develop into the 21st century, being incorporated into the more fundamental theories of gauge theory, quantum electrodynamics, electroweak theory, and finally the standard model. == Sources == Magnetism, at its root, arises from three sources: Electric current Spin magnetic moments of elementary particles Changing electric fields The magnetic properties of materials are mainly due to the magnetic moments of their atoms' orbiting electrons. 
The magnetic moments of the nuclei of atoms are typically thousands of times smaller than the electrons' magnetic moments, so they are negligible in the context of the magnetization of materials. Nuclear magnetic moments are nevertheless very important in other contexts, particularly in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI). Ordinarily, the enormous number of electrons in a material are arranged such that their magnetic moments (both orbital and intrinsic) cancel out. This is due, to some extent, to electrons combining into pairs with opposite intrinsic magnetic moments as a result of the Pauli exclusion principle (see electron configuration), and combining into filled subshells with zero net orbital motion. In both cases, the electrons preferentially adopt arrangements in which the magnetic moment of each electron is canceled by the opposite moment of another electron. Moreover, even when the electron configuration is such that there are unpaired electrons and/or non-filled subshells, it is often the case that the various electrons in the solid will contribute magnetic moments that point in different, random directions so that the material will not be magnetic. Sometimes—either spontaneously, or owing to an applied external magnetic field—each of the electron magnetic moments will be, on average, lined up. A suitable material can then produce a strong net magnetic field. The magnetic behavior of a material depends on its structure, particularly its electron configuration, for the reasons mentioned above, and also on the temperature. At high temperatures, random thermal motion makes it more difficult for the electrons to maintain alignment. == Types == === Diamagnetism === Diamagnetism appears in all materials and is the tendency of a material to oppose an applied magnetic field, and therefore, to be repelled by a magnetic field. However, in a material with paramagnetic properties (that is, with a tendency to enhance an external magnetic field), the paramagnetic behavior dominates. Thus, despite its universal occurrence, diamagnetic behavior is observed only in a purely diamagnetic material. In a diamagnetic material, there are no unpaired electrons, so the intrinsic electron magnetic moments cannot produce any bulk effect. In these cases, the magnetization arises from the electrons' orbital motions, which can be understood classically as follows: When a material is put in a magnetic field, the electrons circling the nucleus will experience, in addition to their Coulomb attraction to the nucleus, a Lorentz force from the magnetic field. Depending on which direction the electron is orbiting, this force may increase the centripetal force on the electrons, pulling them in towards the nucleus, or it may decrease the force, pulling them away from the nucleus. This effect systematically increases the orbital magnetic moments that were aligned opposite the field and decreases the ones aligned parallel to the field (in accordance with Lenz's law). This results in a small bulk magnetic moment, with an opposite direction to the applied field. This description is meant only as a heuristic; the Bohr–Van Leeuwen theorem shows that diamagnetism is impossible according to classical physics, and that a proper understanding requires a quantum-mechanical description. All materials undergo this orbital response. However, in paramagnetic and ferromagnetic substances, the diamagnetic effect is overwhelmed by the much stronger effects caused by the unpaired electrons. 
=== Paramagnetism === In a paramagnetic material there are unpaired electrons; i.e., atomic or molecular orbitals with exactly one electron in them. While paired electrons are required by the Pauli exclusion principle to have their intrinsic ('spin') magnetic moments pointing in opposite directions, causing their magnetic fields to cancel out, an unpaired electron is free to align its magnetic moment in any direction. When an external magnetic field is applied, these magnetic moments will tend to align themselves in the same direction as the applied field, thus reinforcing it. === Ferromagnetism === A ferromagnet, like a paramagnetic substance, has unpaired electrons. However, in addition to the electrons' intrinsic magnetic moment's tendency to be parallel to an applied field, there is also in these materials a tendency for these magnetic moments to orient parallel to each other to maintain a lowered-energy state. Thus, even in the absence of an applied field, the magnetic moments of the electrons in the material spontaneously line up parallel to one another. Every ferromagnetic substance has its own individual temperature, called the Curie temperature, or Curie point, above which it loses its ferromagnetic properties. This is because the thermal tendency to disorder overwhelms the energy-lowering due to ferromagnetic order. Ferromagnetism only occurs in a few substances; common ones are iron, nickel, cobalt, their alloys, and some alloys of rare-earth metals. ==== Magnetic domains ==== The magnetic moments of atoms in a ferromagnetic material cause them to behave something like tiny permanent magnets. They stick together and align themselves into small regions of more or less uniform alignment called magnetic domains or Weiss domains. Magnetic domains can be observed with a magnetic force microscope to reveal magnetic domain boundaries that resemble white lines in the sketch. There are many scientific experiments that can physically show magnetic fields. When a domain contains too many molecules, it becomes unstable and divides into two domains aligned in opposite directions so that they stick together more stably. When exposed to a magnetic field, the domain boundaries move, so that the domains aligned with the magnetic field grow and dominate the structure (dotted yellow area), as shown at the left. When the magnetizing field is removed, the domains may not return to an unmagnetized state. This results in the ferromagnetic material's being magnetized, forming a permanent magnet. When magnetized strongly enough that the prevailing domain overruns all others to result in only one single domain, the material is magnetically saturated. When a magnetized ferromagnetic material is heated to the Curie point temperature, the molecules are agitated to the point that the magnetic domains lose the organization, and the magnetic properties they cause cease. When the material is cooled, this domain alignment structure spontaneously returns, in a manner roughly analogous to how a liquid can freeze into a crystalline solid. === Antiferromagnetism === In an antiferromagnet, unlike a ferromagnet, there is a tendency for the intrinsic magnetic moments of neighboring valence electrons to point in opposite directions. When all atoms are arranged in a substance so that each neighbor is anti-parallel, the substance is antiferromagnetic. Antiferromagnets have a zero net magnetic moment because adjacent opposite moment cancels out, meaning that no field is produced by them. 
Antiferromagnets are less common compared to the other types of behaviors and are mostly observed at low temperatures. In varying temperatures, antiferromagnets can be seen to exhibit diamagnetic and ferromagnetic properties. In some materials, neighboring electrons prefer to point in opposite directions, but there is no geometrical arrangement in which each pair of neighbors is anti-aligned. This is called a canted antiferromagnet or spin ice and is an example of geometrical frustration. === Ferrimagnetism === Like ferromagnetism, ferrimagnets retain their magnetization in the absence of a field. However, like antiferromagnets, neighboring pairs of electron spins tend to point in opposite directions. These two properties are not contradictory, because in the optimal geometrical arrangement, there is more magnetic moment from the sublattice of electrons that point in one direction, than from the sublattice that points in the opposite direction. Most ferrites are ferrimagnetic. The first discovered magnetic substance, magnetite, is a ferrite and was originally believed to be a ferromagnet; Louis Néel disproved this, however, after discovering ferrimagnetism. === Superparamagnetism === When a ferromagnet or ferrimagnet is sufficiently small, it acts like a single magnetic spin that is subject to Brownian motion. Its response to a magnetic field is qualitatively similar to the response of a paramagnet, but much larger. === Nagaoka magnetism === Japanese physicist Yosuke Nagaoka conceived of a type of magnetism in a square, two-dimensional lattice where every lattice node had one electron. If one electron was removed under specific conditions, the lattice's energy would be minimal only when all electrons' spins were parallel. A variation on this was achieved experimentally by arranging the atoms in a triangular moiré lattice of molybdenum diselenide and tungsten disulfide monolayers. Applying a weak magnetic field and a voltage led to ferromagnetic behavior when 100–150% more electrons than lattice nodes were present. The extra electrons delocalized and paired with lattice electrons to form doublons. Delocalization was prevented unless the lattice electrons had aligned spins. The doublons thus created localized ferromagnetic regions. The phenomenon took place at 140 millikelvins. === Other types of magnetism === Metamagnetism Molecule-based magnets Single-molecule magnet Amorphous magnet == Electromagnet == An electromagnet is a type of magnet in which the magnetic field is produced by an electric current. The magnetic field disappears when the current is turned off. Electromagnets usually consist of a large number of closely spaced turns of wire that create the magnetic field. The wire turns are often wound around a magnetic core made from a ferromagnetic or ferrimagnetic material such as iron; the magnetic core concentrates the magnetic flux and makes a more powerful magnet. The main advantage of an electromagnet over a permanent magnet is that the magnetic field can be quickly changed by controlling the amount of electric current in the winding. However, unlike a permanent magnet that needs no power, an electromagnet requires a continuous supply of current to maintain the magnetic field. Electromagnets are widely used as components of other electrical devices, such as motors, generators, relays, solenoids, loudspeakers, hard disks, MRI machines, scientific instruments, and magnetic separation equipment. 
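For a rough sense of the field strengths such windings produce, an idealized long air-core solenoid obeys the textbook approximation B ≈ μ0nI, where n is the number of turns per unit length; this formula is not stated in the article above and is quoted here only as a standard approximation. A minimal sketch:

```python
import math

MU_0 = 4 * math.pi * 1e-7    # permeability of free space, N/A^2

def solenoid_field(turns, length_m, current_a):
    """Approximate field (tesla) inside a long, tightly wound air-core solenoid."""
    n = turns / length_m     # turns per metre
    return MU_0 * n * current_a

# e.g. 1000 turns over 0.5 m carrying 2 A -> about 5 mT
print(solenoid_field(1000, 0.5, 2.0))
```

A ferromagnetic core, as described above, concentrates the flux and can raise the field well beyond this air-core estimate.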
Electromagnets are also employed in industry for picking up and moving heavy iron objects such as scrap iron and steel. Electromagnetism was discovered in 1820. == Magnetism, electricity, and special relativity == As a consequence of Einstein's theory of special relativity, electricity and magnetism are fundamentally interlinked. Both magnetism lacking electricity, and electricity without magnetism, are inconsistent with special relativity, due to such effects as length contraction, time dilation, and the fact that the magnetic force is velocity-dependent. However, when both electricity and magnetism are taken into account, the resulting theory (electromagnetism) is fully consistent with special relativity. In particular, a phenomenon that appears purely electric or purely magnetic to one observer may be a mix of both to another, or more generally the relative contributions of electricity and magnetism are dependent on the frame of reference. Thus, special relativity "mixes" electricity and magnetism into a single, inseparable phenomenon called electromagnetism, analogous to how general relativity "mixes" space and time into spacetime. All observations on electromagnetism apply to what might be considered to be primarily magnetism, e.g. perturbations in the magnetic field are necessarily accompanied by a nonzero electric field, and propagate at the speed of light. == Magnetic fields in a material == In vacuum, B = μ 0 H , {\displaystyle \mathbf {B} \ =\ \mu _{0}\mathbf {H} ,} where μ0 is the vacuum permeability. In a material, B = μ 0 ( H + M ) . {\displaystyle \mathbf {B} \ =\ \mu _{0}(\mathbf {H} +\mathbf {M} ).\ } The quantity μ0M is called magnetic polarization. If the field H is small, the response of the magnetization M in a diamagnet or paramagnet is approximately linear: M = χ H , {\displaystyle \mathbf {M} =\chi \mathbf {H} ,} the constant of proportionality being called the magnetic susceptibility. If so, μ 0 ( H + M ) = μ 0 ( 1 + χ ) H = μ r μ 0 H = μ H . {\displaystyle \mu _{0}(\mathbf {H} +\mathbf {M} )\ =\ \mu _{0}(1+\chi )\mathbf {H} \ =\ \mu _{r}\mu _{0}\mathbf {H} \ =\ \mu \mathbf {H} .} In a hard magnet such as a ferromagnet, M is not proportional to the field and is generally nonzero even when H is zero (see Remanence). == Magnetic force == The phenomenon of magnetism is "mediated" by the magnetic field. An electric current or magnetic dipole creates a magnetic field, and that field, in turn, imparts magnetic forces on other particles that are in the fields. Maxwell's equations, which simplify to the Biot–Savart law in the case of steady currents, describe the origin and behavior of the fields that govern these forces. Therefore, magnetism is seen whenever electrically charged particles are in motion—for example, from movement of electrons in an electric current, or in certain cases from the orbital motion of electrons around an atom's nucleus. They also arise from "intrinsic" magnetic dipoles arising from quantum-mechanical spin. The same situations that create magnetic fields—charge moving in a current or in an atom, and intrinsic magnetic dipoles—are also the situations in which a magnetic field has an effect, creating a force. Following is the formula for moving charge; for the forces on an intrinsic dipole, see magnetic dipole. 
When a charged particle moves through a magnetic field B, it feels a Lorentz force F given by the cross product: F = q ( v × B ) , {\displaystyle \mathbf {F} =q(\mathbf {v} \times \mathbf {B} ),} where q {\displaystyle q} is the electric charge of the particle, and v {\displaystyle v} is the velocity vector of the particle. Because this is a cross product, the force is perpendicular to both the motion of the particle and the magnetic field. The magnitude of the force is F = q v B sin θ , {\displaystyle F=qvB\sin \theta \ ,} where θ {\displaystyle \theta } is the angle between v and B. One tool for determining the direction of the velocity vector of a moving charge, the magnetic field, and the force exerted is labeling the index finger "V", the middle finger "B", and the thumb "F" with your right hand. When making a gun-like configuration, with the middle finger crossing under the index finger, the fingers represent the velocity vector, magnetic field vector, and force vector, respectively. See also right-hand rule. == Magnetic dipoles == A very common source of magnetic field found in nature is a dipole, with a "South pole" and a "North pole", terms dating back to the use of magnets as compasses, interacting with the Earth's magnetic field to indicate North and South on the globe. Since opposite ends of magnets are attracted, the north pole of a magnet is attracted to the south pole of another magnet. The Earth's North Magnetic Pole (currently in the Arctic Ocean, north of Canada) is physically a south pole, as it attracts the north pole of a compass. A magnetic field contains energy, and physical systems move toward configurations with lower energy. When diamagnetic material is placed in a magnetic field, a magnetic dipole tends to align itself in opposed polarity to that field, thereby lowering the net field strength. When ferromagnetic material is placed within a magnetic field, the magnetic dipoles align to the applied field, thus expanding the domain walls of the magnetic domains. === Magnetic monopoles === Since a bar magnet gets its ferromagnetism from electrons distributed evenly throughout the bar, when a bar magnet is cut in half, each of the resulting pieces is a smaller bar magnet. Even though a magnet is said to have a north pole and a south pole, these two poles cannot be separated from each other. A monopole—if such a thing exists—would be a new and fundamentally different kind of magnetic object. It would act as an isolated north pole, not attached to a south pole, or vice versa. Monopoles would carry "magnetic charge" analogous to electric charge. Despite systematic searches since 1931, as of 2010, they have never been observed, and could very well not exist. Nevertheless, some theoretical physics models predict the existence of these magnetic monopoles. Paul Dirac observed in 1931 that, because electricity and magnetism show a certain symmetry, just as quantum theory predicts that individual positive or negative electric charges can be observed without the opposing charge, isolated South or North magnetic poles should be observable. Using quantum theory Dirac showed that if magnetic monopoles exist, then one could explain the quantization of electric charge—that is, why the observed elementary particles carry charges that are multiples of the charge of the electron. Certain grand unified theories predict the existence of monopoles which, unlike elementary particles, are solitons (localized energy packets). 
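Returning to the Lorentz force expression given earlier in this section, it can be evaluated directly; a minimal NumPy sketch (the charge, velocity, and field values are arbitrary illustrative numbers):

```python
import numpy as np

q = 1.602e-19                       # charge in coulombs (roughly one elementary charge)
v = np.array([1.0e5, 0.0, 0.0])     # velocity in m/s
B = np.array([0.0, 0.0, 1.5])       # magnetic field in teslas

F = q * np.cross(v, B)              # Lorentz force F = q (v x B)
print(F)                            # ~[0, -2.4e-14, 0] newtons

# The force is perpendicular to both v and B, as the cross product requires.
print(np.dot(F, v), np.dot(F, B))   # both ~0
```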
The initial results of using these models to estimate the number of monopoles created in the Big Bang contradicted cosmological observations—the monopoles would have been so plentiful and massive that they would have long since halted the expansion of the universe. However, the idea of inflation (for which this problem served as a partial motivation) was successful in solving this problem, creating models in which monopoles existed but were rare enough to be consistent with current observations. == Units == === SI === === Other === gauss – the centimeter-gram-second (CGS) unit of magnetic field (denoted B). oersted – the CGS unit of magnetizing field (denoted H) maxwell – the CGS unit for magnetic flux gamma – a unit of magnetic flux density that was commonly used before the tesla came into use (1.0 gamma = 1.0 nanotesla) μ0 – common symbol for the permeability of free space (4π × 10−7 newton/(ampere-turn)2) == Living things == Some organisms can detect magnetic fields, a phenomenon known as magnetoception. Some materials in living things are ferromagnetic, though it is unclear if the magnetic properties serve a special function or are merely a byproduct of containing iron. For instance, chitons, a type of marine mollusk, produce magnetite to harden their teeth, and even humans produce magnetite in bodily tissue. Magnetobiology studies the effects of magnetic fields on living organisms; fields naturally produced by an organism are known as biomagnetism. Many biological organisms are mostly made of water, and because water is diamagnetic, extremely strong magnetic fields can repel these living things. == Interpretation of magnetism by means of relative velocities == In the years after 1820, André-Marie Ampère carried out numerous experiments in which he measured the forces between direct currents. In particular, he also studied the magnetic forces between non-parallel wires. The final result of his work was a force law that is now named after him. In 1835, Carl Friedrich Gauss realized that Ampere's force law in its original form can be explained by a generalization of Coulomb's law. Gauss's force law states that the electromagnetic force F 1 {\textstyle \mathbf {F} _{1}} experienced by a point charge, q 1 {\displaystyle q_{1}} with trajectory r 1 ( t ) {\displaystyle \mathbf {r} _{1}(t)} , in the vicinity of another point charge, q 2 {\displaystyle q_{2}} with trajectory r 2 ( t ) {\displaystyle \mathbf {r} _{2}(t)} , in a vacuum is equal to the central force F 1 = q 1 q 2 4 π ϵ 0 r | r | 3 ( 1 + | v | 2 c 2 − 3 2 ( r | r | ⋅ v c ) 2 ) {\displaystyle \mathbf {F} _{1}={\frac {q_{1}\,q_{2}}{4\,\pi \,\epsilon _{0}}}\,{\frac {\mathbf {r} }{|\mathbf {r} |^{3}}}\,\left(1+{\frac {|\mathbf {v} |^{2}}{c^{2}}}-{\frac {3}{2}}\,\left({\frac {\mathbf {r} }{|\mathbf {r} |}}\cdot {\frac {\mathbf {v} }{c}}\right)^{2}\right)} , where r = r 1 ( t ) − r 2 ( t ) {\textstyle \mathbf {r} =\mathbf {r} _{1}(t)-\mathbf {r} _{2}(t)} is the distance between the charges and v = r ˙ 1 ( t ) − r ˙ 2 ( t ) {\textstyle \mathbf {v} ={\dot {\mathbf {r} }}_{1}(t)-{\dot {\mathbf {r} }}_{2}(t)} is the relative velocity. Wilhelm Eduard Weber confirmed Gauss's hypothesis in numerous experiments. By means of Weber electrodynamics it is possible to explain the static and quasi-static effects in the non-relativistic regime of classical electrodynamics without magnetic field and Lorentz force. Since 1870, Maxwell electrodynamics has been developed, which postulates that electric and magnetic fields exist. 
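As a rough numerical illustration of Gauss's force law quoted above (this sketch is not part of the original article; the charges, separation and velocities below are arbitrary assumed values), the velocity-dependent correction to the Coulomb force can be evaluated directly:

```python
# Minimal sketch of Gauss's 1835 central-force law: the Coulomb force multiplied by
# a correction depending on the relative velocity v of the two charges.
# All numerical values are illustrative assumptions, not taken from the article.
import numpy as np

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
C = 299792458.0           # speed of light, m/s

def gauss_force(q1, q2, r1, r2, v1, v2):
    """Force on charge 1 due to charge 2 according to Gauss's force law."""
    r = r1 - r2                          # separation vector
    v = v1 - v2                          # relative velocity
    d = np.linalg.norm(r)
    r_hat = r / d
    coulomb = q1 * q2 / (4 * np.pi * EPS0 * d**2)
    correction = 1 + np.dot(v, v) / C**2 - 1.5 * (np.dot(r_hat, v) / C)**2
    return coulomb * correction * r_hat

# Two like 1 microcoulomb charges 1 m apart, one moving transversely at 0.1 c:
F = gauss_force(1e-6, 1e-6,
                np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                np.array([0.0, 0.1 * C, 0.0]), np.array([0.0, 0.0, 0.0]))
print(F)   # about 1% larger than the static Coulomb force, directed away from the other charge
```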
In Maxwell's electrodynamics, the actual electromagnetic force can be calculated using the Lorentz force, which, like the Weber force, is speed-dependent. However, Maxwell's electrodynamics is not fully compatible with the work of Ampère, Gauss and Weber in the quasi-static regime. In particular, Ampère's original force law and the Biot-Savart law are only equivalent if the field-generating conductor loop is closed. Maxwell's electrodynamics therefore represents a break with the interpretation of magnetism by Gauss and Weber, since in Maxwell's electrodynamics it is no longer possible to deduce the magnetic force from a central force. == Quantum-mechanical origin of magnetism == While heuristic explanations based on classical physics can be formulated, diamagnetism, paramagnetism and ferromagnetism can be fully explained only using quantum theory. A successful model was developed already in 1927, by Walter Heitler and Fritz London, who derived, quantum-mechanically, how hydrogen molecules are formed from hydrogen atoms, i.e. from the atomic hydrogen orbitals u A {\displaystyle u_{A}} and u B {\displaystyle u_{B}} centered at the nuclei A and B, see below. That this leads to magnetism is not at all obvious, but will be explained in the following. According to the Heitler–London theory, so-called two-body molecular σ {\displaystyle \sigma } -orbitals are formed, namely the resulting orbital is: ψ ( r 1 , r 2 ) = 1 2 ( u A ( r 1 ) u B ( r 2 ) + u B ( r 1 ) u A ( r 2 ) ) {\displaystyle \psi (\mathbf {r} _{1},\,\,\mathbf {r} _{2})={\frac {1}{\sqrt {2}}}\,\,\left(u_{A}(\mathbf {r} _{1})u_{B}(\mathbf {r} _{2})+u_{B}(\mathbf {r} _{1})u_{A}(\mathbf {r} _{2})\right)} Here the last product means that a first electron, r1, is in an atomic hydrogen-orbital centered at the second nucleus, whereas the second electron runs around the first nucleus. This "exchange" phenomenon is an expression for the quantum-mechanical property that particles with identical properties cannot be distinguished. It is specific not only for the formation of chemical bonds, but also for magnetism. That is, in this connection the term exchange interaction arises, a term which is essential for the origin of magnetism, and which is stronger, roughly by factors 100 and even by 1000, than the energies arising from the electrodynamic dipole-dipole interaction. As for the spin function χ ( s 1 , s 2 ) {\displaystyle \chi (s_{1},s_{2})} , which is responsible for the magnetism, we have the already mentioned Pauli's principle, namely that a symmetric orbital (i.e. with the + sign as above) must be multiplied with an antisymmetric spin function (i.e. with a − sign), and vice versa. Thus: χ ( s 1 , s 2 ) = 1 2 ( α ( s 1 ) β ( s 2 ) − β ( s 1 ) α ( s 2 ) ) {\displaystyle \chi (s_{1},\,\,s_{2})={\frac {1}{\sqrt {2}}}\,\,\left(\alpha (s_{1})\beta (s_{2})-\beta (s_{1})\alpha (s_{2})\right)} , I.e., not only u A {\displaystyle u_{A}} and u B {\displaystyle u_{B}} must be substituted by α and β, respectively (the first entity means "spin up", the second one "spin down"), but also the sign + by the − sign, and finally ri by the discrete values si (= ±1⁄2); thereby we have α ( + 1 / 2 ) = β ( − 1 / 2 ) = 1 {\displaystyle \alpha (+1/2)=\beta (-1/2)=1} and α ( − 1 / 2 ) = β ( + 1 / 2 ) = 0 {\displaystyle \alpha (-1/2)=\beta (+1/2)=0} . The "singlet state", i.e. the − sign, means: the spins are antiparallel, i.e. for the solid we have antiferromagnetism, and for two-atomic molecules one has diamagnetism. 
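A small sketch (added here for illustration; it is not part of the original text) makes the antisymmetry of this singlet spin function concrete, using the stated values α(+1/2) = β(−1/2) = 1 and α(−1/2) = β(+1/2) = 0:

```python
# Evaluate the singlet two-spin function chi(s1, s2) on the discrete spin values +-1/2
# and check that it is antisymmetric under exchange of the two spins.
from math import sqrt

def alpha(s):  # "spin up" function
    return 1.0 if s == +0.5 else 0.0

def beta(s):   # "spin down" function
    return 1.0 if s == -0.5 else 0.0

def chi(s1, s2):
    """Antisymmetric (singlet) spin function from the text."""
    return (alpha(s1) * beta(s2) - beta(s1) * alpha(s2)) / sqrt(2)

for s1 in (+0.5, -0.5):
    for s2 in (+0.5, -0.5):
        print(s1, s2, chi(s1, s2), chi(s2, s1))
# chi vanishes for parallel spins and changes sign when the two spins are exchanged,
# i.e. the singlet state pairs the spins antiparallel, as stated above.
```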
The tendency to form a (homoeopolar) chemical bond (this means: the formation of a symmetric molecular orbital, i.e. with the + sign) results through the Pauli principle automatically in an antisymmetric spin state (i.e. with the − sign). In contrast, the Coulomb repulsion of the electrons, i.e. the tendency that they try to avoid each other by this repulsion, would lead to an antisymmetric orbital function (i.e. with the − sign) of these two particles, and complementary to a symmetric spin function (i.e. with the + sign, one of the so-called "triplet functions"). Thus, now the spins would be parallel (ferromagnetism in a solid, paramagnetism in two-atomic gases). The last-mentioned tendency dominates in the metals iron, cobalt and nickel, and in some rare earths, which are ferromagnetic. Most of the other metals, where the first-mentioned tendency dominates, are nonmagnetic (e.g. sodium, aluminium, and magnesium) or antiferromagnetic (e.g. manganese). Diatomic gases are also almost exclusively diamagnetic, and not paramagnetic. However, the oxygen molecule, because of the involvement of π-orbitals, is an exception important for the life-sciences. The Heitler-London considerations can be generalized to the Heisenberg model of magnetism (Heisenberg 1928). The explanation of the phenomena is thus essentially based on all subtleties of quantum mechanics, whereas the electrodynamics covers mainly the phenomenology. == See also == == References == == Further reading == == Bibliography == The Exploratorium Science Snacks – Subject:Physics/Electricity & Magnetism A collection of magnetic structures – MAGNDATA
Wikipedia/Magnetic_force
In classical mechanics, a central force on an object is a force that is directed towards or away from a point called center of force. F ( r ) = F ( r ) r ^ {\displaystyle \mathbf {F} (\mathbf {r} )=F(\mathbf {r} ){\hat {\mathbf {r} }}} where F is a force vector, F is a scalar valued force function (whose absolute value gives the magnitude of the force and is positive if the force is outward and negative if the force is inward), r is the position vector, ||r|| is its length, and r ^ = r / ‖ r ‖ {\textstyle {\hat {\mathbf {r} }}=\mathbf {r} /\|\mathbf {r} \|} is the corresponding unit vector.: 93  Not all central force fields are conservative or spherically symmetric. However, a central force is conservative if and only if it is spherically symmetric or rotationally invariant.: 133–38  Examples of spherically symmetric central forces include the Coulomb force and the force of gravity. == Properties == Central forces that are conservative can always be expressed as the negative gradient of a potential energy: F ( r ) = − ∇ V ( r ) , where V ( r ) = ∫ | r | + ∞ F ( r ) d r {\displaystyle \mathbf {F} (\mathbf {r} )=-\mathbf {\nabla } V(\mathbf {r} )\;{\text{, where }}V(\mathbf {r} )=\int _{|\mathbf {r} |}^{+\infty }F(r)\,\mathrm {d} r} (the upper bound of integration is arbitrary, as the potential is defined up to an additive constant). In a conservative field, the total mechanical energy (kinetic and potential) is conserved: E = 1 2 m | r ˙ | 2 + 1 2 I | ω | 2 + V ( r ) = constant {\displaystyle E={\tfrac {1}{2}}m|\mathbf {\dot {r}} |^{2}+{\tfrac {1}{2}}I|{\boldsymbol {\omega }}|^{2}+V(\mathbf {r} )={\text{constant}}} (where 'ṙ' denotes the derivative of 'r' with respect to time, that is the velocity, 'I' denotes the moment of inertia of that body and 'ω' denotes its angular velocity), and in a central force field, so is the angular momentum: L = r × m r ˙ = constant {\displaystyle \mathbf {L} =\mathbf {r} \times m\mathbf {\dot {r}} ={\text{constant}}} because the torque exerted by the force is zero. As a consequence, the body moves on the plane perpendicular to the angular momentum vector and containing the origin, and obeys Kepler's second law. (If the angular momentum is zero, the body moves along the line joining it with the origin.) It can also be shown that an object that moves under the influence of any central force obeys Kepler's second law. However, the first and third laws depend on the inverse-square nature of Newton's law of universal gravitation and do not hold in general for other central forces. As a consequence of being conservative, these specific central force fields are irrotational, that is, their curl is zero, except at the origin: ∇ × F ( r ) = 0 . {\displaystyle \nabla \times \mathbf {F} (\mathbf {r} )=\mathbf {0} .} == Examples == Gravitational force and Coulomb force are two familiar examples with F ( r ) {\displaystyle F(\mathbf {r} )} being proportional to 1/r2 only. An object in such a force field with negative F ( r ) {\displaystyle F(\mathbf {r} )} (corresponding to an attractive force) obeys Kepler's laws of planetary motion. The force field of a spatial harmonic oscillator is central with F ( r ) {\displaystyle F(\mathbf {r} )} proportional to r only and negative. By Bertrand's theorem, these two, F ( r ) = − k / r 2 {\displaystyle F(\mathbf {r} )=-k/r^{2}} and F ( r ) = − k r {\displaystyle F(\mathbf {r} )=-kr} , are the only possible central force fields where all bounded orbits are stable closed orbits. However, there exist other force fields, which have some closed orbits.
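The conservation statements above can be checked numerically. The following sketch (illustrative only; the mass, force constant and initial conditions are arbitrary assumptions, not from the article) integrates motion under an attractive inverse-square central force and verifies that the angular momentum vector stays constant, so the orbit remains in a fixed plane:

```python
# A particle under F(r) = -(k/|r|^2) r_hat: angular momentum L = r x (m v) is conserved.
# Values below are arbitrary example numbers.
import numpy as np

k, m, dt = 1.0, 1.0, 1e-3
r = np.array([1.0, 0.0, 0.0])          # initial position
v = np.array([0.0, 0.8, 0.0])          # initial velocity (gives a bound orbit)

def accel(r):
    """Acceleration from the attractive inverse-square central force."""
    d = np.linalg.norm(r)
    return -(k / m) * r / d**3

L0 = np.cross(r, m * v)                # initial angular momentum
for _ in range(20000):                 # simple leapfrog (kick-drift-kick) integration
    v += 0.5 * dt * accel(r)
    r += dt * v
    v += 0.5 * dt * accel(r)

L = np.cross(r, m * v)
print(L0, L)   # the two vectors agree closely: L is conserved, so the motion stays
               # in the plane perpendicular to L, consistent with Kepler's second law
```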
== See also == Classical central-force problem Particle in a spherically symmetric potential == References ==
Wikipedia/Central_force
Statics is the branch of classical mechanics that is concerned with the analysis of force and torque acting on a physical system that does not experience an acceleration, but rather is in equilibrium with its environment. If F {\displaystyle {\textbf {F}}} is the total of the forces acting on the system, m {\displaystyle m} is the mass of the system and a {\displaystyle {\textbf {a}}} is the acceleration of the system, Newton's second law states that F = m a {\displaystyle {\textbf {F}}=m{\textbf {a}}\,} (the bold font indicates a vector quantity, i.e. one with both magnitude and direction). If a = 0 {\displaystyle {\textbf {a}}=0} , then F = 0 {\displaystyle {\textbf {F}}=0} . For a system in static equilibrium the acceleration is zero, so the system is either at rest or its center of mass moves at constant velocity. The application of the assumption of zero acceleration to the summation of moments acting on the system leads to M = I α = 0 {\displaystyle {\textbf {M}}=I\alpha =0} , where M {\displaystyle {\textbf {M}}} is the summation of all moments acting on the system, I {\displaystyle I} is the moment of inertia of the mass and α {\displaystyle \alpha } is the angular acceleration of the system. For a system where α = 0 {\displaystyle \alpha =0} , it is also true that M = 0. {\displaystyle {\textbf {M}}=0.} Together, the equations F = m a = 0 {\displaystyle {\textbf {F}}=m{\textbf {a}}=0} (the 'first condition for equilibrium') and M = I α = 0 {\displaystyle {\textbf {M}}=I\alpha =0} (the 'second condition for equilibrium') can be used to solve for unknown quantities acting on the system. == History == Archimedes (c. 287–c. 212 BC) did pioneering work in statics. Later developments in the field of statics are found in the works of Thebit. == Background == === Force === Force is the action of one body on another. A force is either a push or a pull, and it tends to move a body in the direction of its action. The action of a force is characterized by its magnitude, by the direction of its action, and by its point of application (or point of contact). Thus, force is a vector quantity, because its effect depends on the direction as well as on the magnitude of the action. Forces are classified as either contact or body forces. A contact force is produced by direct physical contact; an example is the force exerted on a body by a supporting surface. A body force is generated by virtue of the position of a body within a force field such as a gravitational, electric, or magnetic field and is independent of contact with any other body; an example of a body force is the weight of a body in the Earth's gravitational field. === Moment of a force === In addition to the tendency to move a body in the direction of its application, a force can also tend to rotate a body about an axis. The axis may be any line which neither intersects nor is parallel to the line of action of the force. This rotational tendency is known as moment of force (M). Moment is also referred to as torque. ==== Moment about a point ==== The magnitude of the moment of a force at a point O is equal to the perpendicular distance from O to the line of action of F, multiplied by the magnitude of the force: M = F · d, where F is the magnitude of the applied force and d is the perpendicular distance from the axis to the line of action of the force. This perpendicular distance is called the moment arm. The direction of the moment is given by the right hand rule, where counterclockwise (CCW) is out of the page, and clockwise (CW) is into the page.
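As a worked illustration of the two equilibrium conditions and of the moment relation M = F · d (the beam, load and dimensions below are assumed example values, not taken from the article), the support reactions of a simply supported beam can be found by setting the sum of vertical forces and the sum of moments about one support to zero:

```python
# Simply supported beam of length L carrying a single downward point load P at
# distance a from the left support A. The unknown vertical reactions R_A and R_B
# follow from: sum F_y = 0 and sum M_A = 0 (moment = force x perpendicular distance).
# All numbers are illustrative assumptions.
import numpy as np

L = 4.0      # beam length, m
a = 1.0      # load position measured from support A, m
P = 10.0     # downward load, kN

# Equilibrium equations (counterclockwise moments about A taken as positive):
#   R_A + R_B - P     = 0
#   R_B * L - P * a   = 0
A_mat = np.array([[1.0, 1.0],
                  [0.0, L  ]])
b_vec = np.array([P, P * a])
R_A, R_B = np.linalg.solve(A_mat, b_vec)
print(R_A, R_B)   # 7.5 kN and 2.5 kN: the reactions sum to P and balance the moment about A
```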
The moment direction may be accounted for by using a stated sign convention, such as a plus sign (+) for counterclockwise moments and a minus sign (−) for clockwise moments, or vice versa. Moments can be added together as vectors. In vector format, the moment can be defined as the cross product between the radius vector, r (the vector from point O to a point on the line of action), and the force vector, F: M O = r × F {\displaystyle {\textbf {M}}_{O}={\textbf {r}}\times {\textbf {F}}} where r = ( r x , r y , r z ) {\displaystyle {\textbf {r}}=(r_{x},r_{y},r_{z})} and F = ( F x , F y , F z ) {\displaystyle {\textbf {F}}=(F_{x},F_{y},F_{z})} are ordinary three-component vectors. ==== Varignon's theorem ==== Varignon's theorem states that the moment of a force about any point is equal to the sum of the moments of the components of the force about the same point. === Equilibrium equations === The static equilibrium of a particle is an important concept in statics. A particle is in equilibrium only if the resultant of all forces acting on the particle is equal to zero. In a rectangular coordinate system the equilibrium equations can be represented by three scalar equations, where the sums of forces in all three directions are equal to zero. An engineering application of this concept is determining the tensions of up to three cables under load, for example the forces exerted on each cable of a hoist lifting an object or of guy wires restraining a hot air balloon to the ground. === Moment of inertia === In classical mechanics, moment of inertia, also called mass moment, rotational inertia, polar moment of inertia of mass, or the angular mass (SI units kg·m²), is a measure of an object's resistance to changes to its rotation. It is the inertia of a rotating body with respect to its rotation. The moment of inertia plays much the same role in rotational dynamics as mass does in linear dynamics, describing the relationship between angular momentum and angular velocity, torque and angular acceleration, and several other quantities. The symbols I and J are usually used to refer to the moment of inertia or polar moment of inertia. While a simple scalar treatment of the moment of inertia suffices for many situations, a more advanced tensor treatment allows the analysis of such complicated systems as spinning tops and gyroscopic motion. The concept was introduced by Leonhard Euler in his 1765 book Theoria motus corporum solidorum seu rigidorum; he discussed the moment of inertia and many related concepts, such as the principal axis of inertia. == Applications == === Solids === Statics is used in the analysis of structures, for instance in architectural and structural engineering. Strength of materials is a related field of mechanics that relies heavily on the application of static equilibrium. A key concept is the center of gravity of a body at rest: it represents an imaginary point at which all the mass of a body resides. The position of the point relative to the foundations on which a body lies determines its stability in response to external forces. If the center of gravity exists outside the foundations, then the body is unstable because there is a torque acting: any small disturbance will cause the body to fall or topple.
If the center of gravity exists within the foundations, the body is stable since no net torque acts on the body. If the center of gravity coincides with the foundations, then the body is said to be metastable. === Fluids === Hydrostatics, also known as fluid statics, is the study of fluids at rest (i.e. in static equilibrium). The characteristic of any fluid at rest is that the force exerted on any particle of the fluid is the same at all points at the same depth (or altitude) within the fluid. If the net force is greater than zero the fluid will move in the direction of the resulting force. This concept was first formulated in a slightly extended form by French mathematician and philosopher Blaise Pascal in 1647 and became known as Pascal's law. It has many important applications in hydraulics. Archimedes, Abū Rayhān al-Bīrūnī, Al-Khazini and Galileo Galilei were also major figures in the development of hydrostatics. == See also == Cremona diagram Dynamics Solid mechanics == Notes == == References == Beer, F.P. & Johnston Jr, E.R. (1992). Statics and Mechanics of Materials. McGraw-Hill, Inc. Beer, F.P.; Johnston Jr, E.R.; Eisenberg (2009). Vector Mechanics for Engineers: Statics, 9th Ed. McGraw Hill. ISBN 978-0-07-352923-3. Morelon, Régis; Rashed, Roshdi, eds. (1996), Encyclopedia of the History of Arabic Science, vol. 3, Routledge, ISBN 978-0415124102 == External links ==
Wikipedia/Point_of_application
In Euclidean geometry, a translation is a geometric transformation that moves every point of a figure, shape or space by the same distance in a given direction. A translation can also be interpreted as the addition of a constant vector to every point, or as shifting the origin of the coordinate system. In a Euclidean space, any translation is an isometry. == As a function == If v {\displaystyle \mathbf {v} } is a fixed vector, known as the translation vector, and p {\displaystyle \mathbf {p} } is the initial position of some object, then the translation function T v {\displaystyle T_{\mathbf {v} }} will work as T v ( p ) = p + v {\displaystyle T_{\mathbf {v} }(\mathbf {p} )=\mathbf {p} +\mathbf {v} } . If T {\displaystyle T} is a translation, then the image of a subset A {\displaystyle A} under the function T {\displaystyle T} is the translate of A {\displaystyle A} by T {\displaystyle T} . The translate of A {\displaystyle A} by T v {\displaystyle T_{\mathbf {v} }} is often written as A + v {\displaystyle A+\mathbf {v} } . === Application in classical physics === In classical physics, translational motion is movement that changes the position of an object, as opposed to rotation. For example, according to Whittaker: If a body is moved from one position to another, and if the lines joining the initial and final points of each of the points of the body are a set of parallel straight lines of length ℓ, so that the orientation of the body in space is unaltered, the displacement is called a translation parallel to the direction of the lines, through a distance ℓ. A translation is the operation changing the positions of all points ( x , y , z ) {\displaystyle (x,y,z)} of an object according to the formula ( x , y , z ) → ( x + Δ x , y + Δ y , z + Δ z ) {\displaystyle (x,y,z)\to (x+\Delta x,y+\Delta y,z+\Delta z)} where ( Δ x , Δ y , Δ z ) {\displaystyle (\Delta x,\ \Delta y,\ \Delta z)} is the same vector for each point of the object. The translation vector ( Δ x , Δ y , Δ z ) {\displaystyle (\Delta x,\ \Delta y,\ \Delta z)} common to all points of the object describes a particular type of displacement of the object, usually called a linear displacement to distinguish it from displacements involving rotation, called angular displacements. When considering spacetime, a change of time coordinate is considered to be a translation. == As an operator == The translation operator turns a function of the original position, f ( v ) {\displaystyle f(\mathbf {v} )} , into a function of the final position, f ( v + δ ) {\displaystyle f(\mathbf {v} +\mathbf {\delta } )} . In other words, T δ {\displaystyle T_{\mathbf {\delta } }} is defined such that T δ f ( v ) = f ( v + δ ) . {\displaystyle T_{\mathbf {\delta } }f(\mathbf {v} )=f(\mathbf {v} +\mathbf {\delta } ).} This operator is more abstract than a function, since T δ {\displaystyle T_{\mathbf {\delta } }} defines a relationship between two functions, rather than the underlying vectors themselves. The translation operator can act on many kinds of functions, such as when the translation operator acts on a wavefunction, which is studied in the field of quantum mechanics. == As a group == The set of all translations forms the translation group T {\displaystyle \mathbb {T} } , which is isomorphic to the space itself, and a normal subgroup of Euclidean group E ( n ) {\displaystyle E(n)} . 
The quotient group of E ( n ) {\displaystyle E(n)} by T {\displaystyle \mathbb {T} } is isomorphic to the group of rigid motions which fix a particular origin point, the orthogonal group O ( n ) {\displaystyle O(n)} : E ( n ) / T ≅ O ( n ) {\displaystyle E(n)/\mathbb {T} \cong O(n)} Because translation is commutative, the translation group is abelian. There are an infinite number of possible translations, so the translation group is an infinite group. In the theory of relativity, due to the treatment of space and time as a single spacetime, translations can also refer to changes in the time coordinate. For example, the Galilean group and the Poincaré group include translations with respect to time. === Lattice groups === One kind of subgroup of the three-dimensional translation group are the lattice groups, which are infinite groups, but unlike the translation groups, are finitely generated. That is, a finite generating set generates the entire group. == Matrix representation == A translation is an affine transformation with no fixed points. Matrix multiplications always have the origin as a fixed point. Nevertheless, there is a common workaround using homogeneous coordinates to represent a translation of a vector space with matrix multiplication: Write the 3-dimensional vector v = ( v x , v y , v z ) {\displaystyle \mathbf {v} =(v_{x},v_{y},v_{z})} using 4 homogeneous coordinates as v = ( v x , v y , v z , 1 ) {\displaystyle \mathbf {v} =(v_{x},v_{y},v_{z},1)} . To translate an object by a vector v {\displaystyle \mathbf {v} } , each homogeneous vector p {\displaystyle \mathbf {p} } (written in homogeneous coordinates) can be multiplied by this translation matrix: T v = [ 1 0 0 v x 0 1 0 v y 0 0 1 v z 0 0 0 1 ] {\displaystyle T_{\mathbf {v} }={\begin{bmatrix}1&0&0&v_{x}\\0&1&0&v_{y}\\0&0&1&v_{z}\\0&0&0&1\end{bmatrix}}} As shown below, the multiplication will give the expected result: T v p = [ 1 0 0 v x 0 1 0 v y 0 0 1 v z 0 0 0 1 ] [ p x p y p z 1 ] = [ p x + v x p y + v y p z + v z 1 ] = p + v {\displaystyle T_{\mathbf {v} }\mathbf {p} ={\begin{bmatrix}1&0&0&v_{x}\\0&1&0&v_{y}\\0&0&1&v_{z}\\0&0&0&1\end{bmatrix}}{\begin{bmatrix}p_{x}\\p_{y}\\p_{z}\\1\end{bmatrix}}={\begin{bmatrix}p_{x}+v_{x}\\p_{y}+v_{y}\\p_{z}+v_{z}\\1\end{bmatrix}}=\mathbf {p} +\mathbf {v} } The inverse of a translation matrix can be obtained by reversing the direction of the vector: T v − 1 = T − v . {\displaystyle T_{\mathbf {v} }^{-1}=T_{-\mathbf {v} }.\!} Similarly, the product of translation matrices is given by adding the vectors: T v T w = T v + w . {\displaystyle T_{\mathbf {v} }T_{\mathbf {w} }=T_{\mathbf {v} +\mathbf {w} }.\!} Because addition of vectors is commutative, multiplication of translation matrices is therefore also commutative (unlike multiplication of arbitrary matrices). == Translation of axes == While geometric translation is often viewed as an active transformation that changes the position of a geometric object, a similar result can be achieved by a passive transformation that moves the coordinate system itself but leaves the object fixed. The passive version of an active geometric translation is known as a translation of axes. == Translational symmetry == An object that looks the same before and after translation is said to have translational symmetry. A common example is a periodic function, which is an eigenfunction of a translation operator. 
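A short sketch (not part of the original article) of the homogeneous-coordinate representation described above: it builds the 4 × 4 translation matrix T_v, applies it to a point, and checks the inverse and composition rules T_v⁻¹ = T_(−v) and T_v T_w = T_(v+w). The specific vectors used are arbitrary example values.

```python
# Homogeneous-coordinate translation matrices and their basic algebraic properties.
import numpy as np

def translation_matrix(v):
    """4x4 homogeneous translation matrix for a 3-vector v."""
    T = np.eye(4)
    T[:3, 3] = v
    return T

p = np.array([2.0, -1.0, 5.0, 1.0])          # a point in homogeneous coordinates
v = np.array([1.0, 0.0, 3.0])
w = np.array([0.0, 2.0, -1.0])

Tv, Tw = translation_matrix(v), translation_matrix(w)
print(Tv @ p)                                                    # [3., -1., 8., 1.]  ->  p + v
print(np.allclose(np.linalg.inv(Tv), translation_matrix(-v)))    # True: the inverse negates the vector
print(np.allclose(Tv @ Tw, translation_matrix(v + w)))           # True: composition adds the vectors
```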
== Translations of a graph == The graph of a real function f, the set of points ⁠ ( x , f ( x ) ) {\displaystyle (x,f(x))} ⁠, is often pictured in the real coordinate plane with x as the horizontal coordinate and ⁠ y = f ( x ) {\displaystyle y=f(x)} ⁠ as the vertical coordinate. Starting from the graph of f, a horizontal translation means composing f with a function ⁠ x ↦ x − a {\displaystyle x\mapsto x-a} ⁠, for some constant number a, resulting in a graph consisting of points ⁠ ( x , f ( x − a ) ) {\displaystyle (x,f(x-a))} ⁠. Each point ⁠ ( x , y ) {\displaystyle (x,y)} ⁠ of the original graph corresponds to the point ⁠ ( x + a , y ) {\displaystyle (x+a,y)} ⁠ in the new graph, which pictorially results in a horizontal shift. A vertical translation means composing the function ⁠ y ↦ y + b {\displaystyle y\mapsto y+b} ⁠ with f, for some constant b, resulting in a graph consisting of the points ⁠ ( x , f ( x ) + b ) {\displaystyle {\bigl (}x,f(x)+b{\bigr )}} ⁠. Each point ⁠ ( x , y ) {\displaystyle (x,y)} ⁠ of the original graph corresponds to the point ⁠ ( x , y + b ) {\displaystyle (x,y+b)} ⁠ in the new graph, which pictorially results in a vertical shift. For example, taking the quadratic function ⁠ y = x 2 {\displaystyle y=x^{2}} ⁠, whose graph is a parabola with vertex at ⁠ ( 0 , 0 ) {\displaystyle (0,0)} ⁠, a horizontal translation 5 units to the right would be the new function ⁠ y = ( x − 5 ) 2 = x 2 − 10 x + 25 {\displaystyle y=(x-5)^{2}=x^{2}-10x+25} ⁠ whose vertex has coordinates ⁠ ( 5 , 0 ) {\displaystyle (5,0)} ⁠. A vertical translation 3 units upward would be the new function ⁠ y = x 2 + 3 {\displaystyle y=x^{2}+3} ⁠ whose vertex has coordinates ⁠ ( 0 , 3 ) {\displaystyle (0,3)} ⁠. The antiderivatives of a function all differ from each other by a constant of integration and are therefore vertical translates of each other. == Applications == For describing vehicle dynamics (or movement of any rigid body), including ship dynamics and aircraft dynamics, it is common to use a mechanical model consisting of six degrees of freedom, which includes translations along three reference axes (as well as rotations about those three axes). These translations are often called surge, sway, and heave. == See also == == References == == Further reading == Zazkis, R., Liljedahl, P., & Gadowsky, K. Conceptions of function translation: obstacles, intuitions, and rerouting. Journal of Mathematical Behavior, 22, 437-450. Retrieved April 29, 2014, from www.elsevier.com/locate/jmathb Transformations of Graphs: Horizontal Translations. (2006, January 1). BioMath: Transformation of Graphs. Retrieved April 29, 2014 == External links == Translation Transform at cut-the-knot Geometric Translation (Interactive Animation) at Math Is Fun Understanding 2D Translation and Understanding 3D Translation by Roger Germundsson, The Wolfram Demonstrations Project.
Wikipedia/Translation_(physics)
Luminiferous aether or ether (luminiferous meaning 'light-bearing') was the postulated medium for the propagation of light. It was invoked to explain the ability of the apparently wave-based light to propagate through empty space (a vacuum), something that waves should not be able to do. The assumption of a spatial plenum (space completely filled with matter) of luminiferous aether, rather than a spatial vacuum, provided the theoretical medium that was required by wave theories of light. The aether hypothesis was the topic of considerable debate throughout its history, as it required the existence of an invisible and infinite material with no interaction with physical objects. As the nature of light was explored, especially in the 19th century, the physical qualities required of an aether became increasingly contradictory. By the late 19th century, the existence of the aether was being questioned, although there was no physical theory to replace it. The negative outcome of the Michelson–Morley experiment (1887) suggested that the aether did not exist, a finding that was confirmed in subsequent experiments through the 1920s. This led to considerable theoretical work to explain the propagation of light without an aether. A major breakthrough was the special theory of relativity, which could explain why the experiment failed to see aether, but was more broadly interpreted to suggest that it was not needed. The Michelson–Morley experiment, along with the blackbody radiator and photoelectric effect, was a key experiment in the development of modern physics, which includes both relativity and quantum theory, the latter of which explains the particle-like nature of light. == History of light and aether == === Particles vs. waves === In the 17th century, Robert Boyle was a proponent of an aether hypothesis. According to Boyle, the aether consists of subtle particles, one sort of which explains the absence of vacuum and the mechanical interactions between bodies, and the other sort of which explains phenomena such as magnetism (and possibly gravity) that are, otherwise, inexplicable on the basis of purely mechanical interactions of macroscopic bodies, "though in the ether of the ancients there was nothing taken notice of but a diffused and very subtle substance; yet we are at present content to allow that there is always in the air a swarm of streams moving in a determinate course between the north pole and the south". Christiaan Huygens's Treatise on Light (1690) hypothesized that light is a wave propagating through an aether. He and Isaac Newton could only envision light waves as being longitudinal, propagating like sound and other mechanical waves in fluids. However, longitudinal waves necessarily have only one form for a given propagation direction, rather than two polarizations like a transverse wave. Thus, longitudinal waves can not explain birefringence, in which two polarizations of light are refracted differently by a crystal. In addition, Newton rejected light as waves in a medium because such a medium would have to extend everywhere in space, and would thereby "disturb and retard the Motions of those great Bodies" (the planets and comets) and thus "as it [light's medium] is of no use, and hinders the Operation of Nature, and makes her languish, so there is no evidence for its Existence, and therefore it ought to be rejected". Isaac Newton contended that light is made up of numerous small particles. 
This can explain such features as light's ability to travel in straight lines and reflect off surfaces. Newton imagined light particles as non-spherical "corpuscles", with different "sides" that give rise to birefringence. But the particle theory of light can not satisfactorily explain refraction and diffraction. To explain refraction, Newton's Third Book of Opticks (1st ed. 1704, 4th ed. 1730) postulated an "aethereal medium" transmitting vibrations faster than light, by which light, when overtaken, is put into "Fits of easy Reflexion and easy Transmission", which caused refraction and diffraction. Newton believed that these vibrations were related to heat radiation: Is not the Heat of the warm Room convey'd through the vacuum by the Vibrations of a much subtiler Medium than Air, which after the Air was drawn out remained in the Vacuum? And is not this Medium the same with that Medium by which Light is refracted and reflected, and by whose Vibrations Light communicates Heat to Bodies, and is put into Fits of easy Reflexion and easy Transmission?: 349  In contrast to the modern understanding that heat radiation and light are both electromagnetic radiation, Newton viewed heat and light as two different phenomena. He believed heat vibrations to be excited "when a Ray of Light falls upon the Surface of any pellucid Body".: 348  He wrote, "I do not know what this Aether is", but that if it consists of particles then they must be exceedingly smaller than those of Air, or even than those of Light: The exceeding smallness of its Particles may contribute to the greatness of the force by which those Particles may recede from one another, and thereby make that Medium exceedingly more rare and elastic than Air, and by consequence exceedingly less able to resist the motions of Projectiles, and exceedingly more able to press upon gross Bodies, by endeavoring to expand itself.: 352  === Bradley suggests particles === In 1720, James Bradley carried out a series of experiments attempting to measure stellar parallax by taking measurements of stars at different times of the year. As the Earth moves around the Sun, the apparent angle to a given distant spot changes. By measuring those angles the distance to the star can be calculated based on the known orbital circumference of the Earth around the Sun. He failed to detect any parallax, thereby placing a lower limit on the distance to stars. During these experiments, Bradley also discovered a related effect; the apparent positions of the stars did change over the year, but not as expected. Instead of the apparent angle being maximized when the Earth was at either end of its orbit with respect to the star, the angle was maximized when the Earth was at its fastest sideways velocity with respect to the star. This effect is now known as stellar aberration. Bradley explained this effect in the context of Newton's corpuscular theory of light, by showing that the aberration angle was given by simple vector addition of the Earth's orbital velocity and the velocity of the corpuscles of light, just as vertically falling raindrops strike a moving object at an angle. Knowing the Earth's velocity and the aberration angle enabled him to estimate the speed of light. Explaining stellar aberration in the context of an aether-based theory of light was regarded as more problematic. 
As the aberration relied on relative velocities, and the measured velocity was dependent on the motion of the Earth, the aether had to be remaining stationary with respect to the star as the Earth moved through it. This meant that the Earth could travel through the aether, a physical medium, with no apparent effect – precisely the problem that led Newton to reject a wave model in the first place. === Wave-theory triumphs === A century later, Thomas Young and Augustin-Jean Fresnel revived the wave theory of light when they pointed out that light could be a transverse wave rather than a longitudinal wave; the polarization of a transverse wave (like Newton's "sides" of light) could explain birefringence, and in the wake of a series of experiments on diffraction the particle model of Newton was finally abandoned. Physicists assumed, moreover, that, like mechanical waves, light waves required a medium for propagation, and thus required Huygens's idea of an aether "gas" permeating all space. However, a transverse wave apparently required the propagating medium to behave as a solid, as opposed to a fluid. The idea of a solid that did not interact with other matter seemed a bit odd, and Augustin-Louis Cauchy suggested that perhaps there was some sort of "dragging", or "entrainment", but this made the aberration measurements difficult to understand. He also suggested that the absence of longitudinal waves suggested that the aether had negative compressibility. George Green pointed out that such a fluid would be unstable. George Gabriel Stokes became a champion of the entrainment interpretation, developing a model in which the aether might, like pine pitch, be dilatant (fluid at slow speeds and rigid at fast speeds). Thus the Earth could move through it fairly freely, but it would be rigid enough to support light. === Electromagnetism === In 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch measured the numerical value of the ratio of the electrostatic unit of charge to the electromagnetic unit of charge. They found that the ratio between the electrostatic unit of charge and the electromagnetic unit of charge is the speed of light c. The following year, Gustav Kirchhoff wrote a paper in which he showed that the speed of a signal along an electric wire was equal to the speed of light. These are the first recorded historical links between the speed of light and electromagnetic phenomena. James Clerk Maxwell began working on Michael Faraday's lines of force. In his 1861 paper On Physical Lines of Force he modelled these magnetic lines of force using a sea of molecular vortices that he considered to be partly made of aether and partly made of ordinary matter. He derived expressions for the dielectric constant and the magnetic permeability in terms of the transverse elasticity and the density of this elastic medium. He then equated the ratio of the dielectric constant to the magnetic permeability with a suitably adapted version of Weber and Kohlrausch's result of 1856, and he substituted this result into Newton's equation for the speed of sound. On obtaining a value that was close to the speed of light as measured by Hippolyte Fizeau, Maxwell concluded that light consists in undulations of the same medium that is the cause of electric and magnetic phenomena. Maxwell had, however, expressed some uncertainties surrounding the precise nature of his molecular vortices and so he began to embark on a purely dynamical approach to the problem. 
He wrote another paper in 1864, entitled "A Dynamical Theory of the Electromagnetic Field", in which the details of the luminiferous medium were less explicit. Although Maxwell did not explicitly mention the sea of molecular vortices, his derivation of Ampère's circuital law was carried over from the 1861 paper and he used a dynamical approach involving rotational motion within the electromagnetic field which he likened to the action of flywheels. Using this approach to justify the electromotive force equation (the precursor of the Lorentz force equation), he derived a wave equation from a set of eight equations which appeared in the paper and which included the electromotive force equation and Ampère's circuital law. Maxwell once again used the experimental results of Weber and Kohlrausch to show that this wave equation represented an electromagnetic wave that propagates at the speed of light, hence supporting the view that light is a form of electromagnetic radiation. In 1887–1889, Heinrich Hertz experimentally demonstrated that electromagnetic waves are identical to light waves. This unification of electromagnetic waves and optics indicated that there was a single luminiferous aether instead of many different kinds of aether media. The apparent need for a propagation medium for such Hertzian waves (later called radio waves) can be seen from the fact that they consist of orthogonal electric (E) and magnetic (B or H) waves. The E waves consist of undulating dipolar electric fields, and all such dipoles appeared to require separated and opposite electric charges. Electric charge is an inextricable property of matter, so it appeared that some form of matter was required to provide the alternating current that would seem to have to exist at any point along the propagation path of the wave. Propagation of waves in a true vacuum would imply the existence of electric fields without associated electric charge, or of electric charge without associated matter. Albeit compatible with Maxwell's equations, electromagnetic induction of electric fields could not be demonstrated in vacuum, because all methods of detecting electric fields required electrically charged matter. In addition, Maxwell's equations required that all electromagnetic waves in vacuum propagate at a fixed speed, c. As this can only occur in one reference frame in Newtonian physics (see Galilean relativity), the aether was hypothesized as the absolute and unique frame of reference in which Maxwell's equations hold. That is, the aether must be "still" universally, otherwise c would vary along with any variations that might occur in its supportive medium. Maxwell himself proposed several mechanical models of aether based on wheels and gears, and George Francis FitzGerald even constructed a working model of one of them. These models had to agree with the fact that the electromagnetic waves are transverse but never longitudinal. === Problems === By this point the mechanical qualities of the aether had become more and more magical: it had to be a fluid in order to fill space, but one that was millions of times more rigid than steel in order to support the high frequencies of light waves. It also had to be massless and without viscosity, otherwise it would visibly affect the orbits of planets. Additionally it appeared it had to be completely transparent, non-dispersive, incompressible, and continuous at a very small scale.
Maxwell wrote in Encyclopædia Britannica: Aethers were invented for the planets to swim in, to constitute electric atmospheres and magnetic effluvia, to convey sensations from one part of our bodies to another, and so on, until all space had been filled three or four times over with aethers. ... The only aether which has survived is that which was invented by Huygens to explain the propagation of light. By the early 20th century, aether theory was in trouble. A series of increasingly complex experiments had been carried out in the late 19th century to try to detect the motion of the Earth through the aether, and had failed to do so. A range of proposed aether-dragging theories could explain the null result, but these were more complex, and tended to use arbitrary-looking coefficients and physical assumptions. Lorentz and FitzGerald offered within the framework of Lorentz ether theory a more elegant solution to how the motion of an absolute aether could be undetectable (length contraction), but if their equations were correct, the new special theory of relativity (1905) could generate the same mathematics without referring to an aether at all. Aether fell to Occam's Razor. == Relative motion between the Earth and aether == === Aether drag === The two most important models, which aimed to describe the relative motion of the Earth and aether, were Augustin-Jean Fresnel's (1818) model of the (nearly) stationary aether including a partial aether drag determined by Fresnel's dragging coefficient, and George Gabriel Stokes' (1844) model of complete aether drag. The latter theory was not considered correct, since it was not compatible with the aberration of light, and the auxiliary hypotheses developed to explain this problem were not convincing. Subsequent experiments such as the Sagnac effect (1913) also showed that this model is untenable. However, the most important experiment supporting Fresnel's theory was Fizeau's 1851 experimental confirmation of Fresnel's 1818 prediction that a medium with refractive index n moving with a velocity v would increase the speed of light travelling through the medium in the same direction as v from c/n to: c n + v ( 1 − 1 n 2 ) . {\displaystyle {\frac {c}{n}}+v\left(1-{\frac {1}{n^{2}}}\right).} That is, movement adds only a fraction of the medium's velocity to the light (predicted by Fresnel in order to make Snell's law work in all frames of reference, consistent with stellar aberration). This was initially interpreted to mean that the medium drags the aether along, with a portion of the medium's velocity, but that understanding became very problematic after Wilhelm Veltmann demonstrated that the index n in Fresnel's formula depended upon the wavelength of light, so that the aether could not be moving at a wavelength-independent speed. This implied that there must be a separate aether for each of the infinitely many frequencies. === Negative aether-drift experiments === The key difficulty with Fresnel's aether hypothesis arose from the juxtaposition of the two well-established theories of Newtonian dynamics and Maxwell's electromagnetism. Under a Galilean transformation the equations of Newtonian dynamics are invariant, whereas those of electromagnetism are not. Basically this means that while physics should remain the same in non-accelerated experiments, light would not follow the same rules because it is travelling in the universal "aether frame". Some effect caused by this difference should be detectable. A simple example concerns the model on which aether was originally built: sound.
The speed of propagation for mechanical waves, the speed of sound, is defined by the mechanical properties of the medium. Sound travels 4.3 times faster in water than in air. This explains why a person hearing an explosion underwater and quickly surfacing can hear it again as the slower travelling sound arrives through the air. Similarly, a traveller on an airliner can still carry on a conversation with another traveller because the sound of words is travelling along with the air inside the aircraft. This effect is basic to all Newtonian dynamics, which says that everything from sound to the trajectory of a thrown baseball should all remain the same in the aircraft flying (at least at a constant speed) as if still sitting on the ground. This is the basis of the Galilean transformation, and the concept of frame of reference. But the same was not supposed to be true for light, since Maxwell's mathematics demanded a single universal speed for the propagation of light, based, not on local conditions, but on two measured properties, the permittivity and permeability of free space, that were assumed to be the same throughout the universe. If these numbers did change, there should be noticeable effects in the sky; stars in different directions would have different colours, for instance. Thus at any point there should be one special coordinate system, "at rest relative to the aether". Maxwell noted in the late 1870s that detecting motion relative to this aether should be easy enough—light travelling along with the motion of the Earth would have a different speed than light travelling backward, as they would both be moving against the unmoving aether. Even if the aether had an overall universal flow, changes in position during the day/night cycle, or over the span of seasons, should allow the drift to be detected. ==== First-order experiments ==== Although the aether is almost stationary according to Fresnel, his theory predicts a positive outcome of aether drift experiments only to second order in v / c {\displaystyle v/c} because Fresnel's dragging coefficient would cause a negative outcome of all optical experiments capable of measuring effects to first order in v / c {\displaystyle v/c} . This was confirmed by the following first-order experiments, all of which gave negative results. The following list is based on the description of Wilhelm Wien (1898), with changes and additional experiments according to the descriptions of Edmund Taylor Whittaker (1910) and Jakob Laub (1910): The experiment of François Arago (1810), to confirm whether refraction, and thus the aberration of light, is influenced by Earth's motion. Similar experiments were conducted by George Biddell Airy (1871) by means of a telescope filled with water, and Éleuthère Mascart (1872). The experiment of Fizeau (1860), to find whether the rotation of the polarization plane through glass columns is changed by Earth's motion. He obtained a positive result, but Lorentz could show that the results have been contradictory. DeWitt Bristol Brace (1905) and Strasser (1907) repeated the experiment with improved accuracy, and obtained negative results. The experiment of Martin Hoek (1868). This experiment is a more precise variation of the Fizeau experiment (1851). Two light rays were sent in opposite directions – one of them traverses a path filled with resting water, the other one follows a path through air. In agreement with Fresnel's dragging coefficient, he obtained a negative result. 
The experiment of Wilhelm Klinkerfues (1870) investigated whether an influence of Earth's motion on the absorption line of sodium exists. He obtained a positive result, but this was shown to be an experimental error, because a repetition of the experiment by Haga (1901) gave a negative result. The experiment of Ketteler (1872), in which two rays of an interferometer were sent in opposite directions through two mutually inclined tubes filled with water. No change of the interference fringes occurred. Later, Mascart (1872) showed that the interference fringes of polarized light in calcite remained uninfluenced as well. The experiment of Éleuthère Mascart (1872) to find a change of rotation of the polarization plane in quartz. No change of rotation was found when the light rays had the direction of Earth's motion and then the opposite direction. Lord Rayleigh conducted similar experiments with improved accuracy, and obtained a negative result as well. Besides those optical experiments, also electrodynamic first-order experiments were conducted, which should have led to positive results according to Fresnel. However, Hendrik Antoon Lorentz (1895) modified Fresnel's theory and showed that those experiments can be explained by a stationary aether as well: The experiment of Wilhelm Röntgen (1888), to find whether a charged capacitor produces magnetic forces due to Earth's motion. The experiment of Theodor des Coudres (1889), to find whether the inductive effect of two wire rolls upon a third one is influenced by the direction of Earth's motion. Lorentz showed that this effect is cancelled to first order by the electrostatic charge (produced by Earth's motion) upon the conductors. The experiment of Königsberger (1905). The plates of a capacitor are located in the field of a strong electromagnet. Due to Earth's motion, the plates should have become charged. No such effect was observed. The experiment of Frederick Thomas Trouton (1902). A capacitor was brought parallel to Earth's motion, and it was assumed that momentum is produced when the capacitor is charged. The negative result can be explained by Lorentz's theory, according to which the electromagnetic momentum compensates the momentum due to Earth's motion. Lorentz could also show, that the sensitivity of the apparatus was much too low to observe such an effect. ==== Second-order experiments ==== While the first-order experiments could be explained by a modified stationary aether, more precise second-order experiments were expected to give positive results. However, no such results could be found. The famous Michelson–Morley experiment compared the source light with itself after being sent in different directions and looked for changes in phase in a manner that could be measured with extremely high accuracy. In this experiment, their goal was to determine the velocity of the Earth through the aether. The publication of their result in 1887, the null result, was the first clear demonstration that something was seriously wrong with the aether hypothesis (Michelson's first experiment in 1881 was not entirely conclusive). In this case the MM experiment yielded a shift of the fringing pattern of about 0.01 of a fringe, corresponding to a small velocity. However, it was incompatible with the expected aether wind effect due to the Earth's (seasonally varying) velocity which would have required a shift of 0.4 of a fringe, and the error was small enough that the value may have indeed been zero. 
Therefore, the null hypothesis, the hypothesis that there was no aether wind, could not be rejected. More modern experiments have since reduced the possible value to a number very close to zero, about 10−17. It is obvious from what has gone before that it would be hopeless to attempt to solve the question of the motion of the solar system by observations of optical phenomena at the surface of the earth. A series of experiments using similar but increasingly sophisticated apparatuses all returned the null result as well. Conceptually different experiments that also attempted to detect the motion of the aether were the Trouton–Noble experiment (1903), whose objective was to detect torsion effects caused by electrostatic fields, and the experiments of Rayleigh and Brace (1902, 1904), to detect double refraction in various media. However, all of them obtained a null result, like Michelson–Morley (MM) previously did. These "aether-wind" experiments led to a flurry of efforts to "save" aether by assigning to it ever more complex properties, and only a few scientists, like Emil Cohn or Alfred Bucherer, considered the possibility of the abandonment of the aether hypothesis. Of particular interest was the possibility of "aether entrainment" or "aether drag", which would lower the magnitude of the measurement, perhaps enough to explain the results of the Michelson–Morley experiment. However, as noted earlier, aether dragging already had problems of its own, notably aberration. In addition, the interference experiments of Lodge (1893, 1897) and Ludwig Zehnder (1895), aimed to show whether the aether is dragged by various, rotating masses, showed no aether drag. A more precise measurement was made in the Hammar experiment (1935), which ran a complete MM experiment with one of the "legs" placed between two massive lead blocks. If the aether was dragged by mass then this experiment would have been able to detect the drag caused by the lead, but again the null result was achieved. The theory was again modified, this time to suggest that the entrainment only worked for very large masses or those masses with large magnetic fields. This too was shown to be incorrect by the Michelson–Gale–Pearson experiment, which detected the Sagnac effect due to Earth's rotation (see Aether drag hypothesis). Another completely different attempt to save "absolute" aether was made in the Lorentz–FitzGerald contraction hypothesis, which posited that everything was affected by travel through the aether. In this theory, the reason that the Michelson–Morley experiment "failed" was that the apparatus contracted in length in the direction of travel. That is, the light was being affected in the "natural" manner by its travel through the aether as predicted, but so was the apparatus itself, cancelling out any difference when measured. FitzGerald had inferred this hypothesis from a paper by Oliver Heaviside. Without referral to an aether, this physical interpretation of relativistic effects was shared by Kennedy and Thorndike in 1932 as they concluded that the interferometer's arm contracts and also the frequency of its light source "very nearly" varies in the way required by relativity. Similarly, the Sagnac effect, observed by G. Sagnac in 1913, was immediately seen to be fully consistent with special relativity. 
In fact, the Michelson–Gale–Pearson experiment in 1925 was proposed specifically as a test to confirm the relativity theory, although it was also recognized that such tests, which merely measure absolute rotation, are also consistent with non-relativistic theories. During the 1920s, the experiments pioneered by Michelson were repeated by Dayton Miller, who publicly proclaimed positive results on several occasions, although they were not large enough to be consistent with any known aether theory. However, other researchers were unable to duplicate Miller's claimed results. Over the years the experimental accuracy of such measurements has been raised by many orders of magnitude, and no trace of any violations of Lorentz invariance has been seen. (A later re-analysis of Miller's results concluded that he had underestimated the variations due to temperature.) Since the Miller experiment and its unclear results there have been many more experimental attempts to detect the aether. Many experimenters have claimed positive results. These results have not gained much attention from mainstream science, since they contradict a large quantity of high-precision measurements, all the results of which were consistent with special relativity. == Lorentz aether theory == Between 1892 and 1904, Hendrik Lorentz developed an electron–aether theory, in which he avoided making assumptions about the aether. In his model the aether is completely motionless, and by that he meant that it could not be set in motion in the neighborhood of ponderable matter. Contrary to earlier electron models, the electromagnetic field of the aether appears as a mediator between the electrons, and changes in this field cannot propagate faster than the speed of light. A fundamental concept of Lorentz's theory in 1895 was the "theorem of corresponding states" for terms of order v/c. This theorem states that an observer moving relative to the aether makes the same observations as a resting observer, after a suitable change of variables. Lorentz noticed that it was necessary to change the space-time variables when changing frames and introduced concepts like physical length contraction (1892) to explain the Michelson–Morley experiment, and the mathematical concept of local time (1895) to explain the aberration of light and the Fizeau experiment. This resulted in the formulation of the so-called Lorentz transformation by Joseph Larmor (1897, 1900) and Lorentz (1899, 1904), whereby (it was noted by Larmor) the complete formulation of local time is accompanied by some sort of time dilation of electrons moving in the aether. As Lorentz later noted (1921, 1928), he considered the time indicated by clocks resting in the aether as "true" time, while local time was seen by him as a heuristic working hypothesis and a mathematical artifice. Therefore, Lorentz's theorem is seen by modern authors as being a mathematical transformation from a "real" system resting in the aether into a "fictitious" system in motion. The work of Lorentz was mathematically perfected by Henri Poincaré, who formulated on many occasions the Principle of Relativity and tried to harmonize it with electrodynamics. He declared simultaneity only a convenient convention which depends on the speed of light, whereby the constancy of the speed of light would be a useful postulate for making the laws of nature as simple as possible. In 1900 and 1904 he physically interpreted Lorentz's local time as the result of clock synchronization by light signals. 
In June and July 1905 he declared the relativity principle a general law of nature, including gravitation. He corrected some mistakes of Lorentz and proved the Lorentz covariance of the electromagnetic equations. However, he used the notion of an aether as a perfectly undetectable medium and distinguished between apparent and real time, so most historians of science argue that he failed to invent special relativity. == End of aether == === Special relativity === Aether theory was dealt another blow when the Galilean transformation and Newtonian dynamics were both modified by Albert Einstein's special theory of relativity, giving the mathematics of Lorentzian electrodynamics a new, "non-aether" context. Unlike most major shifts in scientific thought, special relativity was adopted by the scientific community remarkably quickly, consistent with Einstein's later comment that the laws of physics described by the Special Theory were "ripe for discovery" in 1905. Max Planck's early advocacy of the special theory, along with the elegant formulation given to it by Hermann Minkowski, contributed much to the rapid acceptance of special relativity among working scientists. Einstein based his theory on Lorentz's earlier work. Instead of suggesting that the mechanical properties of objects changed with their constant-velocity motion through an undetectable aether, Einstein proposed to deduce the characteristics that any successful theory must possess in order to be consistent with the most basic and firmly established principles, independent of the existence of a hypothetical aether. He found that the Lorentz transformation must transcend its connection with Maxwell's equations, and must represent the fundamental relations between the space and time coordinates of inertial frames of reference. In this way he demonstrated that the laws of physics remained invariant as they had with the Galilean transformation, but that light was now invariant as well. With the development of the special theory of relativity, the need to account for a single universal frame of reference had disappeared – and acceptance of the 19th-century theory of a luminiferous aether disappeared with it. For Einstein, the Lorentz transformation implied a conceptual change: that the concept of position in space or time was not absolute, but could differ depending on the observer's location and velocity. Moreover, in another paper published the same month in 1905, Einstein made several observations on a then-thorny problem, the photoelectric effect. In this work he demonstrated that light can be considered as particles that have a "wave-like nature". Particles obviously do not need a medium to travel, and thus, neither did light. This was the first step that would lead to the full development of quantum mechanics, in which the wave-like nature and the particle-like nature of light are both considered as valid descriptions of light. A summary of Einstein's thinking about the aether hypothesis, relativity and light quanta may be found in his 1909 (originally German) lecture "The Development of Our Views on the Composition and Essence of Radiation". Lorentz on his side continued to use the aether hypothesis. In his lectures of around 1911, he pointed out that what "the theory of relativity has to say ... can be carried out independently of what one thinks of the aether and the time". 
He commented that "whether there is an aether or not, electromagnetic fields certainly exist, and so also does the energy of the electrical oscillations" so that, "if we do not like the name of 'aether', we must use another word as a peg to hang all these things upon". He concluded that "one cannot deny the bearer of these concepts a certain substantiality". Nevertheless, in 1920, Einstein gave an address at Leiden University in which he commented "More careful reflection teaches us however, that the special theory of relativity does not compel us to deny ether. We may assume the existence of an ether; only we must give up ascribing a definite state of motion to it, i.e. we must by abstraction take from it the last mechanical characteristic which Lorentz had still left it. We shall see later that this point of view, the conceivability of which I shall at once endeavour to make more intelligible by a somewhat halting comparison, is justified by the results of the general theory of relativity". He concluded his address by saying that "according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an ether. According to the general theory of relativity space without ether is unthinkable." === Other models === In later years there have been a few individuals who advocated a neo-Lorentzian approach to physics, which is Lorentzian in the sense of positing an absolute true state of rest that is undetectable and which plays no role in the predictions of the theory. (No violations of Lorentz covariance have ever been detected, despite strenuous efforts.) Hence these theories resemble the 19th century aether theories in name only. For example, the founder of quantum field theory, Paul Dirac, stated in 1951 in an article in Nature, titled "Is there an Aether?" that "we are rather forced to have an aether". However, Dirac never formulated a complete theory, and so his speculations found no acceptance by the scientific community. === Einstein's views on the aether === When Einstein was still a student in the Zurich Polytechnic in 1900, he was very interested in the idea of aether. His initial proposal of research thesis was to do an experiment to measure how fast the Earth was moving through the aether. "The velocity of a wave is proportional to the square root of the elastic forces which cause [its] propagation, and inversely proportional to the mass of the aether moved by these forces." In 1916, after Einstein completed his foundational work on general relativity, Lorentz wrote a letter to him in which he speculated that within general relativity the aether was re-introduced. In his response Einstein wrote that one can actually speak about a "new aether", but one may not speak of motion in relation to that aether. This was further elaborated by Einstein in some semi-popular articles (1918, 1920, 1924, 1930). In 1918, Einstein publicly alluded to that new definition for the first time. Then, in the early 1920s, in a lecture which he was invited to give at Lorentz's university in Leiden, Einstein sought to reconcile the theory of relativity with Lorentzian aether. In this lecture Einstein stressed that special relativity took away the last mechanical property of the aether: immobility. However, he continued that special relativity does not necessarily rule out the aether, because the latter can be used to give physical reality to acceleration and rotation. 
This concept was fully elaborated within general relativity, in which physical properties (which are partially determined by matter) are attributed to space, but no substance or state of motion can be attributed to that "aether" (by which he meant curved space-time). In another paper of 1924, named "Concerning the Aether", Einstein argued that Newton's absolute space, in which acceleration is absolute, is the "Aether of Mechanics". And within the electromagnetic theory of Maxwell and Lorentz one can speak of the "Aether of Electrodynamics", in which the aether possesses an absolute state of motion. As regards special relativity, also in this theory acceleration is absolute as in Newton's mechanics. However, the difference from the electromagnetic aether of Maxwell and Lorentz lies in the fact that "because it was no longer possible to speak, in any absolute sense, of simultaneous states at different locations in the aether, the aether became, as it were, four-dimensional since there was no objective way of ordering its states by time alone". Now the "aether of special relativity" is still "absolute", because matter is affected by the properties of the aether, but the aether is not affected by the presence of matter. This asymmetry was solved within general relativity. Einstein explained that the "aether of general relativity" is not absolute, because matter is influenced by the aether, just as matter influences the structure of the aether. The only similarity of this relativistic aether concept with the classical aether models lies in the presence of physical properties in space, which can be identified through geodesics. As historians such as John Stachel argue, Einstein's views on the "new aether" are not in conflict with his abandonment of the aether in 1905. As Einstein himself pointed out, no "substance" and no state of motion can be attributed to that new aether. Einstein's use of the word "aether" found little support in the scientific community, and played no role in the continuing development of modern physics. == Aether concepts == Aether theories Aether (classical element) Aether drag hypothesis Astral light == See also == == References == Footnotes Citations === Primary sources === === Experiments === === Secondary sources === == External links == Harry Bateman (1915) The Structure of the Aether, Bulletin of the American Mathematical Society 21(6):299–309. Decaen, Christopher A. (2004), "Aristotle's Aether and Contemporary Science", The Thomist, 68 (3): 375–429, doi:10.1353/tho.2004.0015, S2CID 171374696, archived from the original on 2012-03-05, retrieved 2011-03-05. The Aether of Space Archived 2017-09-13 at the Wayback Machine – Lord Rayleigh's address ScienceWeek Theoretical Physics: On the Aether and Broken Symmetry The New Student's Reference Work/Ether
Wikipedia/Plenum_(physics)
In metaphysics, extension signifies both 'stretching out' (Latin: extensio) as well as later 'taking up space', and most recently, spreading one's internal mental cognition into the external world. The history of thinking about extension can be traced back at least to Archytas' spear analogy for the infinity of space. How far can one's hand or spear stretch out until it reaches the edge of reality? "If I arrived at the outermost edge of the heaven, could I extend my hand or staff into what is outside or not? It would be paradoxical [given our normal assumptions about the nature of space] not to be able to extend it." == History == === Descartes === René Descartes defined extension as the property of existing in more than one dimension, a property that was later followed up in Grassmann's n-dimensional algebra. For Descartes, the primary characteristic of matter is extension (res extensa), just as the primary characteristic of mind is thought (res cogitans). === Newton === After rejecting the Cartesian identification of body with extension, Newton turns to the question of what the nature of the "immobile being"—space or extension itself, distinguished from body—was. He raises three possible definitions for extension: as a kind of substance; or as a kind of accident (a standard philosophical term for attribute: anything that can be predicated of substance); or "simply nothing" (a reference to atomism), all of which he repudiates. Instead he proposes that extension "has a certain mode of existence of its own, which agrees neither with substances nor accidents." After struggling with this question, Newton provides perhaps one of his clearest statements on extension: "If we say with Descartes that extension is body, do we not manifestly offer a path to Atheism, both because extension is not a creature but has existed eternally, and because we have an absolute Idea of it without any relationship to God, and therefore we are able to conceive of it as existent while feigning the non-existence of God?" This led Stein to conclude that, on Newton's conception of space, "the existence of space, or extension, follows from that of anything whatsoever; but extension does not require a subject in which it 'inheres', as a property; and it can be conceived as existent without presupposing any particular thing, God included. On the other hand, it is an 'affection of every being.'" === Locke === John Locke, in An Essay Concerning Human Understanding, defined extension as "only the Space that lies between the Extremities of those solid coherent Parts" of a body. It is the space possessed by a body. Locke refers to extension in conjunction with solidity and impenetrability, the other primary characteristics of matter. === Spinoza === Extension also plays an important part in the philosophy of Baruch Spinoza, who says that substance (that which has extension) can be limited only by substance of the same sort, i.e. matter cannot be limited by ideas and vice versa. From this principle, he determines that substance is infinite. This infinite substance is what Spinoza calls God, or better yet nature, and it possesses both unlimited extension and unlimited consciousness. == Infinite divisibility == Infinite divisibility refers to the idea that extension, or quantity, when divided and further divided infinitely, cannot reach the point of zero quantity. It can be divided into ever smaller quantities, but never into no quantity at all.
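A simple geometric picture (offered here only as an illustration, not as part of the historical sources discussed below) captures the idea: a unit segment can be halved, the remainder halved again, and so on without end; every part keeps a nonzero length, yet the parts together still compose the whole extension,
\[
\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\cdots \;=\; \sum_{k=1}^{\infty}\frac{1}{2^{k}} \;=\; 1 .
\]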
Using a mathematical approach, specifically geometric models, Gottfried Leibniz and Descartes discussed the infinite divisibility of extension. Actual division may be limited by the unavailability of suitable cutting instruments, but the possibility of breaking a quantity into ever smaller pieces is infinite. == Compenetration == Compenetration refers to two or more extensions occupying the same space at the same time. This, according to scholastic philosophers, is impossible; according to this view, only spirits or spiritualized matter can occupy a place already occupied by an entity (matter or spirit). == Extended mind thesis == In more recent work, the philosophers David Chalmers and Andy Clark published "The Extended Mind" in 1998. This has opened a wide channel of new research at the nexus of epistemology, philosophy of mind, cognitive science and neuroscience, dynamic systems thinking, and science, technology & innovation studies. == See also == Mass Mass generation Higgs mechanism == References ==
Wikipedia/Extension_(metaphysics)
Reflection is the change in direction of a wavefront at an interface between two different media so that the wavefront returns into the medium from which it originated. Common examples include the reflection of light, sound and water waves. The law of reflection says that for specular reflection (for example at a mirror) the angle at which the wave is incident on the surface equals the angle at which it is reflected. In acoustics, reflection causes echoes and is used in sonar. In geology, it is important in the study of seismic waves. Reflection is observed with surface waves in bodies of water. Reflection is observed with many types of electromagnetic wave, besides visible light. Reflection of VHF and higher frequencies is important for radio transmission and for radar. Even hard X-rays and gamma rays can be reflected at shallow angles with special "grazing" mirrors. == Reflection of light == Reflection of light is either specular (mirror-like) or diffuse (retaining the energy, but losing the image) depending on the nature of the interface. In specular reflection the phase of the reflected waves depends on the choice of the origin of coordinates, but the relative phase between s and p (TE and TM) polarizations is fixed by the properties of the media and of the interface between them. A mirror provides the most common model for specular light reflection, and typically consists of a glass sheet with a metallic coating where the significant reflection occurs. Reflection is enhanced in metals by suppression of wave propagation beyond their skin depths. Reflection also occurs at the surface of transparent media, such as water or glass. In the diagram, a light ray PO strikes a vertical mirror at point O, and the reflected ray is OQ. By projecting an imaginary line through point O perpendicular to the mirror, known as the normal, we can measure the angle of incidence, θi and the angle of reflection, θr. The law of reflection states that θi = θr, or in other words, the angle of incidence equals the angle of reflection. In fact, reflection of light may occur whenever light travels from a medium of a given refractive index into a medium with a different refractive index. In the most general case, a certain fraction of the light is reflected from the interface, and the remainder is refracted. Solving Maxwell's equations for a light ray striking a boundary allows the derivation of the Fresnel equations, which can be used to predict how much of the light is reflected, and how much is refracted in a given situation. This is analogous to the way impedance mismatch in an electric circuit causes reflection of signals. Total internal reflection of light from a denser medium occurs if the angle of incidence is greater than the critical angle. Total internal reflection is used as a means of focusing waves that cannot effectively be reflected by common means. X-ray telescopes are constructed by creating a converging "tunnel" for the waves. As the waves interact at low angle with the surface of this tunnel they are reflected toward the focus point (or toward another interaction with the tunnel surface, eventually being directed to the detector at the focus). A conventional reflector would be useless as the X-rays would simply pass through the intended reflector. When light reflects off a material with higher refractive index than the medium in which is traveling, it undergoes a 180° phase shift. 
In contrast, when light reflects off a material with lower refractive index the reflected light is in phase with the incident light. This is an important principle in the field of thin-film optics. Specular reflection forms images. Reflection from a flat surface forms a mirror image, which appears to be reversed from left to right because we compare the image we see to what we would see if we were rotated into the position of the image. Specular reflection at a curved surface forms an image which may be magnified or demagnified; curved mirrors have optical power. Such mirrors may have surfaces that are spherical or parabolic. === Laws of reflection === If the reflecting surface is very smooth, the reflection of light that occurs is called specular or regular reflection. The laws of reflection are as follows: The incident ray, the reflected ray and the normal to the reflection surface at the point of the incidence lie in the same plane. The angle which the incident ray makes with the normal is equal to the angle which the reflected ray makes to the same normal. The reflected ray and the incident ray are on the opposite sides of the normal. These three laws can all be derived from the Fresnel equations. ==== Mechanism ==== In classical electrodynamics, light is considered as an electromagnetic wave, which is described by Maxwell's equations. Light waves incident on a material induce small oscillations of polarisation in the individual atoms (or oscillation of electrons, in metals), causing each particle to radiate a small secondary wave in all directions, like a dipole antenna. All these waves add up to give specular reflection and refraction, according to the Huygens–Fresnel principle. In the case of dielectrics such as glass, the electric field of the light acts on the electrons in the material, and the moving electrons generate fields and become new radiators. The refracted light in the glass is the combination of the forward radiation of the electrons and the incident light. The reflected light is the combination of the backward radiation of all of the electrons. In metals, electrons with no binding energy are called free electrons. When these electrons oscillate with the incident light, the phase difference between their radiation field and the incident field is π (180°), so the forward radiation cancels the incident light, and backward radiation is just the reflected light. Light–matter interaction in terms of photons is a topic of quantum electrodynamics, and is described in detail by Richard Feynman in his popular book QED: The Strange Theory of Light and Matter. === Diffuse reflection === When light strikes the surface of a (non-metallic) material it bounces off in all directions due to multiple reflections by the microscopic irregularities inside the material (e.g. the grain boundaries of a polycrystalline material, or the cell or fiber boundaries of an organic material) and by its surface, if it is rough. Thus, an 'image' is not formed. This is called diffuse reflection. The exact form of the reflection depends on the structure of the material. One common model for diffuse reflection is Lambertian reflectance, in which the light is reflected with equal luminance (in photometry) or radiance (in radiometry) in all directions, as defined by Lambert's cosine law. The light sent to our eyes by most of the objects we see is due to diffuse reflection from their surface, so that this is our primary mechanism of physical observation. 
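As a concrete companion to the Fresnel equations and the phase-shift rules discussed above, here is a minimal sketch in Python (not a standard library routine; the function name and the choice of an air–glass interface with refractive indices 1.0 and 1.5 are assumptions made for the example) that computes how much light a smooth dielectric interface reflects and where total internal reflection sets in.

```python
import numpy as np

def fresnel_reflectance(n1, n2, theta_i_deg):
    """Reflectance of s- and p-polarized light at a smooth dielectric
    interface, from the Fresnel equations (angle of incidence in degrees)."""
    theta_i = np.radians(theta_i_deg)
    sin_t = n1 * np.sin(theta_i) / n2            # Snell's law
    if sin_t >= 1.0:                             # beyond the critical angle:
        return 1.0, 1.0                          # total internal reflection
    theta_t = np.arcsin(sin_t)
    rs = (n1*np.cos(theta_i) - n2*np.cos(theta_t)) / (n1*np.cos(theta_i) + n2*np.cos(theta_t))
    rp = (n2*np.cos(theta_i) - n1*np.cos(theta_t)) / (n2*np.cos(theta_i) + n1*np.cos(theta_t))
    return rs**2, rp**2                          # reflected power fractions

# Air-to-glass at normal incidence: about 4% of the light is reflected.
Rs, Rp = fresnel_reflectance(1.0, 1.5, 0.0)
print(f"normal-incidence reflectance = {(Rs + Rp) / 2:.3f}")

# Glass-to-air: total internal reflection beyond the critical angle.
critical = np.degrees(np.arcsin(1.0 / 1.5))
print(f"critical angle = {critical:.1f} degrees")
print("reflectance at 45 degrees inside glass:", fresnel_reflectance(1.5, 1.0, 45.0))
```

At normal incidence the air-to-glass figure reduces to ((n1 − n2)/(n1 + n2))² ≈ 0.04, i.e. about 4% of the light reflected, and the glass-to-air critical angle of about 41.8° marks the onset of the total internal reflection described above.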
=== Retroreflection === Some surfaces exhibit retroreflection. The structure of these surfaces is such that light is returned in the direction from which it came. When flying over clouds illuminated by sunlight the region seen around the aircraft's shadow will appear brighter, and a similar effect may be seen from dew on grass. This partial retro-reflection is created by the refractive properties of the curved droplet's surface and reflective properties at the backside of the droplet. Some animals' retinas act as retroreflectors (see tapetum lucidum for more detail), as this effectively improves the animals' night vision. Since the lenses of their eyes modify reciprocally the paths of the incoming and outgoing light the effect is that the eyes act as a strong retroreflector, sometimes seen at night when walking in wildlands with a flashlight. A simple retroreflector can be made by placing three ordinary mirrors mutually perpendicular to one another (a corner reflector). The image produced is the inverse of one produced by a single mirror. A surface can be made partially retroreflective by depositing a layer of tiny refractive spheres on it or by creating small pyramid like structures. In both cases internal reflection causes the light to be reflected back to where it originated. This is used to make traffic signs and automobile license plates reflect light mostly back in the direction from which it came. In this application perfect retroreflection is not desired, since the light would then be directed back into the headlights of an oncoming car rather than to the driver's eyes. === Multiple reflections === When light reflects off a mirror, one image appears. Two mirrors placed exactly face to face give the appearance of an infinite number of images along a straight line. The multiple images seen between two mirrors that sit at an angle to each other lie over a circle. The center of that circle is located at the imaginary intersection of the mirrors. A square of four mirrors placed face to face give the appearance of an infinite number of images arranged in a plane. The multiple images seen between four mirrors assembling a pyramid, in which each pair of mirrors sits an angle to each other, lie over a sphere. If the base of the pyramid is rectangle shaped, the images spread over a section of a torus. Note that these are theoretical ideals, requiring perfect alignment of perfectly smooth, perfectly flat perfect reflectors that absorb none of the light. In practice, these situations can only be approached but not achieved because the effects of any surface imperfections in the reflectors propagate and magnify, absorption gradually extinguishes the image, and any observing equipment (biological or technological) will interfere. === Complex conjugate reflection === In this process (which is also known as phase conjugation), light bounces exactly back in the direction from which it came due to a nonlinear optical process. Not only the direction of the light is reversed, but the actual wavefronts are reversed as well. A conjugate reflector can be used to remove aberrations from a beam by reflecting it and then passing the reflection through the aberrating optics a second time. If one were to look into a complex conjugating mirror, it would be black because only the photons which left the pupil would reach the pupil. == Other types of reflection == === Neutron reflection === Materials that reflect neutrons, for example beryllium, are used in nuclear reactors and nuclear weapons. 
In the physical and biological sciences, the reflection of neutrons off atoms within a material is commonly used to determine the material's internal structure. === Sound reflection === When a longitudinal sound wave strikes a flat surface, sound is reflected in a coherent manner provided that the dimension of the reflective surface is large compared to the wavelength of the sound. Note that audible sound has a very wide frequency range (from 20 to about 17000 Hz), and thus a very wide range of wavelengths (from about 20 mm to 17 m). As a result, the overall nature of the reflection varies according to the texture and structure of the surface. For example, porous materials will absorb some energy, and rough materials (where rough is relative to the wavelength) tend to reflect in many directions—to scatter the energy, rather than to reflect it coherently. This leads into the field of architectural acoustics, because the nature of these reflections is critical to the auditory feel of a space. In the theory of exterior noise mitigation, reflective surface size mildly detracts from the concept of a noise barrier by reflecting some of the sound into the opposite direction. Sound reflection can affect the acoustic space. === Seismic reflection === Seismic waves produced by earthquakes or other sources (such as explosions) may be reflected by layers within the Earth. Study of the deep reflections of waves generated by earthquakes has allowed seismologists to determine the layered structure of the Earth. Shallower reflections are used in reflection seismology to study the Earth's crust generally, and in particular to prospect for petroleum and natural gas deposits. === Time reflections === Scientists have speculated that there could be time reflections. Scientists from the Advanced Science Research Center at the CUNY Graduate Center report that they observed time reflections by sending broadband signals into a strip of metamaterial filled with electronic switches. The "time reflections" in electromagnetic waves are discussed in a 2023 paper published in the journal Nature Physics. == See also == == References == == External links == Acoustic reflection Archived 2019-01-04 at the Wayback Machine Animations demonstrating optical reflection by QED Simulation on Laws of Reflection of Sound By Amrita University
Wikipedia/Reflection_(physics)
In contract law, force majeure ( FORSS mə-ZHUR; French: [fɔʁs maʒœʁ]) is a common clause in contracts which essentially frees both parties from liability or obligation when an extraordinary event or circumstance beyond the control of the parties, such as a war, strike, riot, crime, epidemic, or sudden legal change prevents one or both parties from fulfilling their obligations under the contract. Force majeure often includes events described as an act of God, though such events remain legally distinct from the clause itself. In practice, most force majeure clauses do not entirely excuse a party's non-performance but suspend it for the duration of the force majeure. Force majeure is generally intended to include occurrences beyond the reasonable control of a party, and therefore would not cover: Any result of the negligence or malfeasance of a party, which has a materially adverse effect on the ability of such party to perform its obligations. Any result of the usual and natural consequences of external forces. To illuminate this distinction, take the example of an outdoor public event abruptly called off: If the cause for cancellation is ordinary predictable rain, this is most probably not force majeure. If the cause is a flash flood that damages the venue or makes the event hazardous to attend, then this almost certainly is force majeure, other than where the venue was on a known flood plain or the area of the venue was known to be subject to torrential rain. Some causes might be arguable borderline cases (for instance, if unusually heavy rain occurred, rendering the event significantly more difficult, but not impossible, to safely hold or attend); these must be assessed in light of the circumstances. Any circumstances that are specifically contemplated (included) in the contract—for example, if the contract for the outdoor event specifically permits or requires cancellation in the event of rain. Under international law, it refers to an irresistible force or unforeseen event beyond the control of a state, making it materially impossible to fulfill an international obligation. Accordingly, it is related to the concept of a state of emergency. Force majeure in any given situation is controlled by the law governing the contract, rather than general concepts of force majeure. Contracts often specify what constitutes force majeure via a clause in the agreement. So, the liability is decided per contract and neither by statute nor principles of general law. The first step to assess whether—and how—force majeure applies to any particular contract is to ascertain the law of the country (state) which governs the contract. == Purpose == Time-critical and other sensitive contracts may be drafted to limit the shield of this clause where a party does not take reasonable steps (or specific precautions) to prevent or limit the effects of the outside interference, either when they become likely or when they actually occur. A force majeure may work to excuse all or part of the obligations of one or both parties. For example, a strike might prevent timely delivery of goods, but not timely payment for the portion delivered. A force majeure may also be the overpowering force itself, which prevents the fulfillment of a contract. In that instance, it is actually the impossibility or impracticability defenses. In the military, "force majeure" has a slightly different meaning. 
It refers to an event, either external or internal, that happens to a vessel or aircraft and allows it to enter normally restricted areas without penalty. An example would be the Hainan Island incident, where a U.S. Navy aircraft landed at a Chinese military airbase after a collision with a Chinese fighter in April 2001. Under the principle of force majeure, the aircraft was allowed to land without interference. Similarly, in the 2023 Chinese balloon incident, in which a Chinese surveillance balloon was discovered in US airspace, the Chinese government stated that this "was entirely an accident caused by force majeure". The importance of the force majeure clause in a contract, particularly one of any length in time, cannot be overstated as it relieves a party from an obligation under the contract (or suspends that obligation). What is permitted to be a force majeure event or circumstance can be the source of much controversy in the negotiation of a contract, and a party should generally resist any attempt by the other party to include something that should, fundamentally, be at the risk of that other party. For example, in a coal-supply agreement, the mining company may seek to have "geological risk" included as a force majeure event; however, the mining company should be doing extensive exploration and analysis of its geological reserves and should not even be negotiating a coal-supply agreement if it cannot take the risk that there may be a geological limit to its coal supply from time to time. The outcome of that negotiation, of course, depends on the relative bargaining power of the parties, and there will be cases where force majeure clauses can be used by a party effectively to escape liability for bad performance. Because of the different interpretations of force majeure across legal systems, it is common for contracts to include specific definitions of force majeure, particularly at the international level. Some systems limit force majeure to an Act of God (such as floods, earthquakes, hurricanes, etc.) but exclude human or technical failures (such as acts of war, terrorist activities, labor disputes, or interruption or failure of electricity or communications systems). When drafting a contract, therefore, it is advisable to distinguish between an act of God and other forms of force majeure. As a consequence, a force majeure clause for an area prone to natural disaster requires a definition of the magnitude of event for which the clause can be invoked. As an example, in a highly seismic area a technical definition of the amplitude of motion at the site could be established in the contract, based for example on probability-of-occurrence studies. This parameter (or parameters) can later be monitored at the construction site, with a commonly agreed procedure. An earthquake can be a small shaking or a damaging event; the occurrence of an earthquake does not by itself imply the occurrence of damage or disruption. For small and moderate events it is reasonable to establish requirements for the contract processes; for large events it is not always feasible or economical to do so. Concepts such as 'damaging earthquake' in force majeure clauses do not help to clarify disruption, especially in areas where there are no other reference structures or most structures are not seismically safe.
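As an illustration of the kind of probability-of-occurrence threshold such a clause might reference (the numbers below are a common earthquake-engineering benchmark, not terms of any actual contract): if ground shaking of a given intensity has an annual exceedance probability p, the chance that it is exceeded at least once during a T-year project is
\[
P = 1-(1-p)^{T}, \qquad \text{e.g. } p=\tfrac{1}{475},\; T=50 \;\Rightarrow\; P\approx 0.10 ,
\]
so a clause pegged to the 475-year shaking level implicitly accepts roughly a 10% chance that the threshold is reached over a 50-year horizon.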
== Common law == === Hong Kong === When force majeure has not been provided for in the contract (or the relevant event does not fall within the scope of the force majeure clause), and a supervening event prevents performance, it will be a breach of contract. The law of frustration will be the sole remaining course available to the party in default to end the contract. If the failure to perform the contract deprives the innocent party of substantially the whole benefit of the contract it will be a repudiatory breach, entitling the innocent party to terminate the contract and claim damages for that repudiatory breach. === England === As interpreted by English courts, the phrase force majeure has a more extensive meaning than "act of God" or vis major. Judges have agreed that strikes and breakdowns of machinery, which though normally not included in vis major, are included in force majeure. (However, in the case of machinery breakdown, negligent lack of maintenance may negate claims of force majeure, as maintenance or its lack is within the owner's sphere of control.) The term cannot, however, be extended to cover delays caused by bad weather, football matches, or a funeral: the English case of Matsoukis v. Priestman & Co (1915) held that "these are the usual incidents interrupting work, and the defendants, in making their contract, no doubt took them into account.... The words 'force majeure' are not words which we generally find in an English contract. They are taken from the Code Napoleon, and they were inserted by this Romanian gentleman or by his advisers, who were no doubt familiar with their use on the Continent." In Hackney Borough Council v. Dore (1922) it was held that "The expression means some physical or material restraint and does not include a reasonable fear or apprehension of such a restraint". === India === In re Dharnrajmal Gobindram v. Shamji Kalidas [All India Reporter 1961 Supreme Court (of India) 1285], it was held that "An analysis of ruling on the subject shows that reference to the expression is made where the intention is to save the defaulting party from the consequences of anything over which he had no control." Even if a force majeure clause covers the relevant supervening event, the party unable to perform will not have the benefit of the clause where performance merely become (1) more difficult, (2) more expensive, and/or (3) less profitable. === United States === For example, parties in the United States have used the COVID-19 pandemic as a force majeure in an attempt to escape contractual liability by applying the elements of an (1) unforeseeable event, (2) outside of the parties’ control, that (3) renders performance impossible or impractical. Though force majeure events are generally thought to include natural events like tornadoes and often unforeseeable man-made events like labor strikes, the 2021–2023 Inflation Surge is also impacting force majeure provisions in leasing and other real estate contracts to include delays or excuses from performing contractual obligations due to the increased costs from rising inflation and rising interest rates. == Civil law == === France === For a defendant to invoke force majeure in French law, the event proposed as force majeure must pass three tests: Externality: The defendant must have nothing to do with the event's happening. Unpredictability: If the event could be foreseen, the defendant is obligated to have prepared for it. Being unprepared for a foreseeable event leaves the defendant culpable. 
This standard is very strictly applied: CE 9 April 1962, "Chais d’Armagnac": The Council of State adjudged that, since a flood had occurred 69 years before the one that caused the damage at issue, the latter flood was predictable. Administrative Court of Grenoble, 19 June 1974, "Dame Bosvy": An avalanche was judged to be predictable since another had occurred around 50 years before. Irresistibility: The consequences of the event must have been unpreventable. Other events that are candidates for force majeure in French law are hurricanes and earthquakes. Force majeure is a defense against liability and is applicable throughout French law. Force majeure and cas fortuit are distinct notions in French law. === Argentina === In Argentina, force majeure (fuerza mayor and caso fortuito) is defined by the Civil Code of Argentina in Article 512, and regulated in Article 513. According to these articles, force majeure is defined by the following characteristics: an event that could not have been foreseen or, if it could, an event that could not be resisted. From this it can be said that some acts of nature can be predicted, but if their consequences cannot be resisted they can be considered force majeure. Externality: the victim was not related, directly or indirectly, to the causes of the event (for example, if the act was a fire or a strike). Unpredictability: the event must have originated after the cause of the obligation. Irresistibility: the victim cannot by any means overcome the effects. In Argentina, an Act of God can be invoked in civil responsibility regarding contractual or non-contractual obligations. == Hybrid law systems == === Philippines === As the oldest state with a size of over 300,000 sq km to integrate the two legal systems, the Philippines also has its own unique interpretation of force majeure events. Under Article 1174 of the Civil Code, "[e]xcept in cases specified by the law, or when it is otherwise declared by stipulation, or when the nature of the obligation requires the assumption of risk, no person shall be responsible for those events which could not be foreseen, or which, though foreseen, were inevitable." Fortuitous events must not be caused by man but by nature. Therefore, economic crises are not considered force majeure events that free a debtor of his obligation or debt. However, crises that are an effect of wars, such as World War II, are considered force majeure events, as stated in Sagrada v. Nacoco (G.R. No. L-3756). The landmark case on this article and event is Nakpil & Sons v. CA (G.R. No. L-47851). In this case, the Philippine Bar Association (PBA) building was the only building destroyed on Arzobispo St., Intramuros, Manila during an earthquake in 1968. The PBA, through the Jose W. Diokno Law Office, led by Sen. Diokno himself, sued Nakpil & Sons as well as the contractor of the building, United Construction Company, Inc., and won in the trial court. The decision was affirmed by the Court of Appeals, and in 1986 the case was decided with finality by the Supreme Court. Breaking down Article 1174, the Court enumerated the four requisites of a fortuitous event, which are still the requisites used in Philippine courts today. Applying them, the Supreme Court ruled that there was no fortuitous event, after also observing certain problems in construction such as measurement deficiencies and poor foundations.
== UNIDROIT Principles == Article 7.1.7 of the UNIDROIT Principles of International Commercial Contracts provides for a form of force majeure similar, but not identical, to the common law and civil law concepts of the term: relief from performance is granted "if that party proves that the non-performance was due to an impediment beyond its control and that it could not reasonably be expected to have taken the impediment into account at the time of the conclusion of the contract or to have avoided or overcome it or its consequences." == See also == Act of God Vis major Contract law Hardship clause Hell or high water clause Impossibility of performance Mutual assent Substantial performance Clausula rebus sic stantibus == References == == Sources == Mitra's Legal & Commercial Dictionary. Pages 350–351. 4th Edn. Eastern Law House. ISBN 978-81-7177-015-1. International Business Law and Its Environment. Schaffer, Agusti, Earle. Page 154. 7th Edn. 2008. South-Western Legal Studies in Business Academic. ISBN 978-0-324-64967-3. == External links == (in Spanish) Force Majeure Construction and Earthquakes Sample Force Majeure Clauses (World Bank)
Wikipedia/Force_majeure
The theory of statistics provides a basis for the whole range of techniques, in both study design and data analysis, that are used within applications of statistics. The theory covers approaches to statistical-decision problems and to statistical inference, and the actions and deductions that satisfy the basic principles stated for these different approaches. Within a given approach, statistical theory gives ways of comparing statistical procedures; it can find the best possible procedure within a given context for given statistical problems, or can provide guidance on the choice between alternative procedures. Apart from philosophical considerations about how to make statistical inferences and decisions, much of statistical theory consists of mathematical statistics, and is closely linked to probability theory, to utility theory, and to optimization. == Scope == Statistical theory provides an underlying rationale and provides a consistent basis for the choice of methodology used in applied statistics. === Modelling === Statistical models describe the sources of data and can have different types of formulation corresponding to these sources and to the problem being studied. Such problems can be of various kinds: Sampling from a finite population Measuring observational error and refining procedures Studying statistical relations Statistical models, once specified, can be tested to see whether they provide useful inferences for new data sets. === Data collection === Statistical theory provides a guide to comparing methods of data collection, where the problem is to generate informative data using optimization and randomization while measuring and controlling for observational error. Optimization of data collection reduces the cost of data while satisfying statistical goals, while randomization allows reliable inferences. Statistical theory provides a basis for good data collection and the structuring of investigations in the topics of: Design of experiments to estimate treatment effects, to test hypotheses, and to optimize responses. Survey sampling to describe populations === Summarising data === The task of summarising statistical data in conventional forms (also known as descriptive statistics) is considered in theoretical statistics as a problem of defining what aspects of statistical samples need to be described and how well they can be described from a typically limited sample of data. Thus the problems theoretical statistics considers include: Choosing summary statistics to describe a sample Summarising probability distributions of sample data while making limited assumptions about the form of distribution that may be met Summarising the relationships between different quantities measured on the same items with a sample === Interpreting data === Besides the philosophy underlying statistical inference, statistical theory has the task of considering the types of questions that data analysts might want to ask about the problems they are studying and of providing data analytic techniques for answering them. 
Some of these tasks are: Summarising populations in the form of a fitted distribution or probability density function Summarising the relationship between variables using some type of regression analysis Providing ways of predicting the outcome of a random quantity given other related variables Examining the possibility of reducing the number of variables being considered within a problem (the task of Dimension reduction) When a statistical procedure has been specified in the study protocol, then statistical theory provides well-defined probability statements for the method when applied to all populations that could have arisen from the randomization used to generate the data. This provides an objective way of estimating parameters, estimating confidence intervals, testing hypotheses, and selecting the best. Even for observational data, statistical theory provides a way of calculating a value that can be used to interpret a sample of data from a population, it can provide a means of indicating how well that value is determined by the sample, and thus a means of saying corresponding values derived for different populations are as different as they might seem; however, the reliability of inferences from post-hoc observational data is often worse than for planned randomized generation of data. === Applied statistical inference === Statistical theory provides the basis for a number of data-analytic approaches that are common across scientific and social research. Interpreting data is done with one of the following approaches: Estimating parameters Providing a range of values instead of a point estimate Testing statistical hypotheses Many of the standard methods for those approaches rely on certain statistical assumptions (made in the derivation of the methodology) actually holding in practice. Statistical theory studies the consequences of departures from these assumptions. In addition it provides a range of robust statistical techniques that are less dependent on assumptions, and it provides methods checking whether particular assumptions are reasonable for a given data set. == See also == List of statistical topics Foundations of statistics == References == === Citations === === Sources === == Further reading == Peirce, C. S. (1876), "Note on the Theory of the Economy of Research" in Coast Survey Report, pp. 197–201 (Appendix No. 14), NOAA PDF Eprint. Reprinted 1958 in Collected Papers of Charles Sanders Peirce 7, paragraphs 139–157 and in 1967 in Operations Research 15 (4): pp. 643–648, Abstract from JSTOR. (1967) Peirce, C. S. (1967). "Note on the Theory of the Economy of Research". Operations Research. 15 (4): 643–648. doi:10.1287/opre.15.4.643. (1877–1878), "Illustrations of the Logic of Science" (1883), "A Theory of Probable Inference" and Jastrow, Joseph (1885), "On Small Differences in Sensation" in Memoirs of the National Academy of Sciences 3: pp. 73–83. Eprint. Bickel, Peter J. & Doksum, Kjell A. (2001). Mathematical Statistics: Basic and Selected Topics. Vol. I (Second (updated printing 2007) ed.). Pearson Prentice-Hall. ISBN 0-13-850363-X. Davison, A.C. (2003) Statistical Models. Cambridge University Press. ISBN 0-521-77339-3 Lehmann, Erich (1983). Theory of Point Estimation. Liese, Friedrich & Miescke, Klaus-J. (2008). Statistical Decision Theory: Estimation, Testing, and Selection. Springer. ISBN 978-0-387-73193-3. == External links == Media related to Statistical theory at Wikimedia Commons
Wikipedia/Statistical_theory
In statistics, asymptotic theory, or large sample theory, is a framework for assessing properties of estimators and statistical tests. Within this framework, it is often assumed that the sample size n may grow indefinitely; the properties of estimators and tests are then evaluated under the limit of n → ∞. In practice, a limit evaluation is considered to be approximately valid for large finite sample sizes too. == Overview == Most statistical problems begin with a dataset of size n. The asymptotic theory proceeds by assuming that it is possible (in principle) to keep collecting additional data, thus that the sample size grows infinitely, i.e. n → ∞. Under the assumption, many results can be obtained that are unavailable for samples of finite size. An example is the weak law of large numbers. The law states that for a sequence of independent and identically distributed (IID) random variables X1, X2, ..., if one value is drawn from each random variable and the average of the first n values is computed as Xn, then the Xn converge in probability to the population mean E[Xi] as n → ∞. In asymptotic theory, the standard approach is n → ∞. For some statistical models, slightly different approaches of asymptotics may be used. For example, with panel data, it is commonly assumed that one dimension in the data remains fixed, whereas the other dimension grows: T = constant and N → ∞, or vice versa. Besides the standard approach to asymptotics, other alternative approaches exist: Within the local asymptotic normality framework, it is assumed that the value of the "true parameter" in the model varies slightly with n, such that the n-th model corresponds to θn = θ + h/√n . This approach lets us study the regularity of estimators. When statistical tests are studied for their power to distinguish against the alternatives that are close to the null hypothesis, it is done within the so-called "local alternatives" framework: the null hypothesis is H0: θ = θ0 and the alternative is H1: θ = θ0 + h/√n . This approach is especially popular for the unit root tests. There are models where the dimension of the parameter space Θn slowly expands with n, reflecting the fact that the more observations there are, the more structural effects can be feasibly incorporated in the model. In kernel density estimation and kernel regression, an additional parameter is assumed—the bandwidth h. In those models, it is typically taken that h → 0 as n → ∞. The rate of convergence must be chosen carefully, though, usually h ∝ n−1/5. In many cases, highly accurate results for finite samples can be obtained via numerical methods (i.e. computers); even in such cases, though, asymptotic analysis can be useful. This point was made by Small (2010, §1.4), as follows. A primary goal of asymptotic analysis is to obtain a deeper qualitative understanding of quantitative tools. The conclusions of an asymptotic analysis often supplement the conclusions which can be obtained by numerical methods. == Modes of convergence of random variables == == Asymptotic properties == === Estimators === ==== Consistency ==== A sequence of estimates is said to be consistent, if it converges in probability to the true value of the parameter being estimated: θ ^ n → p θ 0 . {\displaystyle {\hat {\theta }}_{n}\ {\xrightarrow {\overset {}{p}}}\ \theta _{0}.} That is, roughly speaking with an infinite amount of data the estimator (the formula for generating the estimates) would almost surely give the correct result for the parameter being estimated. 
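The consistency property just defined can be illustrated numerically. The short simulation below (an illustrative sketch, not part of the article's sources; the exponential population and the particular sample sizes are choices made here) draws increasingly large IID samples and shows the sample mean settling around the true population mean, in line with the weak law of large numbers mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean = 2.0  # population mean of an Exponential(scale=2) distribution

# The sample mean is a consistent estimator of the population mean:
# as n grows, it converges in probability to true_mean.
for n in [10, 100, 10_000, 1_000_000]:
    sample = rng.exponential(scale=true_mean, size=n)
    estimate = sample.mean()
    print(f"n = {n:>9,d}   estimate = {estimate:.4f}   error = {abs(estimate - true_mean):.4f}")
```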
==== Asymptotic distribution ==== If it is possible to find sequences of non-random constants {a_n}, {b_n} (possibly depending on the value of θ0), and a non-degenerate distribution G such that b_n(\hat{\theta}_n - a_n) \xrightarrow{d} G, then the sequence of estimators \hat{\theta}_n is said to have the asymptotic distribution G. Most often, the estimators encountered in practice are asymptotically normal, meaning their asymptotic distribution is the normal distribution, with a_n = θ0, b_n = √n, and G = N(0, V): \sqrt{n}\,(\hat{\theta}_n - \theta_0) \xrightarrow{d} \mathcal{N}(0, V). ==== Asymptotic confidence regions ==== == Asymptotic theorems == Central limit theorem Continuous mapping theorem Glivenko–Cantelli theorem Law of large numbers Law of the iterated logarithm Slutsky's theorem Delta method == See also == Asymptotic analysis Exact statistics Large deviations theory == References == == Bibliography ==
Wikipedia/Asymptotic_theory_(statistics)
Quality control (QC) is a process by which entities review the quality of all factors involved in production. ISO 9000 defines quality control as "a part of quality management focused on fulfilling quality requirements". This approach places emphasis on three aspects (enshrined in standards such as ISO 9001): Elements such as controls, job management, defined and well managed processes, performance and integrity criteria, and identification of records Competence, such as knowledge, skills, experience, and qualifications Soft elements, such as personnel, integrity, confidence, organizational culture, motivation, team spirit, and quality relationships. Inspection is a major component of quality control, where physical product is examined visually (or the end results of a service are analyzed). Product inspectors will be provided with lists and descriptions of unacceptable product defects such as cracks or surface blemishes for example. == History and introduction == Early stone tools such as anvils had no holes and were not designed as interchangeable parts. Mass production established processes for the creation of parts and system with identical dimensions and design, but these processes are not uniform and hence some customers were unsatisfied with the result. Quality control separates the act of testing products to uncover defects from the decision to allow or deny product release, which may be determined by fiscal constraints. For contract work, particularly work awarded by government agencies, quality control issues are among the top reasons for not renewing a contract. The simplest form of quality control was a sketch of the desired item. If the sketch did not match the item, it was rejected, in a simple Go/no go procedure. However, manufacturers soon found it was difficult and costly to make parts be exactly like their depiction; hence around 1840 tolerance limits were introduced, wherein a design would function if its parts were measured to be within the limits. Quality was thus precisely defined using devices such as plug gauges and ring gauges. However, this did not address the problem of defective items; recycling or disposing of the waste adds to the cost of production, as does trying to reduce the defect rate. Various methods have been proposed to prioritize quality control issues and determine whether to leave them unaddressed or use quality assurance techniques to improve and stabilize production. == Notable approaches == There is a tendency for individual consultants and organizations to name their own unique approaches to quality control—a few of these have ended up in widespread use: == In project management == In project management, quality control requires the project manager and/or the project team to inspect the accomplished work to ensure its alignment with the project scope. In practice, projects typically have a dedicated quality control team which focuses on this area. == See also == Analytical quality control Corrective and preventative action (CAPA) Eight dimensions of quality First article inspection (FAI) Good automated manufacturing practice (GAMP) Good manufacturing practice Quality assurance Quality management framework Standard operating procedure (SOP) QA/QC == References == This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 22 January 2022. (in support of MIL-STD-188). == Further reading == Radford, George S. 
(1922), The Control of Quality in Manufacturing, New York: Ronald Press Co., OCLC 1701274, retrieved 16 November 2013 Shewhart, Walter A. (1931), Economic Control of Quality of Manufactured Product, New York: D. Van Nostrand Co., Inc., OCLC 1045408 Juran, Joseph M. (1951), Quality-Control Handbook, New York: McGraw-Hill, OCLC 1220529 Western Electric Company (1956), Statistical Quality Control Handbook (1 ed.), Indianapolis, Indiana: Western Electric Co., OCLC 33858387 Feigenbaum, Armand V. (1961), Total Quality Control, New York: McGraw-Hill, OCLC 567344 == External links == ASTM quality control standards
Wikipedia/Quality_control
Simultaneous equations models are a type of statistical model in which the dependent variables are functions of other dependent variables, rather than just independent variables. This means some of the explanatory variables are jointly determined with the dependent variable, which in economics usually is the consequence of some underlying equilibrium mechanism. Take the typical supply and demand model: whilst typically one would determine the quantity supplied and demanded to be a function of the price set by the market, it is also possible for the reverse to be true, where producers observe the quantity that consumers demand and then set the price. Simultaneity poses challenges for the estimation of the statistical parameters of interest, because the Gauss–Markov assumption of strict exogeneity of the regressors is violated. And while it would be natural to estimate all simultaneous equations at once, this often leads to a computationally costly non-linear optimization problem even for the simplest system of linear equations. This situation prompted the development, spearheaded by the Cowles Commission in the 1940s and 1950s, of various techniques that estimate each equation in the model seriatim, most notably limited information maximum likelihood and two-stage least squares. == Structural and reduced form == Suppose there are m regression equations of the form y i t = y − i , t ′ γ i + x i t ′ β i + u i t , i = 1 , … , m , {\displaystyle y_{it}=y_{-i,t}'\gamma _{i}+x_{it}'\;\!\beta _{i}+u_{it},\quad i=1,\ldots ,m,} where i is the equation number, and t = 1, ..., T is the observation index. In these equations xit is the ki×1 vector of exogenous variables, yit is the dependent variable, y−i,t is the ni×1 vector of all other endogenous variables which enter the ith equation on the right-hand side, and uit are the error terms. The “−i” notation indicates that the vector y−i,t may contain any of the y’s except for yit (since it is already present on the left-hand side). The regression coefficients βi and γi are of dimensions ki×1 and ni×1 correspondingly. Vertically stacking the T observations corresponding to the ith equation, we can write each equation in vector form as y i = Y − i γ i + X i β i + u i , i = 1 , … , m , {\displaystyle y_{i}=Y_{-i}\gamma _{i}+X_{i}\beta _{i}+u_{i},\quad i=1,\ldots ,m,} where yi and ui are T×1 vectors, Xi is a T×ki matrix of exogenous regressors, and Y−i is a T×ni matrix of endogenous regressors on the right-hand side of the ith equation. Finally, we can move all endogenous variables to the left-hand side and write the m equations jointly in vector form as Y Γ = X B + U . {\displaystyle Y\Gamma =X\mathrm {B} +U.\,} This representation is known as the structural form. In this equation Y = [y1 y2 ... ym] is the T×m matrix of dependent variables. Each of the matrices Y−i is in fact an ni-columned submatrix of this Y. The m×m matrix Γ, which describes the relation between the dependent variables, has a complicated structure. It has ones on the diagonal, and all other elements of each column i are either the components of the vector −γi or zeros, depending on which columns of Y were included in the matrix Y−i. The T×k matrix X contains all exogenous regressors from all equations, but without repetitions (that is, matrix X should be of full rank). Thus, each Xi is a ki-columned submatrix of X. 
Matrix Β has size k×m, and each of its columns consists of the components of vectors βi and zeros, depending on which of the regressors from X were included or excluded from Xi. Finally, U = [u1 u2 ... um] is a T×m matrix of the error terms. Postmultiplying the structural equation by Γ −1, the system can be written in the reduced form as Y = X B Γ − 1 + U Γ − 1 = X Π + V . {\displaystyle Y=X\mathrm {B} \Gamma ^{-1}+U\Gamma ^{-1}=X\Pi +V.\,} This is already a simple general linear model, and it can be estimated for example by ordinary least squares. Unfortunately, the task of decomposing the estimated matrix Π ^ {\displaystyle \scriptstyle {\hat {\Pi }}} into the individual factors Β and Γ −1 is quite complicated, and therefore the reduced form is more suitable for prediction but not inference. === Assumptions === Firstly, the rank of the matrix X of exogenous regressors must be equal to k, both in finite samples and in the limit as T → ∞ (this later requirement means that in the limit the expression 1 T X ′ X {\displaystyle \scriptstyle {\frac {1}{T}}X'\!X} should converge to a nondegenerate k×k matrix). Matrix Γ is also assumed to be non-degenerate. Secondly, error terms are assumed to be serially independent and identically distributed. That is, if the tth row of matrix U is denoted by u(t), then the sequence of vectors {u(t)} should be iid, with zero mean and some covariance matrix Σ (which is unknown). In particular, this implies that E[U] = 0, and E[U′U] = T Σ. Lastly, assumptions are required for identification. == Identification == The identification conditions require that the system of linear equations be solvable for the unknown parameters. More specifically, the order condition, a necessary condition for identification, is that for each equation ki + ni ≤ k, which can be phrased as “the number of excluded exogenous variables is greater or equal to the number of included endogenous variables”. The rank condition, a stronger condition which is necessary and sufficient, is that the rank of Πi0 equals ni, where Πi0 is a (k − ki)×ni matrix which is obtained from Π by crossing out those columns which correspond to the excluded endogenous variables, and those rows which correspond to the included exogenous variables. === Using cross-equation restrictions to achieve identification === In simultaneous equations models, the most common method to achieve identification is by imposing within-equation parameter restrictions. Yet, identification is also possible using cross equation restrictions. To illustrate how cross equation restrictions can be used for identification, consider the following example from Wooldridge y 1 = γ 12 y 2 + δ 11 z 1 + δ 12 z 2 + δ 13 z 3 + u 1 y 2 = γ 21 y 1 + δ 21 z 1 + δ 22 z 2 + u 2 {\displaystyle {\begin{aligned}y_{1}&=\gamma _{12}y_{2}+\delta _{11}z_{1}+\delta _{12}z_{2}+\delta _{13}z_{3}+u_{1}\\y_{2}&=\gamma _{21}y_{1}+\delta _{21}z_{1}+\delta _{22}z_{2}+u_{2}\end{aligned}}} where z's are uncorrelated with u's and y's are endogenous variables. Without further restrictions, the first equation is not identified because there is no excluded exogenous variable. The second equation is just identified if δ13≠0, which is assumed to be true for the rest of discussion. Now we impose the cross equation restriction of δ12=δ22. Since the second equation is identified, we can treat δ12 as known for the purpose of identification. 
Then, the first equation becomes: y 1 − δ 12 z 2 = γ 12 y 2 + δ 11 z 1 + δ 13 z 3 + u 1 {\displaystyle y_{1}-\delta _{12}z_{2}=\gamma _{12}y_{2}+\delta _{11}z_{1}+\delta _{13}z_{3}+u_{1}} Then, we can use (z1, z2, z3) as instruments to estimate the coefficients in the above equation since there are one endogenous variable (y2) and one excluded exogenous variable (z2) on the right hand side. Therefore, cross equation restrictions in place of within-equation restrictions can achieve identification. == Estimation == === Two-stage least squares (2SLS) === The simplest and the most common estimation method for the simultaneous equations model is the so-called two-stage least squares method, developed independently by Theil (1953) and Basmann (1957). It is an equation-by-equation technique, where the endogenous regressors on the right-hand side of each equation are being instrumented with the regressors X from all other equations. The method is called “two-stage” because it conducts estimation in two steps: Step 1: Regress Y−i on X and obtain the predicted values Y ^ − i {\displaystyle \scriptstyle {\hat {Y}}_{\!-i}} ; Step 2: Estimate γi, βi by the ordinary least squares regression of yi on Y ^ − i {\displaystyle \scriptstyle {\hat {Y}}_{\!-i}} and Xi. If the ith equation in the model is written as y i = ( Y − i X i ) ( γ i β i ) + u i ≡ Z i δ i + u i , {\displaystyle y_{i}={\begin{pmatrix}Y_{-i}&X_{i}\end{pmatrix}}{\begin{pmatrix}\gamma _{i}\\\beta _{i}\end{pmatrix}}+u_{i}\equiv Z_{i}\delta _{i}+u_{i},} where Zi is a T×(ni + ki) matrix of both endogenous and exogenous regressors in the ith equation, and δi is an (ni + ki)-dimensional vector of regression coefficients, then the 2SLS estimator of δi will be given by δ ^ i = ( Z ^ i ′ Z ^ i ) − 1 Z ^ i ′ y i = ( Z i ′ P Z i ) − 1 Z i ′ P y i , {\displaystyle {\hat {\delta }}_{i}={\big (}{\hat {Z}}'_{i}{\hat {Z}}_{i}{\big )}^{-1}{\hat {Z}}'_{i}y_{i}={\big (}Z'_{i}PZ_{i}{\big )}^{-1}Z'_{i}Py_{i},} where P = X (X ′X)−1X ′ is the projection matrix onto the linear space spanned by the exogenous regressors X. === Indirect least squares === Indirect least squares is an approach in econometrics where the coefficients in a simultaneous equations model are estimated from the reduced form model using ordinary least squares. For this, the structural system of equations is transformed into the reduced form first. Once the coefficients are estimated the model is put back into the structural form. === Limited information maximum likelihood (LIML) === The “limited information” maximum likelihood method was suggested by M. A. Girshick in 1947, and formalized by T. W. Anderson and H. Rubin in 1949. It is used when one is interested in estimating a single structural equation at a time (hence its name of limited information), say for observation i: y i = Y − i γ i + X i β i + u i ≡ Z i δ i + u i {\displaystyle y_{i}=Y_{-i}\gamma _{i}+X_{i}\beta _{i}+u_{i}\equiv Z_{i}\delta _{i}+u_{i}} The structural equations for the remaining endogenous variables Y−i are not specified, and they are given in their reduced form: Y − i = X Π + U − i {\displaystyle Y_{-i}=X\Pi +U_{-i}} Notation in this context is different than for the simple IV case. One has: Y − i {\displaystyle Y_{-i}} : The endogenous variable(s). 
X − i {\displaystyle X_{-i}} : The exogenous variable(s) X {\displaystyle X} : The instrument(s) (often denoted Z {\displaystyle Z} ) The explicit formula for the LIML is: δ ^ i = ( Z i ′ ( I − λ M ) Z i ) − 1 Z i ′ ( I − λ M ) y i , {\displaystyle {\hat {\delta }}_{i}={\Big (}Z'_{i}(I-\lambda M)Z_{i}{\Big )}^{\!-1}Z'_{i}(I-\lambda M)y_{i},} where M = I − X (X ′X)−1X ′, and λ is the smallest characteristic root of the matrix: ( [ y i Y − i ] M i [ y i Y − i ] ) ( [ y i Y − i ] M [ y i Y − i ] ) − 1 {\displaystyle {\Big (}{\begin{bmatrix}y_{i}\\Y_{-i}\end{bmatrix}}M_{i}{\begin{bmatrix}y_{i}&Y_{-i}\end{bmatrix}}{\Big )}{\Big (}{\begin{bmatrix}y_{i}\\Y_{-i}\end{bmatrix}}M{\begin{bmatrix}y_{i}&Y_{-i}\end{bmatrix}}{\Big )}^{\!-1}} where, in a similar way, Mi = I − Xi (Xi′Xi)−1Xi′. In other words, λ is the smallest solution of the generalized eigenvalue problem, see Theil (1971, p. 503): | [ y i Y − i ] ′ M i [ y i Y − i ] − λ [ y i Y − i ] ′ M [ y i Y − i ] | = 0 {\displaystyle {\Big |}{\begin{bmatrix}y_{i}&Y_{-i}\end{bmatrix}}'M_{i}{\begin{bmatrix}y_{i}&Y_{-i}\end{bmatrix}}-\lambda {\begin{bmatrix}y_{i}&Y_{-i}\end{bmatrix}}'M{\begin{bmatrix}y_{i}&Y_{-i}\end{bmatrix}}{\Big |}=0} ==== K class estimators ==== The LIML is a special case of the K-class estimators: δ ^ = ( Z ′ ( I − κ M ) Z ) − 1 Z ′ ( I − κ M ) y , {\displaystyle {\hat {\delta }}={\Big (}Z'(I-\kappa M)Z{\Big )}^{\!-1}Z'(I-\kappa M)y,} with: δ = [ β i γ i ] {\displaystyle \delta ={\begin{bmatrix}\beta _{i}&\gamma _{i}\end{bmatrix}}} Z = [ X i Y − i ] {\displaystyle Z={\begin{bmatrix}X_{i}&Y_{-i}\end{bmatrix}}} Several estimators belong to this class: κ=0: OLS κ=1: 2SLS. Note indeed that in this case, I − κ M = I − M = P {\displaystyle I-\kappa M=I-M=P} the usual projection matrix of the 2SLS κ=λ: LIML κ=λ - α / (n-K): Fuller (1977) estimator. Here K represents the number of instruments, n the sample size, and α a positive constant to specify. A value of α=1 will yield an estimator that is approximately unbiased. === Three-stage least squares (3SLS) === The three-stage least squares estimator was introduced by Zellner & Theil (1962). It can be seen as a special case of multi-equation GMM where the set of instrumental variables is common to all equations. If all regressors are in fact predetermined, then 3SLS reduces to seemingly unrelated regressions (SUR). Thus it may also be seen as a combination of two-stage least squares (2SLS) with SUR. == Applications in social science == Across fields and disciplines simultaneous equation models are applied to various observational phenomena. These equations are applied when phenomena are assumed to be reciprocally causal. The classic example is supply and demand in economics. In other disciplines there are examples such as candidate evaluations and party identification or public opinion and social policy in political science; road investment and travel demand in geography; and educational attainment and parenthood entry in sociology or demography. The simultaneous equation model requires a theory of reciprocal causality that includes special features if the causal effects are to be estimated as simultaneous feedback as opposed to one-sided 'blocks' of an equation where a researcher is interested in the causal effect of X on Y while holding the causal effect of Y on X constant, or when the researcher knows the exact amount of time it takes for each causal effect to take place, i.e., the length of the causal lags. 
Instead of lagged effects, simultaneous feedback means estimating the simultaneous and perpetual impact of X and Y on each other. This requires a theory that causal effects are simultaneous in time, or so complex that they appear to behave simultaneously; a common example are the moods of roommates. To estimate simultaneous feedback models a theory of equilibrium is also necessary – that X and Y are in relatively steady states or are part of a system (society, market, classroom) that is in a relatively stable state. == See also == General linear model Seemingly unrelated regressions Reduced form Parameter identification problem == References == == Further reading == Asteriou, Dimitrios; Hall, Stephen G. (2011). Applied Econometrics (Second ed.). Basingstoke: Palgrave Macmillan. p. 395. ISBN 978-0-230-27182-1. Chow, Gregory C. (1983). Econometrics. New York: McGraw-Hill. pp. 117–121. ISBN 0-07-010847-1. Fomby, Thomas B.; Hill, R. Carter; Johnson, Stanley R. (1984). "Simultaneous Equations Models". Advanced Econometric Methods. New York: Springer. pp. 437–552. ISBN 0-387-90908-7. Maddala, G. S.; Lahiri, Kajal (2009). "Simultaneous Equations Models". Introduction to Econometrics (Fourth ed.). New York: Wiley. pp. 355–400. ISBN 978-0-470-01512-4. Ruud, Paul A. (2000). "Simultaneous Equations". An Introduction to Classical Econometric Theory. Oxford University Press. pp. 697–746. ISBN 0-19-511164-8. Sargan, Denis (1988). Lectures on Advanced Econometric Theory. Oxford: Basil Blackwell. pp. 68–89. ISBN 0-631-14956-2. Wooldridge, Jeffrey M. (2013). "Simultaneous Equations Models". Introductory Econometrics (Fifth ed.). South-Western. pp. 554–582. ISBN 978-1-111-53104-1. == External links == Lecture on the Identification Problem in 2SLS, and Estimation on YouTube by Mark Thoma
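To make the two-stage least squares formula from the Estimation section concrete, here is a minimal simulation sketch (not part of the article; the simulated demand equation, variable names, and coefficient values are all illustrative): the endogenous regressor is instrumented with the exogenous variables of the full system via the projection matrix P.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 10_000

# Simulated demand equation: q = gamma*p + beta*z1 + u, with price p endogenous
# because it is correlated with u; z2 shifts supply only and serves as the instrument.
z1 = rng.normal(size=T)            # demand shifter (exogenous, included)
z2 = rng.normal(size=T)            # supply shifter (exogenous, excluded from demand)
u = rng.normal(size=T)
v = rng.normal(size=T)
p = 0.5 * z1 - 1.0 * z2 + 0.8 * u + v      # price depends on u -> endogeneity
q = -1.5 * p + 1.0 * z1 + u                # structural demand: gamma = -1.5, beta = 1.0

X = np.column_stack([np.ones(T), z1, z2])  # all exogenous regressors of the system
Z = np.column_stack([p, np.ones(T), z1])   # regressors of the demand equation (p endogenous)

# Projection onto the space spanned by the exogenous regressors: P = X (X'X)^{-1} X',
# applied implicitly as P @ A = X @ solve(X'X, X'A) to avoid forming a T x T matrix.
def project(A):
    return X @ np.linalg.solve(X.T @ X, X.T @ A)

PZ = project(Z)
delta_2sls = np.linalg.solve(Z.T @ PZ, PZ.T @ q)   # (Z'PZ)^{-1} Z'P y
delta_ols = np.linalg.solve(Z.T @ Z, Z.T @ q)      # biased benchmark

print("2SLS estimate of (gamma, const, beta):", delta_2sls)
print("OLS  estimate (inconsistent)        :", delta_ols)
```

In the K-class notation above, this estimator corresponds to κ = 1, while κ = 0 reproduces the ordinary least squares benchmark.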
Wikipedia/Simultaneous_equations_model
Clinical study design is the formulation of clinical trials and other experiments, as well as observational studies, in medical research involving human beings and involving clinical aspects, including epidemiology . It is the design of experiments as applied to these fields. The goal of a clinical study is to assess the safety, efficacy, and / or the mechanism of action of an investigational medicinal product (IMP) or procedure, or new drug or device that is in development, but potentially not yet approved by a health authority (e.g. Food and Drug Administration). It can also be to investigate a drug, device or procedure that has already been approved but is still in need of further investigation, typically with respect to long-term effects or cost-effectiveness. Some of the considerations here are shared under the more general topic of design of experiments but there can be others, in particular related to patient confidentiality and medical ethics. == Outline of types of designs for clinical studies == === Treatment studies === Randomized controlled trial Blind trial Non-blind trial Adaptive clinical trial Platform Trials Nonrandomized trial (quasi-experiment) Interrupted time series design (measures on a sample or a series of samples from the same population are obtained several times before and after a manipulated event or a naturally occurring event) - considered a type of quasi-experiment === Observational studies === 1. Descriptive Case report Case series Population study 2. Analytical Cohort study Prospective cohort Retrospective cohort Time series study Case-control study Nested case-control study Cross-sectional study Community survey (a type of cross-sectional study) Ecological study == Important considerations == When choosing a study design, many factors must be taken into account. Different types of studies are subject to different types of bias. For example, recall bias is likely to occur in cross-sectional or case-control studies where subjects are asked to recall exposure to risk factors. Subjects with the relevant condition (e.g. breast cancer) may be more likely to recall the relevant exposures that they had undergone (e.g. hormone replacement therapy) than subjects who don't have the condition. The ecological fallacy may occur when conclusions about individuals are drawn from analyses conducted on grouped data. The nature of this type of analysis tends to overestimate the degree of association between variables. === Seasonal studies === Conducting studies in seasonal indications (such as allergies, Seasonal Affective Disorder, influenza, and others) can complicate a trial as patients must be enrolled quickly. Additionally, seasonal variations and weather patterns can affect a seasonal study. == Other terms == The term retrospective study is sometimes used as another term for a case-control study. This use of the term "retrospective study" is misleading, however, and should be avoided because other research designs besides case-control studies are also retrospective in orientation. Superiority trials are designed to demonstrate that one treatment is more effective than a given reference treatment. This type of study design is often used to test the effectiveness of a treatment compared to placebo or to the currently best available treatment. Non-inferiority trials are designed to demonstrate that a treatment is at least not appreciably less effective than a given reference treatment. 
This type of study design is often employed when comparing a new treatment to an established medical standard of care, in situations where the new treatment is cheaper, safer or more convenient than the reference treatment and would therefore be preferable if not appreciably less effective. Equivalence trials are designed to demonstrate that two treatments are equally effective. When using "parallel groups", each patient receives one treatment; in a "crossover study", each patient receives several treatments but in different order. A longitudinal study assesses research subjects over two or more points in time; by contrast, a cross-sectional study assesses research subjects at only one point in time (so case-control, cohort, and randomized studies are not cross-sectional). == See also == == References == == External links == Some aspects of study design Tufts University web site Comparison of strength Description of study designs from the National Cancer Institute
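To make the distinction between superiority and non-inferiority testing concrete, here is a minimal sketch (hypothetical response rates and margin; the Wald interval is only one of several possible analyses): non-inferiority is typically concluded when the confidence interval for the treatment difference lies entirely above the pre-specified margin.

```python
import numpy as np
from scipy import stats

# Hypothetical response counts: new treatment vs. active control.
n_new, x_new = 400, 300        # 75% responders
n_ctl, x_ctl = 400, 312        # 78% responders
margin = -0.10                 # pre-specified non-inferiority margin (risk difference)

p_new, p_ctl = x_new / n_new, x_ctl / n_ctl
diff = p_new - p_ctl
se = np.sqrt(p_new * (1 - p_new) / n_new + p_ctl * (1 - p_ctl) / n_ctl)

# Two-sided 95% CI for the risk difference (Wald approximation).
z = stats.norm.ppf(0.975)
ci_low, ci_high = diff - z * se, diff + z * se

print(f"difference = {diff:.3f}, 95% CI = ({ci_low:.3f}, {ci_high:.3f})")
print("non-inferiority concluded" if ci_low > margin else "non-inferiority not shown")
```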
Wikipedia/Clinical_study_design
Kidnap and ransom insurance or K&R insurance is designed to protect individuals and corporations operating in high-risk areas around the world. Locations most often named in policies include Mexico, Venezuela, Haiti, and Nigeria, certain other countries in Latin America, as well as some parts of the Russian Federation and Eastern Europe. Central Asia is also seeing increasing numbers of incidents, particularly in Afghanistan and Iraq. == Coverage == Losses typically reimbursed by K&R insurance include: Ransom monies – Money paid or lost due to kidnapping Transit/delivery – Loss due to destruction, disappearance, confiscation, or wrongful appropriation of ransom monies being delivered to a covered kidnapping or extortion Accidental death or dismemberment – Death or permanent physical disablement occurring during a kidnapping Judgements and legal liability – Cost resulting from any claim or suit brought by any insured person against the insured Additional expenses – Medical care, severe disruption of operations, potential damage to company brand, PR counsel, wage and salary replacement, relocation and job retraining, and other expenses related to a kidnapping incident. The policies also typically pay for the fees and expenses of crisis management consultants. These consultants provide advice to the insured on how to best respond to the incident. Even the most basic training for people traveling to dangerous places is not easily provided or is not obtained by small to mid-sized companies. === Intended audience === The policies may be written to cover high-profile companies, non-governmental organizations, C-Suite level executives or similar strategic individuals, or individuals who represent local or multinational organizations. Some policies include kidnap prevention training. === Underwriting considerations === The major factors insurance underwriters weigh when considering a kidnap and ransom policy include the country of residence for the insured, the type of industry of the insured, revenue of the insured, and the travel patterns of any employees who may be covered by the policy. == Problems == One of the known paradoxes of K&R policies is that those who have them are often not aware, as it can be provided by an employer hoping to protect the company's assets. It is believed that an employee with knowledge of his K&R policy might begin to act differently, or even collude in his own kidnap for fraudulent purposes. In 2010, criminal gangs were believed to make $500 million a year from kidnap and ransom payments. == See also == Sklavenkasse War risk insurance Travel insurance == References == == External links == Is K&R Coverage a Risky Business? – via Lloyd's of London Buying Protection from Terrorism – Human Resources magazine
Wikipedia/Kidnap_and_ransom_insurance
In statistics, the method of moments is a method of estimation of population parameters. The same principle is used to derive higher moments like skewness and kurtosis. It starts by expressing the population moments (i.e., the expected values of powers of the random variable under consideration) as functions of the parameters of interest. Those expressions are then set equal to the sample moments. The number of such equations is the same as the number of parameters to be estimated. Those equations are then solved for the parameters of interest. The solutions are estimates of those parameters. The method of moments was introduced by Pafnuty Chebyshev in 1887 in the proof of the central limit theorem. The idea of matching empirical moments of a distribution to the population moments dates back at least to Karl Pearson.[1] == Method == Suppose that the parameter θ {\displaystyle \theta } = ( θ 1 , θ 2 , … , θ k {\displaystyle \theta _{1},\theta _{2},\dots ,\theta _{k}} ) characterizes the distribution f W ( w ; θ ) {\displaystyle f_{W}(w;\theta )} of the random variable W {\displaystyle W} . Suppose the first k {\displaystyle k} moments of the true distribution (the "population moments") can be expressed as functions of the θ {\displaystyle \theta } s: μ 1 ≡ E ⁡ [ W ] = g 1 ( θ 1 , θ 2 , … , θ k ) , μ 2 ≡ E ⁡ [ W 2 ] = g 2 ( θ 1 , θ 2 , … , θ k ) , ⋮ μ k ≡ E ⁡ [ W k ] = g k ( θ 1 , θ 2 , … , θ k ) . {\displaystyle {\begin{aligned}\mu _{1}&\equiv \operatorname {E} [W]=g_{1}(\theta _{1},\theta _{2},\ldots ,\theta _{k}),\\[4pt]\mu _{2}&\equiv \operatorname {E} [W^{2}]=g_{2}(\theta _{1},\theta _{2},\ldots ,\theta _{k}),\\&\,\,\,\vdots \\\mu _{k}&\equiv \operatorname {E} [W^{k}]=g_{k}(\theta _{1},\theta _{2},\ldots ,\theta _{k}).\end{aligned}}} Suppose a sample of size n {\displaystyle n} is drawn, resulting in the values w 1 , … , w n {\displaystyle w_{1},\dots ,w_{n}} . For j = 1 , … , k {\displaystyle j=1,\dots ,k} , let μ ^ j = 1 n ∑ i = 1 n w i j {\displaystyle {\hat {\mu }}_{j}={\frac {1}{n}}\sum _{i=1}^{n}w_{i}^{j}} be the j-th sample moment, an estimate of μ j {\displaystyle \mu _{j}} . The method of moments estimator for θ 1 , θ 2 , … , θ k {\displaystyle \theta _{1},\theta _{2},\ldots ,\theta _{k}} denoted by θ ^ 1 , θ ^ 2 , … , θ ^ k {\displaystyle {\hat {\theta }}_{1},{\hat {\theta }}_{2},\dots ,{\hat {\theta }}_{k}} is defined to be the solution (if one exists) to the equations:[2] μ ^ 1 = g 1 ( θ ^ 1 , θ ^ 2 , … , θ ^ k ) , μ ^ 2 = g 2 ( θ ^ 1 , θ ^ 2 , … , θ ^ k ) , ⋮ μ ^ k = g k ( θ ^ 1 , θ ^ 2 , … , θ ^ k ) . {\displaystyle {\begin{aligned}{\hat {\mu }}_{1}&=g_{1}({\hat {\theta }}_{1},{\hat {\theta }}_{2},\ldots ,{\hat {\theta }}_{k}),\\[4pt]{\hat {\mu }}_{2}&=g_{2}({\hat {\theta }}_{1},{\hat {\theta }}_{2},\ldots ,{\hat {\theta }}_{k}),\\&\,\,\,\vdots \\{\hat {\mu }}_{k}&=g_{k}({\hat {\theta }}_{1},{\hat {\theta }}_{2},\ldots ,{\hat {\theta }}_{k}).\end{aligned}}} The method described here for single random variables generalizes in an obvious manner to multiple random variables leading to multiple choices for moments to be used. Different choices generally lead to different solutions. == Advantages and disadvantages == The method of moments is fairly simple and yields consistent estimators (under very weak assumptions), though these estimators are often biased. It is an alternative to the method of maximum likelihood. 
However, in some cases the likelihood equations may be intractable without computers, whereas the method-of-moments estimators can be computed much more quickly and easily. Due to easy computability, method-of-moments estimates may be used as the first approximation to the solutions of the likelihood equations, and successive improved approximations may then be found by the Newton–Raphson method. In this way the method of moments can assist in finding maximum likelihood estimates. In some cases, infrequent with large samples but less infrequent with small samples, the estimates given by the method of moments are outside of the parameter space (as shown in the example below); it does not make sense to rely on them then. That problem never arises in the method of maximum likelihood.[3] Also, estimates by the method of moments are not necessarily sufficient statistics, i.e., they sometimes fail to take into account all relevant information in the sample. When estimating other structural parameters (e.g., parameters of a utility function, instead of parameters of a known probability distribution), appropriate probability distributions may not be known, and moment-based estimates may be preferred to maximum likelihood estimation. == Alternative method of moments == The equations to be solved in the method of moments (MoM) are in general nonlinear and there are no generally applicable guarantees that tractable solutions exist. But there is an alternative approach to using sample moments to estimate data model parameters in terms of known dependence of model moments on these parameters, and this alternative requires the solution of only linear equations or, more generally, tensor equations. This alternative is referred to as the Bayesian-Like MoM (BL-MoM), and it differs from the classical MoM in that it uses optimally weighted sample moments. Considering that the MoM is typically motivated by a lack of sufficient knowledge about the data model to determine likelihood functions and associated a posteriori probabilities of unknown or random parameters, it is odd that there exists a type of MoM that is Bayesian-Like. But the particular meaning of Bayesian-Like leads to a problem formulation in which required knowledge of a posteriori probabilities is replaced with required knowledge of only the dependence of model moments on unknown model parameters, which is exactly the knowledge required by the traditional MoM [1],[2]. The BL-MoM also uses knowledge of a priori probabilities of the parameters to be estimated, when available, but otherwise uses uniform priors. The BL-MoM has been reported on only in the applied statistics literature, in connection with parameter estimation and hypothesis testing using observations of stochastic processes for problems in information and communications theory and, in particular, communications receiver design in the absence of knowledge of likelihood functions or associated a posteriori probabilities. In addition, a restatement of this receiver design approach for stochastic process models, as an alternative to the classical MoM for any type of multivariate data, is available in tutorial form at a university website. The reported applications demonstrate some important characteristics of this alternative to the classical MoM, and a detailed list of relative advantages and disadvantages has been given, but the literature is missing direct comparisons in specific applications of the classical MoM and the BL-MoM.
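Returning to the classical method before the article's own examples: a minimal sketch of the point made earlier that method-of-moments estimates can serve as starting values for numerical likelihood maximization (the gamma distribution, the simulated sample, and the derivative-free optimizer used here in place of Newton–Raphson are all illustrative choices, not taken from the article):

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
data = rng.gamma(shape=3.0, scale=2.0, size=1_000)   # "observed" sample

# Method of moments for the gamma distribution:
# E[W] = k*theta and Var[W] = k*theta^2, so
# k_hat = mean^2 / var and theta_hat = var / mean.
m1, var = data.mean(), data.var()
k_mom, theta_mom = m1**2 / var, var / m1

# Refine by maximizing the log-likelihood, starting from the MoM estimates.
def neg_log_lik(params):
    k, theta = params
    if k <= 0 or theta <= 0:
        return np.inf
    return -np.sum(stats.gamma.logpdf(data, a=k, scale=theta))

res = optimize.minimize(neg_log_lik, x0=[k_mom, theta_mom], method="Nelder-Mead")
print("MoM estimate :", (k_mom, theta_mom))
print("MLE estimate :", tuple(res.x))
```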
== Examples == An example application of the method of moments is to estimate polynomial probability density distributions. In this case, an approximating polynomial of order N {\displaystyle N} is defined on an interval [ a , b ] {\displaystyle [a,b]} . The method of moments then yields a system of equations, whose solution involves the inversion of a Hankel matrix. === Proving the central limit theorem === Let X 1 , X 2 , ⋯ {\displaystyle X_{1},X_{2},\cdots } be independent random variables with mean 0 and variance 1, then let S n := 1 n ∑ i = 1 n X i {\textstyle S_{n}:={\frac {1}{\sqrt {n}}}\sum _{i=1}^{n}X_{i}} . We can compute the moments of S n {\displaystyle S_{n}} as E ⁡ [ S n 0 ] = 1 , E ⁡ [ S n 1 ] = 0 , E ⁡ [ S n 2 ] = 1 , E ⁡ [ S n 3 ] = 0 , … {\displaystyle {\begin{aligned}\operatorname {E} \left[S_{n}^{0}\right]&=1,&\operatorname {E} \left[S_{n}^{1}\right]&=0,\\[0.5ex]\operatorname {E} \left[S_{n}^{2}\right]&=1,&\operatorname {E} \left[S_{n}^{3}\right]&=0,\dots \end{aligned}}} Explicit expansion shows that E ⁡ [ S n 2 k + 1 ] = 0 ; E ⁡ [ S n 2 k ] = ( n k ) ( 2 k ) ! 2 k n k = n ( n − 1 ) ⋯ ( n − k + 1 ) n k ( 2 k − 1 ) ! ! {\displaystyle {\begin{aligned}\operatorname {E} \left[S_{n}^{2k+1}\right]&=0;\\[1ex]\operatorname {E} \left[S_{n}^{2k}\right]&={\frac {{\binom {n}{k}}{\frac {(2k)!}{2^{k}}}}{n^{k}}}\\[0.6ex]&={\frac {n(n-1)\cdots (n-k+1)}{n^{k}}}(2k-1)!!\end{aligned}}} where the numerator is the number of ways to select k {\displaystyle k} distinct pairs of balls by picking one each from 2 k {\displaystyle 2k} buckets, each containing balls numbered from 1 {\displaystyle 1} to n {\displaystyle n} . At the n → ∞ {\displaystyle n\to \infty } limit, all moments converge to that of a standard normal distribution. More analysis then show that this convergence in moments imply a convergence in distribution. Essentially this argument was published by Chebyshev in 1887. === Uniform distribution === Consider the uniform distribution on the interval [ a , b ] {\displaystyle [a,b]} , U ( a , b ) {\displaystyle U(a,b)} . If W ∼ U ( a , b ) {\displaystyle W\sim U(a,b)} then we have μ 1 = E ⁡ [ W ] = 1 2 ( a + b ) μ 2 = E ⁡ [ W 2 ] = 1 3 ( a 2 + a b + b 2 ) {\displaystyle {\begin{aligned}\mu _{1}&=\operatorname {E} \left[W\right]&=&{\tfrac {1}{2}}(a+b)\\[1ex]\mu _{2}&=\operatorname {E} \left[W^{2}\right]&=&{\tfrac {1}{3}}\left(a^{2}+ab+b^{2}\right)\end{aligned}}} Solving these equations gives a ^ = μ 1 − 3 ( μ 2 − μ 1 2 ) b ^ = μ 1 + 3 ( μ 2 − μ 1 2 ) {\displaystyle {\begin{aligned}{\hat {a}}&=\mu _{1}-{\sqrt {3\left(\mu _{2}-\mu _{1}^{2}\right)}}\\{\hat {b}}&=\mu _{1}+{\sqrt {3\left(\mu _{2}-\mu _{1}^{2}\right)}}\end{aligned}}} Given a set of samples { w i } {\displaystyle \{w_{i}\}} we can use the sample moments μ ^ 1 {\displaystyle {\hat {\mu }}_{1}} and μ ^ 2 {\displaystyle {\hat {\mu }}_{2}} in these formulae in order to estimate a {\displaystyle a} and b {\displaystyle b} . Note, however, that this method can produce inconsistent results in some cases. For example, the set of samples { 0 , 0 , 0 , 0 , 1 } {\displaystyle \{0,0,0,0,1\}} results in the estimate a ^ = 1 5 ( 1 − 2 3 ) = − 0.4928 {\textstyle {\hat {a}}={\frac {1}{5}}\left(1-2{\sqrt {3}}\right)=-0.4928} , b ^ = 1 5 ( 1 + 2 3 ) = 0.8928 {\textstyle {\hat {b}}={\frac {1}{5}}\left(1+2{\sqrt {3}}\right)=0.8928} . 
Since b ^ < 1 {\displaystyle {\hat {b}}<1} it is impossible for the set { 0 , 0 , 0 , 0 , 1 } {\displaystyle \{0,0,0,0,1\}} to have been drawn from U ( a ^ , b ^ ) {\displaystyle U({\hat {a}},{\hat {b}})} in this case. == See also == Generalized method of moments Decoding methods == References ==
Wikipedia/Method_of_moments_(statistics)
A mixed model, mixed-effects model or mixed error-component model is a statistical model containing both fixed effects and random effects. These models are useful in a wide variety of disciplines in the physical, biological and social sciences. They are particularly useful in settings where repeated measurements are made on the same statistical units (see also longitudinal study), or where measurements are made on clusters of related statistical units. Mixed models are often preferred over traditional analysis of variance regression models because they do not rely on the assumption of independent observations. Further, they are flexible in dealing with missing values and uneven spacing of repeated measurements. Mixed model analysis allows measurements to be explicitly modeled with a wider variety of correlation and variance–covariance structures, avoiding biased estimates. This page will discuss mainly linear mixed-effects models rather than generalized linear mixed models or nonlinear mixed-effects models. == Qualitative Description == Linear mixed models (LMMs) are statistical models that incorporate fixed and random effects to accurately represent non-independent data structures. LMMs are an alternative to analysis of variance (ANOVA). ANOVA assumes the independence of observations within each group; however, this assumption may not hold in non-independent data, such as multilevel/hierarchical, longitudinal, or correlated datasets. Non-independent sets are ones in which the variability between outcomes is due to correlations within groups or between groups. Mixed models properly account for nested/hierarchical data structures in which observations are influenced by their nested associations. For example, when studying education methods involving multiple schools, there are multiple levels of variables to consider. The individual (lower) level comprises individual students or teachers within the school; the observations obtained from a given student or teacher are nested within their school. For example, Student A is a unit within School A. The next higher level is the school, which contains multiple individual students and teachers, and the school level influences the observations obtained from those students and teachers. For example, School A and School B are higher-level units, each with its own set of students and teachers. This represents a hierarchical data scheme. A solution to modeling hierarchical data is using linear mixed models. LMMs allow us to understand the important effects between and within levels while incorporating the corrections to standard errors for the non-independence embedded in the data structure. In experimental fields such as social psychology, psycholinguistics, cognitive psychology (and neuroscience), where studies often involve multiple grouping variables, failing to account for random effects can lead to inflated Type I error rates and unreliable conclusions. For instance, when analyzing data from experiments that involve both samples of participants and samples of stimuli (e.g., images, scenarios, etc.), ignoring variation in either of these grouping variables (e.g., by averaging over stimuli) can result in misleading conclusions. In such cases, researchers can instead treat both participant and stimulus as random effects with LMMs, and in doing so, can correctly account for the variation in their data across multiple grouping variables.
Similarly, when analyzing data from comparative longitudinal surveys, failing to include random effects at all relevant levels—such as country and country-year—can significantly distort the results. === The Fixed Effect === Fixed effects encapsulate the tendencies/trends that are consistent at the levels of primary interest. These effects are considered fixed because they are non-random and assumed to be constant for the population being studied. For example, when studying education, a fixed effect could represent overall school-level effects that are consistent across all schools. While the hierarchy of the data set is typically obvious, the specific fixed effects that affect the average responses for all subjects must be specified. Some fixed effect coefficients are sufficient without corresponding random effects, whereas other fixed coefficients only represent an average where the individual units are random. These may be determined by incorporating random intercepts and slopes. In most situations, several related models are considered and the model that best represents a universal model is adopted. === The Random Effect, ε === A key component of the mixed model is the incorporation of random effects alongside the fixed effects. Fixed effects are often fitted to represent the underlying model; in linear mixed models, the true population regression is linear in the fixed-effect coefficients β, and this fixed part of the model is fitted at the highest level. Random effects introduce statistical variability at different levels of the data hierarchy. They account for unmeasured sources of variance that affect certain groups in the data, for example the differences between student 1 and student 2 in the same class, or the differences between class 1 and class 2 in the same school. == History and current status == Ronald Fisher introduced random effects models to study the correlations of trait values between relatives. In the 1950s, Charles Roy Henderson provided best linear unbiased estimates of fixed effects and best linear unbiased predictions of random effects. Subsequently, mixed modeling has become a major area of statistical research, including work on computation of maximum likelihood estimates, non-linear mixed effects models, missing data in mixed effects models, and Bayesian estimation of mixed effects models. Mixed models are applied in many disciplines where multiple correlated measurements are made on each unit of interest. They are prominently used in research involving human and animal subjects in fields ranging from genetics to marketing, and have also been used in baseball and industrial statistics. Mixed linear model association methods have improved the prevention of false-positive associations. Populations are deeply interconnected, and the relatedness structure of population dynamics is extremely difficult to model without the use of mixed models. Linear mixed models may not, however, be the only solution. LMMs have a constant-residual variance assumption that is sometimes violated when accounting for deeply associated continuous and binary traits.
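Before the matrix formulation in the next section, here is a minimal fitting sketch for the students-within-schools setting described above, using the Python statsmodels package (the simulated data, column names, and random-intercept-by-school structure are illustrative assumptions, not taken from the article):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a toy hierarchical data set: students nested in schools,
# with a school-specific random intercept and a common (fixed) slope for study hours.
rng = np.random.default_rng(3)
n_schools, n_students = 30, 40
school = np.repeat(np.arange(n_schools), n_students)
school_effect = rng.normal(0.0, 2.0, size=n_schools)[school]   # random intercepts
hours = rng.uniform(0, 10, size=school.size)
score = 50 + 1.5 * hours + school_effect + rng.normal(0, 3, size=school.size)

df = pd.DataFrame({"score": score, "hours": hours, "school": school})

# Random-intercept linear mixed model: score ~ hours, grouped by school.
model = smf.mixedlm("score ~ hours", data=df, groups=df["school"])
result = model.fit()
print(result.summary())   # fixed-effect estimates plus the school-level variance component
```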
== Definition == In matrix notation a linear mixed model can be represented as y = X β + Z u + ϵ {\displaystyle {\boldsymbol {y}}=X{\boldsymbol {\beta }}+Z{\boldsymbol {u}}+{\boldsymbol {\epsilon }}} where y {\displaystyle {\boldsymbol {y}}} is a known vector of observations, with mean E ( y ) = X β {\displaystyle E({\boldsymbol {y}})=X{\boldsymbol {\beta }}} ; β {\displaystyle {\boldsymbol {\beta }}} is an unknown vector of fixed effects; u {\displaystyle {\boldsymbol {u}}} is an unknown vector of random effects, with mean E ( u ) = 0 {\displaystyle E({\boldsymbol {u}})={\boldsymbol {0}}} and variance–covariance matrix var ⁡ ( u ) = G {\displaystyle \operatorname {var} ({\boldsymbol {u}})=G} ; ϵ {\displaystyle {\boldsymbol {\epsilon }}} is an unknown vector of random errors, with mean E ( ϵ ) = 0 {\displaystyle E({\boldsymbol {\epsilon }})={\boldsymbol {0}}} and variance var ⁡ ( ϵ ) = R {\displaystyle \operatorname {var} ({\boldsymbol {\epsilon }})=R} ; X {\displaystyle X} is the known design matrix for the fixed effects relating the observations y {\displaystyle {\boldsymbol {y}}} to β {\displaystyle {\boldsymbol {\beta }}} , respectively Z {\displaystyle Z} is the known design matrix for the random effects relating the observations y {\displaystyle {\boldsymbol {y}}} to u {\displaystyle {\boldsymbol {u}}} , respectively. For example, if each observation can belong to any zero or more of k categories then Z, which has one row per observation, can be chosen to have k columns, where a value of 1 for a matrix element of Z indicates that an observation is known to belong to a category and a value of 0 indicates that an observation is known to not belong to a category. The inferred value of u for a category is then a category-specific intercept. If Z has additional columns, where the non-zero values are instead the value of an independent variable for an observation, then the corresponding inferred value of u is a category-specific slope for that independent variable. The prior distribution for the category intercepts and slopes is described by the covariance matrix G. == Estimation == The joint density of y {\displaystyle {\boldsymbol {y}}} and u {\displaystyle {\boldsymbol {u}}} can be written as: f ( y , u ) = f ( y | u ) f ( u ) {\displaystyle f({\boldsymbol {y}},{\boldsymbol {u}})=f({\boldsymbol {y}}|{\boldsymbol {u}})\,f({\boldsymbol {u}})} . Assuming normality, u ∼ N ( 0 , G ) {\displaystyle {\boldsymbol {u}}\sim {\mathcal {N}}({\boldsymbol {0}},G)} , ϵ ∼ N ( 0 , R ) {\displaystyle {\boldsymbol {\epsilon }}\sim {\mathcal {N}}({\boldsymbol {0}},R)} and C o v ( u , ϵ ) = 0 {\displaystyle \mathrm {Cov} ({\boldsymbol {u}},{\boldsymbol {\epsilon }})={\boldsymbol {0}}} , and maximizing the joint density over β {\displaystyle {\boldsymbol {\beta }}} and u {\displaystyle {\boldsymbol {u}}} , gives Henderson's "mixed model equations" (MME) for linear mixed models: ( X ′ R − 1 X X ′ R − 1 Z Z ′ R − 1 X Z ′ R − 1 Z + G − 1 ) ( β ^ u ^ ) = ( X ′ R − 1 y Z ′ R − 1 y ) {\displaystyle {\begin{pmatrix}X'R^{-1}X&X'R^{-1}Z\\Z'R^{-1}X&Z'R^{-1}Z+G^{-1}\end{pmatrix}}{\begin{pmatrix}{\hat {\boldsymbol {\beta }}}\\{\hat {\boldsymbol {u}}}\end{pmatrix}}={\begin{pmatrix}X'R^{-1}{\boldsymbol {y}}\\Z'R^{-1}{\boldsymbol {y}}\end{pmatrix}}} where for example X′ is the matrix transpose of X and R−1 is the matrix inverse of R. 
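A minimal numerical sketch of solving these mixed model equations directly (not from the article; the design, the choice R = σ²εI and G = σ²uI with a single grouping factor, and all numbers are illustrative, with the variance components treated as known):

```python
import numpy as np

rng = np.random.default_rng(4)
n_groups, per_group = 8, 12
n = n_groups * per_group

# Design matrices: fixed intercept + slope, one random intercept per group.
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
group = np.repeat(np.arange(n_groups), per_group)
Z = np.zeros((n, n_groups))
Z[np.arange(n), group] = 1.0

# Simulate data from y = X beta + Z u + eps with known variance components.
beta_true = np.array([2.0, 0.5])
sigma_u2, sigma_e2 = 1.5, 1.0                      # var(u_j) and var(eps_i)
u_true = rng.normal(0.0, np.sqrt(sigma_u2), n_groups)
y = X @ beta_true + Z @ u_true + rng.normal(0.0, np.sqrt(sigma_e2), n)

# Henderson's mixed model equations with R = sigma_e2 * I and G = sigma_u2 * I.
R_inv = np.eye(n) / sigma_e2
G_inv = np.eye(n_groups) / sigma_u2
lhs = np.block([[X.T @ R_inv @ X, X.T @ R_inv @ Z],
                [Z.T @ R_inv @ X, Z.T @ R_inv @ Z + G_inv]])
rhs = np.concatenate([X.T @ R_inv @ y, Z.T @ R_inv @ y])
sol = np.linalg.solve(lhs, rhs)

beta_hat, u_hat = sol[:2], sol[2:]
print("fixed-effect estimates (BLUE):", beta_hat)
print("predicted random intercepts (BLUP), first three:", u_hat[:3])
```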
The solutions to the MME, β ^ {\displaystyle \textstyle {\hat {\boldsymbol {\beta }}}} and u ^ {\displaystyle \textstyle {\hat {\boldsymbol {u}}}} are best linear unbiased estimates and predictors for β {\displaystyle {\boldsymbol {\beta }}} and u {\displaystyle {\boldsymbol {u}}} , respectively. This is a consequence of the Gauss–Markov theorem when the conditional variance of the outcome is not scalable to the identity matrix. When the conditional variance is known, then the inverse variance weighted least squares estimate is best linear unbiased estimates. However, the conditional variance is rarely, if ever, known. So it is desirable to jointly estimate the variance and weighted parameter estimates when solving MMEs. === Choice of random effects structure === One choice that analysts face with mixed models is which random effects (i.e., grouping variables, random intercepts, and random slopes) to include. One prominent recommendation in the context of confirmatory hypothesis testing is to adopt a "maximal" random effects structure, including all possible random effects justified by the experimental design, as a means to control Type I error rates. === Software === One method used to fit such mixed models is that of the expectation–maximization algorithm (EM) where the variance components are treated as unobserved nuisance parameters in the joint likelihood. Currently, this is the method implemented in statistical software such as Python (statsmodels package) and SAS (proc mixed), and as initial step only in R's nlme package lme(). The solution to the mixed model equations is a maximum likelihood estimate when the distribution of the errors is normal. There are several other methods to fit mixed models, including using a mixed effect model (MEM) initially, and then Newton-Raphson (used by R package nlme's lme()), penalized least squares to get a profiled log likelihood only depending on the (low-dimensional) variance-covariance parameters of u {\displaystyle {\boldsymbol {u}}} , i.e., its cov matrix G {\displaystyle {\boldsymbol {G}}} , and then modern direct optimization for that reduced objective function (used by R's lme4 package lmer() and the Julia package MixedModels.jl) and direct optimization of the likelihood (used by e.g. R's glmmTMB). Notably, while the canonical form proposed by Henderson is useful for theory, many popular software packages use a different formulation for numerical computation in order to take advantage of sparse matrix methods (e.g. lme4 and MixedModels.jl). In the context of Bayesian methods, the brms package provides a user-friendly interface for fitting mixed models in R using Stan, allowing for the incorporation of prior distributions and the estimation of posterior distributions. In python, Bambi provides a similarly streamlined approach for fitting mixed effects models using PyMC. == See also == Nonlinear mixed-effects model Fixed effects model Generalized linear mixed model Linear regression Mixed-design analysis of variance Multilevel model Random effects model Repeated measures design Empirical Bayes method == References == == Further reading == Gałecki, Andrzej; Burzykowski, Tomasz (2013). Linear Mixed-Effects Models Using R: A Step-by-Step Approach. New York: Springer. ISBN 978-1-4614-3900-4. Milliken, G. A.; Johnson, D. E. (1992). Analysis of Messy Data: Vol. I. Designed Experiments. New York: Chapman & Hall. West, B. T.; Welch, K. B.; Galecki, A. T. (2007). Linear Mixed Models: A Practical Guide Using Statistical Software. 
New York: Chapman & Hall/CRC.
Wikipedia/Mixed_model
A fraternal order is a voluntary membership group organised as an order, with an initiation ritual and traits alluding to religious, chivalric or pseudo-chivalric orders, guilds, or secret societies. Fraternal orders typically have secular purposes, serving as social clubs, cultural organizations and providing a form of social welfare through reciprocal aid or charitable work. Many friendly societies, benefit societies and mutual organisations take the form of a fraternal order. Fraternal societies are often divided geographically into units called lodges or provinces. They sometimes involve a system of awards, medals, decorations, styles, degrees, offices, orders, or other distinctions, often associated with regalia, insignia, initiation and other rituals, secret greetings, signs, passwords, oaths, and more or less elaborate symbolism, as in chivalric orders. == Examples == The Freemasons and Odd Fellows emerged in the eighteenth century in the United Kingdom and the United States. Other examples, which emerged later, include the Benevolent and Protective Order of Elks, the Fraternal Order of Eagles, E Clampus Vitus, the Independent Order of Rechabites, the Templars of Honor and Temperance, the Independent Order of Foresters, the Knights of Columbus, and the Loyal Order of Moose. Some may have ethnic or religious affiliations, such as Ancient Order of Hibernians or Order of Alhambra for Irish Catholics, or the Orange Order for Irish Protestants. Some orders have a clear political agenda, sometimes radical or militant - for example, the Nativist and anti-Catholic Order of the Star Spangled Banner and Order of United Americans, active in the 1840s US, or the Ku Klux Klan. Some are associated with professions, such as the Fraternal Order of Police, while yet others are focused on academic traditions. In the more social type, each lodge is generally responsible for its own affairs, but it is often affiliated to an order such as the Independent Order of Odd Fellows or the Independent Order of Foresters. There are typically reciprocal agreements between lodges within an order, so that if members move to other cities or countries, they can join a new lodge without an initiation period. The ceremonies are fairly uniform throughout an order. Occasionally, a lodge might change the order that it is affiliated to, two orders might merge, or a group of lodges will break away from an order and form a new one. For example, the Independent Order of Foresters was set up in 1874 when it separated from the Ancient Order of Foresters, also called Foresters Friendly Society, which itself was formed from the Royal Foresters Society in 1834. Consequently, the histories of some fraternal orders and friendly societies are difficult to follow. Often there are different, unrelated organisations with similar names. == See also == List of general fraternities List of social fraternities List of social sororities and women's fraternities == References ==
Wikipedia/Fraternal_order
Methods engineering is a subspecialty of industrial engineering and manufacturing engineering concerned with human integration in industrial production processes. == Overview == Alternatively it can be described as the design of the productive process in which a person is involved. The task of the Methods engineer is to decide where humans will be utilized in the process of converting raw materials to finished products and how workers can most effectively perform their assigned tasks. The terms operation analysis, work design and simplification, and methods engineering and corporate re-engineering are frequently used interchangeably. Lowering costs and increasing reliability and productivity are the objectives of methods engineering. Methods efficiency engineering focuses on lowering costs through productivity improvement. It investigates the output obtained from each unit of input and the speed of each machine and man. Methods quality engineering focuses on increasing quality and reliability. These objectives are met in a five step sequence as follows: Project selection, data acquisition and presentation, data analysis, development of an ideal method based on the data analysis and, finally, presentation and implementation of the method. == Methods engineering topics == === Project selection === Methods engineers typically work on projects involving new product design, products with a high cost of production to profit ratio, and products associated with having poor quality issues. Different methods of project selection include the Pareto analysis, fish diagrams, Gantt charts, PERT charts, and job/work site analysis guides. === Data acquisition and presentation === Data that needs to be collected are specification sheets for the product, design drawings, process plans, quantity and delivery requirements, and projections as to how the product will perform or has performed in the market. Process charts are used to describe proposed or existing way of doing work utilizing machines and men. The Gantt process chart can assist in the analysis of the man to machine interaction and it can aid in establishing the optimum number of workers and machines subject to the financial constraints of the operation. A flow diagram is frequently employed to represent the manufacturing process associated with the product. === Data analysis === Data analysis enables the methods engineer to make decisions about several things, including: purpose of the operation, part design characteristics, specifications and tolerances of parts, materials, manufacturing process design, setup and tooling, working conditions, material handling, plant layout, and workplace design. Knowing the specifics (who, what, when, where, why, and how) of product manufacturing assists in the development of an optimum manufacturing method. === Ideal method development === Equations of synchronous and random servicing as well as line balancing are used to determine the ideal worker to machine ratio for the process or product chosen. Synchronous servicing is defined as the process where a machine is assigned to more than one operator, and the assigned operators and machine are occupied during the whole operating cycle. Random servicing of a facility, as the name indicates, is defined as a servicing process with a random time of occurrence and need of servicing variables. Line balancing equations determine the ideal number of workers needed on a production line to enable it to work at capacity. 
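As a small illustration of the line-balancing idea mentioned above, here is a sketch using the standard theoretical-minimum-number-of-stations calculation (the task times and demand figures are hypothetical, and this is only one of several line-balancing formulations):

```python
import math

# Hypothetical elemental task times (minutes) for assembling one unit.
task_times = [0.9, 1.4, 0.7, 1.1, 0.5, 1.3]

# Required output: 300 units in a 480-minute shift -> cycle time per unit.
demand, shift_minutes = 300, 480
cycle_time = shift_minutes / demand          # 1.6 minutes available per unit

# Theoretical minimum number of workstations (workers), rounded up.
total_work = sum(task_times)
min_stations = math.ceil(total_work / cycle_time)

# Balance efficiency if exactly min_stations are used.
efficiency = total_work / (min_stations * cycle_time)

print(f"total work content : {total_work:.1f} min/unit")
print(f"cycle time         : {cycle_time:.2f} min/unit")
print(f"minimum stations   : {min_stations}")
print(f"line efficiency    : {efficiency:.0%}")
```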
=== Presentation and methods implementation === The industrial process or operation can be optimized using a variety of available methods. Each method design has its advantages and disadvantages. The best overall method is chosen using selection criteria and concepts involving value engineering, cost-benefit analysis, crossover charts, and economic analysis. The outcome of the selection process is then presented to the company for implementation at the plant. This last step involves "selling the idea" to the company brass, a skill the methods engineer must develop in addition to the normal engineering qualifications. == See also == Work design Motion analysis == References ==
Wikipedia/Methods_engineering
A randomized controlled trial (or randomized control trial; RCT) is a form of scientific experiment used to control factors not under direct experimental control. Examples of RCTs are clinical trials that compare the effects of drugs, surgical techniques, medical devices, diagnostic procedures, diets or other medical treatments. Participants who enroll in RCTs differ from one another in known and unknown ways that can influence study outcomes, and yet cannot be directly controlled. By randomly allocating participants among compared treatments, an RCT enables statistical control over these influences. Provided it is designed well, conducted properly, and enrolls enough participants, an RCT may achieve sufficient control over these confounding factors to deliver a useful comparison of the treatments studied. == Definition and examples == An RCT in clinical research typically compares a proposed new treatment against an existing standard of care; these are then termed the 'experimental' and 'control' treatments, respectively. When no such generally accepted treatment is available, a placebo may be used in the control group so that participants are blinded, or not given information, about their treatment allocations. This blinding principle is ideally also extended as much as possible to other parties including researchers, technicians, data analysts, and evaluators. Effective blinding experimentally isolates the physiological effects of treatments from various psychological sources of bias. The randomness in the assignment of participants to treatments reduces selection bias and allocation bias, balancing both known and unknown prognostic factors, in the assignment of treatments. Blinding reduces other forms of experimenter and subject biases. A well-blinded RCT is considered the gold standard for clinical trials. Blinded RCTs are commonly used to test the efficacy of medical interventions and may additionally provide information about adverse effects, such as drug reactions. A randomized controlled trial can provide compelling evidence that the study treatment causes an effect on human health. The terms "RCT" and "randomized trial" are sometimes used synonymously, but the latter term omits mention of controls and can therefore describe studies that compare multiple treatment groups with each other in the absence of a control group. Similarly, the initialism is sometimes expanded as "randomized clinical trial" or "randomized comparative trial", leading to ambiguity in the scientific literature. Not all RCTs are randomized controlled trials (and some of them could never be, as in cases where controls would be impractical or unethical to use). The term randomized controlled clinical trial is an alternative term used in clinical research; however, RCTs are also employed in other research areas, including many of the social sciences. == History == The first reported clinical trial was conducted by James Lind in 1747 to identify a treatment for scurvy. The first blind experiment was conducted by the French Royal Commission on Animal Magnetism in 1784 to investigate the claims of mesmerism. An early essay advocating the blinding of researchers came from Claude Bernard in the latter half of the 19th century. Bernard recommended that the observer of an experiment should not have knowledge of the hypothesis being tested. 
This suggestion contrasted starkly with the prevalent Enlightenment-era attitude that scientific observation can only be objectively valid when undertaken by a well-educated, informed scientist. The first study recorded to have a blinded researcher was published in 1907 by W. H. R. Rivers and H. N. Webber to investigate the effects of caffeine. Randomized experiments first appeared in psychology, where they were introduced by Charles Sanders Peirce and Joseph Jastrow in the 1880s, and in education. The earliest experiments comparing treatment and control groups were published by Robert Woodworth and Edward Thorndike in 1901, and by John E. Coover and Frank Angell in 1907. In the early 20th century, randomized experiments appeared in agriculture, due to Jerzy Neyman and Ronald A. Fisher. Fisher's experimental research and his writings popularized randomized experiments. The first published Randomized Controlled Trial in medicine appeared in the 1948 paper entitled "Streptomycin treatment of pulmonary tuberculosis", which described a Medical Research Council investigation. One of the authors of that paper was Austin Bradford Hill, who is credited as having conceived the modern RCT. Trial design was further influenced by the large-scale ISIS trials on heart attack treatments that were conducted in the 1980s. By the late 20th century, RCTs were recognized as the standard method for "rational therapeutics" in medicine. As of 2004, more than 150,000 RCTs were in the Cochrane Library. To improve the reporting of RCTs in the medical literature, an international group of scientists and editors published Consolidated Standards of Reporting Trials (CONSORT) Statements in 1996, 2001 and 2010, and these have become widely accepted. Randomization is the process of assigning trial subjects to treatment or control groups using an element of chance to determine the assignments in order to reduce the bias. == Ethics == Although the principle of clinical equipoise ("genuine uncertainty within the expert medical community... about the preferred treatment") common to clinical trials has been applied to RCTs, the ethics of RCTs have special considerations. For one, it has been argued that equipoise itself is insufficient to justify RCTs. For another, "collective equipoise" can conflict with a lack of personal equipoise (e.g., a personal belief that an intervention is effective). Finally, Zelen's design, which has been used for some RCTs, randomizes subjects before they provide informed consent, which may be ethical for RCTs of screening and selected therapies, but is likely unethical "for most therapeutic trials." Although subjects almost always provide informed consent for their participation in an RCT, studies since 1982 have documented that RCT subjects may believe that they are certain to receive treatment that is best for them personally; that is, they do not understand the difference between research and treatment. Further research is necessary to determine the prevalence of and ways to address this "therapeutic misconception". The RCT method variations may also create cultural effects that have not been well understood. For example, patients with terminal illness may join trials in the hope of being cured, even when treatments are unlikely to be successful. 
=== Trial registration === In 2004, the International Committee of Medical Journal Editors (ICMJE) announced that all trials starting enrolment after July 1, 2005, must be registered prior to consideration for publication in one of the 12 member journals of the committee. However, trial registration may still occur late or not at all. Medical journals have been slow in adapting policies requiring mandatory clinical trial registration as a prerequisite for publication. == Classifications == === By study design === One way to classify RCTs is by study design. From most to least common in the healthcare literature, the major categories of RCT study designs are: Parallel-group – each participant is randomly assigned to a group, and all the participants in the group receive (or do not receive) an intervention. Crossover – over time, each participant receives (or does not receive) an intervention in a random sequence. Cluster – pre-existing groups of participants (e.g., villages, schools) are randomly selected to receive (or not receive) an intervention. Factorial – each participant is randomly assigned to a group that receives a particular combination of interventions or non-interventions (e.g., group 1 receives vitamin X and vitamin Y, group 2 receives vitamin X and placebo Y, group 3 receives placebo X and vitamin Y, and group 4 receives placebo X and placebo Y). An analysis of the 616 RCTs indexed in PubMed during December 2006 found that 78% were parallel-group trials, 16% were crossover, 2% were split-body, 2% were cluster, and 2% were factorial. === By outcome of interest (efficacy vs. effectiveness) === RCTs can be classified as "explanatory" or "pragmatic." Explanatory RCTs test efficacy in a research setting with highly selected participants and under highly controlled conditions. In contrast, pragmatic RCTs (pRCTs) test effectiveness in everyday practice with relatively unselected participants and under flexible conditions; in this way, pragmatic RCTs can "inform decisions about practice." === By hypothesis (superiority vs. noninferiority vs. equivalence) === Another classification of RCTs categorizes them as "superiority trials", "noninferiority trials", and "equivalence trials", which differ in methodology and reporting. Most RCTs are superiority trials, in which one intervention is hypothesized to be superior to another in a statistically significant way. Some RCTs are noninferiority trials "to determine whether a new treatment is no worse than a reference treatment." Other RCTs are equivalence trials in which the hypothesis is that two interventions are indistinguishable from each other. == Randomization == The advantages of proper randomization in RCTs include: "It eliminates bias in treatment assignment," specifically selection bias and confounding. "It facilitates blinding (masking) of the identity of treatments from investigators, participants, and assessors." "It permits the use of probability theory to express the likelihood that any difference in outcome between treatment groups merely indicates chance." There are two processes involved in randomizing patients to different interventions. First is choosing a randomization procedure to generate an unpredictable sequence of allocations; this may be a simple random assignment of patients to any of the groups at equal probabilities, may be "restricted", or may be "adaptive." 
A second and more practical issue is allocation concealment, which refers to the stringent precautions taken to ensure that the group assignment of patients is not revealed prior to definitively allocating them to their respective groups. Non-random "systematic" methods of group assignment, such as alternating subjects between one group and the other, can cause "limitless contamination possibilities" and a breach of allocation concealment. However, empirical evidence that adequate randomization changes outcomes relative to inadequate randomization has been difficult to detect. === Procedures === The treatment allocation is the desired proportion of patients in each treatment arm. An ideal randomization procedure would achieve the following goals: Maximize statistical power, especially in subgroup analyses. Generally, equal group sizes maximize statistical power; however, unequal group sizes may be more powerful for some analyses (e.g., multiple comparisons of placebo versus several doses using Dunnett's procedure), and are sometimes desired for non-analytic reasons (e.g., patients may be more motivated to enroll if there is a higher chance of getting the test treatment, or regulatory agencies may require a minimum number of patients exposed to treatment). Minimize selection bias. This may occur if investigators consciously or unconsciously enroll patients preferentially into particular treatment arms. A good randomization procedure will be unpredictable so that investigators cannot guess the next subject's group assignment based on prior treatment assignments. The risk of selection bias is highest when previous treatment assignments are known (as in unblinded studies) or can be guessed (perhaps if a drug has distinctive side effects). Minimize allocation bias (or confounding). This may occur when covariates that affect the outcome are not equally distributed between treatment groups, and the treatment effect is confounded with the effect of the covariates (i.e., an "accidental bias"). If the randomization procedure causes an imbalance in covariates related to the outcome across groups, estimates of effect may be biased if not adjusted for the covariates (which may be unmeasured and therefore impossible to adjust for). However, no single randomization procedure meets those goals in every circumstance, so researchers must select a procedure for a given study based on its advantages and disadvantages. ==== Simple ==== This is a commonly used and intuitive procedure, similar to "repeated fair coin-tossing." Also known as "complete" or "unrestricted" randomization, it is robust against both selection and accidental biases. However, its main drawback is the possibility of imbalanced group sizes in small RCTs. It is therefore recommended only for RCTs with over 200 subjects. ==== Restricted ==== To balance group sizes in smaller RCTs, some form of "restricted" randomization is recommended. The major types of restricted randomization used in RCTs are: Permuted-block randomization or blocked randomization: a "block size" and "allocation ratio" (number of subjects in one group versus the other group) are specified, and subjects are allocated randomly within each block. For example, a block size of 6 and an allocation ratio of 2:1 would lead to random assignment of 4 subjects to one group and 2 to the other.
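As an illustration of the permuted-block scheme just described, the following is a minimal sketch, assuming Python with only the standard library; the block of six entries encodes the 2:1 allocation ratio from the example above, and the function name is purely illustrative, not part of any established trial software.

import random

def permuted_block_sequence(n_subjects, block=("A", "A", "A", "A", "B", "B")):
    """Generate a treatment allocation list using permuted blocks.

    Each block holds the allocation ratio (here 4 "A" : 2 "B" in a block of 6,
    i.e. a 2:1 ratio); the order within every block is shuffled independently,
    which keeps group sizes close to the target ratio after each full block.
    """
    sequence = []
    while len(sequence) < n_subjects:
        shuffled = list(block)
        random.shuffle(shuffled)      # random order within the block
        sequence.extend(shuffled)
    return sequence[:n_subjects]

# Example: allocation list for 12 participants
print(permuted_block_sequence(12))

In a real trial the generated list would be produced and held by someone independent of recruitment, which is where allocation concealment (discussed below) comes in.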
This type of randomization can be combined with "stratified randomization", for example by center in a multicenter trial, to "ensure good balance of participant characteristics in each group." A special case of permuted-block randomization is random allocation, in which the entire sample is treated as one block. The major disadvantage of permuted-block randomization is that even if the block sizes are large and randomly varied, the procedure can lead to selection bias. Another disadvantage is that "proper" analysis of data from permuted-block-randomized RCTs requires stratification by blocks. Adaptive biased-coin randomization methods (of which urn randomization is the most widely known type): In these relatively uncommon methods, the probability of being assigned to a group decreases if the group is overrepresented and increases if the group is underrepresented. The methods are thought to be less affected by selection bias than permuted-block randomization. ==== Adaptive ==== At least two types of "adaptive" randomization procedures have been used in RCTs, but much less frequently than simple or restricted randomization: Covariate-adaptive randomization, of which one type is minimization: The probability of being assigned to a group varies in order to minimize "covariate imbalance." Minimization is reported to have "supporters and detractors"; because only the first subject's group assignment is truly chosen at random, the method does not necessarily eliminate bias on unknown factors. Response-adaptive randomization, also known as outcome-adaptive randomization: The probability of being assigned to a group increases if the responses of the prior patients in the group were favorable. Although arguments have been made that this approach is more ethical than other types of randomization when the probability that a treatment is effective or ineffective increases during the course of an RCT, ethicists have not yet studied the approach in detail. === Allocation concealment === "Allocation concealment" (defined as "the procedure for protecting the randomization process so that the treatment to be allocated is not known before the patient is entered into the study") is important in RCTs. In practice, clinical investigators in RCTs often find it difficult to maintain impartiality. Stories abound of investigators holding up sealed envelopes to lights or ransacking offices to determine group assignments in order to dictate the assignment of their next patient. Such practices introduce selection bias and confounders (both of which should be minimized by randomization), possibly distorting the results of the study. Adequate allocation concealment should prevent patients and investigators from discovering treatment allocation once a study is underway and after the study has concluded. Treatment-related side-effects or adverse events may be specific enough to reveal allocation to investigators or patients, thereby introducing bias or influencing any subjective parameters collected by investigators or requested from subjects. Some standard methods of ensuring allocation concealment include sequentially numbered, opaque, sealed envelopes (SNOSE); sequentially numbered containers; pharmacy-controlled randomization; and central randomization.
It is recommended that allocation concealment methods be included in an RCT's protocol and that these methods be reported in detail in a publication of an RCT's results; however, a 2005 study determined that most RCTs have unclear allocation concealment in their protocols, in their publications, or both. On the other hand, a 2008 study of 146 meta-analyses concluded that the results of RCTs with inadequate or unclear allocation concealment tended to be biased toward beneficial effects only if the RCTs' outcomes were subjective as opposed to objective. === Sample size === The number of treatment units (subjects or groups of subjects) assigned to control and treatment groups affects an RCT's reliability. If the effect of the treatment is small, the number of treatment units in either group may be insufficient for rejecting the null hypothesis in the respective statistical test. The failure to reject the null hypothesis would imply that the treatment shows no statistically significant effect on the treated in a given test. But as the sample size increases, the same RCT may be able to demonstrate a significant effect of the treatment, even if this effect is small. == Blinding == An RCT may be blinded (also called "masked") by "procedures that prevent study participants, caregivers, or outcome assessors from knowing which intervention was received." Unlike allocation concealment, blinding is sometimes inappropriate or impossible to perform in an RCT; for example, if an RCT involves a treatment in which active participation of the patient is necessary (e.g., physical therapy), participants cannot be blinded to the intervention. Traditionally, blinded RCTs have been classified as "single-blind", "double-blind", or "triple-blind"; however, in 2001 and 2006 two studies showed that these terms have different meanings for different people. The 2010 CONSORT Statement specifies that authors and editors should not use the terms "single-blind", "double-blind", and "triple-blind"; instead, reports of blinded RCTs should discuss "If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how." RCTs without blinding are referred to as "unblinded", "open", or (if the intervention is a medication) "open-label". In 2008, a study concluded that the results of unblinded RCTs tended to be biased toward beneficial effects only if the RCTs' outcomes were subjective as opposed to objective; for example, in an RCT of treatments for multiple sclerosis, unblinded neurologists (but not the blinded neurologists) felt that the treatments were beneficial. In pragmatic RCTs, although the participants and providers are often unblinded, it is "still desirable and often possible to blind the assessor or obtain an objective source of data for evaluation of outcomes." == Analysis of data == The types of statistical methods used in RCTs depend on the characteristics of the data and include: For dichotomous (binary) outcome data, logistic regression (e.g., to predict sustained virological response after receipt of peginterferon alfa-2a for hepatitis C) and other methods can be used. For continuous outcome data, analysis of covariance (e.g., for changes in blood lipid levels after receipt of atorvastatin after acute coronary syndrome) tests the effects of predictor variables.
For time-to-event outcome data that may be censored, survival analysis (e.g., Kaplan–Meier estimators and Cox proportional hazards models for time to coronary heart disease after receipt of hormone replacement therapy in menopause) is appropriate. Regardless of the statistical methods used, important considerations in the analysis of RCT data include: Whether an RCT should be stopped early due to interim results. For example, RCTs may be stopped early if an intervention produces "larger than expected benefit or harm", or if "investigators find evidence of no important difference between experimental and control interventions." The extent to which the groups can be analyzed exactly as they existed upon randomization (i.e., whether a so-called "intention-to-treat analysis" is used). A "pure" intention-to-treat analysis is "possible only when complete outcome data are available" for all randomized subjects; when some outcome data are missing, options include analyzing only cases with known outcomes and using imputed data. Nevertheless, the more that analyses can include all participants in the groups to which they were randomized, the less bias that an RCT will be subject to. Whether subgroup analysis should be performed. These are "often discouraged" because multiple comparisons may produce false positive findings that cannot be confirmed by other studies. == Reporting of results == The CONSORT 2010 Statement is "an evidence-based, minimum set of recommendations for reporting RCTs." The CONSORT 2010 checklist contains 25 items (many with sub-items) focusing on "individually randomised, two group, parallel trials" which are the most common type of RCT. For other RCT study designs, "CONSORT extensions" have been published, some examples are: Consort 2010 Statement: Extension to Cluster Randomised Trials Consort 2010 Statement: Non-Pharmacologic Treatment Interventions "Reporting of surrogate endpoints in randomised controlled trial reports (CONSORT-Surrogate): extension checklist with explanation and elaboration" === Relative importance and observational studies === Two studies published in The New England Journal of Medicine in 2000 found that observational studies and RCTs overall produced similar results. The authors of the 2000 findings questioned the belief that "observational studies should not be used for defining evidence-based medical care" and that RCTs' results are "evidence of the highest grade." However, a 2001 study published in Journal of the American Medical Association concluded that "discrepancies beyond chance do occur and differences in estimated magnitude of treatment effect are very common" between observational studies and RCTs. According to a 2014 (updated in 2024) Cochrane review, there is little evidence for significant effect differences between observational studies and randomized controlled trials. To evaluate differences it is necessary to consider things other than design, such as heterogeneity, population, intervention or comparator. Two other lines of reasoning question RCTs' contribution to scientific knowledge beyond other types of studies: If study designs are ranked by their potential for new discoveries, then anecdotal evidence would be at the top of the list, followed by observational studies, followed by RCTs. RCTs may be unnecessary for treatments that have dramatic and rapid effects relative to the expected stable or progressively worse natural course of the condition treated. 
One example is combination chemotherapy including cisplatin for metastatic testicular cancer, which increased the cure rate from 5% to 60% in a 1977 non-randomized study. === Interpretation of statistical results === Like all statistical methods, RCTs are subject to both type I ("false positive") and type II ("false negative") statistical errors. Regarding Type I errors, a typical RCT will use 0.05 (i.e., 1 in 20) as the probability that the RCT will falsely find two equally effective treatments significantly different. Regarding Type II errors, despite the publication of a 1978 paper noting that the sample sizes of many "negative" RCTs were too small to make definitive conclusions about the negative results, by 2005-2006 a sizeable proportion of RCTs still had inaccurate or incompletely reported sample size calculations. === Peer review === Peer review of results is an important part of the scientific method. Reviewers examine the study results for potential problems with design that could lead to unreliable results (for example by creating a systematic bias), evaluate the study in the context of related studies and other evidence, and evaluate whether the study can be reasonably considered to have proven its conclusions. To underscore the need for peer review and the danger of overgeneralizing conclusions, two Boston-area medical researchers performed a randomized controlled trial in which they randomly assigned either a parachute or an empty backpack to 23 volunteers who jumped from either a biplane or a helicopter. The study was able to accurately report that parachutes fail to reduce injury compared to empty backpacks. The key context that limited the general applicability of this conclusion was that the aircraft were parked on the ground, and participants had only jumped about two feet. == Advantages == RCTs are considered to be the most reliable form of scientific evidence in the hierarchy of evidence that influences healthcare policy and practice because RCTs reduce spurious causality and bias. Results of RCTs may be combined in systematic reviews which are increasingly being used in the conduct of evidence-based practice. Some examples of scientific organizations' considering RCTs or systematic reviews of RCTs to be the highest-quality evidence available are: As of 1998, the National Health and Medical Research Council of Australia designated "Level I" evidence as that "obtained from a systematic review of all relevant randomised controlled trials" and "Level II" evidence as that "obtained from at least one properly designed randomised controlled trial." Since at least 2001, in making clinical practice guideline recommendations the United States Preventive Services Task Force has considered both a study's design and its internal validity as indicators of its quality. It has recognized "evidence obtained from at least one properly randomized controlled trial" with good internal validity (i.e., a rating of "I-good") as the highest quality evidence available to it. The GRADE Working Group concluded in 2008 that "randomised trials without important limitations constitute high quality evidence." For issues involving "Therapy/Prevention, Aetiology/Harm", the Oxford Centre for Evidence-based Medicine as of 2011 defined "Level 1a" evidence as a systematic review of RCTs that are consistent with each other, and "Level 1b" evidence as an "individual RCT (with narrow Confidence Interval)." 
Notable RCTs with unexpected results that contributed to changes in clinical practice include: After Food and Drug Administration approval, the antiarrhythmic agents flecainide and encainide came to market in 1986 and 1987 respectively. The non-randomized studies concerning the drugs were characterized as "glowing", and their sales increased to a combined total of approximately 165,000 prescriptions per month in early 1989. In that year, however, a preliminary report of an RCT concluded that the two drugs increased mortality. Sales of the drugs then decreased. Prior to 2002, based on observational studies, it was routine for physicians to prescribe hormone replacement therapy for post-menopausal women to prevent myocardial infarction. In 2002 and 2004, however, published RCTs from the Women's Health Initiative claimed that women taking hormone replacement therapy with estrogen plus progestin had a higher rate of myocardial infarctions than women on a placebo, and that estrogen-only hormone replacement therapy caused no reduction in the incidence of coronary heart disease. Possible explanations for the discrepancy between the observational studies and the RCTs involved differences in methodology, in the hormone regimens used, and in the populations studied. The use of hormone replacement therapy decreased after publication of the RCTs. == Disadvantages == Many papers discuss the disadvantages of RCTs. Among the most frequently cited drawbacks are: === Time and costs === RCTs can be expensive; one study found 28 Phase III RCTs funded by the National Institute of Neurological Disorders and Stroke prior to 2000 with a total cost of US$335 million, for a mean cost of US$12 million per RCT. Nevertheless, the return on investment of RCTs may be high, in that the same study projected that the 28 RCTs produced a "net benefit to society at 10-years" of 46 times the cost of the trials program, based on evaluating a quality-adjusted life year as equal to the prevailing mean per capita gross domestic product. The conduct of an RCT takes several years until being published; thus, data is restricted from the medical community for long years and may be of less relevance at time of publication. It is costly to maintain RCTs for the years or decades that would be ideal for evaluating some interventions. Interventions to prevent events that occur only infrequently (e.g., sudden infant death syndrome) and uncommon adverse outcomes (e.g., a rare side effect of a drug) would require RCTs with extremely large sample sizes and may, therefore, best be assessed by observational studies. Due to the costs of running RCTs, these usually only inspect one variable or very few variables, rarely reflecting the full picture of a complicated medical situation; whereas the case report, for example, can detail many aspects of the patient's medical situation (e.g. patient history, physical examination, diagnosis, psychosocial aspects, follow up). === Conflict of interest dangers === A 2011 study done to disclose possible conflicts of interests in underlying research studies used for medical meta-analyses reviewed 29 meta-analyses and found that conflicts of interests in the studies underlying the meta-analyses were rarely disclosed. The 29 meta-analyses included 11 from general medicine journals; 15 from specialty medicine journals, and 3 from the Cochrane Database of Systematic Reviews. The 29 meta-analyses reviewed an aggregate of 509 randomized controlled trials (RCTs). 
Of these, 318 RCTs reported funding sources with 219 (69%) industry funded. 132 of the 509 RCTs reported author conflict of interest disclosures, with 91 studies (69%) disclosing industry financial ties with one or more authors. The information was, however, seldom reflected in the meta-analyses. Only two (7%) reported RCT funding sources and none reported RCT author-industry ties. The authors concluded "without acknowledgment of COI due to industry funding or author industry financial ties from RCTs included in meta-analyses, readers' understanding and appraisal of the evidence from the meta-analysis may be compromised." Some RCTs are fully or partly funded by the health care industry (e.g., the pharmaceutical industry) as opposed to government, nonprofit, or other sources. A systematic review published in 2003 found four 1986–2002 articles comparing industry-sponsored and nonindustry-sponsored RCTs, and in all the articles there was a correlation of industry sponsorship and positive study outcome. A 2004 study of 1999–2001 RCTs published in leading medical and surgical journals determined that industry-funded RCTs "are more likely to be associated with statistically significant pro-industry findings." These results have been mirrored in trials in surgery, where although industry funding did not affect the rate of trial discontinuation it was however associated with a lower odds of publication for completed trials. One possible reason for the pro-industry results in industry-funded published RCTs is publication bias. Other authors have cited the differing goals of academic and industry sponsored research as contributing to the difference. Commercial sponsors may be more focused on performing trials of drugs that have already shown promise in early stage trials, and on replicating previous positive results to fulfill regulatory requirements for drug approval. === Ethics === If a disruptive innovation in medical technology is developed, it may be difficult to test this ethically in an RCT if it becomes "obvious" that the control subjects have poorer outcomes—either due to other foregoing testing, or within the initial phase of the RCT itself. Ethically it may be necessary to abort the RCT prematurely, and getting ethics approval (and patient agreement) to withhold the innovation from the control group in future RCTs may not be feasible. Historical control trials (HCT) exploit the data of previous RCTs to reduce the sample size; however, these approaches are controversial in the scientific community and must be handled with care. == In social science == Due to the recent emergence of RCTs in social science, the use of RCTs in social sciences is a contested issue. Some writers from a medical or health background have argued that existing research in a range of social science disciplines lacks rigour, and should be improved by greater use of randomized control trials. === Transport science === Researchers in transport science argue that public spending on programmes such as school travel plans could not be justified unless their efficacy is demonstrated by randomized controlled trials. Graham-Rowe and colleagues reviewed 77 evaluations of transport interventions found in the literature, categorising them into 5 "quality levels". They concluded that most of the studies were of low quality and advocated the use of randomized controlled trials wherever possible in future transport research. Dr. 
Steve Melia took issue with these conclusions, arguing that claims about the advantages of RCTs, in establishing causality and avoiding bias, have been exaggerated. He proposed the following eight criteria for the use of RCTs in contexts where interventions must change human behaviour to be effective: The intervention: Has not been applied to all members of a unique group of people (e.g. the population of a whole country, all employees of a unique organisation etc.) Is applied in a context or setting similar to that which applies to the control group Can be isolated from other activities—and the purpose of the study is to assess this isolated effect Has a short timescale between its implementation and maturity of its effects And the causal mechanisms: Are either known to the researchers, or else all possible alternatives can be tested Do not involve significant feedback mechanisms between the intervention group and external environments Have a stable and predictable relationship to exogenous factors Would act in the same way if the control group and intervention group were reversed === Criminology === A 2005 review found 83 randomized experiments in criminology published in 1982–2004, compared with only 35 published in 1957–1981. The authors classified the studies they found into five categories: "policing", "prevention", "corrections", "court", and "community". Focusing only on offending behavior programs, Hollin (2008) argued that RCTs may be difficult to implement (e.g., if an RCT required "passing sentences that would randomly assign offenders to programmes") and therefore that experiments with quasi-experimental design are still necessary. === Education === RCTs have been used in evaluating a number of educational interventions. Between 1980 and 2016, over 1,000 reports of RCTs have been published. For example, a 2009 study randomized 260 elementary school teachers' classrooms to receive or not receive a program of behavioral screening, classroom intervention, and parent training, and then measured the behavioral and academic performance of their students. Another 2009 study randomized classrooms for 678 first-grade children to receive a classroom-centered intervention, a parent-centered intervention, or no intervention, and then followed their academic outcomes through age 19. == Criticism == A 2018 review of the 10 most cited randomised controlled trials noted poor distribution of background traits, difficulties with blinding, and discussed other assumptions and biases inherent in randomised controlled trials. These include the "unique time period assessment bias", the "background traits remain constant assumption", the "average treatment effects limitation", the "simple treatment at the individual level limitation", the "all preconditions are fully met assumption", the "quantitative variable limitation" and the "placebo only or conventional treatment only limitation". == See also == Drug development Hypothesis testing Impact evaluation Jadad scale Pipeline planning Patient and public involvement Observational study Blinded experiment Statistical inference Royal Commission on Animal Magnetism – 1784 French scientific bodies' investigations involving systematic controlled trials == References == == Further reading ==
Wikipedia/Randomized_controlled_trial
Stochastic approximation methods are a family of iterative methods typically used for root-finding problems or for optimization problems. The recursive update rules of stochastic approximation methods can be used, among other things, for solving linear systems when the collected data is corrupted by noise, or for approximating extreme values of functions which cannot be computed directly, but only estimated via noisy observations. In a nutshell, stochastic approximation algorithms deal with a function of the form f ( θ ) = E ξ ⁡ [ F ( θ , ξ ) ] {\textstyle f(\theta )=\operatorname {E} _{\xi }[F(\theta ,\xi )]} which is the expected value of a function depending on a random variable ξ {\textstyle \xi } . The goal is to recover properties of such a function f {\textstyle f} without evaluating it directly. Instead, stochastic approximation algorithms use random samples of F ( θ , ξ ) {\textstyle F(\theta ,\xi )} to efficiently approximate properties of f {\textstyle f} such as zeros or extrema. Recently, stochastic approximations have found extensive applications in the fields of statistics and machine learning, especially in settings with big data. These applications range from stochastic optimization methods and algorithms, to online forms of the EM algorithm, reinforcement learning via temporal differences, and deep learning, and others. Stochastic approximation algorithms have also been used in the social sciences to describe collective dynamics: fictitious play in learning theory and consensus algorithms can be studied using their theory. The earliest, and prototypical, algorithms of this kind are the Robbins–Monro and Kiefer–Wolfowitz algorithms introduced respectively in 1951 and 1952. == Robbins–Monro algorithm == The Robbins–Monro algorithm, introduced in 1951 by Herbert Robbins and Sutton Monro, presented a methodology for solving a root finding problem, where the function is represented as an expected value. Assume that we have a function M ( θ ) {\textstyle M(\theta )} , and a constant α {\textstyle \alpha } , such that the equation M ( θ ) = α {\textstyle M(\theta )=\alpha } has a unique root at θ ∗ . {\textstyle \theta ^{*}.} It is assumed that while we cannot directly observe the function M ( θ ) , {\textstyle M(\theta ),} we can instead obtain measurements of the random variable N ( θ ) {\textstyle N(\theta )} where E ⁡ [ N ( θ ) ] = M ( θ ) {\textstyle \operatorname {E} [N(\theta )]=M(\theta )} . The structure of the algorithm is to then generate iterates of the form: θ n + 1 = θ n − a n ( N ( θ n ) − α ) {\displaystyle \theta _{n+1}=\theta _{n}-a_{n}(N(\theta _{n})-\alpha )} Here, a 1 , a 2 , … {\displaystyle a_{1},a_{2},\dots } is a sequence of positive step sizes. 
Robbins and Monro proved, Theorem 2 that θ n {\displaystyle \theta _{n}} converges in L 2 {\displaystyle L^{2}} (and hence also in probability) to θ ∗ {\displaystyle \theta ^{*}} , and Blum later proved the convergence is actually with probability one, provided that: N ( θ ) {\textstyle N(\theta )} is uniformly bounded, M ( θ ) {\textstyle M(\theta )} is nondecreasing, M ′ ( θ ∗ ) {\textstyle M'(\theta ^{*})} exists and is positive, and The sequence a n {\textstyle a_{n}} satisfies the following requirements: ∑ n = 0 ∞ a n = ∞ and ∑ n = 0 ∞ a n 2 < ∞ {\displaystyle \qquad \sum _{n=0}^{\infty }a_{n}=\infty \quad {\mbox{ and }}\quad \sum _{n=0}^{\infty }a_{n}^{2}<\infty \quad } A particular sequence of steps which satisfy these conditions, and was suggested by Robbins–Monro, have the form: a n = a / n {\textstyle a_{n}=a/n} , for a > 0 {\textstyle a>0} . Other series, such as a n = 1 n ln ⁡ n , 1 n ln ⁡ n ln ⁡ ln ⁡ n , … {\displaystyle a_{n}={\frac {1}{n\ln n}},{\frac {1}{n\ln n\ln \ln n}},\dots } are possible but in order to average out the noise in N ( θ ) {\textstyle N(\theta )} , the above condition must be met. === Example === Consider the problem of estimating the mean θ ∗ {\displaystyle \theta ^{*}} of a probability distribution from a stream of independent samples X 1 , X 2 , … {\displaystyle X_{1},X_{2},\dots } . Let N ( θ ) := θ − X {\displaystyle N(\theta ):=\theta -X} , then the unique solution to E ⁡ [ N ( θ ) ] = 0 {\textstyle \operatorname {E} [N(\theta )]=0} is the desired mean θ ∗ {\displaystyle \theta ^{*}} . The RM algorithm gives us θ n + 1 = θ n − a n ( θ n − X n ) {\displaystyle \theta _{n+1}=\theta _{n}-a_{n}(\theta _{n}-X_{n})} This is equivalent to stochastic gradient descent with loss function L ( θ ) = 1 2 ‖ X − θ ‖ 2 {\displaystyle L(\theta )={\frac {1}{2}}\|X-\theta \|^{2}} . It is also equivalent to a weighted average: θ n + 1 = ( 1 − a n ) θ n + a n X n {\displaystyle \theta _{n+1}=(1-a_{n})\theta _{n}+a_{n}X_{n}} In general, if there exists some function L {\displaystyle L} such that ∇ L ( θ ) = N ( θ ) − α {\displaystyle \nabla L(\theta )=N(\theta )-\alpha } , then the Robbins–Monro algorithm is equivalent to stochastic gradient descent with loss function L ( θ ) {\displaystyle L(\theta )} . However, the RM algorithm does not require L {\displaystyle L} to exist in order to converge. === Complexity results === If f ( θ ) {\textstyle f(\theta )} is twice continuously differentiable, and strongly convex, and the minimizer of f ( θ ) {\textstyle f(\theta )} belongs to the interior of Θ {\textstyle \Theta } , then the Robbins–Monro algorithm will achieve the asymptotically optimal convergence rate, with respect to the objective function, being E ⁡ [ f ( θ n ) − f ∗ ] = O ( 1 / n ) {\textstyle \operatorname {E} [f(\theta _{n})-f^{*}]=O(1/n)} , where f ∗ {\textstyle f^{*}} is the minimal value of f ( θ ) {\textstyle f(\theta )} over θ ∈ Θ {\textstyle \theta \in \Theta } . Conversely, in the general convex case, where we lack both the assumption of smoothness and strong convexity, Nemirovski and Yudin have shown that the asymptotically optimal convergence rate, with respect to the objective function values, is O ( 1 / n ) {\textstyle O(1/{\sqrt {n}})} . They have also proven that this rate cannot be improved. 
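As a concrete illustration of the mean-estimation example above, here is a minimal sketch of the Robbins–Monro iteration, assuming Python with NumPy; the step sizes a_n = 1/(n+1) follow the a/n schedule suggested by Robbins and Monro, and the data-generating parameters are chosen arbitrarily for the demonstration.

import numpy as np

rng = np.random.default_rng(0)
true_mean = 3.0
samples = rng.normal(loc=true_mean, scale=1.0, size=10_000)

theta = 0.0                       # initial guess theta_0
for n, x in enumerate(samples):
    a_n = 1.0 / (n + 1)           # satisfies sum a_n = infinity and sum a_n^2 < infinity
    # N(theta) = theta - X_n is a noisy observation of M(theta) = theta - true_mean,
    # whose root is the mean; the update is theta_{n+1} = theta_n - a_n * N(theta_n)
    theta = theta - a_n * (theta - x)

print(theta)   # close to 3.0; with a_n = 1/(n+1) the iterate is exactly the running sample mean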
=== Subsequent developments and Polyak–Ruppert averaging === While the Robbins–Monro algorithm is theoretically able to achieve O ( 1 / n ) {\textstyle O(1/n)} under the assumption of twice continuous differentiability and strong convexity, it can perform quite poorly upon implementation. This is primarily due to the fact that the algorithm is very sensitive to the choice of the step size sequence, and the supposed asymptotically optimal step size policy can be quite harmful in the beginning. Chung (1954) and Fabian (1968) showed that we would achieve optimal convergence rate O ( 1 / n ) {\textstyle O(1/{\sqrt {n}})} with a n = ▽ 2 f ( θ ∗ ) − 1 / n {\textstyle a_{n}=\bigtriangledown ^{2}f(\theta ^{*})^{-1}/n} (or a n = 1 ( n M ′ ( θ ∗ ) ) {\textstyle a_{n}={\frac {1}{(nM'(\theta ^{*}))}}} ). Lai and Robbins designed adaptive procedures to estimate M ′ ( θ ∗ ) {\textstyle M'(\theta ^{*})} such that θ n {\textstyle \theta _{n}} has minimal asymptotic variance. However the application of such optimal methods requires much a priori information which is hard to obtain in most situations. To overcome this shortfall, Polyak (1991) and Ruppert (1988) independently developed a new optimal algorithm based on the idea of averaging the trajectories. Polyak and Juditsky also presented a method of accelerating Robbins–Monro for linear and non-linear root-searching problems through the use of longer steps, and averaging of the iterates. The algorithm would have the following structure: θ n + 1 − θ n = a n ( α − N ( θ n ) ) , θ ¯ n = 1 n ∑ i = 0 n − 1 θ i {\displaystyle \theta _{n+1}-\theta _{n}=a_{n}(\alpha -N(\theta _{n})),\qquad {\bar {\theta }}_{n}={\frac {1}{n}}\sum _{i=0}^{n-1}\theta _{i}} The convergence of θ ¯ n {\displaystyle {\bar {\theta }}_{n}} to the unique root θ ∗ {\displaystyle \theta ^{*}} relies on the condition that the step sequence { a n } {\displaystyle \{a_{n}\}} decreases sufficiently slowly. That is A1) a n → 0 , a n − a n + 1 a n = o ( a n ) {\displaystyle a_{n}\rightarrow 0,\qquad {\frac {a_{n}-a_{n+1}}{a_{n}}}=o(a_{n})} Therefore, the sequence a n = n − α {\textstyle a_{n}=n^{-\alpha }} with 0 < α < 1 {\textstyle 0<\alpha <1} satisfies this restriction, but α = 1 {\textstyle \alpha =1} does not, hence the longer steps. Under the assumptions outlined in the Robbins–Monro algorithm, the resulting modification will result in the same asymptotically optimal convergence rate O ( 1 / n ) {\textstyle O(1/{\sqrt {n}})} yet with a more robust step size policy. Prior to this, the idea of using longer steps and averaging the iterates had already been proposed by Nemirovski and Yudin for the cases of solving the stochastic optimization problem with continuous convex objectives and for convex-concave saddle point problems. These algorithms were observed to attain the nonasymptotic rate O ( 1 / n ) {\textstyle O(1/{\sqrt {n}})} . 
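The following is a minimal sketch of Polyak–Ruppert averaging on the same mean-estimation problem, assuming Python with NumPy; the choice a_n = n^(-2/3) is one instance of the slower-decaying sequences a_n = n^(-alpha), 0 < alpha < 1, mentioned above, and all variable names are illustrative.

import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(loc=3.0, scale=1.0, size=10_000)

theta = 0.0
theta_sum = 0.0
for n, x in enumerate(samples, start=1):
    a_n = n ** (-2.0 / 3.0)              # "longer" steps than 1/n, as in condition A1
    theta = theta - a_n * (theta - x)    # raw Robbins-Monro iterate
    theta_sum += theta                   # accumulate the trajectory for averaging

theta_bar = theta_sum / len(samples)     # Polyak-Ruppert average of the iterates
print(theta, theta_bar)                  # the averaged iterate is typically the less noisy estimate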
A more general result is given in Chapter 11 of Kushner and Yin by defining interpolated time t n = ∑ i = 0 n − 1 a i {\textstyle t_{n}=\sum _{i=0}^{n-1}a_{i}} , interpolated process θ n ( ⋅ ) {\textstyle \theta ^{n}(\cdot )} and interpolated normalized process U n ( ⋅ ) {\textstyle U^{n}(\cdot )} as θ n ( t ) = θ n + i , U n ( t ) = ( θ n + i − θ ∗ ) / a n + i for t ∈ [ t n + i − t n , t n + i + 1 − t n ) , i ≥ 0 {\displaystyle \theta ^{n}(t)=\theta _{n+i},\quad U^{n}(t)=(\theta _{n+i}-\theta ^{*})/{\sqrt {a_{n+i}}}\quad {\mbox{for}}\quad t\in [t_{n+i}-t_{n},t_{n+i+1}-t_{n}),i\geq 0} Let the iterate average be Θ n = a n t ∑ i = n n + t / a n − 1 θ i {\displaystyle \Theta _{n}={\frac {a_{n}}{t}}\sum _{i=n}^{n+t/a_{n}-1}\theta _{i}} and the associate normalized error to be U ^ n ( t ) = a n t ∑ i = n n + t / a n − 1 ( θ i − θ ∗ ) {\displaystyle {\hat {U}}^{n}(t)={\frac {\sqrt {a_{n}}}{t}}\sum _{i=n}^{n+t/a_{n}-1}(\theta _{i}-\theta ^{*})} . With assumption A1) and the following A2) A2) There is a Hurwitz matrix A {\textstyle A} and a symmetric and positive-definite matrix Σ {\textstyle \Sigma } such that { U n ( ⋅ ) } {\textstyle \{U^{n}(\cdot )\}} converges weakly to U ( ⋅ ) {\textstyle U(\cdot )} , where U ( ⋅ ) {\textstyle U(\cdot )} is the statisolution to d U = A U d t + Σ 1 / 2 d w {\displaystyle dU=AU\,dt+\Sigma ^{1/2}\,dw} where w ( ⋅ ) {\textstyle w(\cdot )} is a standard Wiener process. satisfied, and define V ¯ = ( A − 1 ) ′ Σ ( A ′ ) − 1 {\textstyle {\bar {V}}=(A^{-1})'\Sigma (A')^{-1}} . Then for each t {\textstyle t} , U ^ n ( t ) ⟶ D N ( 0 , V t ) , where V t = V ¯ / t + O ( 1 / t 2 ) . {\displaystyle {\hat {U}}^{n}(t){\stackrel {\mathcal {D}}{\longrightarrow }}{\mathcal {N}}(0,V_{t}),\quad {\text{where}}\quad V_{t}={\bar {V}}/t+O(1/t^{2}).} The success of the averaging idea is because of the time scale separation of the original sequence { θ n } {\textstyle \{\theta _{n}\}} and the averaged sequence { Θ n } {\textstyle \{\Theta _{n}\}} , with the time scale of the former one being faster. === Application in stochastic optimization === Suppose we want to solve the following stochastic optimization problem g ( θ ∗ ) = min θ ∈ Θ E ⁡ [ Q ( θ , X ) ] , {\displaystyle g(\theta ^{*})=\min _{\theta \in \Theta }\operatorname {E} [Q(\theta ,X)],} where g ( θ ) = E ⁡ [ Q ( θ , X ) ] {\textstyle g(\theta )=\operatorname {E} [Q(\theta ,X)]} is differentiable and convex, then this problem is equivalent to find the root θ ∗ {\displaystyle \theta ^{*}} of ∇ g ( θ ) = 0 {\displaystyle \nabla g(\theta )=0} . Here Q ( θ , X ) {\displaystyle Q(\theta ,X)} can be interpreted as some "observed" cost as a function of the chosen θ {\displaystyle \theta } and random effects X {\displaystyle X} . In practice, it might be hard to get an analytical form of ∇ g ( θ ) {\displaystyle \nabla g(\theta )} , Robbins–Monro method manages to generate a sequence ( θ n ) n ≥ 0 {\displaystyle (\theta _{n})_{n\geq 0}} to approximate θ ∗ {\displaystyle \theta ^{*}} if one can generate ( X n ) n ≥ 0 {\displaystyle (X_{n})_{n\geq 0}} , in which the conditional expectation of X n {\displaystyle X_{n}} given θ n {\displaystyle \theta _{n}} is exactly ∇ g ( θ n ) {\displaystyle \nabla g(\theta _{n})} , i.e. X n {\displaystyle X_{n}} is simulated from a conditional distribution defined by E ⁡ [ H ( θ , X ) | θ = θ n ] = ∇ g ( θ n ) . 
{\displaystyle \operatorname {E} [H(\theta ,X)|\theta =\theta _{n}]=\nabla g(\theta _{n}).} Here H ( θ , X ) {\displaystyle H(\theta ,X)} is an unbiased estimator of ∇ g ( θ ) {\displaystyle \nabla g(\theta )} . If X {\displaystyle X} depends on θ {\displaystyle \theta } , there is in general no natural way of generating a random outcome H ( θ , X ) {\displaystyle H(\theta ,X)} that is an unbiased estimator of the gradient. In some special cases when either IPA or likelihood ratio methods are applicable, then one is able to obtain an unbiased gradient estimator H ( θ , X ) {\displaystyle H(\theta ,X)} . If X {\displaystyle X} is viewed as some "fundamental" underlying random process that is generated independently of θ {\displaystyle \theta } , and under some regularization conditions for derivative-integral interchange operations so that E ⁡ [ ∂ ∂ θ Q ( θ , X ) ] = ∇ g ( θ ) {\displaystyle \operatorname {E} {\Big [}{\frac {\partial }{\partial \theta }}Q(\theta ,X){\Big ]}=\nabla g(\theta )} , then H ( θ , X ) = ∂ ∂ θ Q ( θ , X ) {\displaystyle H(\theta ,X)={\frac {\partial }{\partial \theta }}Q(\theta ,X)} gives the fundamental gradient unbiased estimate. However, for some applications we have to use finite-difference methods in which H ( θ , X ) {\displaystyle H(\theta ,X)} has a conditional expectation close to ∇ g ( θ ) {\displaystyle \nabla g(\theta )} but not exactly equal to it. We then define a recursion analogously to Newton's Method in the deterministic algorithm: θ n + 1 = θ n − ε n H ( θ n , X n + 1 ) . {\displaystyle \theta _{n+1}=\theta _{n}-\varepsilon _{n}H(\theta _{n},X_{n+1}).} ==== Convergence of the algorithm ==== The following result gives sufficient conditions on θ n {\displaystyle \theta _{n}} for the algorithm to converge: C1) ε n ≥ 0 , ∀ n ≥ 0. {\displaystyle \varepsilon _{n}\geq 0,\forall \;n\geq 0.} C2) ∑ n = 0 ∞ ε n = ∞ {\displaystyle \sum _{n=0}^{\infty }\varepsilon _{n}=\infty } C3) ∑ n = 0 ∞ ε n 2 < ∞ {\displaystyle \sum _{n=0}^{\infty }\varepsilon _{n}^{2}<\infty } C4) | X n | ≤ B , for a fixed bound B . {\displaystyle |X_{n}|\leq B,{\text{ for a fixed bound }}B.} C5) g ( θ ) is strictly convex, i.e. {\displaystyle g(\theta ){\text{ is strictly convex, i.e.}}} inf δ ≤ | θ − θ ∗ | ≤ 1 / δ ⟨ θ − θ ∗ , ∇ g ( θ ) ⟩ > 0 , for every 0 < δ < 1. {\displaystyle \inf _{\delta \leq |\theta -\theta ^{*}|\leq 1/\delta }\langle \theta -\theta ^{*},\nabla g(\theta )\rangle >0,{\text{ for every }}0<\delta <1.} Then θ n {\displaystyle \theta _{n}} converges to θ ∗ {\displaystyle \theta ^{*}} almost surely. Here are some intuitive explanations about these conditions. Suppose H ( θ n , X n + 1 ) {\displaystyle H(\theta _{n},X_{n+1})} is a uniformly bounded random variables. If C2) is not satisfied, i.e. ∑ n = 0 ∞ ε n < ∞ {\displaystyle \sum _{n=0}^{\infty }\varepsilon _{n}<\infty } , then θ n − θ 0 = − ∑ i = 0 n − 1 ε i H ( θ i , X i + 1 ) {\displaystyle \theta _{n}-\theta _{0}=-\sum _{i=0}^{n-1}\varepsilon _{i}H(\theta _{i},X_{i+1})} is a bounded sequence, so the iteration cannot converge to θ ∗ {\displaystyle \theta ^{*}} if the initial guess θ 0 {\displaystyle \theta _{0}} is too far away from θ ∗ {\displaystyle \theta ^{*}} . As for C3) note that if θ n {\displaystyle \theta _{n}} converges to θ ∗ {\displaystyle \theta ^{*}} then θ n + 1 − θ n = − ε n H ( θ n , X n + 1 ) → 0 , as n → ∞ . 
{\displaystyle \theta _{n+1}-\theta _{n}=-\varepsilon _{n}H(\theta _{n},X_{n+1})\rightarrow 0,{\text{ as }}n\rightarrow \infty .} so we must have ε n ↓ 0 {\displaystyle \varepsilon _{n}\downarrow 0} ,and the condition C3) ensures it. A natural choice would be ε n = 1 / n {\displaystyle \varepsilon _{n}=1/n} . Condition C5) is a fairly stringent condition on the shape of g ( θ ) {\displaystyle g(\theta )} ; it gives the search direction of the algorithm. ==== Example (where the stochastic gradient method is appropriate) ==== Suppose Q ( θ , X ) = f ( θ ) + θ T X {\displaystyle Q(\theta ,X)=f(\theta )+\theta ^{T}X} , where f {\displaystyle f} is differentiable and X ∈ R p {\displaystyle X\in \mathbb {R} ^{p}} is a random variable independent of θ {\displaystyle \theta } . Then g ( θ ) = E ⁡ [ Q ( θ , X ) ] = f ( θ ) + θ T E ⁡ X {\displaystyle g(\theta )=\operatorname {E} [Q(\theta ,X)]=f(\theta )+\theta ^{T}\operatorname {E} X} depends on the mean of X {\displaystyle X} , and the stochastic gradient method would be appropriate in this problem. We can choose H ( θ , X ) = ∂ ∂ θ Q ( θ , X ) = ∂ ∂ θ f ( θ ) + X . {\displaystyle H(\theta ,X)={\frac {\partial }{\partial \theta }}Q(\theta ,X)={\frac {\partial }{\partial \theta }}f(\theta )+X.} == Kiefer–Wolfowitz algorithm == The Kiefer–Wolfowitz algorithm was introduced in 1952 by Jacob Wolfowitz and Jack Kiefer, and was motivated by the publication of the Robbins–Monro algorithm. However, the algorithm was presented as a method which would stochastically estimate the maximum of a function. Let M ( x ) {\displaystyle M(x)} be a function which has a maximum at the point θ {\displaystyle \theta } . It is assumed that M ( x ) {\displaystyle M(x)} is unknown; however, certain observations N ( x ) {\displaystyle N(x)} , where E ⁡ [ N ( x ) ] = M ( x ) {\displaystyle \operatorname {E} [N(x)]=M(x)} , can be made at any point x {\displaystyle x} . The structure of the algorithm follows a gradient-like method, with the iterates being generated as x n + 1 = x n + a n ⋅ ( N ( x n + c n ) − N ( x n − c n ) 2 c n ) {\displaystyle x_{n+1}=x_{n}+a_{n}\cdot \left({\frac {N(x_{n}+c_{n})-N(x_{n}-c_{n})}{2c_{n}}}\right)} where N ( x n + c n ) {\displaystyle N(x_{n}+c_{n})} and N ( x n − c n ) {\displaystyle N(x_{n}-c_{n})} are independent. At every step, the gradient of M ( x ) {\displaystyle M(x)} is approximated akin to a central difference method with h = 2 c n {\displaystyle h=2c_{n}} . So the sequence { c n } {\displaystyle \{c_{n}\}} specifies the sequence of finite difference widths used for the gradient approximation, while the sequence { a n } {\displaystyle \{a_{n}\}} specifies a sequence of positive step sizes taken along that direction. Kiefer and Wolfowitz proved that, if M ( x ) {\displaystyle M(x)} satisfied certain regularity conditions, then x n {\displaystyle x_{n}} will converge to θ {\displaystyle \theta } in probability as n → ∞ {\displaystyle n\to \infty } , and later Blum in 1954 showed x n {\displaystyle x_{n}} converges to θ {\displaystyle \theta } almost surely, provided that: Var ⁡ ( N ( x ) ) ≤ S < ∞ {\displaystyle \operatorname {Var} (N(x))\leq S<\infty } for all x {\displaystyle x} . The function M ( x ) {\displaystyle M(x)} has a unique point of maximum (minimum) and is strong concave (convex) The algorithm was first presented with the requirement that the function M ( ⋅ ) {\displaystyle M(\cdot )} maintains strong global convexity (concavity) over the entire feasible space. 
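Before turning to those convexity requirements, a minimal sketch of the finite-difference iteration above may help; it assumes Python with NumPy, a one-dimensional problem, the sequences a_n = 1/n and c_n = n^(-1/3) recommended later in this section, and an arbitrary test function M(x) = -(x - 2)^2 observed with additive noise.

import numpy as np

rng = np.random.default_rng(2)

def noisy_M(x):
    # N(x): an unbiased but noisy observation of M(x) = -(x - 2)^2, which is maximized at theta = 2
    return -(x - 2.0) ** 2 + rng.normal(scale=0.1)

x = 0.0
for n in range(1, 5001):
    a_n = 1.0 / n                 # step-size sequence
    c_n = n ** (-1.0 / 3.0)       # finite-difference width sequence
    grad_estimate = (noisy_M(x + c_n) - noisy_M(x - c_n)) / (2.0 * c_n)
    x = x + a_n * grad_estimate   # move along the estimated gradient toward the maximum

print(x)   # should end up near 2.0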
Given this condition is too restrictive to impose over the entire domain, Kiefer and Wolfowitz proposed that it is sufficient to impose the condition over a compact set C 0 ⊂ R d {\displaystyle C_{0}\subset \mathbb {R} ^{d}} which is known to include the optimal solution. The function M ( x ) {\displaystyle M(x)} satisfies the regularity conditions as follows: There exists β > 0 {\displaystyle \beta >0} and B > 0 {\displaystyle B>0} such that | x ′ − θ | + | x ″ − θ | < β ⟹ | M ( x ′ ) − M ( x ″ ) | < B | x ′ − x ″ | {\displaystyle |x'-\theta |+|x''-\theta |<\beta \quad \Longrightarrow \quad |M(x')-M(x'')|<B|x'-x''|} There exists ρ > 0 {\displaystyle \rho >0} and R > 0 {\displaystyle R>0} such that | x ′ − x ″ | < ρ ⟹ | M ( x ′ ) − M ( x ″ ) | < R {\displaystyle |x'-x''|<\rho \quad \Longrightarrow \quad |M(x')-M(x'')|<R} For every δ > 0 {\displaystyle \delta >0} , there exists some π ( δ ) > 0 {\displaystyle \pi (\delta )>0} such that | z − θ | > δ ⟹ inf δ / 2 > ε > 0 | M ( z + ε ) − M ( z − ε ) | ε > π ( δ ) {\displaystyle |z-\theta |>\delta \quad \Longrightarrow \quad \inf _{\delta /2>\varepsilon >0}{\frac {|M(z+\varepsilon )-M(z-\varepsilon )|}{\varepsilon }}>\pi (\delta )} The selected sequences { a n } {\displaystyle \{a_{n}\}} and { c n } {\displaystyle \{c_{n}\}} must be infinite sequences of positive numbers such that c n → 0 as n → ∞ {\displaystyle \quad c_{n}\rightarrow 0\quad {\text{as}}\quad n\to \infty } ∑ n = 0 ∞ a n = ∞ {\displaystyle \sum _{n=0}^{\infty }a_{n}=\infty } ∑ n = 0 ∞ a n c n < ∞ {\displaystyle \sum _{n=0}^{\infty }a_{n}c_{n}<\infty } ∑ n = 0 ∞ a n 2 c n − 2 < ∞ {\displaystyle \sum _{n=0}^{\infty }a_{n}^{2}c_{n}^{-2}<\infty } A suitable choice of sequences, as recommended by Kiefer and Wolfowitz, would be a n = 1 / n {\displaystyle a_{n}=1/n} and c n = n − 1 / 3 {\displaystyle c_{n}=n^{-1/3}} . === Subsequent developments and important issues === The Kiefer Wolfowitz algorithm requires that for each gradient computation, at least d + 1 {\displaystyle d+1} different parameter values must be simulated for every iteration of the algorithm, where d {\displaystyle d} is the dimension of the search space. This means that when d {\displaystyle d} is large, the Kiefer–Wolfowitz algorithm will require substantial computational effort per iteration, leading to slow convergence. To address this problem, Spall proposed the use of simultaneous perturbations to estimate the gradient. This method would require only two simulations per iteration, regardless of the dimension d {\displaystyle d} . In the conditions required for convergence, the ability to specify a predetermined compact set that fulfills strong convexity (or concavity) and contains the unique solution can be difficult to find. With respect to real world applications, if the domain is quite large, these assumptions can be fairly restrictive and highly unrealistic. == Further developments == An extensive theoretical literature has grown up around these algorithms, concerning conditions for convergence, rates of convergence, multivariate and other generalizations, proper choice of step size, possible noise models, and so on. These methods are also applied in control theory, in which case the unknown function which we wish to optimize or find the zero of may vary in time. In this case, the step size a n {\displaystyle a_{n}} should not converge to zero but should be chosen so as to track the function., 2nd ed., chapter 3 C. Johan Masreliez and R. 
Douglas Martin were the first to apply stochastic approximation to robust estimation. The main tool for analyzing stochastic approximation algorithms (including the Robbins–Monro and the Kiefer–Wolfowitz algorithms) is a theorem by Aryeh Dvoretzky published in 1956. == See also == Stochastic gradient descent Stochastic variance reduction == References ==
Wikipedia/Stochastic_approximation
In time series analysis, the Box–Jenkins method, named after the statisticians George Box and Gwilym Jenkins, applies autoregressive moving average (ARMA) or autoregressive integrated moving average (ARIMA) models to find the best fit of a time-series model to past values of a time series. == Modeling approach == The original model uses an iterative three-stage modeling approach: Model identification and model selection: making sure that the variables are stationary, identifying seasonality in the dependent series (seasonally differencing it if necessary), and using plots of the autocorrelation (ACF) and partial autocorrelation (PACF) functions of the dependent time series to decide which (if any) autoregressive or moving average component should be used in the model. Parameter estimation using computation algorithms to arrive at coefficients that best fit the selected ARIMA model. The most common methods use maximum likelihood estimation or non-linear least-squares estimation. Statistical model checking by testing whether the estimated model conforms to the specifications of a stationary univariate process. In particular, the residuals should be independent of each other and constant in mean and variance over time. (Plotting the mean and variance of residuals over time and performing a Ljung–Box test or plotting autocorrelation and partial autocorrelation of the residuals are helpful to identify misspecification.) If the estimation is inadequate, we have to return to step one and attempt to build a better model. The data they used were from a gas furnace. These data are well known as the Box and Jenkins gas furnace data for benchmarking predictive models. Commandeur & Koopman (2007, §10.4) argue that the Box–Jenkins approach is fundamentally problematic. The problem arises because in "the economic and social fields, real series are never stationary however much differencing is done". Thus the investigator has to face the question: how close to stationary is close enough? As the authors note, "This is a hard question to answer". The authors further argue that rather than using Box–Jenkins, it is better to use state space methods, as stationarity of the time series is then not required. == Box–Jenkins model identification == === Stationarity and seasonality === The first step in developing a Box–Jenkins model is to determine whether the time series is stationary and whether there is any significant seasonality that needs to be modelled. ==== Detecting stationarity ==== Stationarity can be assessed from a run sequence plot. The run sequence plot should show constant location and scale. It can also be detected from an autocorrelation plot. Specifically, non-stationarity is often indicated by an autocorrelation plot with very slow decay. One can also utilize a Dickey-Fuller test or Augmented Dickey-Fuller test. ==== Detecting seasonality ==== Seasonality (or periodicity) can usually be assessed from an autocorrelation plot, a seasonal subseries plot, or a spectral plot. ==== Differencing to achieve stationarity ==== Box and Jenkins recommend the differencing approach to achieve stationarity. However, fitting a curve and subtracting the fitted values from the original data can also be used in the context of Box–Jenkins models. ==== Seasonal differencing ==== At the model identification stage, the goal is to detect seasonality, if it exists, and to identify the order for the seasonal autoregressive and seasonal moving average terms. 
For many series, the period is known and a single seasonality term is sufficient. For example, for monthly data one would typically include either a seasonal AR 12 term or a seasonal MA 12 term. For Box–Jenkins models, one does not explicitly remove seasonality before fitting the model. Instead, one includes the order of the seasonal terms in the model specification to the ARIMA estimation software. However, it may be helpful to apply a seasonal difference to the data and regenerate the autocorrelation and partial autocorrelation plots. This may help in the model identification of the non-seasonal component of the model. In some cases, the seasonal differencing may remove most or all of the seasonality effect. === Identify p and q === Once stationarity and seasonality have been addressed, the next step is to identify the order (i.e. the p and q) of the autoregressive and moving average terms. Different authors have different approaches for identifying p and q. Brockwell and Davis (1991) state "our prime criterion for model selection [among ARMA(p,q) models] will be the AICc", i.e. the Akaike information criterion with correction. Other authors use the autocorrelation plot and the partial autocorrelation plot, described below. ==== Autocorrelation and partial autocorrelation plots ==== The sample autocorrelation plot and the sample partial autocorrelation plot are compared to the theoretical behavior of these plots when the order is known. Specifically, for an AR(1) process, the sample autocorrelation function should have an exponentially decreasing appearance. However, higher-order AR processes are often a mixture of exponentially decreasing and damped sinusoidal components. For higher-order autoregressive processes, the sample autocorrelation needs to be supplemented with a partial autocorrelation plot. The partial autocorrelation of an AR(p) process becomes zero at lag p + 1 and greater, so we examine the sample partial autocorrelation function to see if there is evidence of a departure from zero. This is usually determined by placing a 95% confidence interval on the sample partial autocorrelation plot (most software programs that generate sample autocorrelation plots also plot this confidence interval). If the software program does not generate the confidence band, it is approximately {\displaystyle \pm 2/{\sqrt {N}}}, with N denoting the sample size. The autocorrelation function of a MA(q) process becomes zero at lag q + 1 and greater, so we examine the sample autocorrelation function to see where it essentially becomes zero. We do this by placing the 95% confidence interval for the sample autocorrelation function on the sample autocorrelation plot. Most software that can generate the autocorrelation plot can also generate this confidence interval. The sample partial autocorrelation function is generally not helpful for identifying the order of the moving average process. The following table summarizes how one can use the sample autocorrelation function for model identification. Hyndman & Athanasopoulos suggest the following: The data may follow an ARIMA(p,d,0) model if the ACF and PACF plots of the differenced data show the following patterns: the ACF is exponentially decaying or sinusoidal; there is a significant spike at lag p in PACF, but none beyond lag p. The data may follow an ARIMA(0,d,q) model if the ACF and PACF plots of the differenced data show the following patterns: the PACF is exponentially decaying or sinusoidal; there is a significant spike at lag q in ACF, but none beyond lag q.
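As an illustration of this identification, estimation and checking cycle, the sketch below simulates a series, differences it, inspects the ACF/PACF, fits an ARIMA model and runs a Ljung–Box check on the residuals. It is only a sketch: the simulated data, the chosen order (1,1,0) and the use of the statsmodels package (module paths as in recent versions) are assumptions made for illustration, not part of the Box–Jenkins references.

import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import adfuller
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

# Simulated example: a drifting random walk driven by an AR(1) component,
# so one non-seasonal difference (d = 1) should be enough for stationarity.
rng = np.random.default_rng(42)
e = rng.normal(size=500)
ar1 = np.zeros(500)
for t in range(1, 500):
    ar1[t] = 0.6 * ar1[t - 1] + e[t]
y = np.cumsum(0.1 + ar1)

# Step 1: stationarity check (augmented Dickey-Fuller) and differencing.
print("ADF p-value, raw series:", adfuller(y)[1])
dy = np.diff(y)
print("ADF p-value, differenced:", adfuller(dy)[1])

# Step 2: identification - inspect sample ACF/PACF of the differenced data.
plot_acf(dy, lags=24)
plot_pacf(dy, lags=24)
plt.show()

# Step 3: estimation - an ARIMA(1,1,0), i.e. one AR term on the
# once-differenced series, fitted by maximum likelihood.
model = ARIMA(y, order=(1, 1, 0)).fit()
print(model.summary())

# Step 4: diagnostic checking - residuals should look like white noise.
print(acorr_ljungbox(model.resid, lags=[10]))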
In practice, the sample autocorrelation and partial autocorrelation functions are random variables and do not give the same picture as the theoretical functions. This makes model identification more difficult. Mixed ARMA models, in particular, can be difficult to identify. Although experience is helpful, developing good models using these sample plots can involve much trial and error. == Box–Jenkins model estimation == Estimating the parameters for Box–Jenkins models involves numerically approximating the solutions of nonlinear equations. For this reason, it is common to use statistical software designed to handle this approach – virtually all modern statistical packages feature this capability. The main approaches to fitting Box–Jenkins models are nonlinear least squares and maximum likelihood estimation. Maximum likelihood estimation is generally the preferred technique. The likelihood equations for the full Box–Jenkins model are complicated and are not included here. See (Brockwell and Davis, 1991) for the mathematical details. == Box–Jenkins model diagnostics == === Assumptions for a stable univariate process === Model diagnostics for Box–Jenkins models are similar to model validation for non-linear least squares fitting. That is, the error term A_t is assumed to follow the assumptions for a stationary univariate process: the residuals should be white noise (or independent when their distributions are normal), that is, draws from a fixed distribution with a constant mean and variance. If the Box–Jenkins model is a good model for the data, the residuals should satisfy these assumptions. If these assumptions are not satisfied, one needs to fit a more appropriate model. That is, go back to the model identification step and try to develop a better model. Hopefully the analysis of the residuals can provide some clues as to a more appropriate model. One way to assess whether the residuals from the Box–Jenkins model follow the assumptions is to generate statistical graphics (including an autocorrelation plot) of the residuals. One could also look at the value of the Ljung–Box statistic. == References == == Further reading == Beveridge, S.; Oickle, C. (1994), "Comparison of Box–Jenkins and objective methods for determining the order of a non-seasonal ARMA model", Journal of Forecasting, 13 (5): 419–434, doi:10.1002/for.3980130502 Pankratz, Alan (1983), Forecasting with Univariate Box–Jenkins Models: Concepts and Cases, John Wiley & Sons == External links == A First Course on Time Series Analysis – an open source book on time series analysis with SAS (Chapter 7) Box–Jenkins models in the Engineering Statistics Handbook of NIST Box–Jenkins modelling by Rob J Hyndman The Box–Jenkins methodology for time series models by Theresa Hoang Diem Ngo This article incorporates public domain material from the National Institute of Standards and Technology
Wikipedia/Box–Jenkins_method
In statistics, the method of estimating equations is a way of specifying how the parameters of a statistical model should be estimated. This can be thought of as a generalisation of many classical methods—the method of moments, least squares, and maximum likelihood—as well as some recent methods like M-estimators. The basis of the method is to have, or to find, a set of simultaneous equations involving both the sample data and the unknown model parameters which are to be solved in order to define the estimates of the parameters. Various components of the equations are defined in terms of the set of observed data on which the estimates are to be based. Important examples of estimating equations are the likelihood equations. == Examples == Consider the problem of estimating the rate parameter, λ, of the exponential distribution which has the probability density function {\displaystyle f(x;\lambda )=\left\{{\begin{matrix}\lambda e^{-\lambda x},&\;x\geq 0,\\0,&\;x<0.\end{matrix}}\right.} Suppose that a sample of data is available from which either the sample mean, {\displaystyle {\bar {x}}}, or the sample median, m, can be calculated. Then an estimating equation based on the mean is {\displaystyle {\bar {x}}=\lambda ^{-1},} while the estimating equation based on the median is {\displaystyle m=\lambda ^{-1}\ln 2.} Each of these equations is derived by equating a sample value (sample statistic) to a theoretical (population) value. In each case the sample statistic is a consistent estimator of the population value, and this provides an intuitive justification for this type of approach to estimation.
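A minimal numerical illustration of these two estimating equations, using simulated data (the true rate of 2.0, the sample size and the use of NumPy are assumptions made purely for this sketch):

import numpy as np

rng = np.random.default_rng(0)
true_rate = 2.0
x = rng.exponential(scale=1.0 / true_rate, size=10_000)  # simulated sample

# Estimating equation based on the sample mean:  x-bar = 1/lambda
lambda_from_mean = 1.0 / np.mean(x)

# Estimating equation based on the sample median:  m = (ln 2)/lambda
lambda_from_median = np.log(2.0) / np.median(x)

print(lambda_from_mean, lambda_from_median)  # both should be close to 2.0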
== See also == Generalized estimating equations Method of moments (statistics) Generalized method of moments Maximum likelihood Empirical likelihood == References == Godambe, V. P., ed. (1991). Estimating Functions. New York: Oxford University Press. ISBN 0-19-852228-2. Heyde, Christopher C. (1997). Quasi-Likelihood and Its Application: A General Approach to Optimal Parameter Estimation. New York: Springer-Verlag. ISBN 0-387-98225-6. McLeish, D. L.; Small, Christopher G. (1988). The Theory and Applications of Statistical Inference Functions. New York: Springer-Verlag. ISBN 0-387-96720-6. Small, Christopher G.; Wang, Jinfang (2003). Numerical Methods for Nonlinear Estimating Equations. New York: Oxford University Press. ISBN 0-19-850688-0.
Wikipedia/Estimating_equations
Clinical trials are prospective biomedical or behavioral research studies on human participants designed to answer specific questions about biomedical or behavioral interventions, including new treatments (such as novel vaccines, drugs, dietary choices, dietary supplements, and medical devices) and known interventions that warrant further study and comparison. Clinical trials generate data on dosage, safety and efficacy. They are conducted only after they have received health authority/ethics committee approval in the country where approval of the therapy is sought. These authorities are responsible for vetting the risk/benefit ratio of the trial—their approval does not mean the therapy is 'safe' or effective, only that the trial may be conducted. Depending on product type and development stage, investigators initially enroll volunteers or patients into small pilot studies, and subsequently conduct progressively larger scale comparative studies. Clinical trials can vary in size and cost, and they can involve a single research center or multiple centers, in one country or in multiple countries. Clinical study design aims to ensure the scientific validity and reproducibility of the results. Costs for clinical trials can range into the billions of dollars per approved drug, and the complete trial process to approval may require 7–15 years. The sponsor may be a governmental organization or a pharmaceutical, biotechnology or medical-device company. Certain functions necessary to the trial, such as monitoring and lab work, may be managed by an outsourced partner, such as a contract research organization or a central laboratory. Only 10 percent of all drugs started in human clinical trials become approved drugs. == Overview == === Trials of drugs === Some clinical trials involve healthy subjects with no pre-existing medical conditions. Other clinical trials pertain to people with specific health conditions who are willing to try an experimental treatment. Pilot experiments are conducted to gain insights for design of the clinical trial to follow. There are two goals to testing medical treatments: to learn whether they work well enough, called "efficacy", or "effectiveness"; and to learn whether they are safe enough, called "safety". Neither is an absolute criterion; both safety and efficacy are evaluated relative to how the treatment is intended to be used, what other treatments are available, and the severity of the disease or condition. The benefits must outweigh the risks.: 8  For example, many drugs to treat cancer have severe side effects that would not be acceptable for an over-the-counter pain medication, yet the cancer drugs have been approved since they are used under a physician's care and are used for a life-threatening condition. In the US the elderly constitute 14% of the population, while they consume over one-third of drugs. People over 55 (or a similar cutoff age) are often excluded from trials because their greater health issues and drug use complicate data interpretation, and because they have different physiological capacity than younger people. Children and people with unrelated medical conditions are also frequently excluded. Pregnant women are often excluded due to potential risks to the fetus. The sponsor designs the trial in coordination with a panel of expert clinical investigators, including what alternative or existing treatments to compare to the new drug and what type(s) of patients might benefit. 
If the sponsor cannot obtain enough test subjects at one location, investigators at other locations are recruited to join the study. During the trial, investigators recruit subjects with the predetermined characteristics, administer the treatment(s) and collect data on the subjects' health for a defined time period. Data include measurements such as vital signs, concentration of the study drug in the blood or tissues, changes to symptoms, and whether improvement or worsening of the condition targeted by the study drug occurs. The researchers send the data to the trial sponsor, who then analyzes the pooled data using statistical tests. Examples of clinical trial goals include assessing the safety and relative effectiveness of a medication or device: On a specific kind of patient At varying dosages For a new indication Evaluation for improved efficacy in treating a condition as compared to the standard therapy for that condition Evaluation of the study drug or device relative to two or more already approved/common interventions for that condition While most clinical trials test one alternative to the novel intervention, some expand to three or four and may include a placebo. Except for small, single-location trials, the design and objectives are specified in a document called a clinical trial protocol. The protocol is the trial's "operating manual" and ensures all researchers perform the trial in the same way on similar subjects and that the data are comparable across all subjects. As a trial is designed to test hypotheses and rigorously monitor and assess outcomes, it can be seen as an application of the scientific method, specifically the experimental step. The most common clinical trials evaluate new pharmaceutical products, medical devices, biologics, diagnostic assays, psychological therapies, or other interventions. Clinical trials may be required before a national regulatory authority approves marketing of the innovation. === Trials of devices === Similarly to drugs, manufacturers of medical devices in the United States are required to conduct clinical trials for premarket approval. Device trials may compare a new device to an established therapy, or may compare similar devices to each other. An example of the former in the field of vascular surgery is the Open versus Endovascular Repair (OVER trial) for the treatment of abdominal aortic aneurysm, which compared the older open aortic repair technique to the newer endovascular aneurysm repair device. Examples of the latter are clinical trials on mechanical devices used in the management of adult female urinary incontinence. === Trials of procedures === Similarly to drugs, medical or surgical procedures may be subjected to clinical trials, such as comparing different surgical approaches in treatment of fibroids for subfertility. However, when clinical trials are unethical or logistically impossible in the surgical setting, case-control studies are used instead. === Patient and public involvement === Besides being participants in a clinical trial, members of the public can actively collaborate with researchers in designing and conducting clinical research. This is known as patient and public involvement (PPI). Public involvement describes a working partnership between patients, caregivers, people with lived experience, and researchers to shape and influence what is researched and how. PPI can improve the quality of research and make it more relevant and accessible.
People with current or past experience of illness can provide a different perspective than professionals and complement their knowledge. Through their personal knowledge they can identify research topics that are relevant and important to those living with an illness or using a service. They can also help to make the research more grounded in the needs of the specific communities they are part of. Public contributors can also ensure that the research is presented in plain language that is clear to the wider society and the specific groups it is most relevant for. == History == === Development === Although early medical experimentation was performed often, the use of a control group to provide an accurate comparison for the demonstration of the intervention's efficacy was generally lacking. For instance, Lady Mary Wortley Montagu, who campaigned for the introduction of inoculation (then called variolation) to prevent smallpox, arranged for seven prisoners who had been sentenced to death to undergo variolation in exchange for their lives. Although they survived and did not contract smallpox, there was no control group to assess whether this result was due to the inoculation or some other factor. Similar experiments performed by Edward Jenner on his smallpox vaccine were equally conceptually flawed. The first proper clinical trial was conducted by the Scottish physician James Lind. The disease scurvy, now known to be caused by a Vitamin C deficiency, would often have terrible effects on the welfare of the crew of long-distance ocean voyages. In 1740, the catastrophic result of Anson's circumnavigation attracted much attention in Europe; out of 1900 men, 1400 had died, most of them allegedly from having contracted scurvy. John Woodall, an English military surgeon of the British East India Company, had recommended the consumption of citrus fruit from the 17th century, but its use did not become widespread. Lind conducted the first systematic clinical trial in 1747. He included a dietary supplement of an acidic quality in the experiment after two months at sea, when the ship was already afflicted with scurvy. He divided twelve scorbutic sailors into six groups of two. They all received the same diet but, in addition, group one was given a quart of cider daily, group two twenty-five drops of elixir of vitriol (sulfuric acid), group three six spoonfuls of vinegar, group four half a pint of seawater, group five received two oranges and one lemon, and the last group a spicy paste plus a drink of barley water. The treatment of group five stopped after six days when they ran out of fruit, but by then one sailor was fit for duty while the other had almost recovered. Apart from that, only group one also showed some effect of its treatment. Each year, May 20 is celebrated as Clinical Trials Day in honor of Lind's research. After 1750 the discipline began to take its modern shape. The English doctor John Haygarth demonstrated the importance of a control group for the correct identification of the placebo effect in his celebrated study of the ineffective remedy called Perkins' tractors. Further work in that direction was carried out by the eminent physician Sir William Gull, 1st Baronet in the 1860s. Frederick Akbar Mahomed (d. 1884), who worked at Guy's Hospital in London, made substantial contributions to the process of clinical trials, where "he separated chronic nephritis with secondary hypertension from what we now term essential hypertension.
He also founded the Collective Investigation Record for the British Medical Association; this organization collected data from physicians practicing outside the hospital setting and was the precursor of modern collaborative clinical trials." === Modern trials === The ideas of Sir Ronald A. Fisher still play a role in clinical trials. While working for the Rothamsted experimental station in the field of agriculture, Fisher developed his Principles of experimental design in the 1920s as an accurate methodology for the proper design of experiments. His major ideas included the importance of randomization—the random assignment of individual elements (e.g. crops or patients) to different groups for the experiment; replication—to reduce uncertainty, measurements should be repeated and experiments replicated to identify sources of variation; blocking—to arrange experimental units into groups of units that are similar to each other, thus reducing irrelevant sources of variation; and the use of factorial experiments—efficient at evaluating the effects and possible interactions of several independent factors. Of these, blocking and factorial design are seldom applied in clinical trials, because the experimental units are human subjects and there is typically only one independent intervention: the treatment. The British Medical Research Council officially recognized the importance of clinical trials from the 1930s. The council established the Therapeutic Trials Committee to advise and assist in the arrangement of properly controlled clinical trials on new products that seem likely on experimental grounds to have value in the treatment of disease. The first randomised curative trial was carried out at the MRC Tuberculosis Research Unit by Sir Geoffrey Marshall (1887–1982). The trial, carried out between 1946 and 1947, aimed to test the efficacy of the chemical streptomycin for curing pulmonary tuberculosis. The trial was both double-blind and placebo-controlled. The methodology of clinical trials was further developed by Sir Austin Bradford Hill, who had been involved in the streptomycin trials. From the 1920s, Hill applied statistics to medicine, attending the lectures of renowned mathematician Karl Pearson, among others. He became famous for a landmark study carried out in collaboration with Richard Doll on the correlation between smoking and lung cancer. They carried out a case-control study in 1950, which compared lung cancer patients with matched controls, and also began a sustained long-term prospective study into the broader issue of smoking and health, which involved studying the smoking habits and health of more than 30,000 doctors over a period of several years. His certificate for election to the Royal Society called him "... the leader in the development in medicine of the precise experimental methods now used nationally and internationally in the evaluation of new therapeutic and prophylactic agents." International Clinical Trials Day is celebrated on 20 May. The acronyms used in the titling of clinical trials are often contrived, and have been the subject of derision. == Types == Clinical trials are classified by the research objective created by the investigators. In an observational study, the investigators observe the subjects and measure their outcomes. The researchers do not actively manage the study.
In an interventional study, the investigators give the research subjects an experimental drug, surgical procedure, use of a medical device, diagnostic or other intervention to compare the treated subjects with those receiving no treatment or the standard treatment. Then the researchers assess how the subjects' health changes. Trials are classified by their purpose. After approval for human research is granted to the trial sponsor, the U.S. Food and Drug Administration (FDA) organizes and monitors the results of trials according to type: Prevention trials look for ways to prevent disease in people who have never had the disease or to prevent a disease from returning. These approaches may include drugs, vitamins or other micronutrients, vaccines, or lifestyle changes. Screening trials test for ways to identify certain diseases or health conditions. Diagnostic trials are conducted to find better tests or procedures for diagnosing a particular disease or condition. Treatment trials test experimental drugs, new combinations of drugs, or new approaches to surgery or radiation therapy. Quality of life trials (supportive care trials) evaluate how to improve comfort and quality of care for people with a chronic illness. Genetic trials are conducted to assess the accuracy of predicting genetic disorders that make a person more or less likely to develop a disease. Epidemiological trials have the goal of identifying the general causes, patterns or control of diseases in large numbers of people. Compassionate use trials or expanded access trials provide partially tested, unapproved therapeutics to a small number of patients who have no other realistic options. Usually, this involves a disease for which no effective therapy has been approved, or a patient who has already failed all standard treatments and whose health is too compromised to qualify for participation in randomized clinical trials. Usually, case-by-case approval must be granted by both the FDA and the pharmaceutical company for such exceptions. Fixed trials consider existing data only during the trial's design, do not modify the trial after it begins, and do not assess the results until the study is completed. Adaptive clinical trials use existing data to design the trial, and then use interim results to modify the trial as it proceeds. Modifications include dosage, sample size, drug undergoing trial, patient selection criteria and "cocktail" mix. Adaptive trials often employ a Bayesian experimental design to assess the trial's progress. In some cases, trials have become an ongoing process that regularly adds and drops therapies and patient groups as more information is gained. The aim is to more quickly identify drugs that have a therapeutic effect and to zero in on patient populations for whom the drug is appropriate. Clinical trials are typically conducted in four phases, with each phase using different numbers of subjects and having a different purpose, focused on identifying a specific effect. === Phases === Clinical trials involving new drugs are commonly classified into five phases. Each phase of the drug approval process is treated as a separate clinical trial. The drug development process will normally proceed through phases I–IV over many years, frequently involving a decade or longer. If the drug successfully passes through phases I, II, and III, it will usually be approved by the national regulatory authority for use in the general population.
Phase IV trials are performed after the newly approved drug, diagnostic or device is marketed, providing assessment about risks, benefits, or best uses. == Trial design == A fundamental distinction in evidence-based practice is between observational studies and randomized controlled trials. Types of observational studies in epidemiology, such as the cohort study and the case-control study, provide less compelling evidence than the randomized controlled trial. In observational studies, the investigators retrospectively assess associations between the treatments given to participants and their health status, with potential for considerable errors in design and interpretation. A randomized controlled trial can provide compelling evidence that the study treatment causes an effect on human health. Some Phase II and most Phase III drug trials are designed as randomized, double-blind, and placebo-controlled. Randomized: Each study subject is randomly assigned to receive either the study treatment or a placebo. Blind: The subjects involved in the study do not know which study treatment they receive. If the study is double-blind, the researchers also do not know which treatment a subject receives. This intent is to prevent researchers from treating the two groups differently. A form of double-blind study called a "double-dummy" design allows additional insurance against bias. In this kind of study, all patients are given both placebo and active doses in alternating periods. Placebo-controlled: The use of a placebo (fake treatment) allows the researchers to isolate the effect of the study treatment from the placebo effect. Clinical studies having small numbers of subjects may be "sponsored" by single researchers or a small group of researchers, and are designed to test simple questions or feasibility to expand the research for a more comprehensive randomized controlled trial. Clinical studies can be "sponsored" (financed and organized) by academic institutions, pharmaceutical companies, government entities and even private groups. Trials are conducted for new drugs, biotechnology, diagnostic assays or medical devices to determine their safety and efficacy prior to being submitted for regulatory review that would determine market approval. === Active control studies === In cases where giving a placebo to a person suffering from a disease may be unethical, "active comparator" (also known as "active control") trials may be conducted instead. In trials with an active control group, subjects are given either the experimental treatment or a previously approved treatment with known effectiveness. In other cases, sponsors may conduct an active comparator trial to establish an efficacy claim relative to the active comparator instead of the placebo in labeling. === Master protocol === A master protocol includes multiple substudies, which may have different objectives and involve coordinated efforts to evaluate one or more medical products in one or more diseases or conditions within the overall study structure. Trials that could develop a master protocol include the umbrella trial (multiple medical products for a single disease), platform trial (multiple products for a single disease entering and leaving the platform), and basket trial (one medical product for multiple diseases or disease subtypes). Genetic testing enables researchers to group patients according to their genetic profile, deliver drugs based on that profile to that group and compare the results. 
Multiple companies can participate, each bringing a different drug. The first such approach targets squamous cell cancer, which includes varying genetic disruptions from patient to patient. Amgen, AstraZeneca and Pfizer are involved, the first time they have worked together in a late-stage trial. Patients whose genomic profiles do not match any of the trial drugs receive a drug designed to stimulate the immune system to attack cancer. === Clinical trial protocol === A clinical trial protocol is a document used to define and manage the trial. It is prepared by a panel of experts. All study investigators are expected to strictly observe the protocol. The protocol describes the scientific rationale, objective(s), design, methodology, statistical considerations and organization of the planned trial. Details of the trial are provided in documents referenced in the protocol, such as an investigator's brochure. The protocol contains a precise study plan to assure safety and health of the trial subjects and to provide an exact template for trial conduct by investigators. This allows data to be combined across all investigators/sites. The protocol also informs the study administrators (often a contract research organization). The format and content of clinical trial protocols sponsored by pharmaceutical, biotechnology or medical device companies in the United States, European Union, or Japan have been standardized to follow Good Clinical Practice guidance issued by the International Conference on Harmonisation (ICH). Regulatory authorities in Canada, China, South Korea, and the UK also follow ICH guidelines. Journals such as Trials encourage investigators to publish their protocols. === Design features === ==== Informed consent ==== Clinical trials recruit study subjects to sign a document representing their "informed consent". The document includes details such as its purpose, duration, required procedures, risks, potential benefits, key contacts and institutional requirements. The participant then decides whether to sign the document. The document is not a contract, as the participant can withdraw at any time without penalty. Informed consent is a legal process in which a recruit is instructed about key facts before deciding whether to participate. Researchers explain the details of the study in terms the subject can understand. The information is presented in the subject's native language. Generally, children cannot autonomously provide informed consent, but depending on their age and other factors, may be required to provide informed assent. ==== Statistical power ==== In any clinical trial, the number of subjects, also called the sample size, has a large impact on the ability to reliably detect and measure the effects of the intervention. This ability is described as its "power", which must be calculated before initiating a study to figure out if the study is worth its costs. In general, a larger sample size increases the statistical power, but also the cost. The statistical power estimates the ability of a trial to detect a difference of a particular size (or larger) between the treatment and control groups. For example, a trial of a lipid-lowering drug versus placebo with 100 patients in each group might have a power of 0.90 to detect a difference of 10 mg/dL or more between the placebo and treatment groups, but only 0.70 to detect a difference of 6 mg/dL.
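As a rough sketch of how such a power calculation might be carried out: the between-group standard deviation is not given in the text, so the 25 mg/dL used below, the alpha level and the use of the statsmodels package are assumptions for illustration only, and the resulting power values will not reproduce the figures quoted above.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for diff in (10.0, 6.0):
    effect_size = diff / 25.0  # Cohen's d = difference / assumed SD of 25 mg/dL
    power = analysis.power(effect_size=effect_size, nobs1=100,
                           alpha=0.05, ratio=1.0, alternative='two-sided')
    print(f"difference {diff} mg/dL: power = {power:.2f}")

# The same object can instead solve for the sample size per group needed
# to reach a target power for a given effect size.
n_needed = analysis.solve_power(effect_size=10.0 / 25.0, power=0.9, alpha=0.05)
print(f"patients per group for 90% power: {n_needed:.0f}")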
=== Placebo groups === Merely giving a treatment can have nonspecific effects. These are controlled for by the inclusion of patients who receive only a placebo. Subjects are assigned randomly without being informed to which group they belong. Many trials are double-blinded so that researchers do not know to which group a subject is assigned. Assigning a subject to a placebo group can pose an ethical problem if it violates his or her right to receive the best available treatment. The Declaration of Helsinki provides guidelines on this issue. === Duration === Clinical trials are only a small part of the research that goes into developing a new treatment. Potential drugs, for example, first have to be discovered, purified, characterized, and tested in labs (in cell and animal studies) before ever undergoing clinical trials. In all, about 1,000 potential drugs are tested before just one reaches the point of being tested in a clinical trial. For example, a new cancer drug has, on average, six years of research behind it before it even makes it to clinical trials. But the major holdup in making new cancer drugs available is the time it takes to complete clinical trials themselves. On average, about eight years pass from the time a cancer drug enters clinical trials until it receives approval from regulatory agencies for sale to the public. Drugs for other diseases have similar timelines. Some reasons a clinical trial might last several years: For chronic conditions such as cancer, it takes months, if not years, to see if a cancer treatment has an effect on a patient. For drugs that are not expected to have a strong effect (meaning a large number of patients must be recruited to observe 'any' effect), recruiting enough patients to test the drug's effectiveness (i.e., getting statistical power) can take several years. Only certain people who have the target disease condition are eligible to take part in each clinical trial. Researchers who treat these particular patients must participate in the trial. Then they must identify the desirable patients and obtain consent from them or their families to take part in the trial. A clinical trial might also include an extended post-study follow-up period from months to years for people who have participated in the trial, a so-called "extension phase", which aims to identify the long-term impact of the treatment. The biggest barrier to completing studies is the shortage of people who take part. All drug and many device trials target a subset of the population, meaning not everyone can participate. Some drug trials require patients to have unusual combinations of disease characteristics. It is a challenge to find the appropriate patients and obtain their consent, especially when they may receive no direct benefit (because they are not paid, the study drug is not yet proven to work, or the patient may receive a placebo). In the case of cancer patients, fewer than 5% of adults with cancer will participate in drug trials. According to the Pharmaceutical Research and Manufacturers of America (PhRMA), about 400 cancer medicines were being tested in clinical trials in 2005. Not all of these will prove to be useful, but those that are may be delayed in getting approved because the number of participants is so low. For clinical trials involving potential for seasonal influences (such as airborne allergies, seasonal affective disorder, influenza, and skin diseases), the study may be done during a limited part of the year (such as spring for pollen allergies), when the drug can be tested.
Clinical trials that do not involve a new drug usually have a much shorter duration. (Exceptions are epidemiological studies, such as the Nurses' Health Study.) == Administration == Clinical trials designed by a local investigator, and (in the US) federally funded clinical trials, are almost always administered by the researcher who designed the study and applied for the grant. Small-scale device studies may be administered by the sponsoring company. Clinical trials of new drugs are usually administered by a contract research organization (CRO) hired by the sponsoring company. The sponsor provides the drug and medical oversight. A CRO is contracted to perform all the administrative work on a clinical trial. For Phases II–IV the CRO recruits participating researchers, trains them, provides them with supplies, coordinates study administration and data collection, sets up meetings, monitors the sites for compliance with the clinical protocol, and ensures the sponsor receives data from every site. Specialist site management organizations can also be hired to coordinate with the CRO to ensure rapid IRB/IEC approval and faster site initiation and patient recruitment. Phase I clinical trials of new medicines are often conducted in a specialist clinical trial clinic, with dedicated pharmacologists, where the subjects can be observed by full-time staff. These clinics are often run by a CRO which specialises in these studies. At a participating site, one or more research assistants (often nurses) do most of the work in conducting the clinical trial. The research assistant's job can include some or all of the following: providing the local institutional review board (IRB) with the documentation necessary to obtain its permission to conduct the study, assisting with study start-up, identifying eligible patients, obtaining consent from them or their families, administering study treatment(s), collecting and statistically analyzing data, maintaining and updating data files during follow-up, and communicating with the IRB, as well as the sponsor and CRO. === Quality === In the context of a clinical trial, quality typically refers to the absence of errors which can impact decision making, both during the conduct of the trial and in use of the trial results. === Marketing === An Interactional Justice Model may be used to test the effects of willingness to talk with a doctor about clinical trial enrollment. Results found that potential clinical trial candidates were less likely to enroll in clinical trials if the patient is more willing to talk with their doctor. The reasoning behind this discovery may be that patients are happy with their current care. Another reason for the negative relationship between perceived fairness and clinical trial enrollment is the lack of independence from the care provider. Results found that there is a positive relationship between a lack of willingness to talk with their doctor and clinical trial enrollment. Lack of willingness to talk about clinical trials with current care providers may be due to patients' independence from the doctor. Patients who are less likely to talk about clinical trials are more willing to use other sources of information to gain a better insight into alternative treatments. Efforts to increase clinical trial enrollment should therefore make use of websites and television advertising to inform the public about clinical trial enrollment.
=== Information technology === The last decade has seen a proliferation of information technology use in the planning and conduct of clinical trials. Clinical trial management systems are often used by research sponsors or CROs to help plan and manage the operational aspects of a clinical trial, particularly with respect to investigational sites. Advanced analytics for identifying researchers and research sites with expertise in a given area utilize public and private information about ongoing research. Web-based electronic data capture (EDC) and clinical data management systems are used in a majority of clinical trials to collect case report data from sites, manage its quality and prepare it for analysis. Interactive voice response systems are used by sites to register the enrollment of patients using a phone and to allocate patients to a particular treatment arm (although phones are increasingly being replaced with web-based (IWRS) tools which are sometimes part of the EDC system). While patient-reported outcomes were often paper-based in the past, measurements are increasingly being collected using web portals or hand-held ePRO (or eDiary) devices, sometimes wireless. Statistical software is used to analyze the collected data and prepare them for regulatory submission. Access to many of these applications is increasingly aggregated in web-based clinical trial portals. In 2011, the FDA approved a Phase I trial that used telemonitoring, also known as remote patient monitoring, to collect biometric data in patients' homes and transmit it electronically to the trial database. This technology provides many more data points and is far more convenient for patients, because they have fewer visits to trial sites. As noted below, decentralized clinical trials are those that do not require patients' physical presence at a site, and instead rely largely on digital health data collection, digital informed consent processes, and so on. == Analysis == A clinical trial produces data that could reveal quantitative differences between two or more interventions; statistical analyses are used to determine whether such differences reflect a real effect, result from chance, or are the same as no treatment (placebo). Data from a clinical trial accumulate gradually over the trial duration, extending from months to years. Accordingly, results for participants recruited early in the study become available for analysis while subjects are still being assigned to treatment groups in the trial. Early analysis may allow the emerging evidence to assist decisions about whether to stop the study, or to reassign participants to the more successful segment of the trial. Investigators may also want to stop a trial when data analysis shows no treatment effect. == Ethical aspects == Clinical trials are closely supervised by appropriate regulatory authorities. All studies involving a medical or therapeutic intervention on patients must be approved by a supervising ethics committee before permission is granted to run the trial. The local ethics committee has discretion on how it will supervise noninterventional studies (observational studies or those using already collected data). In the US, this body is called the Institutional Review Board (IRB); in the EU, it is called an ethics committee. Most IRBs are located at the local investigator's hospital or institution, but some sponsors allow the use of a central (independent/for profit) IRB for investigators who work at smaller institutions.
To be ethical, researchers must obtain the full and informed consent of participating human subjects. (One of the IRB's main functions is to ensure potential patients are adequately informed about the clinical trial.) If the patient is unable to consent for him/herself, researchers can seek consent from the patient's legally authorized representative. In addition, the clinical trial participants must be made aware that they can withdraw from the clinical trial at any time without any adverse action taken against them. In California, the state has prioritized the individuals who can serve as the legally authorized representative. In some US locations, the local IRB must certify researchers and their staff before they can conduct clinical trials. They must understand the federal patient privacy (HIPAA) law and good clinical practice. The International Conference of Harmonisation Guidelines for Good Clinical Practice is a set of standards used internationally for the conduct of clinical trials. The guidelines aim to ensure the "rights, safety and well being of trial subjects are protected". The notion of informed consent of participating human subjects exists in many countries but its precise definition may still vary. Informed consent is clearly a 'necessary' condition for ethical conduct but does not 'ensure' ethical conduct. In compassionate use trials the latter becomes a particularly difficult problem. The final objective is to serve the community of patients or future patients in a best-possible and most responsible way. See also Expanded access. However, it may be hard to turn this objective into a well-defined, quantified, objective function. In some cases this can be done, however, for instance, for questions of when to stop sequential treatments (see Odds algorithm), and then quantified methods may play an important role. Additional ethical concerns are present when conducting clinical trials on children (pediatrics), and in emergency or epidemic situations. Ethically balancing the rights of multiple stakeholders may be difficult. For example, when drug trials fail, the sponsors may have a duty to tell current and potential investors immediately, which means both the research staff and the enrolled participants may first hear about the end of a trial through public business news. === Conflicts of interest and unfavorable studies === In response to specific cases in which unfavorable data from pharmaceutical company-sponsored research were not published, the Pharmaceutical Research and Manufacturers of America published new guidelines urging companies to report all findings and limit the financial involvement in drug companies by researchers. The US Congress signed into law a bill which requires Phase II and Phase III clinical trials to be registered by the sponsor on the clinicaltrials.gov website compiled by the National Institutes of Health. Drug researchers not directly employed by pharmaceutical companies often seek grants from manufacturers, and manufacturers often look to academic researchers to conduct studies within networks of universities and their hospitals, e.g., for translational cancer research. Similarly, competition for tenured academic positions, government grants and prestige create conflicts of interest among academic scientists. According to one study, approximately 75% of articles retracted for misconduct-related reasons have no declared industry financial support. Seeding trials are particularly controversial. 
In the United States, all clinical trials submitted to the FDA as part of a drug approval process are independently assessed by clinical experts within the Food and Drug Administration, including inspections of primary data collection at selected clinical trial sites. In 2001, the editors of 12 major journals issued a joint editorial, published in each journal, on the control over clinical trials exerted by sponsors, particularly targeting the use of contracts which allow sponsors to review the studies prior to publication and withhold publication. They strengthened editorial restrictions to counter the effect. The editorial noted that contract research organizations had, by 2000, received 60% of the grants from pharmaceutical companies in the US. Researchers may be restricted from contributing to the trial design, accessing the raw data, and interpreting the results. Despite explicit recommendations by stakeholders of measures to improve the standards of industry-sponsored medical research, in 2013, Tohen warned of the persistence of a gap in the credibility of conclusions arising from industry-funded clinical trials, and called for ensuring strict adherence to ethical standards in industrial collaborations with academia, in order to avoid further erosion of the public's trust. Issues referred for attention in this respect include potential observation bias, duration of the observation time for maintenance studies, the selection of the patient populations, factors that affect placebo response, and funding sources. === During public health crisis === Conducting clinical trials of vaccines during epidemics and pandemics is subject to ethical concerns. For diseases with high mortality rates like Ebola, assigning individuals to a placebo or control group can be viewed as a death sentence. In response to ethical concerns regarding clinical research during epidemics, the National Academy of Medicine authored a report identifying seven ethical and scientific considerations. === Pregnant women and children === Pregnant women and children are typically excluded from clinical trials as vulnerable populations, though the data to support excluding them is not robust. By excluding them from clinical trials, information about the safety and effectiveness of therapies for these populations is often lacking. During the early history of the HIV/AIDS epidemic, a scientist noted that by excluding these groups from potentially life-saving treatment, they were being "protected to death". Projects such as Research Ethics for Vaccines, Epidemics, and New Technologies (PREVENT) have advocated for the ethical inclusion of pregnant women in vaccine trials. Inclusion of children in clinical trials has additional moral considerations, as children lack decision-making autonomy. Trials in the past had been criticized for using hospitalized children or orphans; these ethical concerns effectively stopped future research. In efforts to maintain effective pediatric care, several European countries and the US have policies to entice or compel pharmaceutical companies to conduct pediatric trials. International guidance recommends ethical pediatric trials by limiting harm, considering varied risks, and taking into account the complexities of pediatric care.
== Safety == Responsibility for the safety of the subjects in a clinical trial is shared between the sponsor, the local site investigators (if different from the sponsor), the various IRBs that supervise the study, and (in some cases, if the study involves a marketable drug or device), the regulatory agency for the country where the drug or device will be sold. A systematic concurrent safety review is frequently employed to assure research participant safety. The conduct and on-going review is designed to be proportional to the risk of the trial. Typically this role is filled by a Data and Safety Committee, an externally appointed Medical Safety Monitor, an Independent Safety Officer, or for small or low-risk studies the principal investigator. For safety reasons, many clinical trials of drugs are designed to exclude women of childbearing age, pregnant women, or women who become pregnant during the study. In some cases, the male partners of these women are also excluded or required to take birth control measures. === Sponsor === Throughout the clinical trial, the sponsor is responsible for accurately informing the local site investigators of the true historical safety record of the drug, device or other medical treatments to be tested, and of any potential interactions of the study treatment(s) with already approved treatments. This allows the local investigators to make an informed judgment on whether to participate in the study or not. The sponsor is also responsible for monitoring the results of the study as they come in from the various sites as the trial proceeds. In larger clinical trials, a sponsor will use the services of a data monitoring committee (DMC, known in the US as a data safety monitoring board). This independent group of clinicians and statisticians meets periodically to review the unblinded data the sponsor has received so far. The DMC has the power to recommend termination of the study based on their review, for example if the study treatment is causing more deaths than the standard treatment, or seems to be causing unexpected and study-related serious adverse events. The sponsor is responsible for collecting adverse event reports from all site investigators in the study, and for informing all the investigators of the sponsor's judgment as to whether these adverse events were related or not related to the study treatment. The sponsor and the local site investigators are jointly responsible for writing a site-specific informed consent that accurately informs the potential subjects of the true risks and potential benefits of participating in the study, while at the same time presenting the material as briefly as possible and in ordinary language. FDA regulations state that participating in clinical trials is voluntary, with the subject having the right not to participate or to end participation at any time. === Local site investigators === The ethical principle of primum non-nocere ("first, do no harm") guides the trial, and if an investigator believes the study treatment may be harming subjects in the study, the investigator can stop participating at any time. On the other hand, investigators often have a financial interest in recruiting subjects, and could act unethically to obtain and maintain their participation. The local investigators are responsible for conducting the study according to the study protocol, and supervising the study staff throughout the duration of the study. 
The local investigator or his/her study staff are also responsible for ensuring the potential subjects in the study understand the risks and potential benefits of participating in the study. In other words, they (or their legally authorized representatives) must give truly informed consent. Local investigators are responsible for reviewing all adverse event reports sent by the sponsor. These adverse event reports contain the opinions of both the investigator (at the site where the adverse event occurred) and the sponsor, regarding the relationship of the adverse event to the study treatments. Local investigators also are responsible for making an independent judgment of these reports, and promptly informing the local IRB of all serious and study treatment-related adverse events. When a local investigator is the sponsor, there may not be formal adverse event reports, but study staff at all locations are responsible for informing the coordinating investigator of anything unexpected. The local investigator is responsible for being truthful to the local IRB in all communications relating to the study. === Institutional review boards (IRBs) === Approval by an Institutional Review Board (IRB), or Independent Ethics Committee (IEC), is necessary before all but the most informal research can begin. In commercial clinical trials, the study protocol is not approved by an IRB before the sponsor recruits sites to conduct the trial. However, the study protocol and procedures have been tailored to fit generic IRB submission requirements. In this case, and where there is no independent sponsor, each local site investigator submits the study protocol, the consent(s), the data collection forms, and supporting documentation to the local IRB. Universities and most hospitals have in-house IRBs. Other researchers (such as in walk-in clinics) use independent IRBs. The IRB scrutinizes the study both for medical safety and for protection of the patients involved in the study, before it allows the researcher to begin the study. It may require changes in study procedures or in the explanations given to the patient. A required yearly "continuing review" report from the investigator updates the IRB on the progress of the study and any new safety information related to the study. === Regulatory agencies === In the US, the FDA can audit the files of local site investigators after they have finished participating in a study, to see if they were correctly following study procedures. This audit may be random, or for cause (because the investigator is suspected of fraudulent data). Avoiding an audit is an incentive for investigators to follow study procedures. A 'covered clinical study' refers to a trial submitted to the FDA as part of a marketing application (for example, as part of an NDA or 510(k)), about which the FDA may require disclosure of financial interest of the clinical investigator in the outcome of the study. For example, the applicant must disclose whether an investigator owns equity in the sponsor, or owns proprietary interest in the product under investigation. The FDA defines a covered study as "... any study of a drug, biological product or device in humans submitted in a marketing application or reclassification petition that the applicant or FDA relies on to establish that the product is effective (including studies that show equivalence to an effective product) or any study in which a single investigator makes a significant contribution to the demonstration of safety." 
Alternatively, many American pharmaceutical companies have moved some clinical trials overseas. Benefits of conducting trials abroad include lower costs (in some countries) and the ability to run larger trials in shorter timeframes, whereas a potential disadvantage exists in lower-quality trial management. Different countries have different regulatory requirements and enforcement abilities. An estimated 40% of all clinical trials now take place in Asia, Eastern Europe, and Central and South America. "There is no compulsory registration system for clinical trials in these countries and many do not follow European directives in their operations", says Jacob Sijtsma of the Netherlands-based WEMOS, an advocacy health organisation tracking clinical trials in developing countries. Beginning in the 1980s, harmonization of clinical trial protocols was shown as feasible across countries of the European Union. At the same time, coordination between Europe, Japan and the United States led to a joint regulatory-industry initiative on international harmonization named after 1990 as the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) Currently, most clinical trial programs follow ICH guidelines, aimed at "ensuring that good quality, safe and effective medicines are developed and registered in the most efficient and cost-effective manner. These activities are pursued in the interest of the consumer and public health, to prevent unnecessary duplication of clinical trials in humans and to minimize the use of animal testing without compromising the regulatory obligations of safety and effectiveness." === Aggregation of safety data during clinical development === Aggregating safety data across clinical trials during drug development is important because trials are generally designed to focus on determining how well the drug works. The safety data collected and aggregated across multiple trials as the drug is developed allows the sponsor, investigators and regulatory agencies to monitor the aggregate safety profile of experimental medicines as they are developed. The value of assessing aggregate safety data is: a) decisions based on aggregate safety assessment during development of the medicine can be made throughout the medicine's development and b) it sets up the sponsor and regulators well for assessing the medicine's safety after the drug is approved. == Economics == Clinical trial costs vary depending on trial phase, type of trial, and disease studied. A study of clinical trials conducted in the United States from 2004 to 2012 found the average cost of Phase I trials to be between $1.4 million and $6.6 million, depending on the type of disease. Phase II trials ranged from $7 million to $20 million, and Phase III trials from $11 million to $53 million. === Sponsor === The cost of a study depends on many factors, especially the number of sites conducting the study, the number of patients involved, and whether the study treatment is already approved for medical use. 
The expenses incurred by a pharmaceutical company in administering a Phase III or IV clinical trial may include, among others:
* production of the drug(s) or device(s) being evaluated
* staff salaries for the designers and administrators of the trial
* payments to the contract research organization, the site management organization (if used) and any outside consultants
* payments to local researchers and their staff for their time and effort in recruiting test subjects and collecting data for the sponsor
* the cost of study materials and the charges incurred to ship them
* communication with the local researchers, including on-site monitoring by the CRO before and (in some cases) multiple times during the study
* one or more investigator training meetings
* expenses incurred by the local researchers, such as pharmacy fees, IRB fees and postage
* any payments to subjects enrolled in the trial
* the expense of treating a test subject who develops a medical condition caused by the study drug
These expenses are incurred over several years. In the US, sponsors may receive a 50 percent tax credit for clinical trials conducted on drugs being developed for the treatment of orphan diseases. National health agencies, such as the US National Institutes of Health, offer grants to investigators who design clinical trials that attempt to answer research questions of interest to the agency. In these cases, the investigator who writes the grant and administers the study acts as the sponsor, and coordinates data collection from any other sites. These other sites may or may not be paid for participating in the study, depending on the amount of the grant and the amount of effort expected from them. Using internet resources can, in some cases, reduce the economic burden. === Investigators === Investigators are often compensated for their work in clinical trials. These amounts can be small, just covering a partial salary for research assistants and the cost of any supplies (usually the case with national health agency studies), or be substantial and include "overhead" that allows the investigator to pay the research staff during times between clinical trials. === Subjects === Participants in Phase I drug trials do not gain any direct health benefit from taking part. They are generally paid a fee for their time, with payments regulated and not related to any risk involved. The motivations of healthy volunteers are not limited to financial reward and may include other motivations such as contributing to science. In later-phase trials, subjects may not be paid, in order to ensure that their motivation for participating is the potential for a health benefit or a contribution to medical knowledge. Small payments may be made for study-related expenses such as travel or as compensation for their time in providing follow-up information about their health after the trial treatment ends. == Participant recruitment and participation == Phase 0 and Phase I drug trials seek healthy volunteers. Most other clinical trials seek patients who have a specific disease or medical condition. The diversity observed in society should be reflected in clinical trials through the appropriate inclusion of ethnic minority populations. Patient recruitment or participant recruitment plays a significant role in the activities and responsibilities of sites conducting clinical trials. All volunteers being considered for a trial are required to undertake a medical screening.
Requirements differ according to the trial needs, but typically volunteers would be screened in a medical laboratory for:
* Measurement of the electrical activity of the heart (ECG)
* Measurement of blood pressure, heart rate, and body temperature
* Blood sampling
* Urine sampling
* Weight and height measurement
* Drug abuse testing
* Pregnancy testing
It has been observed that participants in clinical trials are disproportionately white. Often, minorities are not informed about clinical trials. One recent systematic review of the literature found that the race/ethnicity and sex of participants were often not well represented, and at times not even tracked, in a large number of clinical trials of hearing loss management in adults. This may reduce the validity of findings in respect of non-white patients by not adequately representing the larger populations. === Locating trials === Depending on the kind of participants required, sponsors of clinical trials, or contract research organizations working on their behalf, try to find sites with qualified personnel as well as access to patients who could participate in the trial. Working with those sites, they may use various recruitment strategies, including patient databases, newspaper and radio advertisements, flyers, posters in places the patients might go (such as doctor's offices), and personal recruitment of patients by investigators. Volunteers with specific conditions or diseases have additional online resources to help them locate clinical trials. For example, the Fox Trial Finder connects Parkinson's disease trials around the world to volunteers who have a specific set of criteria such as location, age, and symptoms. Other disease-specific services exist for volunteers to find trials related to their condition. Volunteers may search directly on ClinicalTrials.gov to locate trials using a registry run by the U.S. National Institutes of Health and National Library of Medicine. There also is software that allows clinicians to find trial options for an individual patient based on data such as genomic data. === Research === The risk information seeking and processing (RISP) model analyzes social implications that affect attitudes and decision making pertaining to clinical trials. People who hold a higher stake or interest in the treatment provided in a clinical trial showed a greater likelihood of seeking information about clinical trials. Cancer patients reported more optimistic attitudes towards clinical trials than the general population. Having a more optimistic outlook on clinical trials also leads to greater likelihood of enrolling. === Matching === Matching involves a systematic comparison of a patient's clinical and demographic information against the eligibility criteria of various trials. Methods include:
* Manual: Healthcare providers or clinical trial coordinators manually review patient records and available trial criteria to identify potential matches. This might also include manually searching in clinical trial databases.
* Electronic health records (EHR): Some systems integrate with EHRs to automatically flag patients that may be eligible for trials based on their medical data. These systems may leverage machine learning, artificial intelligence or precision medicine methods to more effectively match patients to trials. These methods are faced with the challenge of overcoming the limitations of EHRs, such as omissions and logging errors.
* Direct-to-patient services: Resources specialized to support patients in finding clinical trials through online platforms, hotlines, and personalized support.
== Decentralized trials == Although trials are commonly conducted at major medical centers, some participants are excluded due to the distance and expenses required for travel, leading to hardship, disadvantage, and inequity for participants, especially those in rural and underserved communities. Therefore, the concept of a "decentralized clinical trial", which minimizes or eliminates the need for patients to travel to sites, is now more widespread, a capability improved by telehealth and wearable technologies. == See also ==
* Outcome measure
* Odds algorithm
* Preregistration (science)
* Marketing authorisation
== References == == External links ==
* The International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use, a guideline for regulation of clinical trials
* ClinicalTrials.gov, a worldwide database of registered clinical trials; US National Library of Medicine
* Cochrane Central Register of Controlled Trials (CENTRAL); a concentrated source for bibliographic reports of randomized controlled trials
* ClinicalTrials.eu, European Clinical Trials Information Network; Clinical Trials easily understood.
* The Hidden World of Clinical Trials: A Journey into Medical Innovation - A blog providing insights into medical innovation in clinical trials.
Wikipedia/Clinical_trial
In an adaptive design of a clinical trial, the parameters and conduct of the trial for a candidate drug or vaccine may be changed based on an interim analysis. Adaptive design typically involves advanced statistics to interpret a clinical trial endpoint. This is in contrast to traditional single-arm (i.e. non-randomized) clinical trials or randomized clinical trials (RCTs) that are static in their protocol and do not modify any parameters until the trial is completed. The adaptation process takes place at certain points in the trial, prescribed in the trial protocol. Importantly, this trial protocol is set before the trial begins with the adaptation schedule and processes specified. Adaptions may include modifications to: dosage, sample size, drug undergoing trial, patient selection criteria and/or "cocktail" mix. The PANDA (A Practical Adaptive & Novel Designs and Analysis toolkit) provides not only a summary of different adaptive designs, but also comprehensive information on adaptive design planning, conduct, analysis and reporting. == Purpose == The aim of an adaptive trial is to more quickly identify drugs or devices that have a therapeutic effect, and to zero in on patient populations for whom the drug is appropriate. When conducted efficiently, adaptive trials have the potential to find new treatments while minimizing the number of patients exposed to the risks of clinical trials. Specifically, adaptive trials can efficiently discover new treatments by reducing the number of patients enrolled in treatment groups that show minimal efficacy or higher adverse-event rates. Adaptive trials can adjust almost any part of its design, based on pre-set rules and statistical design, such as sample size, adding new groups, dropping less effective groups and changing the probability of being randomized to a particular group, for example. == History == In 2004, a Strategic Path Initiative was introduced by the United States Food and Drug Administration (FDA) to modify the way drugs travel from lab to market. This initiative aimed at dealing with the high attrition levels observed in the clinical phase. It also attempted to offer flexibility to investigators to find the optimal clinical benefit without affecting the study's validity. Adaptive clinical trials initially came under this regime. The FDA issued draft guidance on adaptive trial design in 2010. In 2012, the President's Council of Advisors on Science and Technology (PCAST) recommended that the FDA "run pilot projects to explore adaptive approval mechanisms to generate evidence across the lifecycle of a drug from the pre-market through the post-market phase." While not specifically related to clinical trials, the council also recommended that they "make full use of accelerated approval for all drugs meeting the statutory standard of addressing an unmet need for a serious or life-threatening disease, and demonstrating an impact on a clinical endpoint other than survival or irreversible morbidity, or on a surrogate endpoint, likely to predict clinical benefit." By 2019, the FDA updated their 2010 recommendations and issued "Adaptive Design Clinical Trials for Drugs and Biologics Guidance". In October of 2021, the FDA Center for Veterinary Medicine issued the Guidance Document "Adaptive and Other Innovative Designs for Effectiveness Studies of New Animal Drugs". == Characteristics == Traditionally, clinical trials are conducted in three steps: The trial is designed. The trial is conducted as prescribed by the design. 
Once the data are ready, they are analysed according to a pre-specified analysis plan. == Types == === Overview === Any trial design that can change its design, during active enrollment, could be considered an adaptive clinical trial. There are a number of different types, and real life trials may combine elements from these different trial types: In some cases, trials have become an ongoing process that regularly adds and drops therapies and patient groups as more information is gained. === Dose finding design === Phase I of clinical research focuses on selecting a particular dose of a drug to carry forward into future trials. Historically, such trials have had a "rules-based" (or "algorithm-based") design, such as the 3+3 design. However, these "A+B" rules-based designs are not appropriate for phase I studies and are inferior to adaptive, model-based designs. An example of a superior design is the continual reassessment method (CRM). === Group sequential design === Group sequential design is the application of sequential analysis to clinical trials. At each interim analysis, investigators will use the current data to decide whether the trial should either stop or should continue to recruit more participants. The trial might stop either because the evidence that the treatment is working is strong ("stopping for benefit") or weak ("stopping for futility"). Whether a trial may stop for futility only, benefit only, or either, is stated in advance. A design has "binding stopping rules" when the trial must stop when a particular threshold of (either strong or weak) evidence is crossed at a particular interim analysis. Otherwise it has "non-binding stopping rules", in which case other information can be taken into account, for example safety data. The number of interim analyses is specified in advance, and can be anything from a single interim analysis (a "two-stage" design") to an interim analysis after every participant ("continuous monitoring"). For trials with a binary (response/no response) outcome and a single treatment arm, a popular and simple group sequential design with two stages is the Simon design. In this design, there is a single interim analysis partway through the trial, at which point the trial either stops for futility or continues to the second stage. Mander and Thomson also proposed a design with a single interim analysis, at which point the trial could stop for either futility or benefit. For single-arm, single-stage binary outcome trials, a trial's success or failure is determined by the number of responses observed by the end of the trial. This means that it may be possible to know the conclusion of the trial (success or failure) with certainty before all the data are available. Planning to stop a trial once the conclusion is known with certainty is called non-stochastic curtailment. This reduces the sample size on average. Planning to stop a trial when the probability of success, based on the results so far, is either above or below a certain threshold is called stochastic curtailment. This reduces the average sample size even more than non-stochastic curtailment. Stochastic and non-stochastic curtailment can also be used in two-arm binary outcome trials, where a trial's success or failure is determined by the number of responses observed on each arm by the end of the trial. == Usage == The adaptive design method developed mainly in the early 21st century. 
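As a concrete illustration of the two-stage stopping logic described in the Group sequential design subsection above, the following sketch simulates a single-arm, binary-outcome trial with one interim analysis at which the trial can stop early for futility. The sample sizes, thresholds, and response rates are illustrative assumptions, not values from any published design:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def run_two_stage_trial(p_true, n1=15, r1=3, n_total=40, r_total=12):
    """Simulate one single-arm trial with a Simon-type two-stage rule.

    Stage 1: treat n1 patients; if the number of responses is <= r1,
             stop early for futility.
    Stage 2: otherwise continue to n_total patients and declare success
             if the total number of responses exceeds r_total.
    All thresholds here are illustrative, not optimized values.
    Returns (declared_success, sample_size_used).
    """
    responses_1 = rng.binomial(1, p_true, size=n1).sum()
    if responses_1 <= r1:                       # stop for futility
        return False, n1
    responses_2 = rng.binomial(1, p_true, size=n_total - n1).sum()
    return responses_1 + responses_2 > r_total, n_total

def operating_characteristics(p_true, n_sim=20_000):
    results = [run_two_stage_trial(p_true) for _ in range(n_sim)]
    prob_success = np.mean([success for success, _ in results])
    expected_n = np.mean([n for _, n in results])
    return prob_success, expected_n

# Behaviour under an assumed "inactive" response rate and an "active" one.
for p in (0.2, 0.4):
    prob, e_n = operating_characteristics(p)
    print(f"true response rate {p:.1f}: "
          f"P(declare success) = {prob:.3f}, expected sample size = {e_n:.1f}")
```

In an actual design the four parameters would be chosen so that the probability of declaring success under the null response rate (the type I error) and the probability of missing a truly active treatment (the type II error) meet pre-specified limits; the simulation mainly shows how early stopping lowers the expected sample size when the treatment is inactive.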
In November 2019, the US Food and Drug Administration provided guidelines for using adaptive designs in clinical trials. === In 2020 COVID-19 related trials === In April 2020, the World Health Organization published an "R&D Blueprint (for the) novel Coronavirus" (Blueprint). The Blueprint documented a "large, international, multi-site, individually randomized controlled clinical trial" to allow "the concurrent evaluation of the benefits and risks of each promising candidate vaccine within 3–6 months of it being made available for the trial." The Blueprint listed a Global Target Product Profile (TPP) for COVID‑19, identifying favorable attributes of safe and effective vaccines under two broad categories: "vaccines for the long-term protection of people at higher risk of COVID-19, such as healthcare workers", and other vaccines to provide rapid-response immunity for new outbreaks. The international TPP team was formed to 1) assess the development of the most promising candidate vaccines; 2) map candidate vaccines and their clinical trials worldwide, publishing a frequently-updated "landscape" of vaccines in development; 3) rapidly evaluate and screen for the most promising candidate vaccines simultaneously before they are tested in humans; and 4) design and coordinate a multiple-site, international randomized controlled trial – the "Solidarity trial" for vaccines – to enable simultaneous evaluation of the benefits and risks of different vaccine candidates under clinical trials in countries where there are high rates of COVID‑19 disease, ensuring fast interpretation and sharing of results around the world. The WHO vaccine coalition prioritized which vaccines would go into Phase II and III clinical trials, and determined harmonized Phase III protocols for all vaccines achieving the pivotal trial stage. The global "Solidarity" and European "Discovery" trials of hospitalized people with severe COVID‑19 infection applied adaptive design to rapidly alter trial parameters as results from the four experimental therapeutic strategies emerged. The US National Institute of Allergy and Infectious Diseases (NIAID) initiated an adaptive design, international Phase III trial (called "ACTT") to involve up to 800 hospitalized people with COVID‑19 at 100 sites in multiple countries. === Breast cancer === An adaptive trial design enabled two experimental breast cancer drugs to deliver promising results after just six months of testing, far shorter than usual. Researchers assessed the results while the trial was in process and found that cancer had been eradicated in more than half of one group of patients. The trial, known as I-Spy 2, tested 12 experimental drugs. ==== I-SPY 1 ==== For its predecessor I-SPY 1, 10 cancer centers and the National Cancer Institute (NCI SPORE program and the NCI Cooperative groups) collaborated to identify response indicators that would best predict survival for women with high-risk breast cancer. During 2002–2006, the study monitored 237 patients undergoing neoadjuvant therapy before surgery. Iterative MRI scans and tissue samples were used to monitor patients' biological response to chemotherapy given in a neoadjuvant, or presurgical, setting. Evaluating chemotherapy's direct impact on tumor tissue took much less time than monitoring outcomes in thousands of patients over long time periods. The approach helped to standardize the imaging and tumor sampling processes, and led to miniaturized assays.
Key findings included that tumor response was a good predictor of patient survival, and that tumor shrinkage during treatment was a good predictor of long-term outcome. Importantly, the vast majority of tumors were identified as high risk by molecular signature. However, there was heterogeneity within this group of women, and measuring response within tumor subtypes was more informative than viewing the group as a whole. Within genetic signatures, level of response to treatment appears to be a reasonable predictor of outcome. Additionally, its shared database has furthered the understanding of drug response and generated new targets and agents for subsequent testing. ==== I-SPY 2 ==== I-SPY 2 is an adaptive clinical trial of multiple Phase 2 treatment regimens combined with standard chemotherapy. I-SPY 2 linked 19 academic cancer centers, two community centers, the FDA, the NCI, pharmaceutical and biotech companies, patient advocates and philanthropic partners. The trial is sponsored by the Biomarker Consortium of the Foundation for the NIH (FNIH), and is co-managed by the FNIH and QuantumLeap Healthcare Collaborative. I-SPY 2 was designed to explore the hypothesis that different combinations of cancer therapies have varying degrees of success for different patients. Conventional clinical trials that evaluate post-surgical tumor response require a separate trial with long intervals and large populations to test each combination. Instead, I-SPY 2 is organized as a continuous process. It efficiently evaluates multiple therapy regimes by relying on the predictors developed in I-SPY 1 that help quickly determine whether patients with a particular genetic signature will respond to a given treatment regime. The trial is adaptive in that the investigators learn as they go, and do not continue treatments that appear to be ineffective. All patients are categorized based on tissue and imaging markers collected early and iteratively (a patient's markers may change over time) throughout the trial, so that early insights can guide treatments for later patients. Treatments that show positive effects for a patient group can be ushered to confirmatory clinical trials, while those that do not can be rapidly sidelined. Importantly, confirmatory trials can serve as a pathway for FDA Accelerated Approval. I-SPY 2 can simultaneously evaluate candidates developed by multiple companies, escalating or eliminating drugs based on immediate results. Using a single standard arm for comparison for all candidates in the trial saves significant costs over individual Phase 3 trials. All data are shared across the industry. As of January 2016 I-SPY 2 is comparing 11 new treatments against 'standard therapy', and is estimated to complete in September 2017. By mid 2016 several treatments had been selected for later stage trials. === Alzheimer's === Researchers under the EPAD project by the Innovative Medicines Initiative are utilizing an adaptive trial design to help speed development of Alzheimer's disease treatments, with a budget of 53 million euros. The first trial under the initiative was expected to begin in 2015 and to involve about a dozen companies. As of 2020, 2,000 people over the age of 50 have been recruited across Europe for a long term study on the earliest stages of Alzheimer's. The EPAD project plans to use the results from this study and other data to inform 1,500-person adaptive clinical trials of selected drugs to prevent Alzheimer's.
== Bayesian designs == The adjustable nature of adaptive trials inherently suggests the use of Bayesian statistical analysis. Bayesian statistics naturally accommodates the updating of information that occurs in adaptive trials, where the design changes in response to interim analyses. According to FDA guidelines, an adaptive Bayesian clinical trial can involve:
* Interim looks to stop or to adjust patient accrual
* Interim looks to assess stopping the trial early either for success, futility or harm
* Reversing the hypothesis of non-inferiority to superiority or vice versa
* Dropping arms or doses or adjusting doses
* Modification of the randomization rate to increase the probability that a patient is allocated to the most appropriate treatment (or arm in the multi-armed bandit model)
The Bayesian framework Continuous Individualized Risk Index, which is based on dynamic measurements from cancer patients, can be effectively used for adaptive trial designs. Platform trials rely heavily on Bayesian designs. For regulatory submission of a Bayesian clinical trial design, there are two Bayesian decision rules that are frequently used by trial sponsors. First, the posterior probability approach is mainly used in decision-making to quantify the evidence addressing the question, "Do the current data provide convincing evidence in favor of the alternative hypothesis?" The key quantity of the posterior probability approach is the posterior probability that the alternative hypothesis is true given the data observed up to the point of analysis. Second, the predictive probability approach is mainly used in decision-making to answer the question at an interim analysis: "Is the trial likely to present compelling evidence in favor of the alternative hypothesis if we gather additional data, potentially up to the maximum sample size (or current sample size)?" The key quantity of the predictive probability approach is the posterior predictive probability of trial success given the interim data (see the sketch below). In most regulatory submissions, Bayesian trial designs are calibrated to possess good frequentist properties. In this spirit, and in adherence to regulatory practice, regulatory agencies typically recommend that sponsors provide the frequentist type I and II error rates for the sponsor's proposed Bayesian analysis plan. In other words, the Bayesian designs for regulatory submission need to satisfy the type I and II error requirements in most cases in the frequentist sense. Some exceptions may occur in the context of external data borrowing, where the type I error rate requirement can be relaxed to some degree depending on the confidence placed in the historical information. == Statistical analysis == The problem of adaptive clinical trial design is more or less exactly the bandit problem as studied in the field of reinforcement learning. == Added complexity == The logistics of managing traditional, non-adaptive design clinical trials may be complex. In adaptive design clinical trials, adapting the design as results arrive adds to the complexity of design, monitoring, drug supply, data capture and randomization. Furthermore, it should be stated in the trial's protocol exactly what kind of adaptation will be permitted.
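As a minimal numerical sketch of the two Bayesian decision rules described under Bayesian designs above, the following uses a conjugate Beta–Binomial model for a single-arm trial with a binary endpoint; the prior, the interim data, the null response rate, and the 0.95 final success threshold are all illustrative assumptions, not values from any guidance or trial:

```python
import numpy as np
from scipy import stats

# Illustrative interim data: 28 patients observed so far, 12 responses.
n_interim, x_interim = 28, 12
n_max = 50                 # maximum planned sample size (assumption)
p0 = 0.30                  # null response rate; alternative is p > p0
a0, b0 = 1, 1              # Beta(1, 1) prior (assumption)

# Posterior probability approach: P(p > p0 | interim data) under the
# conjugate Beta posterior.
posterior = stats.beta(a0 + x_interim, b0 + n_interim - x_interim)
print(f"posterior P(p > {p0}) = {posterior.sf(p0):.3f}")

# Predictive probability approach: probability that, once n_max patients
# have been observed, the final analysis will be "successful", here defined
# as posterior P(p > p0) >= 0.95. Future responses are averaged over the
# Beta-Binomial posterior predictive distribution.
n_remaining = n_max - n_interim
future_responses = np.arange(n_remaining + 1)
predictive_pmf = stats.betabinom.pmf(
    future_responses, n_remaining,
    a0 + x_interim, b0 + n_interim - x_interim)

final_success = np.array([
    stats.beta(a0 + x_interim + y, b0 + n_max - x_interim - y).sf(p0) >= 0.95
    for y in future_responses])

print("predictive probability of trial success =",
      round(float((predictive_pmf * final_success).sum()), 3))
```

Consistent with the regulatory practice noted above, a rule like this would normally also be evaluated by simulating its frequentist type I and type II error rates under fixed null and alternative response rates.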
Publishing the trial protocol in advance increases the validity of the final results, as it makes clear that any adaptation that took place during the trial was planned, rather than ad hoc. According to PCAST "One approach is to focus studies on specific subsets of patients most likely to benefit, identified based on validated biomarkers. In some cases, using appropriate biomarkers can make it possible to dramatically decrease the sample size required to achieve statistical significance—for example, from 1500 to 50 patients." Adaptive designs have added statistical complexity compared to traditional clinical trial designs. For example, any multiple testing, either from looking at multiple treatment arms or from looking at a single treatment arm multiple times, must be accounted for. Another example is statistical bias, which can be more likely when using adaptive designs, and again must be accounted for. While an adaptive design may be an improvement over a non-adaptive design in some respects (for example, expected sample size), it is not always the case that an adaptive design is a better choice overall: in some cases, the added complexity of the adaptive design may not justify its benefits. An example of this is when the trial is based on a measurement that takes a long time to observe, as this would mean having an interim analysis when many participants have started treatment but cannot yet contribute to the interim results. == Risks == Shorter trials may not reveal longer term risks, such as a cancer's return. == Resources (external links) == "What are adaptive clinical trials?" (video). youtube.com. Medical Research Council Biostatistics Unit. 17 November 2022. Burnett, Thomas; Mozgunov, Pavel; Pallmann, Philip; Villar, Sofia S.; Wheeler, Graham M.; Jaki, Thomas (2020). "Adding flexibility to clinical trial designs: An example-based guide to the practical use of adaptive designs". BMC Medicine. 18 (1): 352. doi:10.1186/s12916-020-01808-2. PMC 7677786. PMID 33208155. Jennison, Christopher; Turnbull, Bruce (1999). Group Sequential Methods with Applications to Clinical Trials. Taylor & Francis. ISBN 0849303168. Wason, James M. S.; Brocklehurst, Peter; Yap, Christina (2019). "When to keep it simple – adaptive designs are not always useful". BMC Medicine. 17 (1): 152. doi:10.1186/s12916-019-1391-9. PMC 6676635. PMID 31370839. Wheeler, Graham M.; Mander, Adrian P.; Bedding, Alun; Brock, Kristian; Cornelius, Victoria; Grieve, Andrew P.; Jaki, Thomas; Love, Sharon B.; Odondi, Lang'o; Weir, Christopher J.; Yap, Christina; Bond, Simon J. (2019). "How to design a dose-finding study using the continual reassessment method". BMC Medical Research Methodology. 19 (1): 18. doi:10.1186/s12874-018-0638-z. PMC 6339349. PMID 30658575. Grayling, Michael John; Wheeler, Graham Mark (2020). "A review of available software for adaptive clinical trial design". Clinical Trials. 17 (3): 323–331. doi:10.1177/1740774520906398. PMC 7736777. PMID 32063024. S2CID 189762427. == See also == == References == == Sources == Kurtz, Esfahani, Scherer (July 2019). "Dynamic Risk Profiling Using Serial Tumor Biomarkers for Personalized Outcome Prediction". Cell. 178 (3): 699–713.e19. doi:10.1016/j.cell.2019.06.011. PMC 7380118. PMID 31280963.{{cite journal}}: CS1 maint: multiple names: authors list (link) President's Council of Advisors on Science and Technology (September 2012). "Report To The President on Propelling Innovation in Drug Discovery, Development and Evaluation" (PDF). Executive Office of the President. 
Archived (PDF) from the original on 21 January 2017. Retrieved 4 January 2014. Brennan, Zachary (5 June 2013). "CROs Slowly Shifting to Adaptive Clinical Trial Designs". Outsourcing-pharma.com. Retrieved 5 January 2014. Spiegelhalter, David (April 2010). "Bayesian methods in clinical trials: Has there been any progress?" (PDF). Archived from the original (PDF) on 6 January 2014. Carlin, Bradley P. (25 March 2009). "Bayesian Adaptive Methods for Clinical Trial Design and Analysis" (PDF). == External links == Gottlieb K. (2016) The FDA adaptive trial design guidance in a nutshell - A review in Q&A format for decision makers. PeerJ Preprints 4:e1825v1 [1] Coffey, C. S.; Kairalla, J. A. (2008). "Adaptive clinical trials: Progress and challenges". Drugs in R&D. 9 (4): 229–242. doi:10.2165/00126839-200809040-00003. PMID 18588354. S2CID 11861515. Center for Drug Evaluation and Research (CDER); Center for Biologics Evaluation and Research (CBER) (February 2010). "Adaptive Design Clinical Trials for Drugs and Biologics" (PDF). Food and Drug Administration. Archived from the original (PDF) on 5 January 2014. Yi, Cheng; Yu, Shen. "Bayesian Adaptive Designs for Clinical Trials" (PDF). M. D. Anderson. Berry, Scott M.; Carlin, Bradley P.; Lee, J. Jack; Muller, Peter (20 July 2010). Bayesian Adaptive Methods for Clinical Trials. CRC Press. ISBN 978-1-4398-2551-8. Berry on BAMCT on YouTube Press, W. H. (2009). "Bandit solutions provide unified ethical models for randomized clinical trials and comparative effectiveness research". Proceedings of the National Academy of Sciences. 106 (52): 22387–92. doi:10.1073/pnas.0912378106. PMC 2793317. PMID 20018711.
Wikipedia/Adaptive_clinical_trial
Interest rate insurance protects the holder of a variable rate mortgage or loan from rising interest rates. It is generally offered independently of the original borrowing and typically as an alternative to a remortgage onto a fixed rate. As the insurance policy protects only against the risk of the repayments rising because of interest rates (and not of the borrower defaulting on repayments) there is no requirement for the insurer to check the credit status of the purchaser or the value of any secured asset. The absence of arrangement and valuation fees, bank and legal charges means that interest rate insurance can be cheaper to provide than a remortgage. The absence of credit checks and valuations means it can be made available to all holders of a variable rate loan. As interest rate insurance protects the holder from rising interest rates but does not raise their initial pay rate, if interest rates fall, the policyholder will see a benefit in reduced payments on their mortgage or loan when compared to a fixed rate alternative. == History (UK) == Monetary Policy Committee member Professor David Miles first highlighted interest rate insurance in the Miles Review in 2004 commissioned by Gordon Brown. Professor Miles suggested that it would provide greater security in housing finance. In the 2008 Budget, HM Treasury announced that the industry was ready to launch such a product. In July 2008 MarketGuard launched an interest rate insurance policy RateGuard. == References ==
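As a purely illustrative numeric comparison of the mechanism described above (the loan size, the rates, and the assumption that the policy effectively caps the borrower's interest rate at a chosen strike level are hypothetical, not terms of any actual product), the following sketch contrasts monthly interest costs for an uninsured variable rate, an insured variable rate, and a fixed rate alternative:

```python
# Hypothetical interest-only loan of 200,000, comparing monthly interest cost
# under (a) an uninsured variable rate, (b) a variable rate with insurance
# assumed to cover any interest above a 5% strike rate, and (c) a 5% fixed
# rate alternative. The policy premium itself is not modelled here.
principal = 200_000
strike_rate = 0.05
fixed_rate = 0.05

for variable_rate in (0.03, 0.05, 0.07):
    uninsured = principal * variable_rate / 12
    insured = principal * min(variable_rate, strike_rate) / 12
    fixed = principal * fixed_rate / 12
    print(f"variable rate {variable_rate:.0%}: "
          f"uninsured {uninsured:6.0f}, insured {insured:6.0f}, fixed {fixed:6.0f}")
```

The point mirrors the description above: when rates rise above the assumed strike level the insured borrower's cost stops rising, while when rates fall the insured borrower still benefits from lower payments, unlike a borrower who has moved to a fixed rate.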
Wikipedia/Interest_rate_insurance
In time series analysis, the partial autocorrelation function (PACF) gives the partial correlation of a stationary time series with its own lagged values, regressed the values of the time series at all shorter lags. It contrasts with the autocorrelation function, which does not control for other lags. This function plays an important role in data analysis aimed at identifying the extent of the lag in an autoregressive (AR) model. The use of this function was introduced as part of the Box–Jenkins approach to time series modelling, whereby plotting the partial autocorrelative functions one could determine the appropriate lags p in an AR (p) model or in an extended ARIMA (p,d,q) model. == Definition == Given a time series z t {\displaystyle z_{t}} , the partial autocorrelation of lag k {\displaystyle k} , denoted ϕ k , k {\displaystyle \phi _{k,k}} , is the autocorrelation between z t {\displaystyle z_{t}} and z t + k {\displaystyle z_{t+k}} with the linear dependence of z t {\displaystyle z_{t}} on z t + 1 {\displaystyle z_{t+1}} through z t + k − 1 {\displaystyle z_{t+k-1}} removed. Equivalently, it is the autocorrelation between z t {\displaystyle z_{t}} and z t + k {\displaystyle z_{t+k}} that is not accounted for by lags 1 {\displaystyle 1} through k − 1 {\displaystyle k-1} , inclusive. ϕ 1 , 1 = corr ⁡ ( z t + 1 , z t ) , for k = 1 , {\displaystyle \phi _{1,1}=\operatorname {corr} (z_{t+1},z_{t}),{\text{ for }}k=1,} ϕ k , k = corr ⁡ ( z t + k − z ^ t + k , z t − z ^ t ) , for k ≥ 2 , {\displaystyle \phi _{k,k}=\operatorname {corr} (z_{t+k}-{\hat {z}}_{t+k},\,z_{t}-{\hat {z}}_{t}),{\text{ for }}k\geq 2,} where z ^ t + k {\displaystyle {\hat {z}}_{t+k}} and z ^ t {\displaystyle {\hat {z}}_{t}} are linear combinations of { z t + 1 , z t + 2 , . . . , z t + k − 1 } {\displaystyle \{z_{t+1},z_{t+2},...,z_{t+k-1}\}} that minimize the mean squared error of z t + k {\displaystyle z_{t+k}} and z t {\displaystyle z_{t}} respectively. For stationary processes, the coefficients in z ^ t + k {\displaystyle {\hat {z}}_{t+k}} and z ^ t {\displaystyle {\hat {z}}_{t}} are the same, but reversed: z ^ t + k = β 1 z t + k − 1 + ⋯ + β k − 1 z t + 1 and z ^ t = β 1 z t + 1 + ⋯ + β k − 1 z t + k − 1 . {\displaystyle {\hat {z}}_{t+k}=\beta _{1}z_{t+k-1}+\cdots +\beta _{k-1}z_{t+1}\qquad {\text{and}}\qquad {\hat {z}}_{t}=\beta _{1}z_{t+1}+\cdots +\beta _{k-1}z_{t+k-1}.} == Calculation == The theoretical partial autocorrelation function of a stationary time series can be calculated by using the Durbin–Levinson Algorithm: ϕ n , n = ρ ( n ) − ∑ k = 1 n − 1 ϕ n − 1 , k ρ ( n − k ) 1 − ∑ k = 1 n − 1 ϕ n − 1 , k ρ ( k ) {\displaystyle \phi _{n,n}={\frac {\rho (n)-\sum _{k=1}^{n-1}\phi _{n-1,k}\rho (n-k)}{1-\sum _{k=1}^{n-1}\phi _{n-1,k}\rho (k)}}} where ϕ n , k = ϕ n − 1 , k − ϕ n , n ϕ n − 1 , n − k {\displaystyle \phi _{n,k}=\phi _{n-1,k}-\phi _{n,n}\phi _{n-1,n-k}} for 1 ≤ k ≤ n − 1 {\displaystyle 1\leq k\leq n-1} and ρ ( n ) {\displaystyle \rho (n)} is the autocorrelation function. The formula above can be used with sample autocorrelations to find the sample partial autocorrelation function of any given time series. == Examples == The following table summarizes the partial autocorrelation function of different models: The behavior of the partial autocorrelation function mirrors that of the autocorrelation function for autoregressive and moving-average models. 
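As a concrete illustration of the recursion above, the sketch below simulates an AR(2) series (the coefficients and the ±1.96/√n reference band are illustrative choices), computes sample autocorrelations, and applies the Durbin–Levinson recursion to obtain the sample partial autocorrelations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an illustrative AR(2) process: z_t = 0.6 z_{t-1} - 0.3 z_{t-2} + e_t
n = 2000
z = np.zeros(n)
for t in range(2, n):
    z[t] = 0.6 * z[t - 1] - 0.3 * z[t - 2] + rng.normal()

def sample_acf(x, max_lag):
    """Sample autocorrelations rho(0), ..., rho(max_lag)."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(max_lag + 1)])

def pacf_durbin_levinson(rho):
    """Partial autocorrelations phi_{k,k} via the Durbin-Levinson recursion."""
    max_lag = len(rho) - 1
    phi = np.zeros((max_lag + 1, max_lag + 1))
    phi[1, 1] = rho[1]
    for k in range(2, max_lag + 1):
        num = rho[k] - np.dot(phi[k - 1, 1:k], rho[k - 1:0:-1])
        den = 1.0 - np.dot(phi[k - 1, 1:k], rho[1:k])
        phi[k, k] = num / den
        for j in range(1, k):
            phi[k, j] = phi[k - 1, j] - phi[k, k] * phi[k - 1, k - j]
    return np.array([phi[k, k] for k in range(1, max_lag + 1)])

rho = sample_acf(z, max_lag=10)
pacf = pacf_durbin_levinson(rho)
band = 1.96 / np.sqrt(n)   # approximate 95% band for lags beyond the true order
for lag, value in enumerate(pacf, start=1):
    marker = "*" if abs(value) > band else ""
    print(f"lag {lag:2d}: sample PACF = {value:+.3f} {marker}")
# For this AR(2) simulation, only the first two lags should stand out,
# consistent with the cut-off behaviour discussed in the Examples section.
```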
For example, the partial autocorrelation function of an AR(p) series cuts off after lag p, similar to the autocorrelation function of an MA(q) series, which cuts off after lag q. In addition, the autocorrelation function of an AR(p) process tails off, just like the partial autocorrelation function of an MA(q) process. == Autoregressive model identification == Partial autocorrelation is a commonly used tool for identifying the order of an autoregressive model. As previously mentioned, the partial autocorrelation of an AR(p) process is zero at lags greater than p. If an AR model is determined to be appropriate, then the sample partial autocorrelation plot is examined to help identify the order. The partial autocorrelations at lags greater than p for an AR(p) time series are approximately independent and normally distributed with a mean of 0. Therefore, a confidence interval can be constructed by dividing a selected z-score by √n {\displaystyle {\sqrt {n}}} . Lags with partial autocorrelations outside of the confidence interval indicate that the AR model's order is likely greater than or equal to the lag. Plotting the partial autocorrelation function and drawing the lines of the confidence interval is a common way to analyze the order of an AR model. To evaluate the order, one examines the plot to find the lag after which the partial autocorrelations are all within the confidence interval; this lag is then taken as the likely order of the AR model. == References ==
Wikipedia/Partial_autocorrelation_function
Up-and-down designs (UDDs) are a family of statistical experiment designs used in dose-finding experiments in science, engineering, and medical research. Dose-finding experiments have binary responses: each individual outcome can be described as one of two possible values, such as success vs. failure or toxic vs. non-toxic. Mathematically the binary responses are coded as 1 and 0. The goal of dose-finding experiments is to estimate the strength of treatment (i.e., the "dose") that would trigger the "1" response a pre-specified proportion of the time. This dose can be envisioned as a percentile of the distribution of response thresholds. An example where dose-finding is used is in an experiment to estimate the LD50 of some toxic chemical with respect to mice. Dose-finding designs are sequential and response-adaptive: the dose at a given point in the experiment depends upon previous outcomes, rather than be fixed a priori. Dose-finding designs are generally more efficient for this task than fixed designs, but their properties are harder to analyze, and some require specialized design software. UDDs use a discrete set of doses rather than vary the dose continuously. They are relatively simple to implement, and are also among the best understood dose-finding designs. Despite this simplicity, UDDs generate random walks with intricate properties. The original UDD aimed to find the median threshold by increasing the dose one level after a "0" response, and decreasing it one level after a "1" response. Hence the name "up-and-down". Other UDDs break this symmetry in order to estimate percentiles other than the median, or are able to treat groups of subjects rather than one at a time. UDDs were developed in the 1940s by several research groups independently. The 1950s and 1960s saw rapid diversification with UDDs targeting percentiles other than the median, and expanding into numerous applied fields. The 1970s to early 1990s saw little UDD methods research, even as the design continued to be used extensively. A revival of UDD research since the 1990s has provided deeper understanding of UDDs and their properties, and new and better estimation methods. UDDs are still used extensively in the two applications for which they were originally developed: psychophysics where they are used to estimate sensory thresholds and are often known as fixed forced-choice staircase procedures, and explosive sensitivity testing, where the median-targeting UDD is often known as the Bruceton test. UDDs are also very popular in toxicity and anesthesiology research. They are also considered a viable choice for Phase I clinical trials. == Mathematical description == === Definition === Let n {\displaystyle n} be the sample size of a UDD experiment, and assuming for now that subjects are treated one at a time. Then the doses these subjects receive, denoted as random variables X 1 , … , X n {\displaystyle X_{1},\ldots ,X_{n}} , are chosen from a discrete, finite set of M {\displaystyle M} increasing dose levels X = { d 1 , … , d M : d 1 < ⋯ < d M } . {\displaystyle {\mathcal {X}}=\left\{d_{1},\ldots ,d_{M}:\ d_{1}<\cdots <d_{M}\right\}.} Furthermore, if X i = d m {\displaystyle X_{i}=d_{m}} , then X i + 1 ∈ { d m − 1 , d m , d m + 1 } , {\displaystyle X_{i+1}\in \{d_{m-1},d_{m},d_{m+1}\},} according to simple constant rules based on recent responses. The next subject must be treated one level up, one level down, or at the same level as the current subject. 
The responses themselves are denoted Y 1 , … , Y n ∈ { 0 , 1 } ; {\displaystyle Y_{1},\ldots ,Y_{n}\in \left\{0,1\right\};} hereafter the "1" responses are positive and "0" negative. The repeated application of the same rules (known as dose-transition rules) over a finite set of dose levels, turns X 1 , … , X n {\displaystyle X_{1},\ldots ,X_{n}} into a random walk over X {\displaystyle {\mathcal {X}}} . Different dose-transition rules produce different UDD "flavors", such as the three shown in the figure above. Despite the experiment using only a discrete set of dose levels, the dose-magnitude variable itself, x {\displaystyle x} , is assumed to be continuous, and the probability of positive response is assumed to increase continuously with increasing x {\displaystyle x} . The goal of dose-finding experiments is to estimate the dose x {\displaystyle x} (on a continuous scale) that would trigger positive responses at a pre-specified target rate Γ = P { Y = 1 ∣ X = x } , Γ ∈ ( 0 , 1 ) {\displaystyle \Gamma =P\left\{Y=1\mid X=x\right\},\ \ \Gamma \in (0,1)} ; often known as the "target dose". This problem can be also expressed as estimation of the quantile F − 1 ( Γ ) {\displaystyle F^{-1}(\Gamma )} of a cumulative distribution function describing the dose-toxicity curve F ( x ) {\displaystyle F(x)} . The density function f ( x ) {\displaystyle f(x)} associated with F ( x ) {\displaystyle F(x)} is interpretable as the distribution of response thresholds of the population under study. === Transition probability matrix === Given that a subject receives dose d m {\displaystyle d_{m}} , denote the probability that the next subject receives dose d m − 1 , d m {\displaystyle d_{m-1},d_{m}} , or d m + 1 {\displaystyle d_{m+1}} , as p m , m − 1 , p m m {\displaystyle p_{m,m-1},p_{mm}} or p m , m + 1 {\displaystyle p_{m,m+1}} , respectively. These transition probabilities obey the constraints p m , m − 1 + p m m + p m , m + 1 = 1 {\displaystyle p_{m,m-1}+p_{mm}+p_{m,m+1}=1} and the boundary conditions p 1 , 0 = p M , M + 1 = 0 {\displaystyle p_{1,0}=p_{M,M+1}=0} . Each specific set of UDD rules enables the symbolic calculation of these probabilities, usually as a function of F ( x ) {\displaystyle F(x)} . Assuming that transition probabilities are fixed in time, depending only upon the current allocation and its outcome, i.e., upon ( X i , Y i ) {\displaystyle \left(X_{i},Y_{i}\right)} and through them upon F ( x ) {\displaystyle F(x)} (and possibly on a set of fixed parameters). The probabilities are then best represented via a tri-diagonal transition probability matrix (TPM) P {\displaystyle \mathbf {P} } : P = ( p 11 p 12 0 ⋯ ⋯ 0 p 21 p 22 p 23 0 ⋱ ⋮ 0 ⋱ ⋱ ⋱ ⋱ ⋮ ⋮ ⋱ ⋱ ⋱ ⋱ 0 ⋮ ⋱ 0 p M − 1 , M − 2 p M − 1 , M − 1 p M − 1 , M 0 ⋯ ⋯ 0 p M , M − 1 p M M ) . {\displaystyle {\bf {{P}=\left({\begin{array}{cccccc}p_{11}&p_{12}&0&\cdots &\cdots &0\\p_{21}&p_{22}&p_{23}&0&\ddots &\vdots \\0&\ddots &\ddots &\ddots &\ddots &\vdots \\\vdots &\ddots &\ddots &\ddots &\ddots &0\\\vdots &\ddots &0&p_{M-1,M-2}&p_{M-1,M-1}&p_{M-1,M}\\0&\cdots &\cdots &0&p_{M,M-1}&p_{MM}\\\end{array}}\right).}}} === Balance point === Usually, UDD dose-transition rules bring the dose down (or at least bar it from escalating) after positive responses, and vice versa. Therefore, UDD random walks have a central tendency: dose assignments tend to meander back and forth around some dose x ∗ {\displaystyle x^{*}} that can be calculated from the transition rules, when those are expressed as a function of F ( x ) {\displaystyle F(x)} . 
This dose has often been confused with the experiment's formal target F − 1 ( Γ ) {\displaystyle F^{-1}(\Gamma )} , and the two are often identical - but they do not have to be. The target is the dose that the experiment is tasked with estimating, while x ∗ {\displaystyle x^{*}} , known as the "balance point", is approximately where the UDD's random walk revolves around. === Stationary distribution of dose allocations === Since UDD random walks are regular Markov chains, they generate a stationary distribution of dose allocations, π {\displaystyle \pi } , once the effect of the manually-chosen starting dose wears off. This means, long-term visit frequencies to the various doses will approximate a steady state described by π {\displaystyle \pi } . According to Markov chain theory the starting-dose effect wears off rather quickly, at a geometric rate. Numerical studies suggest that it would typically take between 2 / M {\displaystyle 2/M} and 4 / M {\displaystyle 4/M} subjects for the effect to wear off nearly completely. π {\displaystyle \pi } is also the asymptotic distribution of cumulative dose allocations. UDDs' central tendencies ensure that long-term, the most frequently visited dose (i.e., the mode of π {\displaystyle \pi } ) will be one of the two doses closest to the balance point x ∗ {\displaystyle x^{*}} . If x ∗ {\displaystyle x^{*}} is outside the range of allowed doses, then the mode will be on the boundary dose closest to it. Under the original median-finding UDD, the mode will be at the closest dose to x ∗ {\displaystyle x^{*}} in any case. Away from the mode, asymptotic visit frequencies decrease sharply, at a faster-than-geometric rate. Even though a UDD experiment is still a random walk, long excursions away from the region of interest are very unlikely. == Common UDDs == === Original ("simple" or "classical") UDD === The original "simple" or "classical" UDD moves the dose up one level upon a negative response, and vice versa. Therefore, the transition probabilities are p m , m + 1 = P { Y i = 0 | X i = d m } = 1 − F ( d m ) ; p m , m − 1 = P { Y i = 1 | X i = d m } = F ( d m ) . {\displaystyle {\begin{array}{rl}p_{m,m+1}&=P\{Y_{i}=0|X_{i}=d_{m}\}=1-F(d_{m});\\p_{m,m-1}&=P\{Y_{i}=1|X_{i}=d_{m}\}=F(d_{m}).\end{array}}} We use the original UDD as an example for calculating the balance point x ∗ {\displaystyle x^{*}} . The design's 'up', 'down' functions are p ( x ) = 1 − F ( x ) , q ( x ) = F ( x ) . {\displaystyle p(x)=1-F(x),q(x)=F(x).} We equate them to find F ∗ {\displaystyle F^{*}} : 1 − F ∗ = F ∗ ⟶ F ∗ = 0.5. {\displaystyle 1-F^{*}=F^{*}\ \longrightarrow \ F^{*}=0.5.} The "classical" UDD is designed to find the median threshold. This is a case where F ∗ = Γ . {\displaystyle F^{*}=\Gamma .} The "classical" UDD can be seen as a special case of each of the more versatile designs described below. === Durham and Flournoy's biased coin design === This UDD shifts the balance point, by adding the option of treating the next subject at the same dose rather than move only up or down. Whether to stay is determined by a random toss of a metaphoric "coin" with probability b = P { heads } . {\displaystyle b=P\{{\textrm {heads}}\}.} This biased-coin design (BCD) has two "flavors", one for F ∗ > 0.5 {\displaystyle F^{*}>0.5} and one for F ∗ < 0.5 , {\displaystyle F^{*}<0.5,} whose rules are shown below: X i + 1 = d m + 1 if Y i = 0 & 'heads' ; d m − 1 if Y _ i = 1 ; d m if Y i = 0 & 'tails' . 
{\displaystyle X_{i+1}={\begin{array}{ll}d_{m+1}&{\textrm {if}}\ \ Y_{i}=0\ \ \&\ \ {\textrm {'heads'}};\\d_{m-1}&{\textrm {if}}\ \ Y\_i=1;\\d_{m}&{\textrm {if}}\ \ Y_{i}=0\ \ \&\ \ {\textrm {'tails'}}.\\\end{array}}} The heads probability b {\displaystyle b} can take any value in [ 0 , 1 ] {\displaystyle [0,1]} . The balance point is b ( 1 − F ∗ ) = F ∗ F ∗ = b 1 + b ∈ [ 0 , 0.5 ] . {\displaystyle {\begin{array}{rcl}b\left(1-F^{*}\right)&=&F^{*}\\F^{*}&=&{\frac {b}{1+b}}\in [0,0.5].\end{array}}} The BCD balance point can made identical to a target rate F − 1 ( Γ ) {\displaystyle F^{-1}(\Gamma )} by setting the heads probability to b = Γ / ( 1 − Γ ) {\displaystyle b=\Gamma /(1-\Gamma )} . For example, for Γ = 0.3 {\displaystyle \Gamma =0.3} set b = 3 / 7 {\displaystyle b=3/7} . Setting b = 1 {\displaystyle b=1} makes this design identical to the classical UDD, and inverting the rules by imposing the coin toss upon positive rather than negative outcomes, produces above-median balance points. Versions with two coins, one for each outcome, have also been published, but they do not seem to offer an advantage over the simpler single-coin BCD. === Group (cohort) UDDs === Some dose-finding experiments, such as phase I trials, require a waiting period of weeks before determining each individual outcome. It may preferable then, to be able treat several subjects at once or in rapid succession. With group UDDs, the transition rules apply rules to cohorts of fixed size s {\displaystyle s} rather than to individuals. X i {\displaystyle X_{i}} becomes the dose given to cohort i {\displaystyle i} , and Y i {\displaystyle Y_{i}} is the number of positive responses in the i {\displaystyle i} -th cohort, rather than a binary outcome. Given that the i {\displaystyle i} -th cohort is treated at X i = d m {\displaystyle X_{i}=d_{m}} on the interior of X {\displaystyle {\mathcal {X}}} the i + 1 {\displaystyle i+1} -th cohort is assigned to X i + 1 = { d m + 1 if Y i ≤ l ; d m − 1 if Y i ≥ u ; d m if l < Y i < u . {\displaystyle X_{i+1}={\begin{cases}d_{m+1}&{\textrm {if}}\ \ Y_{i}\leq l;\\d_{m-1}&{\textrm {if}}\ \ Y_{i}\geq u;\\d_{m}&{\textrm {if}}\ \ l<Y_{i}<u.\end{cases}}} Y i {\displaystyle Y_{i}} follow a binomial distribution conditional on X i {\displaystyle X_{i}} , with parameters s {\displaystyle s} and F ( X i ) {\displaystyle F(X_{i})} . The up and down probabilities are the binomial distribution's tails, and the stay probability its center (it is zero if u = l + 1 {\displaystyle u=l+1} ). A specific choice of parameters can be abbreviated as GUD ( s , l , u ) . {\displaystyle _{(s,l,u)}.} Nominally, group UDDs generate s {\displaystyle s} -order random walks, since the s {\displaystyle s} most recent observations are needed to determine the next allocation. However, with cohorts viewed as single mathematical entities, these designs generate a first-order random walk having a tri-diagonal TPM as above. Some relevant group UDD subfamilies: Symmetric designs with l + u = s {\displaystyle l+u=s} (e.g., GUD ( 2 , 0 , 2 ) {\displaystyle _{(2,0,2)}} ) target the median. The family GUD ( s , 0 , 1 ) , {\displaystyle _{(s,0,1)},} encountered in toxicity studies, allows escalation only with zero positive responses, and de-escalate upon any positive response. The escalation probability at x {\displaystyle x} is ( 1 − F ( x ) ) s , {\displaystyle \left(1-F(x)\right)^{s},} and since this design does not allow for remaining at the same dose, at the balance point it will be exactly 1 / 2 {\displaystyle 1/2} . 
Therefore, F ∗ = 1 − ( 1 2 ) 1 / s . {\displaystyle F^{*}=1-\left({\frac {1}{2}}\right)^{1/s}.} The values s = 2 , 3 , 4 {\displaystyle s=2,3,4} are associated with F ∗ ≈ 0.293 , 0.206 {\displaystyle F^{*}\approx 0.293,0.206} and 0.159 {\displaystyle 0.159} , respectively. The mirror-image family GUD ( s , s − 1 , s ) {\displaystyle _{(s,s-1,s)}} has its balance points at one minus these probabilities. For general group UDDs, the balance point can be calculated only numerically, by finding the dose x ∗ {\displaystyle x^{*}} with toxicity rate F ∗ {\displaystyle F^{*}} such that ∑ r = u s ( s r ) ( F ∗ ) r ( 1 − F ∗ ) s − r = ∑ t = 0 l ( s t ) ( F ∗ ) t ( 1 − F ∗ ) s − t . {\displaystyle \sum _{r=u}^{s}\left({\begin{array}{c}s\\r\\\end{array}}\right)\left(F^{*}\right)^{r}(1-F^{*})^{s-r}=\sum _{t=0}^{l}\left({\begin{array}{c}s\\t\\\end{array}}\right)\left(F^{*}\right)^{t}(1-F^{*})^{s-t}.} Any numerical root-finding algorithm, e.g., Newton–Raphson, can be used to solve for F ∗ {\displaystyle F^{*}} . === k {\displaystyle k} -in-a-row (or "transformed" or "geometric") UDD === This is the most commonly used non-median UDD. It was introduced by Wetherill in 1963, and popularized by him and colleagues shortly thereafter in psychophysics, where it remains one of the standard methods to find sensory thresholds. Wetherill called it "transformed" UDD; Misrak Gezmu, who was the first to analyze its random-walk properties, called it "Geometric" UDD in the 1990s; and in the 2000s the more straightforward name " k {\displaystyle k} -in-a-row" UDD was adopted. The design's rules are deceptively simple: X i + 1 = { d m + 1 if Y i − k + 1 = ⋯ = Y i = 0 , all observed at d m ; d m − 1 if Y i = 1 ; d m otherwise , {\displaystyle X_{i+1}={\begin{cases}d_{m+1}&{\textrm {if}}\ \ Y_{i-k+1}=\cdots =Y_{i}=0,\ \ {\textrm {all}}\ {\textrm {observed}}\ {\textrm {at}}\ \ d_{m};\\d_{m-1}&{\textrm {if}}\ \ Y_{i}=1;\\d_{m}&{\textrm {otherwise}},\end{cases}}} Every dose escalation requires k {\displaystyle k} non-toxicities observed on consecutive data points, all at the current dose, while de-escalation only requires a single toxicity. It closely resembles GUD ( s , 0 , 1 ) {\displaystyle _{(s,0,1)}} described above, and indeed shares the same balance point. The difference is that k {\displaystyle k} -in-a-row can bail out of a dose level upon the first toxicity, whereas its group UDD sibling might treat the entire cohort at once, and therefore might see more than one toxicity before descending. The method used in sensory studies is actually the mirror-image of the one defined above, with k {\displaystyle k} successive responses required for a de-escalation and only one non-response for escalation, yielding F ∗ ≈ 0.707 , 0.794 , 0.841 , … {\displaystyle F^{*}\approx 0.707,0.794,0.841,\ldots } for k = 2 , 3 , 4 , … {\displaystyle k=2,3,4,\ldots } . k {\displaystyle k} -in-a-row generates a k {\displaystyle k} -th order random walk because knowledge of the last k {\displaystyle k} responses might be needed. It can be represented as a first-order chain with M k {\displaystyle Mk} states, or as a Markov chain with M {\displaystyle M} levels, each having k {\displaystyle k} internal states labeled 0 {\displaystyle 0} to k − 1 {\displaystyle k-1} . The internal state serves as a counter of the number of immediately recent consecutive non-toxicities observed at the current dose.
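To make the escalation rule and the internal counter concrete, here is a minimal simulation sketch in Python; the five-level dose-toxicity curve, the starting level, and the truncation at the boundary doses are illustrative assumptions rather than part of any particular published design.

import random

def simulate_k_in_a_row(F, k=2, n_subjects=40, start_level=0, seed=1):
    """Simulate a k-in-a-row UDD; F[m] is the (assumed) toxicity probability at level m."""
    random.seed(seed)
    M = len(F)
    level, counter = start_level, 0          # counter = consecutive non-toxicities at the current dose
    allocations = []
    for _ in range(n_subjects):
        allocations.append(level)
        toxic = random.random() < F[level]
        if toxic:                            # any toxicity: de-escalate (truncated at the lowest level)
            level, counter = max(level - 1, 0), 0
        else:
            counter += 1
            if counter == k:                 # k consecutive non-toxicities at this dose: escalate
                level, counter = min(level + 1, M - 1), 0
    return allocations

F = [0.05, 0.15, 0.30, 0.50, 0.70]           # hypothetical dose-toxicity curve
print(simulate_k_in_a_row(F, k=2))
print(1 - 0.5 ** (1 / 2))                    # balance point F* = 1 - (1/2)^(1/k), about 0.293 for k = 2

With these assumed values, the allocations tend to concentrate around the level whose toxicity rate is closest to about 0.293, consistent with the stationary-distribution behaviour described earlier.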
This description is closer to the physical dose-allocation process, because subjects at different internal states of the level m {\displaystyle m} , are all assigned the same dose d m {\displaystyle d_{m}} . Either way, the TPM is M k × M k {\displaystyle Mk\times Mk} (or more precisely, [ ( M − 1 ) k + 1 ) ] × [ ( M − 1 ) k + 1 ) ] {\displaystyle \left[(M-1)k+1)\right]\times \left[(M-1)k+1)\right]} , because the internal counter is meaningless at the highest dose) - and it is not tridiagonal. Here is the expanded k {\displaystyle k} -in-a-row TPM with k = 2 {\displaystyle k=2} and M = 5 {\displaystyle M=5} , using the abbreviation F m ≡ F ( d m ) . {\displaystyle F_{m}\equiv F\left(d_{m}\right).} Each level's internal states are adjacent to each other. [ F 1 1 − F 1 0 0 0 0 0 0 0 F 1 0 1 − F 1 0 0 0 0 0 0 F 2 0 0 1 − F 2 0 0 0 0 0 F 2 0 0 0 1 − F 2 0 0 0 0 0 0 F 3 0 0 1 − F 3 0 0 0 0 0 F 3 0 0 0 1 − F 3 0 0 0 0 0 0 F 4 0 0 1 − F 4 0 0 0 0 0 F 4 0 0 0 1 − F 4 0 0 0 0 0 0 F 5 0 1 − F 5 ] . {\displaystyle {\begin{bmatrix}F_{1}&1-F_{1}&0&0&0&0&0&0&0\\F_{1}&0&1-F_{1}&0&0&0&0&0&0\\F_{2}&0&0&1-F_{2}&0&0&0&0&0\\F_{2}&0&0&0&1-F_{2}&0&0&0&0\\0&0&F_{3}&0&0&1-F_{3}&0&0&0\\0&0&F_{3}&0&0&0&1-F_{3}&0&0\\0&0&0&0&F_{4}&0&0&1-F_{4}&0\\0&0&0&0&F_{4}&0&0&0&1-F_{4}\\0&0&0&0&0&0&F_{5}&0&1-F_{5}\\\end{bmatrix}}.} k {\displaystyle k} -in-a-row is often considered for clinical trials targeting a low-toxicity dose. In this case, the balance point and the target are not identical; rather, k {\displaystyle k} is chosen to aim close to the target rate, e.g., k = 2 {\displaystyle k=2} for studies targeting the 30th percentile, and k = 3 {\displaystyle k=3} for studies targeting the 20th percentile. == Estimating the target dose == Unlike other design approaches, UDDs do not have a specific estimation method "bundled in" with the design as a default choice. Historically, the more common choice has been some weighted average of the doses administered, usually excluding the first few doses to mitigate the starting-point bias. This approach antedates deeper understanding of UDDs' Markov properties, but its success in numerical evaluations relies upon the eventual sampling from π {\displaystyle \pi } , since the latter is centered roughly around x ∗ . {\displaystyle x^{*}.} The single most popular among these averaging estimators was introduced by Wetherill et al. in 1966, and only includes reversal points (points where the outcome switches from 0 to 1 or vice versa) in the average. In recent years, the limitations of averaging estimators have come to light, in particular the many sources of bias that are very difficult to mitigate. Reversal estimators suffer from both multiple biases (although there is some inadvertent cancelling out of biases), and increased variance due to using a subsample of doses. However, the knowledge about averaging-estimator limitations has yet to disseminate outside the methodological literature and affect actual practice. By contrast, regression estimators attempt to approximate the curve y = F ( x ) {\displaystyle y=F(x)} describing the dose-response relationship, in particular around the target percentile. The raw data for the regression are the doses d m {\displaystyle d_{m}} on the horizontal axis, and the observed toxicity frequencies, F ^ m = ∑ i = 1 n Y i I [ X i = d m ] ∑ i = 1 n I [ X i = d m ] , m = 1 , … , M , {\displaystyle {\hat {F}}_{m}={\frac {\sum _{i=1}^{n}Y_{i}I\left[X_{i}=d_{m}\right]}{\sum _{i=1}^{n}I\left[X_{i}=d_{m}\right]}},\ m=1,\ldots ,M,} on the vertical axis. 
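As a small illustration of how these raw frequencies might be tabulated and smoothed, the sketch below uses made-up allocation records and ordinary (not centered) isotonic regression from scikit-learn as one possible smoother; the dose codes, responses and weighting choice are all hypothetical.

import numpy as np
from sklearn.isotonic import IsotonicRegression

# hypothetical records: dose level given to each subject and the binary response observed
doses = np.array([1, 2, 3, 3, 2, 3, 4, 3, 2, 3, 3, 4, 3, 2, 3])
y     = np.array([0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1])

levels = np.unique(doses)
F_hat  = np.array([y[doses == d].mean() for d in levels])     # observed toxicity frequency per dose
n_m    = np.array([(doses == d).sum() for d in levels])       # number of subjects per dose

iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
iso.fit(levels, F_hat, sample_weight=n_m)                     # monotone fit, weighted by sample size
print(dict(zip(levels.tolist(), np.round(iso.predict(levels), 3))))

Weighting each dose by the number of subjects treated there is one natural choice; interpolating the fitted curve and reading off where it crosses the target rate then gives the regression-based estimate, as described next.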
The target estimate is the abscissa of the point where the fitted curve crosses y = Γ . {\displaystyle y=\Gamma .} Probit regression has been used for many decades to estimate UDD targets, although far less commonly than the reversal-averaging estimator. In 2002, Stylianou and Flournoy introduced an interpolated version of isotonic regression (IR) to estimate UDD targets and other dose-response data. More recently, a modification called "centered isotonic regression" (CIR) was developed by Oron and Flournoy, promising substantially better estimation performance than ordinary isotonic regression in most cases, and also offering the first viable interval estimator for isotonic regression in general. Isotonic regression estimators appear to be the most compatible with UDDs, because both approaches are nonparametric and relatively robust. The publicly available R package "cir" implements both CIR and IR for dose-finding and other applications. == References ==
Wikipedia/Up-and-Down_Designs
In statistics, first-hitting-time models are simplified models that estimate the amount of time that passes before some random or stochastic process crosses a barrier, boundary or reaches a specified state, termed the first hitting time, or the first passage time. Accurate models give insight into the physical system under observation, and have been the topic of research in very diverse fields, from economics to ecology. The idea that a first hitting time of a stochastic process might describe the time to occurrence of an event has a long history, starting with an interest in the first passage time of Wiener diffusion processes in economics and then in physics in the early 1900s. Modeling the probability of financial ruin as a first passage time was an early application in the field of insurance. An interest in the mathematical properties of first-hitting-times and statistical models and methods for analysis of survival data appeared steadily between the middle and end of the 20th century. First-hitting-time models are a sub-class of survival models. == Examples == A common example of a first-hitting-time model is a ruin problem, such as Gambler's ruin. In this example, an entity (often described as a gambler or an insurance company) has an amount of money which varies randomly with time, possibly with some drift. The model considers the event that the amount of money reaches 0, representing bankruptcy. The model can answer questions such as the probability that this occurs within finite time, or the mean time until which it occurs. First-hitting-time models can be applied to expected lifetimes, of patients or mechanical devices. When the process reaches an adverse threshold state for the first time, the patient dies, or the device breaks down. The time for a particle to escape through a narrow opening in a confined space is termed the narrow escape problem, and is commonly studied in biophysics and cellular biology. == First passage time of a 1D Brownian particle == One of the simplest and omnipresent stochastic systems is that of the Brownian particle in one dimension. This system describes the motion of a particle which moves stochastically in one dimensional space, with equal probability of moving to the left or to the right. Given that Brownian motion is used often as a tool to understand more complex phenomena, it is important to understand the probability of a first passage time of the Brownian particle of reaching some position distant from its start location. This is done through the following means. The probability density function (PDF) for a particle in one dimension is found by solving the one-dimensional diffusion equation. (This equation states that the position probability density diffuses outward over time. It is analogous to say, cream in a cup of coffee if the cream was all contained within some small location initially. After a long time the cream has diffused throughout the entire drink evenly.) Namely, ∂ p ( x , t ∣ x 0 ) ∂ t = D ∂ 2 p ( x , t ∣ x 0 ) ∂ x 2 , {\displaystyle {\frac {\partial p(x,t\mid x_{0})}{\partial t}}=D{\frac {\partial ^{2}p(x,t\mid x_{0})}{\partial x^{2}}},} given the initial condition p ( x , t = 0 ∣ x 0 ) = δ ( x − x 0 ) {\displaystyle p(x,t={0}\mid x_{0})=\delta (x-x_{0})} ; where x ( t ) {\displaystyle x(t)} is the position of the particle at some given time, x 0 {\displaystyle x_{0}} is the tagged particle's initial position, and D {\displaystyle D} is the diffusion constant with the S.I. 
units m 2 s − 1 {\displaystyle m^{2}s^{-1}} (an indirect measure of the particle's speed). The bar in the argument of the instantaneous probability refers to the conditional probability. The diffusion equation states that the rate of change over time in the probability of finding the particle at x ( t ) {\displaystyle x(t)} position depends on the deceleration over distance of such probability at that position. It can be shown that the one-dimensional PDF is p ( x , t ; x 0 ) = 1 4 π D t exp ⁡ ( − ( x − x 0 ) 2 4 D t ) . {\displaystyle p(x,t;x_{0})={\frac {1}{\sqrt {4\pi Dt}}}\exp \left(-{\frac {(x-x_{0})^{2}}{4Dt}}\right).} This states that the probability of finding the particle at x ( t ) {\displaystyle x(t)} is Gaussian, and the width of the Gaussian is time dependent. More specifically the Full Width at Half Maximum (FWHM) – technically, this is actually the Full Duration at Half Maximum as the independent variable is time – scales like F W H M ∼ t . {\displaystyle {\rm {FWHM}}\sim {\sqrt {t}}.} Using the PDF one is able to derive the average of a given function, L {\displaystyle L} , at time t {\displaystyle t} : ⟨ L ( t ) ⟩ ≡ ∫ − ∞ ∞ L ( x , t ) p ( x , t ) d x , {\displaystyle \langle L(t)\rangle \equiv \int _{-\infty }^{\infty }L(x,t)p(x,t)\,dx,} where the average is taken over all space (or any applicable variable). The First Passage Time Density (FPTD) is the probability that a particle has first reached a point x c {\displaystyle x_{c}} at exactly time t {\displaystyle t} (not at some time during the interval up to t {\displaystyle t} ). This probability density is calculable from the Survival probability (a more common probability measure in statistics). Consider the absorbing boundary condition p ( x c , t ) = 0 {\displaystyle p(x_{c},t)=0} (The subscript c for the absorption point x c {\displaystyle x_{c}} is an abbreviation for cliff used in many texts as an analogy to an absorption point). The PDF satisfying this boundary condition is given by p ( x , t ; x 0 , x c ) = 1 4 π D t ( exp ⁡ ( − ( x − x 0 ) 2 4 D t ) − exp ⁡ ( − ( x − ( 2 x c − x 0 ) ) 2 4 D t ) ) , {\displaystyle p(x,t;x_{0},x_{c})={\frac {1}{\sqrt {4\pi Dt}}}\left(\exp \left(-{\frac {(x-x_{0})^{2}}{4Dt}}\right)-\exp \left(-{\frac {(x-(2x_{c}-x_{0}))^{2}}{4Dt}}\right)\right),} for x < x c {\displaystyle x<x_{c}} . The survival probability, the probability that the particle has remained at a position x < x c {\displaystyle x<x_{c}} for all times up to t {\displaystyle t} , is given by S ( t ) ≡ ∫ − ∞ x c p ( x , t ; x 0 , x c ) d x = erf ⁡ ( x c − x 0 2 D t ) , {\displaystyle S(t)\equiv \int _{-\infty }^{x_{c}}p(x,t;x_{0},x_{c})\,dx=\operatorname {erf} \left({\frac {x_{c}-x_{0}}{2{\sqrt {Dt}}}}\right),} where erf {\displaystyle \operatorname {erf} } is the error function. The relation between the Survival probability and the FPTD is as follows: the probability that a particle has reached the absorption point between times t {\displaystyle t} and t + d t {\displaystyle t+dt} is f ( t ) d t = S ( t ) − S ( t + d t ) {\displaystyle f(t)\,dt=S(t)-S(t+dt)} . If one uses the first-order Taylor approximation, the definition of the FPTD follows): f ( t ) = − ∂ S ( t ) ∂ t . {\displaystyle f(t)=-{\frac {\partial S(t)}{\partial t}}.} By using the diffusion equation and integrating, the explicit FPTD is f ( t ) ≡ | x c − x 0 | 4 π D t 3 exp ⁡ ( − ( x c − x 0 ) 2 4 D t ) . 
{\displaystyle f(t)\equiv {\frac {|x_{c}-x_{0}|}{\sqrt {4\pi Dt^{3}}}}\exp \left(-{\frac {(x_{c}-x_{0})^{2}}{4Dt}}\right).} The first-passage time for a Brownian particle therefore follows a Lévy distribution. For t ≫ ( x c − x 0 ) 2 4 D {\displaystyle t\gg {\frac {(x_{c}-x_{0})^{2}}{4D}}} , it follows from above that f ( t ) = Δ x 4 π D t 3 ∼ t − 3 / 2 , {\displaystyle f(t)={\frac {\Delta x}{\sqrt {4\pi Dt^{3}}}}\sim t^{-3/2},} where Δ x ≡ | x c − x 0 | {\displaystyle \Delta x\equiv |x_{c}-x_{0}|} . This equation states that the probability for a Brownian particle achieving a first passage at some long time (defined in the paragraph above) becomes increasingly small, but is always finite. The first moment of the FPTD diverges (as it is a so-called heavy-tailed distribution), therefore one cannot calculate the average FPT, so instead, one can calculate the typical time, the time when the FPTD is at a maximum ( ∂ f / ∂ t = 0 {\displaystyle \partial f/\partial t=0} ), i.e., τ t y = Δ x 2 6 D . {\displaystyle \tau _{\rm {ty}}={\frac {\Delta x^{2}}{6D}}.} == First-hitting-time applications in many families of stochastic processes == First hitting times are central features of many families of stochastic processes, including Poisson processes, Wiener processes, gamma processes, and Markov chains, to name but a few. The state of the stochastic process may represent, for example, the strength of a physical system, the health of an individual, or the financial condition of a business firm. The system, individual or firm fails or experiences some other critical endpoint when the process reaches a threshold state for the first time. The critical event may be an adverse event (such as equipment failure, congested heart failure, or lung cancer) or a positive event (such as recovery from illness, discharge from hospital stay, child birth, or return to work after traumatic injury). The lapse of time until that critical event occurs is usually interpreted generically as a ‘survival time’. In some applications, the threshold is a set of multiple states so one considers competing first hitting times for reaching the first threshold in the set, as is the case when considering competing causes of failure in equipment or death for a patient. == Threshold regression: first-hitting-time regression == Practical applications of theoretical models for first hitting times often involve regression structures. When first hitting time models are equipped with regression structures, accommodating covariate data, we call such regression structure threshold regression. The threshold state, parameters of the process, and even time scale may depend on corresponding covariates. Threshold regression as applied to time-to-event data has emerged since the start of this century and has grown rapidly, as described in a 2006 survey article and its references. Connections between threshold regression models derived from first hitting times and the ubiquitous Cox proportional hazards regression model was investigated in. Applications of threshold regression range over many fields, including the physical and natural sciences, engineering, social sciences, economics and business, agriculture, health and medicine. == Latent vs observable == In many real world applications, a first-hitting-time (FHT) model has three underlying components: (1) a parent stochastic process { X ( t ) } {\displaystyle \{X(t)\}\,\,} , which might be latent, (2) a threshold (or the barrier) and (3) a time scale. 
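As a purely illustrative sketch of these three components in the simplest observable case, the Brownian example above can be simulated directly: the parent process is a discretized driftless Brownian path, the threshold is a fixed level x_c, and the time scale is the simulation step. All parameter values below are arbitrary, and the discrete-time crossing check slightly overestimates survival.

import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
D, x0, xc = 1.0, 0.0, 2.0                        # diffusion constant, start, absorbing threshold
dt, n_steps, n_paths = 1e-3, 5_000, 1_000        # time step, horizon T = 5, number of paths

steps = rng.normal(0.0, sqrt(2 * D * dt), size=(n_paths, n_steps))   # increments with variance 2*D*dt
paths = x0 + np.cumsum(steps, axis=1)

crossed = paths >= xc
hit = crossed.any(axis=1)
first_step = np.where(hit, crossed.argmax(axis=1) + 1, n_steps + 1)  # step of first crossing
fpt = first_step * dt                                                # first-passage times (censored at T)

T = n_steps * dt
print("simulated survival S(T):", np.mean(fpt > T))
print("analytic survival S(T):", erf((xc - x0) / (2 * sqrt(D * T))))
print("typical hitting time (x_c - x_0)^2 / 6D:", (xc - x0) ** 2 / (6 * D))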
The first hitting time is defined as the time when the stochastic process first reaches the threshold. It is very important to distinguish whether the sample path of the parent process is latent (i.e., unobservable) or observable, and such distinction is a characteristic of the FHT model. By far, latent processes are most common. To give an example, we can use a Wiener process { X ( t ) , t ≥ 0 } {\displaystyle \{X(t),t\geq 0\,\}\,} as the parent stochastic process. Such Wiener process can be defined with the mean parameter μ {\displaystyle {\mu }\,\,} , the variance parameter σ 2 {\displaystyle {\sigma ^{2}}\,\,} , and the initial value X ( 0 ) = x 0 > 0 {\displaystyle X(0)=x_{0}>0\,} . == Operational or analytical time scale == The time scale of the stochastic process may be calendar or clock time or some more operational measure of time progression, such as mileage of a car, accumulated wear and tear on a machine component or accumulated exposure to toxic fumes. In many applications, the stochastic process describing the system state is latent or unobservable and its properties must be inferred indirectly from censored time-to-event data and/or readings taken over time on correlated processes, such as marker processes. The word ‘regression’ in threshold regression refers to first-hitting-time models in which one or more regression structures are inserted into the model in order to connect model parameters to explanatory variables or covariates. The parameters given regression structures may be parameters of the stochastic process, the threshold state and/or the time scale itself. == See also == Narrow escape problem Survival analysis Proportional hazards models == References == Whitmore, G. A. (1986). "First passage time models for duration data regression structures and competing risks". The Statistician. 35 (2): 207–219. doi:10.2307/2987525. JSTOR 2987525. Whitmore, G. A. (1995). "Estimating degradation by a Wiener diffusion process subject to measurement error". Lifetime Data Analysis. 1 (3): 307–319. doi:10.1007/BF00985762. PMID 9385107. S2CID 28077957. Whitmore, G. A.; Crowder, M. J.; Lawless, J. F. (1998). "Failure inference from a marker process based on a bivariate Wiener model". Lifetime Data Analysis. 4 (3): 229–251. doi:10.1023/A:1009617814586. PMID 9787604. S2CID 43301120. Redner, S. (2001). A Guide to First-Passage Processes. Cambridge University Press. ISBN 0-521-65248-0. Lee, M.-L. T.; Whitmore, G. A. (2006). "Threshold regression for survival analysis: Modeling event times by a stochastic process". Statistical Science. 21 (4): 501–513. arXiv:0708.0346. doi:10.1214/088342306000000330. S2CID 88518120. Bachelier, L. (1900). "Théorie de la Spéculation". Annales Scientifiques de l'École Normale Supérieure. 3 (17): 21–86. doi:10.24033/asens.476. Schrodinger, E. (1915). "Zur Theorie der Fall-und Steigversuche an Teilchen mit Brownscher Bewegung". Physikalische Zeitschrift. 16: 289–295. Smoluchowski, M. V. (1915). "Notiz über die Berechnung der Brownschen Molekularbewegung bei der Ehrenhaft-millikanschen Versuchsanordnung". Physikalische Zeitschrift. 16: 318–321. Lundberg, F. (1903). Approximerad Framställning av Sannolikehetsfunktionen, Återförsäkering av Kollektivrisker. Almqvist & Wiksell, Uppsala. Tweedie, M. C. K. (1945). "Inverse statistical variates". Nature. 155 (3937): 453. Bibcode:1945Natur.155..453T. doi:10.1038/155453a0. Tweedie, M. C. K. (1957). "Statistical properties of inverse Gaussian distributions – I". Annals of Mathematical Statistics. 28 (2): 362–377. 
doi:10.1214/aoms/1177706964. Tweedie, M. C. K. (1957). "Statistical properties of inverse Gaussian distributions – II". Annals of Mathematical Statistics. 28 (3): 696–705. doi:10.1214/aoms/1177706881. Whitmore, G. A.; Neufeldt, A. H. (1970). "An application of statistical models in mental health research". Bull. Math. Biophys. 32 (4): 563–579. doi:10.1007/BF02476771. PMID 5513393. Lancaster, T. (1972). "A stochastic model for the duration of a strike". J. Roy. Statist. Soc. Ser. A. 135 (2): 257–271. doi:10.2307/2344321. JSTOR 2344321. Cox, D. R. (1972). "Regression models and life tables (with discussion)". J R Stat Soc Ser B. 187: 187–230. doi:10.1111/j.2517-6161.1972.tb00899.x. Lee, M.-L. T.; Whitmore, G. A. (2010). "Threshold Proportional hazards and threshold regression: their theoretical and practical connections". Lifetime Data Analysis. 16 (2): 196–214. doi:10.1007/s10985-009-9138-0. PMC 6447409. PMID 19960249. Aaron, S. D.; Ramsay, T.; Vandemheen, K.; Whitmore, G. A. (2010). "A threshold regression model for recurrent exacerbations in chronic obstructive pulmonary disease". Journal of Clinical Epidemiology. 63 (12): 1324–1331. doi:10.1016/j.jclinepi.2010.05.007. PMID 20800447. Chambaz, A.; Choudat, D.; Huber, C.; Pairon, J.; Van der Lann, M. J. (2014). "Analysis of occupational exposure to asbestos based on threshold regression modeling of case-control data". Biostatistics. 15 (2): 327–340. doi:10.1093/biostatistics/kxt042. PMID 24115271. Aaron, S. D.; Stephenson, A. L.; Cameron, D. W.; Whitmore, G. A. (2015). "A statistical model to predict one-year risk of death in patients with cystic fibrosis". Journal of Clinical Epidemiology. 68 (11): 1336–1345. doi:10.1016/j.jclinepi.2014.12.010. PMID 25655532. He, X.; Whitmore, G. A.; Loo, G. Y.; Hochberg, M. C.; Lee, M.-L. T. (2015). "A model for time to fracture with a shock stream superimposed on progressive degradation: the Study of Osteoporotic Fractures". Statistics in Medicine. 34 (4): 652–663. doi:10.1002/sim.6356. PMC 4314426. PMID 25376757. Hou, W.-H.; Chuang, H.-Y.; Lee, M.-L. T. (2016). "A threshold regression model to predict return to work after traumatic limb injury". Injury. 47 (2): 483–489. doi:10.1016/j.injury.2015.11.032. PMID 26746983.
Wikipedia/First-hitting-time_model
Failure rate is the frequency with which any system or component fails, expressed in failures per unit of time. It thus depends on the system conditions, time interval, and total number of systems under study. It can describe electronic, mechanical, or biological systems, in fields such as systems and reliability engineering, medicine and biology, or insurance and finance. It is usually denoted by the Greek letter λ {\displaystyle \lambda } (lambda). In real-world applications, the failure probability of a system usually differs over time; failures occur more frequently in early-life ("burning in"), or as a system ages ("wearing out"). This is known as the bathtub curve, where the middle region is called the "useful life period". == Mean time between failures (MTBF) == The mean time between failures (MTBF, 1 / λ {\displaystyle 1/\lambda } ) is often reported instead of the failure rate, as numbers such as "2,000 hours" are more intuitive than numbers such as "0.0005 per hour". However, this is only valid if the failure rate λ ( t ) {\displaystyle \lambda (t)} is actually constant over time, such as within the flat region of the bathtub curve. In many cases where MTBF is quoted, it refers only to this region; thus it cannot be used to give an accurate calculation of the average lifetime of a system, as it ignores the "burn-in" and "wear-out" regions. MTBF appears frequently in engineering design requirements, and governs the frequency of required system maintenance and inspections. A similar ratio used in the transport industries, especially in railways and trucking, is "mean distance between failures" - allowing maintenance to be scheduled based on distance travelled, rather than at regular time intervals. == Mathematical definition == The simplest definition of failure rate λ {\displaystyle \lambda } is simply the number of failures Δ n {\displaystyle \Delta n} per time interval Δ t {\displaystyle \Delta t} : λ = Δ n Δ t {\displaystyle \lambda ={\frac {\Delta n}{\Delta t}}} which would depend on the number of systems under study, and the conditions over the time period. === Failures over time === To accurately model failures over time, a cumulative failure distribution, F ( t ) {\displaystyle F(t)} must be defined, which can be any cumulative distribution function (CDF) that gradually increases from 0 {\displaystyle 0} to 1 {\displaystyle 1} . In the case of many identical systems, this may be thought of as the fraction of systems failing over time t {\displaystyle t} , after all starting operation at time t = 0 {\displaystyle t=0} ; or in the case of a single system, as the probability of the system having its failure time T {\displaystyle T} before time t {\displaystyle t} : F ( t ) = P ⁡ ( T ≤ t ) . {\displaystyle F(t)=\operatorname {P} (T\leq t).} As CDFs are defined by integrating a probability density function, the failure probability density f ( t ) {\displaystyle f(t)} is defined such that: F ( t ) = ∫ 0 t f ( τ ) d τ {\displaystyle F(t)=\int _{0}^{t}f(\tau )\,d\tau \!} where τ {\displaystyle \tau } is a dummy integration variable. Here f ( t ) {\displaystyle f(t)} can be thought of as the instantaneous failure rate, i.e. the fraction of failures per unit time, as the size of the time interval Δ t {\displaystyle \Delta t} tends towards 0 {\displaystyle 0} : f ( t ) = lim Δ t → 0 + P ( t < T ≤ t + Δ t ) Δ t . 
{\displaystyle f(t)=\lim _{\Delta t\to 0^{+}}{\frac {P(t<T\leq t+\Delta t)}{\Delta t}}.} === Hazard rate === A concept closely-related but different to instantaneous failure rate f ( t ) {\displaystyle f(t)} is the hazard rate (or hazard function), h ( t ) {\displaystyle h(t)} . In the many-system case, this is defined as the proportional failure rate of the systems still functioning at time t {\displaystyle t} (as opposed to f ( t ) {\displaystyle f(t)} , which is the expressed as a proportion of the initial number of systems). For convenience we first define the reliability (or survival function) as: R ( t ) = 1 − F ( t ) {\displaystyle R(t)=1-F(t)} then the hazard rate is simply the instantaneous failure rate, scaled by the fraction of surviving systems at time t {\displaystyle t} : h ( t ) = f ( t ) R ( t ) {\displaystyle h(t)={\frac {f(t)}{R(t)}}} In the probabilistic sense, for a single system this can be interpreted as how much the conditional probability of failure time T {\displaystyle T} within the time interval t {\displaystyle t} to t + Δ t {\displaystyle t+\Delta t} changes, given that the system or component has already survived to time t {\displaystyle t} : h ( t ) = lim Δ t → 0 + P ( t < T ≤ t + Δ t ∣ T > t ) Δ t . {\displaystyle h(t)=\lim _{\Delta t\to 0^{+}}{\frac {P(t<T\leq t+\Delta t\mid T>t)}{\Delta t}}.} ==== Conversion to cumulative failure rate ==== To convert between h ( t ) {\displaystyle h(t)} and F ( t ) {\displaystyle F(t)} , we can solve the differential equation h ( t ) = f ( t ) R ( t ) = − R ′ ( t ) R ( t ) {\displaystyle h(t)={\frac {f(t)}{R(t)}}=-{\frac {R'(t)}{R(t)}}} with initial condition R ( 0 ) = 1 {\displaystyle R(0)=1} , which yields F ( t ) = 1 − exp ⁡ ( − ∫ 0 t h ( τ ) d τ ) . {\displaystyle F(t)=1-\exp {\left(-\int _{0}^{t}h(\tau )d\tau \right)}.} Thus for a collection of identical systems, only one of hazard rate h ( t ) {\displaystyle h(t)} , failure probability density f ( t ) {\displaystyle f(t)} , or cumulative failure distribution F ( t ) {\displaystyle F(t)} need be defined. Confusion can occur as the notation λ ( t ) {\displaystyle \lambda (t)} for "failure rate" often refers to the function h ( t ) {\displaystyle h(t)} rather than f ( t ) . {\displaystyle f(t).} === Constant hazard rate model === There are many possible functions that could be chosen to represent failure probability density f ( t ) {\displaystyle f(t)} or hazard rate h ( t ) {\displaystyle h(t)} , based on empirical or theoretical evidence, but the most common and easily-understandable choice is to set f ( t ) = λ e − λ t {\displaystyle f(t)=\lambda e^{-\lambda t}} , an exponential function with scaling constant λ {\displaystyle \lambda } . As seen in the figures above, this represents a gradually decreasing failure probability density. The CDF F ( t ) {\displaystyle F(t)} is then calculated as: F ( t ) = ∫ 0 t λ e − λ τ d τ = 1 − e − λ t , {\displaystyle F(t)=\int _{0}^{t}\lambda e^{-\lambda \tau }\,d\tau =1-e^{-\lambda t},\!} which can be seen to gradually approach 1 {\displaystyle 1} as t → ∞ , {\displaystyle t\to \infty ,} representing the fact that eventually all systems under study will fail. The hazard rate function is then: h ( t ) = f ( t ) R ( t ) = λ e − λ t e − λ t = λ . {\displaystyle h(t)={\frac {f(t)}{R(t)}}={\frac {\lambda e^{-\lambda t}}{e^{-\lambda t}}}=\lambda .} In other words, in this particular case only, the hazard rate is constant over time. 
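The relations above are easy to check numerically. The following sketch, assuming an arbitrary constant hazard of 0.002 failures per hour, evaluates F(t), R(t), f(t) and h(t) on a grid and verifies both that h(t) stays equal to λ and that the general conversion F(t) = 1 − exp(−∫h) reproduces F.

import numpy as np

lam = 0.002                                  # constant hazard rate, failures per hour (illustrative)
t = np.linspace(0.0, 2000.0, 2001)

F = 1.0 - np.exp(-lam * t)                   # cumulative failure distribution
R = 1.0 - F                                  # reliability (survival) function
f = lam * np.exp(-lam * t)                   # failure probability density
h = f / R                                    # hazard rate

print(h[1], h[1000], h[2000])                # equal to lam at every time point

# general conversion F(t) = 1 - exp(-cumulative integral of h), via the trapezoid rule
H = np.concatenate(([0.0], np.cumsum((h[1:] + h[:-1]) / 2.0 * np.diff(t))))
print(np.max(np.abs((1.0 - np.exp(-H)) - F)))    # agreement up to rounding error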
This illustrates the difference in hazard rate and failure probability density - as the number of systems surviving at time t > 0 {\displaystyle t>0} gradually reduces, the total failure rate also reduces, but the hazard rate remains constant. In other words, the probabilities of each individual system failing do not change over time as the systems age - they are "memory-less". === Other models === For many systems, a constant hazard function may not be a realistic approximation; the chance of failure of an individual component may depend on its age. Therefore, other distributions are often used. For example, the deterministic distribution increases hazard rate over time (for systems where wear-out is the most important factor), while the Pareto distribution decreases it (for systems where early-life failures are more common). The commonly-used Weibull distribution combines both of these effects, as do the log-normal and hypertabastic distributions. After modelling a given distribution and parameters for h ( t ) {\displaystyle h(t)} , the failure probability density f ( t ) {\displaystyle f(t)} and cumulative failure distribution F ( t ) {\displaystyle F(t)} can be predicted using the given equations. == Measuring failure rate == Failure rate data can be obtained in several ways. The most common means are: Estimation From field failure rate reports, statistical analysis techniques can be used to estimate failure rates. For accurate failure rates the analyst must have a good understanding of equipment operation, procedures for data collection, the key environmental variables impacting failure rates, how the equipment is used at the system level, and how the failure data will be used by system designers. Historical data about the device or system under consideration Many organizations maintain internal databases of failure information on the devices or systems that they produce, which can be used to calculate failure rates for those devices or systems. For new devices or systems, the historical data for similar devices or systems can serve as a useful estimate. Government and commercial failure rate data Handbooks of failure rate data for various components are available from government and commercial sources. MIL-HDBK-217F, Reliability Prediction of Electronic Equipment, is a military standard that provides failure rate data for many military electronic components. Several failure rate data sources are available commercially that focus on commercial components, including some non-electronic components. Prediction Time lag is one of the serious drawbacks of all failure rate estimations. Often by the time the failure rate data are available, the devices under study have become obsolete. Due to this drawback, failure-rate prediction methods have been developed. These methods may be used on newly designed devices to predict the device's failure rates and failure modes. Two approaches have become well known, Cycle Testing and FMEDA. Life Testing The most accurate source of data is to test samples of the actual devices or systems in order to generate failure data. This is often prohibitively expensive or impractical, so that the previous data sources are often used instead. Cycle Testing Mechanical movement is the predominant failure mechanism causing mechanical and electromechanical devices to wear out. For many devices, the wear-out failure point is measured by the number of cycles performed before the device fails, and can be discovered by cycle testing. 
In cycle testing, a device is cycled as rapidly as practical until it fails. When a collection of these devices are tested, the test will run until 10% of the units fail dangerously. FMEDA Failure modes, effects, and diagnostic analysis (FMEDA) is a systematic analysis technique to obtain subsystem / product level failure rates, failure modes and design strength. The FMEDA technique considers: All components of a design, The functionality of each component, The failure modes of each component, The effect of each component failure mode on the product functionality, The ability of any automatic diagnostics to detect the failure, The design strength (de-rating, safety factors) and The operational profile (environmental stress factors). Given a component database calibrated with field failure data that is reasonably accurate, the method can predict product level failure rate and failure mode data for a given application. The predictions have been shown to be more accurate than field warranty return analysis or even typical field failure analysis given that these methods depend on reports that typically do not have sufficient detail information in failure records. == Examples == === Decreasing failure rates === A decreasing failure rate describes cases where early-life failures are common and corresponds to the situation where h ( t ) {\displaystyle h(t)} is a decreasing function. This can describe, for example, the period of infant mortality in humans, or the early failure of a transistors due to manufacturing defects. Decreasing failure rates have been found in the lifetimes of spacecraft - Baker and Baker commenting that "those spacecraft that last, last on and on." The hazard rate of aircraft air conditioning systems was found to have an exponentially decreasing distribution. === Renewal processes === In special processes called renewal processes, where the time to recover from failure can be neglected, the likelihood of failure remains constant with respect to time. For a renewal process with DFR renewal function, inter-renewal times are concave. Brown conjectured the converse, that DFR is also necessary for the inter-renewal times to be concave, however it has been shown that this conjecture holds neither in the discrete case nor in the continuous case. === Coefficient of variation === When the failure rate is decreasing the coefficient of variation is ⩾ 1, and when the failure rate is increasing the coefficient of variation is ⩽ 1. Note that this result only holds when the failure rate is defined for all t ⩾ 0 and that the converse result (coefficient of variation determining nature of failure rate) does not hold. === Units === Failure rates can be expressed using any measure of time, but hours is the most common unit in practice. Other units, such as miles, revolutions, etc., can also be used in place of "time" units. Failure rates are often expressed in engineering notation as failures per million, or 10−6, especially for individual components, since their failure rates are often very low. The Failures In Time (FIT) rate of a device is the number of failures that can be expected in one billion (109) device-hours of operation (e.g. 1,000 devices for 1,000,000 hours, or 1,000,000 devices for 1,000 hours each, or some other combination). This term is used particularly by the semiconductor industry. 
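The unit conventions above amount to simple rescalings. The helper below, with made-up inputs, converts an observed rate into failures per million hours, FIT and MTBF; the MTBF line is meaningful only if the rate really is constant.

def rate_conversions(failures, device_hours):
    """Express a constant failure rate in several common reporting units."""
    lam = failures / device_hours                       # failures per device-hour
    return {
        "lambda (per hour)": lam,
        "failures per million hours": lam * 1e6,
        "FIT (failures per 1e9 device-hours)": lam * 1e9,
        "MTBF (hours)": 1.0 / lam,                      # only valid for a constant rate
    }

# e.g. 3 failures observed over 1,000 devices running 50,000 hours each (hypothetical numbers)
print(rate_conversions(3, 1_000 * 50_000))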
=== Combinations of failure types === If a complex system consists of many parts, and the failure of any single part means the failure of the entire system, then the total failure rate is simply the sum of the individual failure rates of its parts λ S = λ P 1 + λ P 2 + … {\displaystyle \lambda _{S}=\lambda _{P1}+\lambda _{P2}+\ldots } however, this assumes that the failure rate λ ( t ) {\displaystyle \lambda (t)} is constant, and that the units are consistent (e.g. failures per million hours), and not expressed as a ratio or as probability densities. This is useful to estimate the failure rate of a system when individual components or subsystems have already been tested. Adding "redundant" components to eliminate a single point of failure may thus actually increase the failure rate, however reduces the "mission failure" rate, or the "mean time between critical failures" (MTBCF). Combining failure or hazard rates that are time-dependent is more complicated. For example, mixtures of Decreasing Failure Rate (DFR) variables are also DFR. Mixtures of exponentially distributed failure rates are hyperexponentially distributed. === Simple example === Suppose it is desired to estimate the failure rate of a certain component. Ten identical components are each tested until they either fail or reach 1,000 hours, at which time the test is terminated. A total of 7,502 component-hours of testing is performed, and 6 failures are recorded. The estimated failure rate is: 6 failures 7502 hours = 0.0007998 failures hour {\displaystyle {\frac {6{\text{ failures}}}{7502{\text{ hours}}}}=0.0007998\,{\frac {\text{failures}}{\text{hour}}}} which could also be expressed as a MTBF of 1,250 hours, or approximately 800 failures for every million hours of operation. == See also == == References == == Further reading == Goble, William M. (2018), Safety Instrumented System Design: Techniques and Design Verification, Research Triangle Park, NC: International Society of Automation Blanchard, Benjamin S. (1992). Logistics Engineering and Management (Fourth ed.). Englewood Cliffs, New Jersey: Prentice-Hall. pp. 26–32. ISBN 0135241170. Ebeling, Charles E. (1997). An Introduction to Reliability and Maintainability Engineering. Boston: McGraw-Hill. pp. 23–32. ISBN 0070188521. Federal Standard 1037C Kapur, K. C.; Lamberson, L. R. (1977). Reliability in Engineering Design. New York: John Wiley & Sons. pp. 8–30. ISBN 0471511919. Knowles, D. I. (1995). "Should We Move Away From 'Acceptable Failure Rate'?". Communications in Reliability Maintainability and Supportability. 2 (1). International RMS Committee, USA: 23. Modarres, M.; Kaminskiy, M.; Krivtsov, V. (2010). Reliability Engineering and Risk Analysis: A Practical Guide (2nd ed.). CRC Press. ISBN 9780849392474. Mondro, Mitchell J. (June 2002). "Approximation of Mean Time Between Failure When a System has Periodic Maintenance" (PDF). IEEE Transactions on Reliability. 51 (2): 166–167. doi:10.1109/TR.2002.1011521. Rausand, M.; Hoyland, A. (2004). System Reliability Theory; Models, Statistical methods, and Applications. New York: John Wiley & Sons. ISBN 047147133X. Turner, T.; Hockley, C.; Burdaky, R. (1997). The Customer Needs A Maintenance-Free Operating Period. Leatherhead, Surrey, UK: ERA Technology Ltd. {{cite book}}: |work= ignored (help) U.S. 
Department of Defense (1991). Military Handbook, "Reliability Prediction of Electronic Equipment", MIL-HDBK-217F, 2. == External links == Bathtub curve issues Archived 2014-11-29 at the Wayback Machine, ASQC Fault Tolerant Computing in Industrial Automation Archived 2014-03-26 at the Wayback Machine by Hubert Kirrmann, ABB Research Center, Switzerland
Wikipedia/Failure_rate
In statistics, an empirical distribution function (a.k.a. an empirical cumulative distribution function, eCDF) is the distribution function associated with the empirical measure of a sample. This cumulative distribution function is a step function that jumps up by 1/n at each of the n data points. Its value at any specified value of the measured variable is the fraction of observations of the measured variable that are less than or equal to the specified value. The empirical distribution function is an estimate of the cumulative distribution function that generated the points in the sample. It converges with probability 1 to that underlying distribution, according to the Glivenko–Cantelli theorem. A number of results exist to quantify the rate of convergence of the empirical distribution function to the underlying cumulative distribution function. == Definition == Let (X1, …, Xn) be independent, identically distributed real random variables with the common cumulative distribution function F(t). Then the empirical distribution function is defined as F ^ n ( t ) = number of elements in the sample ≤ t n = 1 n ∑ i = 1 n 1 X i ≤ t , {\displaystyle {\widehat {F}}_{n}(t)={\frac {{\mbox{number of elements in the sample}}\leq t}{n}}={\frac {1}{n}}\sum _{i=1}^{n}\mathbf {1} _{X_{i}\leq t},} where 1 A {\displaystyle \mathbf {1} _{A}} is the indicator of event A. For a fixed t, the indicator 1 X i ≤ t {\displaystyle \mathbf {1} _{X_{i}\leq t}} is a Bernoulli random variable with parameter p = F(t); hence n F ^ n ( t ) {\displaystyle n{\widehat {F}}_{n}(t)} is a binomial random variable with mean nF(t) and variance nF(t)(1 − F(t)). This implies that F ^ n ( t ) {\displaystyle {\widehat {F}}_{n}(t)} is an unbiased estimator for F(t). However, in some textbooks, the definition is given as F ^ n ( t ) = 1 n + 1 ∑ i = 1 n 1 X i ≤ t {\displaystyle {\widehat {F}}_{n}(t)={\frac {1}{n+1}}\sum _{i=1}^{n}\mathbf {1} _{X_{i}\leq t}} == Asymptotic properties == Since the ratio (n + 1)/n approaches 1 as n goes to infinity, the asymptotic properties of the two definitions that are given above are the same. By the strong law of large numbers, the estimator F ^ n ( t ) {\displaystyle \scriptstyle {\widehat {F}}_{n}(t)} converges to F(t) as n → ∞ almost surely, for every value of t: F ^ n ( t ) → a.s. F ( t ) ; {\displaystyle {\widehat {F}}_{n}(t)\ {\xrightarrow {\text{a.s.}}}\ F(t);} thus the estimator F ^ n ( t ) {\displaystyle \scriptstyle {\widehat {F}}_{n}(t)} is consistent. This expression asserts the pointwise convergence of the empirical distribution function to the true cumulative distribution function. There is a stronger result, called the Glivenko–Cantelli theorem, which states that the convergence in fact happens uniformly over t: ‖ F ^ n − F ‖ ∞ ≡ sup t ∈ R | F ^ n ( t ) − F ( t ) | → 0. {\displaystyle \|{\widehat {F}}_{n}-F\|_{\infty }\equiv \sup _{t\in \mathbb {R} }{\big |}{\widehat {F}}_{n}(t)-F(t){\big |}\ \xrightarrow {} \ 0.} The sup-norm in this expression is called the Kolmogorov–Smirnov statistic for testing the goodness-of-fit between the empirical distribution F ^ n ( t ) {\displaystyle \scriptstyle {\widehat {F}}_{n}(t)} and the assumed true cumulative distribution function F. Other norm functions may be reasonably used here instead of the sup-norm. For example, the L2-norm gives rise to the Cramér–von Mises statistic. The asymptotic distribution can be further characterized in several different ways. 
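As a minimal illustration of the definition and of this convergence, the sketch below builds the empirical distribution function of a standard normal sample and measures its sup-distance to the true CDF on a grid; the sample size, the distribution and the grid are arbitrary choices, and the grid only approximates the exact supremum.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sample = rng.standard_normal(500)                   # i.i.d. draws from the "true" distribution

def ecdf(xs, t):
    """Empirical distribution function: fraction of observations less than or equal to t."""
    return np.mean(xs[:, None] <= np.atleast_1d(t), axis=0)

grid = np.linspace(-4.0, 4.0, 1001)
print(np.max(np.abs(ecdf(sample, grid) - norm.cdf(grid))))   # shrinks as the sample grows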
First, the central limit theorem states that pointwise, F ^ n ( t ) {\displaystyle \scriptstyle {\widehat {F}}_{n}(t)} has asymptotically normal distribution with the standard n {\displaystyle {\sqrt {n}}} rate of convergence: n ( F ^ n ( t ) − F ( t ) ) → d N ( 0 , F ( t ) ( 1 − F ( t ) ) ) . {\displaystyle {\sqrt {n}}{\big (}{\widehat {F}}_{n}(t)-F(t){\big )}\ \ {\xrightarrow {d}}\ \ {\mathcal {N}}{\Big (}0,F(t){\big (}1-F(t){\big )}{\Big )}.} This result is extended by the Donsker’s theorem, which asserts that the empirical process n ( F ^ n − F ) {\displaystyle \scriptstyle {\sqrt {n}}({\widehat {F}}_{n}-F)} , viewed as a function indexed by t ∈ R {\displaystyle \scriptstyle t\in \mathbb {R} } , converges in distribution in the Skorokhod space D [ − ∞ , + ∞ ] {\displaystyle \scriptstyle D[-\infty ,+\infty ]} to the mean-zero Gaussian process G F = B ∘ F {\displaystyle \scriptstyle G_{F}=B\circ F} , where B is the standard Brownian bridge. The covariance structure of this Gaussian process is E ⁡ [ G F ( t 1 ) G F ( t 2 ) ] = F ( t 1 ∧ t 2 ) − F ( t 1 ) F ( t 2 ) . {\displaystyle \operatorname {E} [\,G_{F}(t_{1})G_{F}(t_{2})\,]=F(t_{1}\wedge t_{2})-F(t_{1})F(t_{2}).} The uniform rate of convergence in Donsker’s theorem can be quantified by the result known as the Hungarian embedding: lim sup n → ∞ n ln 2 ⁡ n ‖ n ( F ^ n − F ) − G F , n ‖ ∞ < ∞ , a.s. {\displaystyle \limsup _{n\to \infty }{\frac {\sqrt {n}}{\ln ^{2}n}}{\big \|}{\sqrt {n}}({\widehat {F}}_{n}-F)-G_{F,n}{\big \|}_{\infty }<\infty ,\quad {\text{a.s.}}} Alternatively, the rate of convergence of n ( F ^ n − F ) {\displaystyle \scriptstyle {\sqrt {n}}({\widehat {F}}_{n}-F)} can also be quantified in terms of the asymptotic behavior of the sup-norm of this expression. Number of results exist in this venue, for example the Dvoretzky–Kiefer–Wolfowitz inequality provides bound on the tail probabilities of n ‖ F ^ n − F ‖ ∞ {\displaystyle \scriptstyle {\sqrt {n}}\|{\widehat {F}}_{n}-F\|_{\infty }} : Pr ( n ‖ F ^ n − F ‖ ∞ > z ) ≤ 2 e − 2 z 2 . {\displaystyle \Pr \!{\Big (}{\sqrt {n}}\|{\widehat {F}}_{n}-F\|_{\infty }>z{\Big )}\leq 2e^{-2z^{2}}.} In fact, Kolmogorov has shown that if the cumulative distribution function F is continuous, then the expression n ‖ F ^ n − F ‖ ∞ {\displaystyle \scriptstyle {\sqrt {n}}\|{\widehat {F}}_{n}-F\|_{\infty }} converges in distribution to ‖ B ‖ ∞ {\displaystyle \scriptstyle \|B\|_{\infty }} , which has the Kolmogorov distribution that does not depend on the form of F. Another result, which follows from the law of the iterated logarithm, is that lim sup n → ∞ n ‖ F ^ n − F ‖ ∞ 2 ln ⁡ ln ⁡ n ≤ 1 2 , a.s. {\displaystyle \limsup _{n\to \infty }{\frac {{\sqrt {n}}\|{\widehat {F}}_{n}-F\|_{\infty }}{\sqrt {2\ln \ln n}}}\leq {\frac {1}{2}},\quad {\text{a.s.}}} and lim inf n → ∞ 2 n ln ⁡ ln ⁡ n ‖ F ^ n − F ‖ ∞ = π 2 , a.s. {\displaystyle \liminf _{n\to \infty }{\sqrt {2n\ln \ln n}}\|{\widehat {F}}_{n}-F\|_{\infty }={\frac {\pi }{2}},\quad {\text{a.s.}}} == Confidence intervals == As per Dvoretzky–Kiefer–Wolfowitz inequality the interval that contains the true CDF, F ( x ) {\displaystyle F(x)} , with probability 1 − α {\displaystyle 1-\alpha } is specified as F n ( x ) − ε ≤ F ( x ) ≤ F n ( x ) + ε where ε = ln ⁡ 2 α 2 n . 
{\displaystyle F_{n}(x)-\varepsilon \leq F(x)\leq F_{n}(x)+\varepsilon \;{\text{ where }}\varepsilon ={\sqrt {\frac {\ln {\frac {2}{\alpha }}}{2n}}}.} As per the above bounds, we can plot the Empirical CDF, CDF and confidence intervals for different distributions by using any one of the statistical implementations. == Statistical implementation == A non-exhaustive list of software implementations of Empirical Distribution function includes: In R software, we compute an empirical cumulative distribution function, with several methods for plotting, printing and computing with such an “ecdf” object. In MATLAB we can use Empirical cumulative distribution function (cdf) plot jmp from SAS, the CDF plot creates a plot of the empirical cumulative distribution function. Minitab, create an Empirical CDF Mathwave, we can fit probability distribution to our data Dataplot, we can plot Empirical CDF plot Scipy, we can use scipy.stats.ecdf Statsmodels, we can use statsmodels.distributions.empirical_distribution.ECDF Matplotlib, using the matplotlib.pyplot.ecdf function (new in version 3.8.0) Seaborn, using the seaborn.ecdfplot function Plotly, using the plotly.express.ecdf function Excel, we can plot Empirical CDF plot ArviZ, using the az.plot_ecdf function == See also == Càdlàg functions Count data Distribution fitting Dvoretzky–Kiefer–Wolfowitz inequality Empirical probability Empirical process Estimating quantiles from a sample Frequency (statistics) Empirical likelihood Kaplan–Meier estimator for censored processes Survival function Q–Q plot == References == == Further reading == Shorack, G.R.; Wellner, J.A. (1986). Empirical Processes with Applications to Statistics. New York: Wiley. ISBN 0-471-86725-X. == External links == Media related to Empirical distribution functions at Wikimedia Commons
Wikipedia/Empirical_distribution_function
Proportional hazards models are a class of survival models in statistics. Survival models relate the time that passes, before some event occurs, to one or more covariates that may be associated with that quantity of time. In a proportional hazards model, the unique effect of a unit increase in a covariate is multiplicative with respect to the hazard rate. The hazard rate at time t {\displaystyle t} is the probability per short time dt that an event will occur between t {\displaystyle t} and t + d t {\displaystyle t+dt} given that up to time t {\displaystyle t} no event has occurred yet. For example, taking a drug may halve one's hazard rate for a stroke occurring, or, changing the material from which a manufactured component is constructed, may double its hazard rate for failure. Other types of survival models such as accelerated failure time models do not exhibit proportional hazards. The accelerated failure time model describes a situation where the biological or mechanical life history of an event is accelerated (or decelerated). == Background == Survival models can be viewed as consisting of two parts: the underlying baseline hazard function, often denoted λ 0 ( t ) {\displaystyle \lambda _{0}(t)} , describing how the risk of event per time unit changes over time at baseline levels of covariates; and the effect parameters, describing how the hazard varies in response to explanatory covariates. A typical medical example would include covariates such as treatment assignment, as well as patient characteristics such as age at start of study, gender, and the presence of other diseases at start of study, in order to reduce variability and/or control for confounding. The proportional hazards condition states that covariates are multiplicatively related to the hazard. In the simplest case of stationary coefficients, for example, a treatment with a drug may, say, halve a subject's hazard at any given time t {\displaystyle t} , while the baseline hazard may vary. Note however, that this does not double the lifetime of the subject; the precise effect of the covariates on the lifetime depends on the type of λ 0 ( t ) {\displaystyle \lambda _{0}(t)} . The covariate is not restricted to binary predictors; in the case of a continuous covariate x {\displaystyle x} , it is typically assumed that the hazard responds exponentially; each unit increase in x {\displaystyle x} results in proportional scaling of the hazard. == The Cox model == === Introduction === Sir David Cox observed that if the proportional hazards assumption holds (or, is assumed to hold) then it is possible to estimate the effect parameter(s), denoted β i {\displaystyle \beta _{i}} below, without any consideration of the full hazard function. This approach to survival data is called application of the Cox proportional hazards model, sometimes abbreviated to Cox model or to proportional hazards model. However, Cox also noted that biological interpretation of the proportional hazards assumption can be quite tricky. Let Xi = (Xi1, … , Xip) be the realized values of the p covariates for subject i. The hazard function for the Cox proportional hazards model has the form λ ( t | X i ) = λ 0 ( t ) exp ⁡ ( β 1 X i 1 + ⋯ + β p X i p ) = λ 0 ( t ) exp ⁡ ( X i ⋅ β ) {\displaystyle {\begin{aligned}\lambda (t|X_{i})&=\lambda _{0}(t)\exp(\beta _{1}X_{i1}+\cdots +\beta _{p}X_{ip})\\&=\lambda _{0}(t)\exp(X_{i}\cdot \beta )\end{aligned}}} This expression gives the hazard function at time t for subject i with covariate vector (explanatory variables) Xi. 
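A small numerical sketch can make the multiplicative structure concrete. The baseline hazard, coefficients and covariates below are invented for illustration; the point is only that two subjects share the same λ0(t) and differ by a time-constant factor exp((Xi − Xj)·β), as derived in the next subsection.

import numpy as np

beta = np.array([0.7, -0.2])                 # illustrative effect parameters
X_i  = np.array([1.0, 0.5])                  # covariates of subject i
X_j  = np.array([0.0, 0.5])                  # covariates of subject j

def baseline_hazard(t):
    return 0.01 + 0.001 * t                  # arbitrary baseline hazard, shared by all subjects

def cox_hazard(t, X):
    return baseline_hazard(t) * np.exp(X @ beta)

for t in (1.0, 5.0, 20.0):
    # the hazard ratio is the same at every t, here exp(0.7)
    print(t, cox_hazard(t, X_i) / cox_hazard(t, X_j), np.exp((X_i - X_j) @ beta))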
Note that between subjects, the baseline hazard λ 0 ( t ) {\displaystyle \lambda _{0}(t)} is identical (has no dependency on i). The only difference between subjects' hazards comes from the baseline scaling factor exp ⁡ ( X i ⋅ β ) {\displaystyle \exp(X_{i}\cdot \beta )} . === Why it is called "proportional" === To start, suppose we only have a single covariate, x {\displaystyle x} , and therefore a single coefficient, β 1 {\displaystyle \beta _{1}} . Our model looks like: λ ( t | x ) = λ 0 ( t ) exp ⁡ ( β 1 x ) {\displaystyle \lambda (t|x)=\lambda _{0}(t)\exp(\beta _{1}x)} Consider the effect of increasing x {\displaystyle x} by 1: λ ( t | x + 1 ) = λ 0 ( t ) exp ⁡ ( β 1 ( x + 1 ) ) = λ 0 ( t ) exp ⁡ ( β 1 x + β 1 ) = ( λ 0 ( t ) exp ⁡ ( β 1 x ) ) exp ⁡ ( β 1 ) = λ ( t | x ) exp ⁡ ( β 1 ) {\displaystyle {\begin{aligned}\lambda (t|x+1)&=\lambda _{0}(t)\exp(\beta _{1}(x+1))\\&=\lambda _{0}(t)\exp(\beta _{1}x+\beta _{1})\\&={\Bigl (}\lambda _{0}(t)\exp(\beta _{1}x){\Bigr )}\exp(\beta _{1})\\&=\lambda (t|x)\exp(\beta _{1})\end{aligned}}} We can see that increasing a covariate by 1 scales the original hazard by the constant exp ⁡ ( β 1 ) {\displaystyle \exp(\beta _{1})} . Rearranging things slightly, we see that: λ ( t | x + 1 ) λ ( t | x ) = exp ⁡ ( β 1 ) {\displaystyle {\frac {\lambda (t|x+1)}{\lambda (t|x)}}=\exp(\beta _{1})} The right-hand-side is constant over time (no term has a t {\displaystyle t} in it). This relationship, x / y = constant {\displaystyle x/y={\text{constant}}} , is called a proportional relationship. More generally, consider two subjects, i and j, with covariates X i {\displaystyle X_{i}} and X j {\displaystyle X_{j}} respectively. Consider the ratio of their hazards: λ ( t | X i ) λ ( t | X j ) = λ 0 ( t ) exp ⁡ ( X i ⋅ β ) λ 0 ( t ) exp ⁡ ( X j ⋅ β ) = λ 0 ( t ) exp ⁡ ( X i ⋅ β ) λ 0 ( t ) exp ⁡ ( X j ⋅ β ) = exp ⁡ ( ( X i − X j ) ⋅ β ) {\displaystyle {\begin{aligned}{\frac {\lambda (t|X_{i})}{\lambda (t|X_{j})}}&={\frac {\lambda _{0}(t)\exp(X_{i}\cdot \beta )}{\lambda _{0}(t)\exp(X_{j}\cdot \beta )}}\\&={\frac {{\cancel {\lambda _{0}(t)}}\exp(X_{i}\cdot \beta )}{{\cancel {\lambda _{0}(t)}}\exp(X_{j}\cdot \beta )}}\\&=\exp((X_{i}-X_{j})\cdot \beta )\end{aligned}}} The right-hand-side isn't dependent on time, as the only time-dependent factor, λ 0 ( t ) {\displaystyle \lambda _{0}(t)} , was cancelled out. Thus the ratio of hazards of two subjects is a constant, i.e. the hazards are proportional. === Absence of an intercept term === Often there is an intercept term (also called a constant term or bias term) used in regression models. The Cox model lacks one because the baseline hazard, λ 0 ( t ) {\displaystyle \lambda _{0}(t)} , takes the place of it. Let's see what would happen if we did include an intercept term anyways, denoted β 0 {\displaystyle \beta _{0}} : λ ( t | X i ) = λ 0 ( t ) exp ⁡ ( β 1 X i 1 + ⋯ + β p X i p + β 0 ) = λ 0 ( t ) exp ⁡ ( X i ⋅ β ) exp ⁡ ( β 0 ) = ( exp ⁡ ( β 0 ) λ 0 ( t ) ) exp ⁡ ( X i ⋅ β ) = λ 0 ∗ ( t ) exp ⁡ ( X i ⋅ β ) {\displaystyle {\begin{aligned}\lambda (t|X_{i})&=\lambda _{0}(t)\exp(\beta _{1}X_{i1}+\cdots +\beta _{p}X_{ip}+\beta _{0})\\&=\lambda _{0}(t)\exp(X_{i}\cdot \beta )\exp(\beta _{0})\\&=\left(\exp(\beta _{0})\lambda _{0}(t)\right)\exp(X_{i}\cdot \beta )\\&=\lambda _{0}^{*}(t)\exp(X_{i}\cdot \beta )\end{aligned}}} where we've redefined exp ⁡ ( β 0 ) λ 0 ( t ) {\displaystyle \exp(\beta _{0})\lambda _{0}(t)} to be a new baseline hazard, λ 0 ∗ ( t ) {\displaystyle \lambda _{0}^{*}(t)} . 
Thus, the baseline hazard incorporates all parts of the hazard that are not dependent on the subjects' covariates, which includes any intercept term (which is constant for all subjects, by definition). In other words, adding an intercept term would make the model unidentifiable. === Likelihood for unique times === The Cox partial likelihood, shown below, is obtained by using Breslow's estimate of the baseline hazard function, plugging it into the full likelihood and then observing that the result is a product of two factors. The first factor is the partial likelihood shown below, in which the baseline hazard has "canceled out". It is simply the probability for subjects to have experienced events in the order that they actually have occurred, given the set of times of occurrences and given the subjects' covariates. The second factor is free of the regression coefficients and depends on the data only through the censoring pattern. The effect of covariates estimated by any proportional hazards model can thus be reported as hazard ratios. To calculate the partial likelihood, the probability for the order of events, let us index the M samples for which events have already occurred by increasing time of occurrence, Y1 < Y2 < ... < YM. Covariates of all other subjects for which no event has occurred get indices M+1,.., N. The partial likelihood can be factorized into one factor for each event that has occurred. The i 'th factor is the probability that out of all subjects (i,i+1,..., N) for which no event has occurred before time Yi, the one that actually occurred at time Yi is the event for subject i: L i ( β ) = λ ( Y i ∣ X i ) ∑ j = i N λ ( Y i ∣ X j ) = λ 0 ( Y i ) θ i ∑ j = i N λ 0 ( Y i ) θ j = θ i ∑ j = i N θ j , {\displaystyle L_{i}(\beta )={\frac {\lambda (Y_{i}\mid X_{i})}{\sum _{j=i}^{N}\lambda (Y_{i}\mid X_{j})}}={\frac {\lambda _{0}(Y_{i})\theta _{i}}{\sum _{j=i}^{N}\lambda _{0}(Y_{i})\theta _{j}}}={\frac {\theta _{i}}{\sum _{j=i}^{N}\theta _{j}}},} where θj = exp(Xj ⋅ β) and the summation is over the set of subjects j where the event has not occurred before time Yi (including subject i itself). Obviously 0 < Li(β) ≤ 1. Treating the subjects as statistically independent of each other, the partial likelihood for the order of events is L ( β ) = ∏ i = 1 M L i ( β ) = ∏ i : C i = 1 L i ( β ) , {\displaystyle L(\beta )=\prod _{i=1}^{M}L_{i}(\beta )=\prod _{i:C_{i}=1}L_{i}(\beta ),} where the subjects for which an event has occurred are indicated by Ci = 1 and all others by Ci = 0. The corresponding log partial likelihood is ℓ ( β ) = ∑ i : C i = 1 ( X i ⋅ β − log ⁡ ∑ j : Y j ≥ Y i θ j ) , {\displaystyle \ell (\beta )=\sum _{i:C_{i}=1}\left(X_{i}\cdot \beta -\log \sum _{j:Y_{j}\geq Y_{i}}\theta _{j}\right),} where we have written ∑ j = i N {\displaystyle \sum _{j=i}^{N}} using the indexing introduced above in a more general way, as ∑ j : Y j ≥ Y i {\displaystyle \sum _{j:Y_{j}\geq Y_{i}}} . Crucially, the effect of the covariates can be estimated without the need to specify the hazard function λ 0 ( t ) {\displaystyle \lambda _{0}(t)} over time. The partial likelihood can be maximized over β to produce maximum partial likelihood estimates of the model parameters. 
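The log partial likelihood above can be evaluated directly from its definition. The following is a small sketch (Python, assuming unique event times and a tiny illustrative data set, not any data referenced in this article):

```python
import numpy as np

# Illustrative data: observed times Y, event indicators C (1 = event, 0 = censored), covariates X.
Y = np.array([2.0, 3.0, 5.0, 8.0, 11.0])
C = np.array([1, 1, 0, 1, 0])
X = np.array([[1.0], [0.0], [1.0], [0.0], [1.0]])

def log_partial_likelihood(beta):
    theta = np.exp(X @ beta)                 # theta_j = exp(X_j . beta)
    ll = 0.0
    for i in range(len(Y)):
        if C[i] == 1:                        # only subjects with an observed event contribute
            at_risk = Y >= Y[i]              # risk set: subjects with Y_j >= Y_i
            ll += X[i] @ beta - np.log(theta[at_risk].sum())
    return ll

print(log_partial_likelihood(np.array([0.5])))
```

Maximizing this function over β (for example with a numerical optimizer or the Newton–Raphson iteration described next) gives the maximum partial likelihood estimate.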
The partial score function is ℓ ′ ( β ) = ∑ i : C i = 1 ( X i − ∑ j : Y j ≥ Y i θ j X j ∑ j : Y j ≥ Y i θ j ) , {\displaystyle \ell ^{\prime }(\beta )=\sum _{i:C_{i}=1}\left(X_{i}-{\frac {\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}X_{j}}{\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}}}\right),} and the Hessian matrix of the partial log likelihood is ℓ ′ ′ ( β ) = − ∑ i : C i = 1 ( ∑ j : Y j ≥ Y i θ j X j X j ′ ∑ j : Y j ≥ Y i θ j − [ ∑ j : Y j ≥ Y i θ j X j ] [ ∑ j : Y j ≥ Y i θ j X j ′ ] [ ∑ j : Y j ≥ Y i θ j ] 2 ) . {\displaystyle \ell ^{\prime \prime }(\beta )=-\sum _{i:C_{i}=1}\left({\frac {\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}X_{j}X_{j}^{\prime }}{\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}}}-{\frac {\left[\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}X_{j}\right]\left[\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}X_{j}^{\prime }\right]}{\left[\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}\right]^{2}}}\right).} Using this score function and Hessian matrix, the partial likelihood can be maximized using the Newton-Raphson algorithm. The inverse of the Hessian matrix, evaluated at the estimate of β, can be used as an approximate variance-covariance matrix for the estimate, and used to produce approximate standard errors for the regression coefficients. === Likelihood when there exist tied times === Several approaches have been proposed to handle situations in which there are ties in the time data. Breslow's method describes the approach in which the procedure described above is used unmodified, even when ties are present. An alternative approach that is considered to give better results is Efron's method. Let tj denote the unique times, let Hj denote the set of indices i such that Yi = tj and Ci = 1, and let mj = |Hj|. Efron's approach maximizes the following partial likelihood. L ( β ) = ∏ j ∏ i ∈ H j θ i ∏ ℓ = 0 m j − 1 [ ∑ i : Y i ≥ t j θ i − ℓ m j ∑ i ∈ H j θ i ] . 
{\displaystyle L(\beta )=\prod _{j}{\frac {\prod _{i\in H_{j}}\theta _{i}}{\prod _{\ell =0}^{m_{j}-1}\left[\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}\right]}}.} The corresponding log partial likelihood is ℓ ( β ) = ∑ j ( ∑ i ∈ H j X i ⋅ β − ∑ ℓ = 0 m j − 1 log ⁡ ( ∑ i : Y i ≥ t j θ i − ℓ m j ∑ i ∈ H j θ i ) ) , {\displaystyle \ell (\beta )=\sum _{j}\left(\sum _{i\in H_{j}}X_{i}\cdot \beta -\sum _{\ell =0}^{m_{j}-1}\log \left(\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}\right)\right),} the score function is ℓ ′ ( β ) = ∑ j ( ∑ i ∈ H j X i − ∑ ℓ = 0 m j − 1 ∑ i : Y i ≥ t j θ i X i − ℓ m j ∑ i ∈ H j θ i X i ∑ i : Y i ≥ t j θ i − ℓ m j ∑ i ∈ H j θ i ) , {\displaystyle \ell ^{\prime }(\beta )=\sum _{j}\left(\sum _{i\in H_{j}}X_{i}-\sum _{\ell =0}^{m_{j}-1}{\frac {\sum _{i:Y_{i}\geq t_{j}}\theta _{i}X_{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}X_{i}}{\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}}}\right),} and the Hessian matrix is ℓ ′ ′ ( β ) = − ∑ j ∑ ℓ = 0 m j − 1 ( ∑ i : Y i ≥ t j θ i X i X i ′ − ℓ m j ∑ i ∈ H j θ i X i X i ′ ϕ j , ℓ , m j − Z j , ℓ , m j Z j , ℓ , m j ′ ϕ j , ℓ , m j 2 ) , {\displaystyle \ell ^{\prime \prime }(\beta )=-\sum _{j}\sum _{\ell =0}^{m_{j}-1}\left({\frac {\sum _{i:Y_{i}\geq t_{j}}\theta _{i}X_{i}X_{i}^{\prime }-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}X_{i}X_{i}^{\prime }}{\phi _{j,\ell ,m_{j}}}}-{\frac {Z_{j,\ell ,m_{j}}Z_{j,\ell ,m_{j}}^{\prime }}{\phi _{j,\ell ,m_{j}}^{2}}}\right),} where ϕ j , ℓ , m j = ∑ i : Y i ≥ t j θ i − ℓ m j ∑ i ∈ H j θ i {\displaystyle \phi _{j,\ell ,m_{j}}=\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}} Z j , ℓ , m j = ∑ i : Y i ≥ t j θ i X i − ℓ m j ∑ i ∈ H j θ i X i . {\displaystyle Z_{j,\ell ,m_{j}}=\sum _{i:Y_{i}\geq t_{j}}\theta _{i}X_{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}X_{i}.} Note that when Hj is empty (all observations with time tj are censored), the summands in these expressions are treated as zero. === Examples === Below are some worked examples of the Cox model in practice. ==== A single binary covariate ==== Suppose the endpoint we are interested in is patient survival during a 5-year observation period after a surgery. Patients can die within the 5-year period, and we record when they died, or patients can live past 5 years, and we only record that they lived past 5 years. The surgery was performed at one of two hospitals, A or B, and we would like to know if the hospital location is associated with 5-year survival. Specifically, we would like to know the relative increase (or decrease) in hazard from a surgery performed at hospital A compared to hospital B. Provided is some (fake) data, where each row represents a patient: T is how long the patient was observed for before death or 5 years (measured in months), and C denotes if the patient died in the 5-year period. We have encoded the hospital as a binary variable denoted X: 1 if from hospital A, 0 from hospital B. 
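Fitting such a model is done by maximizing the partial likelihood with statistical software. As a hedged sketch of what that might look like in code (using the Python lifelines package listed under the software implementations further down, with a small made-up data frame standing in for the example's table, so the fitted coefficient will not match the estimate quoted next):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical patients: T = months observed, C = 1 if death observed within 5 years, X = 1 for hospital A.
df = pd.DataFrame({
    "T": [12, 60, 45, 9, 60, 31, 38, 60, 14, 60],
    "C": [1,  0,  1,  1, 0,  1,  1,  0,  1,  0],
    "X": [1,  0,  1,  1, 1,  0,  1,  0,  0,  1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="C")   # maximizes the Cox partial likelihood
cph.print_summary()                            # reports the estimate of beta_1 and exp(beta_1)
```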
Our single-covariate Cox proportional model looks like the following, with β 1 {\displaystyle \beta _{1}} representing the hospital's effect, and i indexing each patient: λ ( t | X i ) ⏞ hazard for i = λ 0 ( t ) ⏟ baseline hazard ⋅ exp ⁡ ( β 1 X i ) ⏞ scaling factor for i {\displaystyle \overbrace {\lambda (t|X_{i})} ^{\text{hazard for i}}=\underbrace {\lambda _{0}(t)} _{{\text{baseline}} \atop {\text{hazard}}}\cdot \overbrace {\exp(\beta _{1}X_{i})} ^{\text{scaling factor for i}}} Using statistical software, we can estimate β 1 {\displaystyle \beta _{1}} to be 2.12. The hazard ratio is the exponential of this value, exp ⁡ ( β 1 ) = exp ⁡ ( 2.12 ) {\displaystyle \exp(\beta _{1})=\exp(2.12)} . To see why, consider the ratio of hazards, specifically: λ ( t | X = 1 ) λ ( t | X = 0 ) = λ 0 ( t ) exp ⁡ ( β 1 ⋅ 1 ) λ 0 ( t ) exp ⁡ ( β 1 ⋅ 0 ) = exp ⁡ ( β 1 ) {\displaystyle {\frac {\lambda (t|X=1)}{\lambda (t|X=0)}}={\frac {{\cancel {\lambda _{0}(t)}}\exp(\beta _{1}\cdot 1)}{{\cancel {\lambda _{0}(t)}}\exp(\beta _{1}\cdot 0)}}=\exp(\beta _{1})} Thus, the hazard ratio of hospital A to hospital B is exp ⁡ ( 2.12 ) = 8.32 {\displaystyle \exp(2.12)=8.32} . Putting aside statistical significance for a moment, we can make a statement saying that patients in hospital A are associated with a 8.3x higher risk of death occurring in any short period of time compared to hospital B. There are important caveats to mention about the interpretation: a 8.3x higher risk of death does not mean that 8.3x more patients will die in hospital A: survival analysis examines how quickly events occur, not simply whether they occur. More specifically, "risk of death" is a measure of a rate. A rate has units, like meters per second. However, a relative rate does not: a bicycle can go two times faster than another bicycle (the reference bicycle), without specifying any units. Likewise, the risk of death (comparable to the speed of a bike) in hospital A is 8.3 times higher (faster) than the risk of death in hospital B (the reference group). the inverse quantity, 1 / 8.32 = 1 exp ⁡ ( 2.12 ) = exp ⁡ ( − 2.12 ) = 0.12 {\displaystyle 1/8.32={\frac {1}{\exp(2.12)}}=\exp(-2.12)=0.12} is the hazard ratio of hospital B relative to hospital A. We haven't made any inferences about probabilities of survival between the hospitals. This is because we would need an estimate of the baseline hazard rate, λ 0 ( t ) {\displaystyle \lambda _{0}(t)} , as well as our β 1 {\displaystyle \beta _{1}} estimate. However, standard estimation of the Cox proportional hazard model does not directly estimate the baseline hazard rate. Because we have ignored the only time varying component of the model, the baseline hazard rate, our estimate is timescale-invariant. For example, if we had measured time in years instead of months, we would get the same estimate. It is tempting to say that the hospital caused the difference in hazards between the two groups, but since our study is not causal (that is, we do not know how the data was generated), we stick with terminology like "associated". ==== A single continuous covariate ==== To demonstrate a less traditional use case of survival analysis, the next example will be an economics question: what is the relationship between a company's price-to-earnings ratio (P/E) on their first IPO anniversary and their future survival? More specifically, if we consider a company's "birth event" to be their first IPO anniversary, and any bankruptcy, sale, going private, etc. 
as a "death" event the company, we'd like to know the influence of the companies' P/E ratio at their "birth" (first IPO anniversary) on their survival. Provided is a (fake) dataset with survival data from 12 companies: T represents the number of days between first IPO anniversary and death (or an end date of 2022-01-01, if did not die). C represents if the company died before 2022-01-01 or not. P/E represents the company's price-to-earnings ratio at its 1st IPO anniversary. Unlike the previous example where there was a binary variable, this dataset has a continuous variable, P/E; however, the model looks similar: λ ( t | P i ) = λ 0 ( t ) ⋅ exp ⁡ ( β 1 P i ) {\displaystyle \lambda (t|P_{i})=\lambda _{0}(t)\cdot \exp(\beta _{1}P_{i})} where P i {\displaystyle P_{i}} represents a company's P/E ratio. Running this dataset through a Cox model produces an estimate of the value of the unknown β 1 {\displaystyle \beta _{1}} , which is -0.34. Therefore, an estimate of the entire hazard is: λ ( t | P i ) = λ 0 ( t ) ⋅ exp ⁡ ( − 0.34 P i ) {\displaystyle \lambda (t|P_{i})=\lambda _{0}(t)\cdot \exp(-0.34P_{i})} Since the baseline hazard, λ 0 ( t ) {\displaystyle \lambda _{0}(t)} , was not estimated, the entire hazard is not able to be calculated. However, consider the ratio of the companies i and j's hazards: λ ( t | P i ) λ ( t | P j ) = λ 0 ( t ) ⋅ exp ⁡ ( − 0.34 P i ) λ 0 ( t ) ⋅ exp ⁡ ( − 0.34 P j ) = exp ⁡ ( − 0.34 ( P i − P j ) ) {\displaystyle {\begin{aligned}{\frac {\lambda (t|P_{i})}{\lambda (t|P_{j})}}&={\frac {{\cancel {\lambda _{0}(t)}}\cdot \exp(-0.34P_{i})}{{\cancel {\lambda _{0}(t)}}\cdot \exp(-0.34P_{j})}}\\&=\exp(-0.34(P_{i}-P_{j}))\end{aligned}}} All terms on the right are known, so calculating the ratio of hazards between companies is possible. Since there is no time-dependent term on the right (all terms are constant), the hazards are proportional to each other. For example, the hazard ratio of company 5 to company 2 is exp ⁡ ( − 0.34 ( 6.3 − 3.0 ) ) = 0.33 {\displaystyle \exp(-0.34(6.3-3.0))=0.33} . This means that, within the interval of study, company 5's risk of "death" is 0.33 ≈ 1/3 as large as company 2's risk of death. There are important caveats to mention about the interpretation: The hazard ratio is the quantity exp ⁡ ( β 1 ) {\displaystyle \exp(\beta _{1})} , which is exp ⁡ ( − 0.34 ) = 0.71 {\displaystyle \exp(-0.34)=0.71} in the above example. From the last calculation above, an interpretation of this is as the ratio of hazards between two "subjects" that have their variables differ by one unit: if P i = P j + 1 {\displaystyle P_{i}=P_{j}+1} , then exp ⁡ ( β 1 ( P i − P j ) = exp ⁡ ( β 1 ( 1 ) ) {\displaystyle \exp(\beta _{1}(P_{i}-P_{j})=\exp(\beta _{1}(1))} . The choice of "differ by one unit" is convenience, as it communicates precisely the value of β 1 {\displaystyle \beta _{1}} . The baseline hazard can be represented when the scaling factor is 1, i.e. P = 0 {\displaystyle P=0} . λ ( t | P i = 0 ) = λ 0 ( t ) ⋅ exp ⁡ ( − 0.34 ⋅ 0 ) = λ 0 ( t ) {\displaystyle \lambda (t|P_{i}=0)=\lambda _{0}(t)\cdot \exp(-0.34\cdot 0)=\lambda _{0}(t)} Can we interpret the baseline hazard as the hazard of a "baseline" company whose P/E happens to be 0? This interpretation of the baseline hazard as "hazard of a baseline subject" is imperfect, as the covariate being 0 is impossible in this application: a P/E of 0 is meaningless (it means the company's stock price is 0, i.e., they are "dead"). A more appropriate interpretation would be "the hazard when all variables are nil". 
It is tempting to want to understand and interpret a value like exp ⁡ ( β 1 P i ) {\displaystyle \exp(\beta _{1}P_{i})} to represent the hazard of a company. However, consider what this is actually representing: exp ⁡ ( β 1 P i ) = exp ⁡ ( β 1 ( P i − 0 ) ) = exp ⁡ ( β 1 P i ) exp ⁡ ( β 1 0 ) = λ ( t | P i ) λ ( t | 0 ) {\displaystyle \exp(\beta _{1}P_{i})=\exp(\beta _{1}(P_{i}-0))={\frac {\exp(\beta _{1}P_{i})}{\exp(\beta _{1}0)}}={\frac {\lambda (t|P_{i})}{\lambda (t|0)}}} . There is implicitly a ratio of hazards here, comparing company i's hazard to an imaginary baseline company with 0 P/E. However, as explained above, a P/E of 0 is impossible in this application, so exp ⁡ ( β 1 P i ) {\displaystyle \exp(\beta _{1}P_{i})} is meaningless in this example. Ratios between plausible hazards are meaningful, however. == Time-varying predictors and coefficients == Extensions to time dependent variables, time dependent strata, and multiple events per subject, can be incorporated by the counting process formulation of Andersen and Gill. One example of the use of hazard models with time-varying regressors is estimating the effect of unemployment insurance on unemployment spells. In addition to allowing time-varying covariates (i.e., predictors), the Cox model may be generalized to time-varying coefficients as well. That is, the proportional effect of a treatment may vary with time; e.g. a drug may be very effective if administered within one month of morbidity, and become less effective as time goes on. The hypothesis of no change with time (stationarity) of the coefficient may then be tested. Details and software (R package) are available in Martinussen and Scheike (2006). In this context, it could also be mentioned that it is theoretically possible to specify the effect of covariates by using additive hazards, i.e. specifying λ ( t | X i ) = λ 0 ( t ) + β 1 X i 1 + ⋯ + β p X i p = λ 0 ( t ) + X i ⋅ β . {\displaystyle \lambda (t|X_{i})=\lambda _{0}(t)+\beta _{1}X_{i1}+\cdots +\beta _{p}X_{ip}=\lambda _{0}(t)+X_{i}\cdot \beta .} If such additive hazards models are used in situations where (log-)likelihood maximization is the objective, care must be taken to restrict λ ( t ∣ X i ) {\displaystyle \lambda (t\mid X_{i})} to non-negative values. Perhaps as a result of this complication, such models are seldom seen. If the objective is instead least squares the non-negativity restriction is not strictly required. == Specifying the baseline hazard function == The Cox model may be specialized if a reason exists to assume that the baseline hazard follows a particular form. In this case, the baseline hazard λ 0 ( t ) {\displaystyle \lambda _{0}(t)} is replaced by a given function. For example, assuming the hazard function to be the Weibull hazard function gives the Weibull proportional hazards model. Incidentally, using the Weibull baseline hazard is the only circumstance under which the model satisfies both the proportional hazards, and accelerated failure time models. The generic term parametric proportional hazards models can be used to describe proportional hazards models in which the hazard function is specified. The Cox proportional hazards model is sometimes called a semiparametric model by contrast. Some authors use the term Cox proportional hazards model even when specifying the underlying hazard function, to acknowledge the debt of the entire field to David Cox. 
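As an illustration of the parametric case just described, a Weibull baseline hazard can be substituted for λ0(t) while keeping the multiplicative covariate effect; this is a sketch with arbitrary shape and scale values, not values fitted to any data:

```python
import numpy as np

def weibull_hazard(t, shape=1.5, scale=10.0):
    # Weibull hazard: (k / lam) * (t / lam)**(k - 1); increasing in t when shape > 1.
    return (shape / scale) * (t / scale) ** (shape - 1)

def weibull_ph_hazard(t, X, beta):
    # Parametric proportional hazards: specified baseline hazard times exp(X . beta).
    return weibull_hazard(t) * np.exp(np.dot(X, beta))

print(weibull_ph_hazard(5.0, [1.0], [0.4]))
```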
The term Cox regression model (omitting proportional hazards) is sometimes used to describe the extension of the Cox model to include time-dependent factors. However, this usage is potentially ambiguous since the Cox proportional hazards model can itself be described as a regression model. == Relationship to Poisson models == There is a relationship between proportional hazards models and Poisson regression models which is sometimes used to fit approximate proportional hazards models in software for Poisson regression. The usual reason for doing this is that calculation is much quicker. This was more important in the days of slower computers but can still be useful for particularly large data sets or complex problems. Laird and Olivier (1981) provide the mathematical details. They note, "we do not assume [the Poisson model] is true, but simply use it as a device for deriving the likelihood." McCullagh and Nelder's book on generalized linear models has a chapter on converting proportional hazards models to generalized linear models. == Under high-dimensional setup == In the high-dimensional setting, when the number of covariates p is large compared to the sample size n, the LASSO method is one of the classical model-selection strategies. Tibshirani (1997) has proposed a Lasso procedure for the proportional hazard regression parameter. The Lasso estimator of the regression parameter β is defined as the minimizer of the negative of the Cox partial log-likelihood under an L1-norm type constraint. ℓ ( β ) = ∑ j ( ∑ i ∈ H j X i ⋅ β − ∑ ℓ = 0 m j − 1 log ⁡ ( ∑ i : Y i ≥ t j θ i − ℓ m j ∑ i ∈ H j θ i ) ) + λ ‖ β ‖ 1 , {\displaystyle \ell (\beta )=\sum _{j}\left(\sum _{i\in H_{j}}X_{i}\cdot \beta -\sum _{\ell =0}^{m_{j}-1}\log \left(\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}\right)\right)+\lambda \|\beta \|_{1},} There has been theoretical progress on this topic recently. == Software implementations == Mathematica: CoxModelFit function. R: coxph() function, located in the survival package. SAS: phreg procedure Stata: stcox command Python: CoxPHFitter located in the lifelines library. phreg in the statsmodels library. SPSS: Available under Cox Regression. MATLAB: fitcox or coxphfit function Julia: Available in the Survival.jl library. JMP: Available in Fit Proportional Hazards platform. Prism: Available in Survival Analyses and Multiple Variable Analyses == See also == Accelerated failure time model One in ten rule Weibull distribution Hypertabastic distribution == Notes == == References ==
Wikipedia/Proportional_hazards_model
The survival function is a function that gives the probability that a patient, device, or other object of interest will survive past a certain time. The survival function is also known as the survivor function or reliability function. The term reliability function is common in engineering while the term survival function is used in a broader range of applications, including human mortality. The survival function is the complementary cumulative distribution function of the lifetime. Sometimes complementary cumulative distribution functions are called survival functions in general. == Definition == Let the lifetime T {\displaystyle T} be a continuous random variable describing the time to failure. If T {\displaystyle T} has cumulative distribution function F ( t ) {\displaystyle F(t)} and probability density function f ( t ) {\displaystyle f(t)} on the interval [ 0 , ∞ ) {\displaystyle [0,\infty )} , then the survival function or reliability function is: S ( t ) = P ( T > t ) = 1 − F ( t ) = 1 − ∫ 0 t f ( u ) d u {\displaystyle S(t)=P(T>t)=1-F(t)=1-\int _{0}^{t}f(u)\,du} == Examples of survival functions == The graphs below show examples of hypothetical survival functions. The x-axis is time. The y-axis is the proportion of subjects surviving. The graphs show the probability that a subject will survive beyond time t. For example, for survival function 1, the probability of surviving longer than t = 2 months is 0.37. That is, 37% of subjects survive more than 2 months. For survival function 2, the probability of surviving longer than t = 2 months is 0.97. That is, 97% of subjects survive more than 2 months. Median survival may be determined from the survival function: The median survival is the point where the survival function intersects the value 0.5. For example, for survival function 2, 50% of the subjects survive 3.72 months. Median survival is thus 3.72 months. Median survival cannot always be determined from the graph alone. For example, in survival function 4, more than 50% of the subjects survive longer than the observation period of 10 months. The survival function is one of several ways to describe and display survival data. Another useful way to display data is a graph showing the distribution of survival times of subjects. Olkin, page 426, gives the following example of survival data. The number of hours between successive failures of an air-conditioning (AC) system were recorded. The time in hours, t, between successive failures are 1, 3, 5, 7, 11, 11, 11, 12, 14, 14, 14, 16, 16, 20, 21, 23, 42, 47, 52, 62, 71, 71, 87, 90, 95, 120, 120, 225, 246 and 261. The mean time between failures is 59.6. The figure below shows the distribution of the time between failures. The blue tick marks beneath the graph are the actual hours between successive AC failures. In this example, a curve representing the exponential distribution overlays the distribution of AC failure times; the exponential distribution approximates the distribution of AC failure times. This particular exponential curve is specified by the parameter lambda, λ: λ = 1/(mean time between failures) = 1/59.6 = 0.0168. The distribution of failure times is the probability density function (PDF), since time can take any positive value. In equations, the PDF is specified as fT. If time can only take discrete values (such as 1 day, 2 days, and so on), the distribution of failure times is called the probability mass function. Most survival analysis methods assume that time can take any positive value, and fT is the PDF. 
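Since the individual failure times are listed above, the quoted exponential rate can be reproduced directly; a minimal sketch in Python:

```python
import numpy as np

# Hours between successive AC failures (Olkin's example above).
t = np.array([1, 3, 5, 7, 11, 11, 11, 12, 14, 14, 14, 16, 16, 20, 21, 23, 42, 47,
              52, 62, 71, 71, 87, 90, 95, 120, 120, 225, 246, 261])

mean_time = t.mean()        # approximately 59.6 hours between failures
lam = 1.0 / mean_time       # exponential rate parameter, approximately 0.0168 per hour
print(mean_time, lam)
```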
If the time between observed AC failures is approximated using the exponential function, then the exponential curve gives the probability density function, fT, for AC failure times. Another useful way to display the survival data is a graph showing the cumulative failures up to each time point. These data may be displayed as either the cumulative number or the cumulative proportion of failures up to each time. The graph below shows the cumulative probability (or proportion) of failures at each time for the air conditioning system. The stairstep line in black shows the cumulative proportion of failures. For each step there is a blue tick at the bottom of the graph indicating an observed failure time. The smooth red line represents the exponential curve fitted to the observed data. A graph of the cumulative probability of failures up to each time point is called the cumulative distribution function (CDF). In survival analysis, the cumulative distribution function gives the probability that the survival time is less than or equal to a specific time, t. Let T be survival time, which is any positive number. A particular time is designated by the lower case letter t. The cumulative distribution function of T is the function F ( t ) = P ⁡ ( T ≤ t ) , {\displaystyle F(t)=\operatorname {P} (T\leq t),} where the right-hand side represents the probability that the random variable T is less than or equal to t. If time can take on any positive value, then the cumulative distribution function F(t) is the integral of the probability density function f(t). For the air-conditioning example, the graph of the CDF below illustrates that the probability that the time to failure is less than or equal to 100 hours is 0.81, as estimated using the exponential curve fit to the data. An alternative to graphing the probability that the failure time is less than or equal to 100 hours is to graph the probability that the failure time is greater than 100 hours. The probability that the failure time is greater than 100 hours must be 1 minus the probability that the failure time is less than or equal to 100 hours, because total probability must sum to 1. This gives: P ( failure times > 100 hours ) = 1 − P ( failure times ≤ 100 hours ) = 1 − 0.81 = 0.19 {\displaystyle P({\text{failure times}}>100{\text{ hours}})=1-P({\text{failure times}}\leq 100{\text{ hours}})=1-0.81=0.19} This relationship generalizes to all failure times: P ( T > t ) = 1 − P ( T ≤ t ) = cumulative distribution function. {\displaystyle P(T>t)=1-P(T\leq t)={\text{ cumulative distribution function.}}} This relationship is shown on the graphs below. The graph on the left is the cumulative distribution function, which is P(T ≤ t). The graph on the right is P(T > t) = 1 - P(T ≤ t). The graph on the right is the survival function, S(t). The fact that the S(t) = 1 – CDF is the reason that another name for the survival function is the complementary cumulative distribution function. == Parametric survival functions == In some cases, such as the air conditioner example, the distribution of survival times may be approximated well by a function such as the exponential distribution. Several distributions are commonly used in survival analysis, including the exponential, Weibull, gamma, normal, log-normal, and log-logistic. These distributions are defined by parameters. The normal (Gaussian) distribution, for example, is defined by the two parameters mean and standard deviation. Survival functions that are defined by parameters are said to be parametric. 
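These quantities follow directly from the fitted exponential curve; a brief check of the numbers quoted above (assuming the rate λ ≈ 0.0168 per hour from the fit):

```python
import numpy as np

lam = 1.0 / 59.6                 # exponential rate fitted to the AC failure data
t = 100.0                        # hours

F = 1.0 - np.exp(-lam * t)       # CDF: P(T <= 100 hours), approximately 0.81
S = np.exp(-lam * t)             # survival function: P(T > 100 hours), approximately 0.19
print(F, S)
```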
In the four survival function graphs shown above, the shape of the survival function is defined by a particular probability distribution: survival function 1 is defined by an exponential distribution, 2 is defined by a Weibull distribution, 3 is defined by a log-logistic distribution, and 4 is defined by another Weibull distribution. === Exponential survival function === For an exponential survival distribution, the probability of failure is the same in every time interval, no matter the age of the individual or device. This fact leads to the "memoryless" property of the exponential survival distribution: the age of a subject has no effect on the probability of failure in the next time interval. The exponential may be a good model for the lifetime of a system where parts are replaced as they fail. It may also be useful for modeling survival of living organisms over short intervals. It is not likely to be a good model of the complete lifespan of a living organism. As Efron and Hastie (p. 134) note, "If human lifetimes were exponential there wouldn't be old or young people, just lucky or unlucky ones". === Weibull survival function === A key assumption of the exponential survival function is that the hazard rate is constant. If, for example, the proportion of a group dying each year were constant at 10%, the hazard rate would be constant. The assumption of constant hazard may not be appropriate. For example, among most living organisms, the risk of death is greater in old age than in middle age – that is, the hazard rate increases with time. For some diseases, such as breast cancer, the risk of recurrence is lower after 5 years – that is, the hazard rate decreases with time. The Weibull distribution extends the exponential distribution to allow constant, increasing, or decreasing hazard rates. === Other parametric survival functions === There are several other parametric survival functions that may provide a better fit to a particular data set, including normal, lognormal, log-logistic, and gamma. The choice of parametric distribution for a particular application can be made using graphical methods or using formal tests of fit. These distributions and tests are described in textbooks on survival analysis. Lawless has extensive coverage of parametric models. Parametric survival functions are commonly used in manufacturing applications, in part because they enable estimation of the survival function beyond the observation period. However, appropriate use of parametric functions requires that data are well modeled by the chosen distribution. If an appropriate distribution is not available, or cannot be specified before a clinical trial or experiment, then non-parametric survival functions offer a useful alternative. == Non-parametric survival functions == A parametric model of survival may not be possible or desirable. In these situations, the most common method to model the survival function is the non-parametric Kaplan–Meier estimator. This estimator requires lifetime data. Periodic case (cohort) and death (and recovery) counts are statistically sufficient to make non-parametric maximum likelihood and least squares estimates of survival functions, without lifetime data. == Properties == Every survival function S ( t ) {\displaystyle S(t)} is monotonically decreasing, i.e. S ( u ) ≤ S ( t ) {\displaystyle S(u)\leq S(t)} for all u > t {\displaystyle u>t} .
It is a property of a random variable that maps a set of events, usually associated with mortality or failure of some system, onto time. The time, t = 0 {\displaystyle t=0} , represents some origin, typically the beginning of a study or the start of operation of some system. S ( 0 ) {\displaystyle S(0)} is commonly unity but can be less to represent the probability that the system fails immediately upon operation. Since the CDF is a right-continuous function, the survival function S ( t ) = 1 − F ( t ) {\displaystyle S(t)=1-F(t)} is also right-continuous. The survival function can be related to the probability density function f ( t ) {\displaystyle f(t)} and the hazard function λ ( t ) {\displaystyle \lambda (t)} f ( t ) = − S ′ ( t ) {\displaystyle f(t)=-S'(t)} λ ( t ) = − d d t log ⁡ S ( t ) {\displaystyle \lambda (t)=-{d \over {dt}}\log S(t)} So that S ( t ) = exp ⁡ [ − ∫ 0 t λ ( t ′ ) d t ′ ] {\displaystyle S(t)=\exp[-\int _{0}^{t}\lambda (t')dt']} The expected survival time E ( T ) = ∫ 0 ∞ S ( t ) d t {\displaystyle \mathbb {E} (T)=\int _{0}^{\infty }S(t)dt} == See also == Failure rate Frequency of exceedance Kaplan–Meier estimator Mean time to failure Residence time (statistics) Survivorship curve == References ==
Wikipedia/Survival_function
Frequentist inference is a type of statistical inference based in frequentist probability, which treats “probability” in equivalent terms to “frequency” and draws conclusions from sample-data by means of emphasizing the frequency or proportion of findings in the data. Frequentist inference underlies frequentist statistics, in which the well-established methodologies of statistical hypothesis testing and confidence intervals are founded. == History of frequentist statistics == Frequentism is based on the presumption that statistics represent probabilistic frequencies. This view was primarily developed by Ronald Fisher and the team of Jerzy Neyman and Egon Pearson. Ronald Fisher contributed to frequentist statistics by developing the frequentist concept of "significance testing", which is the study of the significance of a measure of a statistic when compared to the hypothesis. Neyman-Pearson extended Fisher's ideas to apply to multiple hypotheses. They showed that a test based on the ratio of the probabilities of the data under two competing hypotheses can, for a given probability of falsely rejecting the null hypothesis, maximize the probability of correctly rejecting it. This relationship serves as the basis of type I and type II errors and confidence intervals. == Definition == For statistical inference, the statistic about which we want to make inferences is y ∈ Y {\displaystyle y\in Y} , where the random vector Y {\displaystyle Y} is a function of an unknown parameter, θ {\displaystyle \theta } . The parameter θ {\displaystyle \theta } , in turn, is partitioned into ( ψ , λ {\displaystyle \psi ,\lambda } ), where ψ {\displaystyle \psi } is the parameter of interest, and λ {\displaystyle \lambda } is the nuisance parameter. For concreteness, ψ {\displaystyle \psi } might be the population mean, μ {\displaystyle \mu } , and the nuisance parameter λ {\displaystyle \lambda } the standard deviation of the population, σ {\displaystyle \sigma } . Thus, statistical inference is concerned with the expectation of random vector Y {\displaystyle Y} , E ( Y ) = E ( Y ; θ ) = ∫ y f Y ( y ; θ ) d y {\displaystyle E(Y)=E(Y;\theta )=\int yf_{Y}(y;\theta )dy} . To construct areas of uncertainty in frequentist inference, a pivot is used which defines the area around ψ {\displaystyle \psi } that can be used to provide an interval to estimate uncertainty. The pivot is a function p ( t , ψ ) {\displaystyle p(t,\psi )} of the data and the parameter of interest whose probability distribution does not depend on the unknown parameters, and which is here taken to be strictly increasing in ψ {\displaystyle \psi } , where t ∈ T {\displaystyle t\in T} is a random vector. This allows that, for some 0 < c {\displaystyle c} < 1, we can define P { p ( T , ψ ) ≤ p c ∗ } {\displaystyle P\{p(T,\psi )\leq p_{c}^{*}\}} , which is the probability that the pivot function is less than some well-defined value. This implies P { ψ ≤ q ( T , c ) } = 1 − c {\displaystyle P\{\psi \leq q(T,c)\}=1-c} , where q ( t , c ) {\displaystyle q(t,c)} is a 1 − c {\displaystyle 1-c} upper limit for ψ {\displaystyle \psi } . Note that 1 − c {\displaystyle 1-c} is a range of outcomes that define a one-sided limit for ψ {\displaystyle \psi } , and that 1 − 2 c {\displaystyle 1-2c} is a two-sided limit for ψ {\displaystyle \psi } , when we want to estimate a range of outcomes where ψ {\displaystyle \psi } may occur. This rigorously defines the confidence interval, which is the range of outcomes about which we can make statistical inferences.
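As a concrete illustration of turning a pivot into a confidence interval (a sketch using the familiar Student-t pivot for a normal mean with simulated data; the particular pivot and numbers are not part of the original text):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(loc=5.0, scale=2.0, size=20)   # sample whose mean psi is to be estimated

n, c = len(y), 0.05
ybar, s = y.mean(), y.std(ddof=1)

# Pivot: (ybar - psi) / (s / sqrt(n)) follows a t distribution with n - 1 degrees of
# freedom regardless of the true psi; inverting it gives a two-sided 1 - c interval.
tq = stats.t.ppf(1.0 - c / 2.0, df=n - 1)
lower = ybar - tq * s / np.sqrt(n)
upper = ybar + tq * s / np.sqrt(n)
print(lower, upper)
```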
== Fisherian reduction and Neyman-Pearson operational criteria == Two complementary concepts in frequentist inference are the Fisherian reduction and the Neyman-Pearson operational criteria. Together these concepts illustrate a way of constructing frequentist intervals that define the limits for ψ {\displaystyle \psi } . The Fisherian reduction is a method of determining the interval within which the true value of ψ {\displaystyle \psi } may lie, while the Neyman-Pearson operational criteria is a decision rule about making a priori probability assumptions. The Fisherian reduction is defined as follows: Determine the likelihood function (this is usually just gathering the data); Reduce to a sufficient statistic S {\displaystyle S} of the same dimension as θ {\displaystyle \theta } ; Find the function of S {\displaystyle S} that has a distribution depending only on ψ {\displaystyle \psi } ; Invert that distribution (this yields a cumulative distribution function or CDF) to obtain limits for ψ {\displaystyle \psi } at an arbitrary set of probability levels; Use the conditional distribution of the data given S = s {\displaystyle S=s} informally or formally to assess the adequacy of the formulation. Essentially, the Fisherian reduction is designed to find where the sufficient statistic can be used to determine the range of outcomes where ψ {\displaystyle \psi } may occur on a probability distribution that defines all the potential values of ψ {\displaystyle \psi } . This is necessary for formulating confidence intervals, where we can find a range of outcomes over which ψ {\displaystyle \psi } is likely to occur in the long-run. The Neyman-Pearson operational criteria is an even more specific understanding of the range of outcomes where the relevant statistic, ψ {\displaystyle \psi } , can be said to occur in the long run. The Neyman-Pearson operational criteria defines the likelihood of that range actually being adequate or of the range being inadequate. The Neyman-Pearson criteria defines the range of the probability distribution that, if ψ {\displaystyle \psi } exists in this range, is still below the true population statistic. For example, if the distribution from the Fisherian reduction exceeds a threshold that we consider to be a priori implausible, then the Neyman-Pearson reduction's evaluation of that distribution can be used to infer where looking purely at the Fisherian reduction's distributions can give us inaccurate results. Thus, the Neyman-Pearson reduction is used to find the probability of type I and type II errors. As a point of reference, the complement to this in Bayesian statistics is the minimum Bayes risk criterion. Because of the reliance of the Neyman-Pearson criteria on our ability to find a range of outcomes where ψ {\displaystyle \psi } is likely to occur, the Neyman-Pearson approach is only possible where a Fisherian reduction can be achieved. == Experimental design and methodology == Frequentist inferences are associated with the application of frequentist probability to experimental design and interpretation, and specifically with the view that any given experiment can be considered one of an infinite sequence of possible repetitions of the same experiment, each capable of producing statistically independent results. In this view, the frequentist inference approach to drawing conclusions from data is effectively to require that the correct conclusion should be drawn with a given (high) probability, among this notional set of repetitions.
However, exactly the same procedures can be developed under a subtly different formulation. This is one where a pre-experiment point of view is taken. It can be argued that the design of an experiment should include, before undertaking the experiment, decisions about exactly what steps will be taken to reach a conclusion from the data yet to be obtained. These steps can be specified by the scientist so that there is a high probability of reaching a correct decision where, in this case, the probability relates to a yet to occur set of random events and hence does not rely on the frequency interpretation of probability. This formulation has been discussed by Neyman, among others. This is especially pertinent because the significance of a frequentist test can vary under model selection, a violation of the likelihood principle. == The statistical philosophy of frequentism == Frequentism is the study of probability with the assumption that results occur with a given frequency over some period of time or with repeated sampling. As such, frequentist analysis must be formulated with consideration to the assumptions of the problem frequentism attempts to analyze. This requires looking into whether the question at hand is concerned with understanding variety of a statistic or locating the true value of a statistic. The difference between these assumptions is critical for interpreting a hypothesis test. There are broadly two camps of statistical inference, the epistemic approach and the epidemiological approach. The epistemic approach is the study of variability; namely, how often do we expect a statistic to deviate from some observed value. The epidemiological approach is concerned with the study of uncertainty; in this approach, the value of the statistic is fixed but our understanding of that statistic is incomplete. For concreteness, imagine trying to measure the stock market quote versus evaluating an asset's price. The stock market fluctuates so greatly that trying to find exactly where a stock price is going to be is not useful: the stock market is better understood using the epistemic approach, where we can try to quantify its fickle movements. Conversely, the price of an asset might not change that much from day to day: it is better to locate the true value of the asset rather than find a range of prices and thus the epidemiological approach is better. The difference between these approaches is non-trivial for the purposes of inference. For the epistemic approach, we formulate the problem as if we want to attribute probability to a hypothesis. This can only be done with Bayesian statistics, where the interpretation of probability is straightforward because Bayesian statistics is conditional on the entire sample space, whereas frequentist testing is concerned with the whole experimental design. Frequentist statistics is conditioned not on solely the data but also on the experimental design. In frequentist statistics, the cutoff for understanding the frequency occurrence is derived from the family distribution used in the experiment design. For example, a binomial distribution and a negative binomial distribution can be used to analyze exactly the same data, but because their tail ends are different the frequentist analysis will realize different levels of statistical significance for the same data that assumes different probability distributions. This difference does not occur in Bayesian inference. For more, see the likelihood principle, which frequentist statistics inherently violates. 
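The binomial versus negative binomial point can be made concrete with a standard textbook illustration (a sketch; the particular numbers, 9 successes observed together with 3 failures, are chosen only for illustration): the same observed counts give different tail probabilities under the two stopping rules.

```python
from scipy import stats

# Observed: 9 successes and 3 failures; test H0: p = 0.5 against p > 0.5.

# Design 1: the number of trials (12) was fixed in advance, so the tail is binomial.
p_binomial = stats.binom.sf(8, 12, 0.5)        # P(at least 9 successes in 12 trials), about 0.073

# Design 2: trials continued until the 3rd failure, so the tail is negative binomial.
p_neg_binomial = stats.nbinom.sf(8, 3, 0.5)    # P(at least 9 successes before the 3rd failure), about 0.033

print(p_binomial, p_neg_binomial)
```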
For the epidemiological approach, the central idea behind frequentist statistics must be discussed. Frequentist statistics is designed so that, in the long-run, the frequency of a statistic may be understood, and in the long-run the range of the true mean of a statistic can be inferred. This leads to the Fisherian reduction and the Neyman-Pearson operational criteria, discussed above. When we define the Fisherian reduction and the Neyman-Pearson operational criteria for any statistic, we are assessing, according to these authors, the likelihood that the true value of the statistic will occur within a given range of outcomes assuming a number of repetitions of our sampling method. This allows for long-run inference: a 95% confidence interval literally means that, over repeated sampling, the true mean would lie inside the resulting confidence intervals 95% of the time; it does not mean that the mean lies in a particular confidence interval with 95% certainty, which is a popular misconception. Very commonly the epistemic view and the epidemiological view are incorrectly regarded as interconvertible. First, the epistemic view is centered around Fisherian significance tests that are designed to provide inductive evidence against the null hypothesis, H 0 {\displaystyle H_{0}} , in a single experiment, and is defined by the Fisherian p-value. Conversely, the epidemiological view, conducted with Neyman-Pearson hypothesis testing, is designed to minimize Type II (false acceptance) errors in the long-run by providing error minimizations that work in the long-run. The difference between the two is critical because the epistemic view stresses the conditions under which we might find one value to be statistically significant; meanwhile, the epidemiological view defines the conditions under which long-run results present valid results. These are extremely different inferences, because one-time, epistemic conclusions do not inform long-run errors, and long-run errors cannot be used to certify whether one-time experiments are meaningful. The extrapolation of one-time experiments to long-run occurrences is a misattribution, and the extrapolation of long-run trends to individual experiments is an example of the ecological fallacy. == Relationship with other approaches == Frequentist inferences stand in contrast to other types of statistical inferences, such as Bayesian inferences and fiducial inferences. While the "Bayesian inference" is sometimes held to include the approach to inferences leading to optimal decisions, a more restricted view is taken here for simplicity. === Bayesian inference === Bayesian inference is based in Bayesian probability, which treats “probability” as equivalent to “certainty”, and thus the essential difference between the frequentist inference and the Bayesian inference is the same as the difference between the two interpretations of what a “probability” means. However, where appropriate, Bayesian inferences (meaning in this case an application of Bayes' theorem) are used by those employing frequency probability. There are two major differences in the frequentist and Bayesian approaches to inference that are not included in the above consideration of the interpretation of probability: In a frequentist approach to inference, unknown parameters are typically considered as being fixed, rather than as being random variates.
In contrast, a Bayesian approach allows probabilities to be associated with unknown parameters, where these probabilities can sometimes have a frequency probability interpretation as well as a Bayesian one. The Bayesian approach allows these probabilities to have an interpretation as representing the scientist's belief that given values of the parameter are true (see Bayesian probability - Personal probabilities and objective methods for constructing priors). The result of a Bayesian approach can be a probability distribution for what is known about the parameters given the results of the experiment or study. The result of a frequentist approach is either a decision from a significance test or a confidence interval. == See also == Intuitive statistics German tank problem == References == == Bibliography ==
Wikipedia/Frequentist_inference
Probabilistic design is a discipline within engineering design. It deals primarily with the consideration and minimization of the effects of random variability upon the performance of an engineering system during the design phase. Typically, these effects studied and optimized are related to quality and reliability. It differs from the classical approach to design by assuming a small probability of failure instead of using the safety factor. Probabilistic design is used in a variety of different applications to assess the likelihood of failure. Disciplines which extensively use probabilistic design principles include product design, quality control, systems engineering, machine design, civil engineering (particularly useful in limit state design) and manufacturing. == Objective and motivations == When using a probabilistic approach to design, the designer no longer thinks of each variable as a single value or number. Instead, each variable is viewed as a continuous random variable with a probability distribution. From this perspective, probabilistic design predicts the flow of variability (or distributions) through a system. Because there are so many sources of random and systemic variability when designing materials and structures, it is greatly beneficial for the designer to model the factors studied as random variables. By considering this model, a designer can make adjustments to reduce the flow of random variability, thereby improving engineering quality. Proponents of the probabilistic design approach contend that many quality problems can be predicted and rectified during the early design stages and at a much reduced cost. Typically, the goal of probabilistic design is to identify the design that will exhibit the smallest effects of random variability. Minimizing random variability is essential to probabilistic design because it limits uncontrollable factors, while also providing a much more precise determination of failure probability. This could be the one design option out of several that is found to be most robust. Alternatively, it could be the only design option available, but with the optimum combination of input variables and parameters. This second approach is sometimes referred to as robustification, parameter design or design for six sigma. == Sources of variability == Though the laws of physics dictate the relationships between variables and measurable quantities such as force, stress, strain, and deflection, there are still three primary sources of variability when considering these relationships. The first source of variability is statistical, due to the limitations of having a finite sample size to estimate parameters such as yield stress, Young's modulus, and true strain. Measurement uncertainty is the most easily minimized out of these three sources, as variance is proportional to the inverse of the sample size. We can represent variance due to measurement uncertainties as a corrective factor B {\displaystyle B} , which is multiplied by the true mean X {\displaystyle X} to yield the measured mean of X ¯ {\displaystyle {\bar {X}}} . Equivalently, X ¯ = B ¯ X {\displaystyle {\bar {X}}={\bar {B}}X} . 
This yields the result B ¯ = X ¯ X {\displaystyle {\bar {B}}={\frac {\bar {X}}{X}}} , and the variance of the corrective factor B {\displaystyle B} is given as: V a r [ B ] = V a r [ X ¯ ] X 2 = V a r [ X ] n X 2 {\displaystyle Var[B]={\frac {Var[{\bar {X}}]}{X^{2}}}={\frac {Var[X]}{nX^{2}}}} where B {\displaystyle B} is the correction factor, X {\displaystyle X} is the true mean, X ¯ {\displaystyle {\bar {X}}} is the measured mean, and n {\displaystyle n} is the number of measurements made. The second source of variability stems from the inaccuracies and uncertainties of the model used to calculate such parameters. These include the physical models we use to understand loading and their associated effects in materials. The uncertainty from the model of a physical measurable can be determined if both theoretical values according to the model and experimental results are available. The measured value H ^ ( ω ) {\displaystyle {\hat {H}}(\omega )} is equivalent to the theoretical model prediction H ( ω ) {\displaystyle H(\omega )} multiplied by a model error of ϕ ( ω ) {\displaystyle \phi (\omega )} , plus the experimental error ε ( ω ) {\displaystyle \varepsilon (\omega )} . Equivalently, H ^ ( ω ) = H ( ω ) ϕ ( ω ) + ε ( ω ) {\displaystyle {\hat {H}}(\omega )=H(\omega )\phi (\omega )+\varepsilon (\omega )} and the model error takes the general form: ϕ ( ω ) = ∑ i = 0 n a i ω i {\displaystyle \phi (\omega )=\sum _{i=0}^{n}a_{i}\omega ^{i}} where a i {\displaystyle a_{i}} are coefficients of regression determined from experimental data. Finally, the last variability source comes from the intrinsic variability of any physical measurable. There is a fundamental random uncertainty associated with all physical phenomena, and it is comparatively the most difficult to minimize this variability. Thus, each physical variable and measurable quantity can be represented as a random variable with a mean and a variability. == Comparison to classical design principles == Consider the classical approach to performing tensile testing in materials. The stress experienced by a material is given as a singular value (i.e., force applied divided by the cross-sectional area perpendicular to the loading axis). The yield stress, which is the maximum stress a material can support before plastic deformation, is also given as a singular value. Under this approach, there is a 0% chance of material failure below the yield stress, and a 100% chance of failure above it. However, these assumptions break down in the real world. The yield stress of a material is often only known to a certain precision, meaning that there is an uncertainty and therefore a probability distribution associated with the known value. Let the probability distribution function of the yield strength be given as f ( R ) {\displaystyle f(R)} . Similarly, the applied load or predicted load can also only be known to a certain precision, and the range of stress which the material will undergo is unknown as well. Let this probability distribution be given as f ( S ) {\displaystyle f(S)} .
The probability of failure corresponds to the region where the two distributions interfere, that is, the chance that the applied load exceeds the strength; mathematically: P f = P ( R < S ) = ∫ − ∞ ∞ ∫ R ∞ f ( R ) f ( S ) d S d R {\displaystyle P_{f}=P(R<S)=\int \limits _{-\infty }^{\infty }\int \limits _{R}^{\infty }f(R)f(S)dSdR} or equivalently, if we let the difference between yield stress and applied load equal a third function R − S = Q {\displaystyle R-S=Q} , then: P f = ∫ − ∞ ∞ ∫ R ∞ f ( R ) f ( S ) d S d R = ∫ − ∞ 0 f ( Q ) d Q {\displaystyle P_{f}=\int \limits _{-\infty }^{\infty }\int \limits _{R}^{\infty }f(R)f(S)dSdR=\int \limits _{-\infty }^{0}f(Q)dQ} where, for independent R {\displaystyle R} and S {\displaystyle S} , the variance of the difference Q {\displaystyle Q} is given by σ Q 2 = σ R 2 + σ S 2 {\displaystyle \sigma _{Q}^{2}=\sigma _{R}^{2}+\sigma _{S}^{2}} . The probabilistic design principles allow for precise determination of failure probability, whereas the classical model assumes absolutely no failure before yield strength. It is clear that the classical applied load vs. yield stress model has limitations, so modeling these variables with a probability distribution to calculate failure probability is a more precise approach. The probabilistic design approach allows for the determination of material failure under all loading conditions, associating quantitative probabilities to failure chance in place of a definitive yes or no. == Methods used to determine variability == In essence, probabilistic design focuses upon the prediction of the effects of variability. In order to be able to predict and calculate variability associated with model uncertainty, many methods have been devised and utilized across different disciplines to determine theoretical values for parameters such as stress and strain. Examples of theoretical models used alongside probabilistic design include: Finite element analysis Stochastic finite element method Boundary element method Meshfree methods Analytical methods (refer to classical design principles) Additionally, there are many statistical methods used to quantify and predict the random variability in the desired measurable. Some methods that are used to predict the random variability of an output include: the Monte Carlo method (including Latin hypercubes); propagation of error; design of experiments (DOE) the method of moments Statistical interference quality function deployment Failure mode and effects analysis == See also == Interval finite element Stochastic modeling First-order second-moment method Weibull distribution == Footnotes == == References == Ang and Tang (2006) Probability Concepts in Engineering: Emphasis on Applications to Civil and Environmental Engineering. John Wiley & Sons. ISBN 0-471-72064-X Ash (1993) The Probability Tutoring Book: An Intuitive Course for Engineers and Scientists (and Everyone Else). Wiley-IEEE Press. ISBN 0-7803-1051-9 Clausing (1994) Total Quality Development: A Step-By-Step Guide to World-Class Concurrent Engineering. American Society of Mechanical Engineers. ISBN 0-7918-0035-0 Haugen (1980) Probabilistic mechanical design. Wiley. ISBN 0-471-05847-5 Papoulis (2002) Probability, Random Variables and Stochastic Process. McGraw-Hill Publishing Co. ISBN 0-07-119981-0 Siddall (1982) Optimal Engineering Design. CRC. ISBN 0-8247-1633-7 Dodson, B., Hammett, P., and Klerx, R. (2014) Probabilistic Design for Optimization and Robustness for Engineers John Wiley & Sons, Inc. ISBN 978-1-118-79619-1 Cederbaum G., Elishakoff I., Aboudi J.
and Librescu L., Random Vibration and Reliability of Composite Structures, Technomic, Lancaster, 1992, XIII + 191 pp.; ISBN 0-87762-865-3 Elishakoff I., Lin Y.K. and Zhu L.P., Probabilistic and Convex Modeling of Acoustically Excited Structures, Elsevier Science Publishers, Amsterdam, 1994, VIII + 296 pp.; ISBN 0-444-81624-0 Elishakoff I., Probabilistic Methods in the Theory of Structures: Random Strength of Materials, Random Vibration, and Buckling, World Scientific, Singapore, 2017, ISBN 978-981-3149-84-7 == External links == Probabilistic design Non Deterministic Approaches in Engineering
Wikipedia/Probabilistic_design
In the statistical area of survival analysis, an accelerated failure time model (AFT model) is a parametric model that provides an alternative to the commonly used proportional hazards models. Whereas a proportional hazards model assumes that the effect of a covariate is to multiply the hazard by some constant, an AFT model assumes that the effect of a covariate is to accelerate or decelerate the life course of a disease by some constant. There is strong basic science evidence from C. elegans experiments by Stroustrup et al. indicating that AFT models are the correct model for biological survival processes. == Model specification == In full generality, the accelerated failure time model can be specified as λ ( t | θ ) = θ λ 0 ( θ t ) {\displaystyle \lambda (t|\theta )=\theta \lambda _{0}(\theta t)} where θ {\displaystyle \theta } denotes the joint effect of covariates, typically θ = exp ⁡ ( − [ β 1 X 1 + ⋯ + β p X p ] ) {\displaystyle \theta =\exp(-[\beta _{1}X_{1}+\cdots +\beta _{p}X_{p}])} . (Specifying the regression coefficients with a negative sign implies that high values of the covariates increase the survival time, but this is merely a sign convention; without a negative sign, they increase the hazard.) This is satisfied if the probability density function of the event is taken to be f ( t | θ ) = θ f 0 ( θ t ) {\displaystyle f(t|\theta )=\theta f_{0}(\theta t)} ; it then follows for the survival function that S ( t | θ ) = S 0 ( θ t ) {\displaystyle S(t|\theta )=S_{0}(\theta t)} . From this it is easy to see that the moderated life time T {\displaystyle T} is distributed such that T θ {\displaystyle T\theta } and the unmoderated life time T 0 {\displaystyle T_{0}} have the same distribution. Consequently, log ⁡ ( T ) {\displaystyle \log(T)} can be written as log ⁡ ( T ) = − log ⁡ ( θ ) + log ⁡ ( T θ ) := − log ⁡ ( θ ) + ϵ {\displaystyle \log(T)=-\log(\theta )+\log(T\theta ):=-\log(\theta )+\epsilon } where the last term is distributed as log ⁡ ( T 0 ) {\displaystyle \log(T_{0})} , i.e., independently of θ {\displaystyle \theta } . This reduces the accelerated failure time model to regression analysis (typically a linear model) where − log ⁡ ( θ ) {\displaystyle -\log(\theta )} represents the fixed effects, and ϵ {\displaystyle \epsilon } represents the noise. Different distributions of ϵ {\displaystyle \epsilon } imply different distributions of T 0 {\displaystyle T_{0}} , i.e., different baseline distributions of the survival time. Typically, in survival-analytic contexts, many of the observations are censored: we only know that T i > t i {\displaystyle T_{i}>t_{i}} , not T i = t i {\displaystyle T_{i}=t_{i}} . The former case represents censoring (the individual is known only to have survived beyond t i {\displaystyle t_{i}} ), while the latter case represents an observed event, such as death, during the follow-up. These right-censored observations can pose technical challenges for estimating the model, if the distribution of T 0 {\displaystyle T_{0}} is unusual. The interpretation of θ {\displaystyle \theta } in accelerated failure time models is straightforward: θ = 2 {\displaystyle \theta =2} means that everything in the relevant life history of an individual happens twice as fast. For example, if the model concerns the development of a tumor, it means that all of the pre-stages progress twice as fast as for the unexposed individual, implying that the expected time until a clinical disease is half the baseline time. 
However, this does not mean that the hazard function λ ( t | θ ) {\displaystyle \lambda (t|\theta )} is always twice as high; that would be the proportional hazards model. == Statistical issues == Unlike the proportional hazards framework, in which Cox's semi-parametric model is more widely used than parametric models, AFT models are predominantly fully parametric, i.e. a probability distribution is specified for log ⁡ ( T 0 ) {\displaystyle \log(T_{0})} . (Buckley and James proposed a semi-parametric AFT, but its use is relatively uncommon in applied research; in a 1992 paper, Wei pointed out that the Buckley–James model has no theoretical justification and lacks robustness, and reviewed alternatives.) This can be a problem if a degree of realistic detail is required for modelling the distribution of a baseline lifetime. Hence, technical developments in this direction would be highly desirable. When a frailty term is incorporated in the survival model, the regression parameter estimates from AFT models are robust to omitted covariates, unlike proportional hazards models. They are also less affected by the choice of probability distribution for the frailty term. The results of AFT models are easily interpreted. For example, the results of a clinical trial with mortality as the endpoint could be interpreted as a certain percentage increase in future life expectancy on the new treatment compared to the control. So a patient could be informed that they would be expected to live (say) 15% longer if they took the new treatment. Hazard ratios can prove harder to explain in layman's terms. === Distributions used in AFT models === The log-logistic distribution provides the most commonly used AFT model. Unlike the Weibull distribution, it can exhibit a non-monotonic hazard function which increases at early times and decreases at later times. It is somewhat similar in shape to the log-normal distribution but it has heavier tails. The log-logistic cumulative distribution function has a simple closed form, which becomes important computationally when fitting data with censoring. For the censored observations one needs the survival function, which is the complement of the cumulative distribution function, i.e. one needs to be able to evaluate S ( t | θ ) = 1 − F ( t | θ ) {\displaystyle S(t|\theta )=1-F(t|\theta )} . The Weibull distribution (including the exponential distribution as a special case) can be parameterised as either a proportional hazards model or an AFT model, and is the only family of distributions to have this property. The results of fitting a Weibull model can therefore be interpreted in either framework. However, the biological applicability of this model may be limited by the fact that the hazard function is monotonic, i.e. either decreasing or increasing. Any distribution on a multiplicatively closed group, such as the positive real numbers, is suitable for an AFT model. Other distributions include the log-normal, gamma, hypertabastic, Gompertz, and inverse Gaussian distributions, although they are less popular than the log-logistic, partly because their cumulative distribution functions do not have a closed form. Finally, the generalized gamma distribution is a three-parameter distribution that includes the Weibull, log-normal and gamma distributions as special cases. 
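To make the log-linear representation of the model concrete, the following sketch simulates lifetimes from a Weibull baseline under the acceleration factor θ = exp(−βX) and recovers β by ordinary least squares on log(T). The covariate, the coefficient value, and the baseline parameters are invented for illustration, and censoring is ignored for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative assumptions: one covariate, true beta = 0.5 (positive beta
# lengthens survival under the sign convention theta = exp(-beta * x)),
# and a Weibull baseline lifetime T0 with shape 1.5 and unit scale.
n, beta_true = 50_000, 0.5
x = rng.normal(size=n)
t0 = rng.weibull(1.5, size=n)      # baseline lifetimes T0
theta = np.exp(-beta_true * x)     # acceleration factor theta
t = t0 / theta                     # since T * theta has the distribution of T0

# log(T) = beta * x + log(T0), so a least-squares fit of log(T) on x
# recovers beta (no censoring is simulated here).
slope, intercept = np.polyfit(x, np.log(t), 1)
print(f"true beta = {beta_true}, estimated beta = {slope:.3f}")
```

With censored data, the same log-linear form is typically fitted by maximum likelihood under a chosen baseline distribution rather than by ordinary least squares.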
== References == == Further reading == Bradburn, MJ; Clark, TG; Love, SB; Altman, DG (2003), "Survival Analysis Part II: Multivariate data analysis - an introduction to concepts and methods", British Journal of Cancer, 89 (3): 431–436, doi:10.1038/sj.bjc.6601119, PMC 2394368, PMID 12888808 Hougaard, Philip (1999), "Fundamentals of Survival Data", Biometrics, 55 (1): 13–22, doi:10.1111/j.0006-341X.1999.00013.x, PMID 11318147 Collett, D. (2003), Modelling Survival Data in Medical Research (2nd ed.), CRC press, ISBN 978-1-58488-325-8 Cox, David Roxbee; Oakes, D. (1984), Analysis of Survival Data, CRC Press, ISBN 978-0-412-24490-2 Marubini, Ettore; Valsecchi, Maria Grazia (1995), Analysing Survival Data from Clinical Trials and Observational Studies, Wiley, ISBN 978-0-470-09341-2 Martinussen, Torben; Scheike, Thomas (2006), Dynamic Regression Models for Survival Data, Springer, ISBN 0-387-20274-9 Bagdonavicius, Vilijandas; Nikulin, Mikhail (2002), Accelerated Life Models. Modeling and Statistical Analysis, Chapman&Hall/CRC, ISBN 1-58488-186-0
Wikipedia/Accelerated_failure_time_model
Demographic statistics are measures of the characteristics of, or changes to, a population. Records of births, deaths, marriages, immigration and emigration, and a regular census of population provide information that is key to making sound decisions about national policy. A useful summary of such data is the population pyramid. It provides data about the sex and age distribution of the population in an accessible graphical format. Another summary is called the life table. For a cohort of persons born in the same year, it traces and projects their life experiences from birth to death. For a given cohort, the proportion expected to survive each year (or decade in an abridged life table) is presented in tabular or graphical form. The ratio of males to females by age indicates the consequences of differing mortality rates on the sexes. Thus, while values above one are common for newborns, the ratio dwindles until it is well below one for the older population. == Collection == National population statistics are usually collected by conducting a census. However, because these are usually huge logistical exercises, countries normally conduct censuses only once every five to 10 years. Even when a census is conducted it may miss counting everyone (known as undercount). Also, some people counted in the census may be recorded in a different place than where they usually live, because they are travelling, for example (this may result in overcounting). Consequently, raw census numbers are often adjusted to produce census estimates that identify such statistics as resident population, residents, tourists and other visitors, nationals and aliens (non-nationals). For privacy reasons, particularly when there are small counts, some census results may be rounded, often to the nearest ten, hundred or thousand, and sometimes randomly up, down or to another nearby number, such as within 3 of the actual count. Between censuses, administrative data collected by various agencies about population events such as births, deaths, and cross-border migration may be used to produce intercensal estimates. == Population estimates and projections == Population estimates are usually derived from census and other administrative data. Population estimates are normally produced after the date the estimate is for. Some estimates, such as the usually resident population, count the people who usually live in a locality as at the census date, even though the census did not count them within that locality. Census questions usually include questions about where a person usually lives, whether they are a resident or visitor, or whether they also live somewhere else, to allow these estimates to be made. Other estimates are concerned with estimating population on a particular date that is different from the census date, for example the middle or end of a calendar or financial year. These estimates often use birth and death records and migration data to adjust census counts for the changes that have happened since the census. Population projections are produced in advance of the date they are for. They use time series analysis of existing census data and other sources of population information to forecast the size of future populations. Because there are unknown factors that may affect future population changes, population projections often incorporate high and low as well as expected values for future populations. Population projections are often recomputed after a census has been conducted. They also depend on how the area concerned is adjusted under a particular demarcation (boundary definition). 
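As an illustration of the adjustment described above, a post-censal estimate can be built from the demographic balancing equation (census count, adjusted for undercount, plus natural increase plus net migration). All of the figures in the sketch below are invented for the example:

```python
# A minimal sketch of a post-censal population estimate using the
# demographic balancing equation; every figure here is invented.
census_count   = 4_850_000   # resident population counted at the last census
undercount_adj = 1.015       # adjustment factor for estimated net undercount
births         = 62_000      # registered births since the census
deaths         = 38_500      # registered deaths since the census
arrivals       = 55_000      # immigration since the census
departures     = 41_000      # emigration since the census

census_estimate = census_count * undercount_adj
natural_increase = births - deaths
net_migration = arrivals - departures

current_estimate = census_estimate + natural_increase + net_migration
print(f"Estimated resident population: {current_estimate:,.0f}")
```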
== History == While many censuses were conducted in antiquity, there are few population statistics that survive. One example though can be found in the Bible, in chapter 1 of the Book of Numbers. Not only are the statistics given, but the method used to compile those statistics is also described. In modern-day terms, this metadata about the census is probably of as much value as the statistics themselves as it allows researchers to determine not only what was being counted but how and why it was done. == Metadata == Modern population statistics are normally accompanied by metadata that explains how the statistics have been compiled and adjusted to compensate for any collection issues. == Statistical sources == Most countries have a census bureau or government agency responsible for conducting censuses. Many of these agencies publish their country's census results and other population statistics on their agency's website. == See also == Demographic window Census - Census Bureau, Census tract, Census block group, Census block. Intercensal estimate Population projection == References == == Further reading == "Human Population Numbers as a Function of Food Supply". Russell Hopfenberg (Duke University, Durham, NC, USA) and David Pimentel (Cornell University, Ithaca, NY, USA). 6 March 2001.
Wikipedia/Demographic_statistics
Statistical process control (SPC) or statistical quality control (SQC) is the application of statistical methods to monitor and control the quality of a production process. This helps to ensure that the process operates efficiently, producing more specification-conforming products with less waste scrap. SPC can be applied to any process where the "conforming product" (product meeting specifications) output can be measured. Key tools used in SPC include run charts, control charts, a focus on continuous improvement, and the design of experiments. An example of a process where SPC is applied is manufacturing lines. SPC must be practiced in two phases: the first phase is the initial establishment of the process, and the second phase is the regular production use of the process. In the second phase, a decision of the period to be examined must be made, depending upon the change in 5M&E conditions (Man, Machine, Material, Method, Movement, Environment) and wear rate of parts used in the manufacturing process (machine parts, jigs, and fixtures). An advantage of SPC over other methods of quality control, such as "inspection," is that it emphasizes early detection and prevention of problems, rather than the correction of problems after they have occurred. In addition to reducing waste, SPC can lead to a reduction in the time required to produce the product. SPC makes it less likely the finished product will need to be reworked or scrapped. == History == Statistical process control was pioneered by Walter A. Shewhart at Bell Laboratories in the early 1920s. Shewhart developed the control chart in 1924 and the concept of a state of statistical control. Statistical control is equivalent to the concept of exchangeability developed by logician William Ernest Johnson also in 1924 in his book Logic, Part III: The Logical Foundations of Science. Along with a team at AT&T that included Harold Dodge and Harry Romig he worked to put sampling inspection on a rational statistical basis as well. Shewhart consulted with Colonel Leslie E. Simon in the application of control charts to munitions manufacture at the Army's Picatinny Arsenal in 1934. That successful application helped convince Army Ordnance to engage AT&T's George D. Edwards to consult on the use of statistical quality control among its divisions and contractors at the outbreak of World War II. W. Edwards Deming invited Shewhart to speak at the Graduate School of the U.S. Department of Agriculture and served as the editor of Shewhart's book Statistical Method from the Viewpoint of Quality Control (1939), which was the result of that lecture. Deming was an important architect of the quality control short courses that trained American industry in the new techniques during WWII. The graduates of these wartime courses formed a new professional society in 1945, the American Society for Quality Control, which elected Edwards as its first president. Deming travelled to Japan during the Allied Occupation and met with the Union of Japanese Scientists and Engineers (JUSE) in an effort to introduce SPC methods to Japanese industry. === 'Common' and 'special' sources of variation === Shewhart read the new statistical theories coming out of Britain, especially the work of William Sealy Gosset, Karl Pearson, and Ronald Fisher. However, he understood that data from physical processes seldom produced a normal distribution curve (that is, a Gaussian distribution or 'bell curve'). 
He discovered that data from measurements of variation in manufacturing did not always behave the same way as data from measurements of natural phenomena (for example, Brownian motion of particles). Shewhart concluded that while every process displays variation, some processes display variation that is natural to the process ("common" sources of variation); these processes he described as being in (statistical) control. Other processes additionally display variation that is not present in the causal system of the process at all times ("special" sources of variation), which Shewhart described as not in control. === Application to non-manufacturing processes === Statistical process control is appropriate to support any repetitive process, and has been implemented in many settings where, for example, ISO 9000 quality management systems are used, including financial auditing and accounting, IT operations, health care processes, and clerical processes such as loan arrangement and administration, customer billing, etc. Despite criticism of its use in design and development, it is well-placed to manage semi-automated data governance of high-volume data processing operations, for example in an enterprise data warehouse, or an enterprise data quality management system. In the 1988 Capability Maturity Model (CMM), the Software Engineering Institute suggested that SPC could be applied to software engineering processes. The Level 4 and Level 5 practices of the Capability Maturity Model Integration (CMMI) use this concept. The application of SPC to non-repetitive, knowledge-intensive processes, such as research and development or systems engineering, has encountered skepticism and remains controversial. In No Silver Bullet, Fred Brooks points out that the complexity, conformance requirements, changeability, and invisibility of software result in inherent and essential variation that cannot be removed. This implies that SPC is less effective in software development than in, e.g., manufacturing. == Variation in manufacturing == In manufacturing, quality is defined as conformance to specification. However, no two products or characteristics are ever exactly the same, because any process contains many sources of variability. In mass-manufacturing, traditionally, the quality of a finished article is ensured by post-manufacturing inspection of the product. Each article (or a sample of articles from a production lot) may be accepted or rejected according to how well it meets its design specifications. In contrast, SPC uses statistical tools to observe the performance of the production process in order to detect significant variations before they result in the production of a sub-standard article. Any source of variation at any point of time in a process will fall into one of two classes. (1) Common causes. 'Common' causes are sometimes referred to as 'non-assignable' or 'normal' sources of variation. The term refers to any source of variation that consistently acts on the process, of which there are typically many. Such causes collectively produce a statistically stable and repeatable distribution over time. (2) Special causes. 'Special' causes are sometimes referred to as 'assignable' sources of variation. The term refers to any factor causing variation that affects only some of the process output. They are often intermittent and unpredictable. Most processes have many sources of variation; most of them are minor and may be ignored. 
If the dominant assignable sources of variation are detected, potentially they can be identified and removed. When they are removed, the process is said to be 'stable'. When a process is stable, its variation should remain within a known set of limits. That is, at least, until another assignable source of variation occurs. For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of cereal. Some boxes will have slightly more than 500 grams, and some will have slightly less. When the package weights are measured, the data will demonstrate a distribution of net weights. If the production process, its inputs, or its environment (for example, the machine on the line) change, the distribution of the data will change. For example, as the cams and pulleys of the machinery wear, the cereal filling machine may put more than the specified amount of cereal into each box. Although this might benefit the customer, from the manufacturer's point of view it is wasteful, and increases the cost of production. If the manufacturer finds the change and its source in a timely manner, the change can be corrected (for example, the cams and pulleys replaced). From an SPC perspective, if the weight of each cereal box varies randomly, some higher and some lower, always within an acceptable range, then the process is considered stable. If the cams and pulleys of the machinery start to wear out, the weights of the cereal box might not be random. The degraded functionality of the cams and pulleys may lead to a non-random linear pattern of increasing cereal box weights. We call this common cause variation. If, however, all the cereal boxes suddenly weighed much more than average because of an unexpected malfunction of the cams and pulleys, this would be considered a special cause variation. == Application == The application of SPC involves three main phases of activity: Understanding the process and the specification limits. Eliminating assignable (special) sources of variation, so that the process is stable. Monitoring the ongoing production process, assisted by the use of control charts, to detect significant changes of mean or variation. The proper implementation of SPC has been limited, in part due to a lack of statistical expertise at many organizations. === Control charts === The data from measurements of variations at points on the process map is monitored using control charts. Control charts attempt to differentiate "assignable" ("special") sources of variation from "common" sources. "Common" sources, because they are an expected part of the process, are of much less concern to the manufacturer than "assignable" sources. Using control charts is a continuous activity, ongoing over time. ==== Stable process ==== When the process does not trigger any of the control chart "detection rules" for the control chart, it is said to be "stable". A process capability analysis may be performed on a stable process to predict the ability of the process to produce "conforming product" in the future. A stable process can be demonstrated by a process signature that is free of variances outside of the capability index. A process signature is the plotted points compared with the capability index. ==== Excessive variations ==== When the process triggers any of the control chart "detection rules", (or alternatively, the process capability is low), other activities may be performed to identify the source of the excessive variation. 
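Before turning to the root-cause tools listed next, the detection step itself can be sketched. The example below computes simplified Shewhart 3-sigma limits for subgroup means of the cereal-filling process described above and flags subgroups that fall outside them. The data are simulated for illustration, and textbook charts would additionally apply an unbiasing constant (such as c4) when estimating the within-subgroup standard deviation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data: subgroups of 5 cereal boxes sampled from a line
# targeting 500 g; the drift added at the end stands in for cam wear.
subgroups = rng.normal(500, 2, size=(30, 5))
subgroups[-3:] += np.linspace(3, 6, 3)[:, None]   # simulated upward drift

xbar = subgroups.mean(axis=1)                         # subgroup means
sigma_within = subgroups.std(axis=1, ddof=1).mean()   # simplified within-subgroup sigma

# Shewhart 3-sigma limits for the mean of subgroups of size n.
n = subgroups.shape[1]
center = xbar.mean()
ucl = center + 3 * sigma_within / np.sqrt(n)
lcl = center - 3 * sigma_within / np.sqrt(n)

out_of_control = np.where((xbar > ucl) | (xbar < lcl))[0]
print(f"center={center:.2f} g, LCL={lcl:.2f} g, UCL={ucl:.2f} g")
print("subgroups flagged by the simplest detection rule:", out_of_control)
```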
The tools used in these extra activities include: Ishikawa diagram, designed experiments, and Pareto charts. Designed experiments are a means of objectively quantifying the relative importance (strength) of sources of variation. Once the sources of (special cause) variation are identified, they can be minimized or eliminated. Steps to eliminating a source of variation might include: development of standards, staff training, error-proofing, and changes to the process itself or its inputs. ==== Process stability metrics ==== When monitoring many processes with control charts, it is sometimes useful to calculate quantitative measures of the stability of the processes. These metrics can then be used to identify/prioritize the processes that are most in need of corrective actions. These metrics can also be viewed as supplementing the traditional process capability metrics. Several metrics have been proposed, as described in Ramirez and Runger. They are (1) a Stability Ratio which compares the long-term variability to the short-term variability, (2) an ANOVA Test which compares the within-subgroup variation to the between-subgroup variation, and (3) an Instability Ratio which compares the number of subgroups that have one or more violations of the Western Electric rules to the total number of subgroups. == Mathematics of control charts == Digital control charts use logic-based rules that determine "derived values" which signal the need for correction. For example, derived value = last value + average absolute difference between the last N numbers. == See also == ANOVA Gauge R&R Distribution-free control chart Electronic design automation Industrial engineering Process Window Index Process capability index Quality assurance Reliability engineering Six sigma Stochastic control Total quality management == References == == Bibliography == == External links == MIT Course - Control of Manufacturing Processes Guthrie, William F. (2012). "NIST/SEMATECH e-Handbook of Statistical Methods". National Institute of Standards and Technology. doi:10.18434/M32189.
Wikipedia/Statistical_process_control
A scientific control is an experiment or observation designed to minimize the effects of variables other than the independent variable (i.e. confounding variables). This increases the reliability of the results, often through a comparison between control measurements and the other measurements. Scientific controls are a part of the scientific method. == Controlled experiments == Controls eliminate alternate explanations of experimental results, especially experimental errors and experimenter bias. Many controls are specific to the type of experiment being performed, as in the molecular markers used in SDS-PAGE experiments, and may simply have the purpose of ensuring that the equipment is working properly. The selection and use of proper controls to ensure that experimental results are valid (for example, absence of confounding variables) can be very difficult. Control measurements may also be used for other purposes: for example, a measurement of a microphone's background noise in the absence of a signal allows the noise to be subtracted from later measurements of the signal, thus producing a processed signal of higher quality. For example, if a researcher feeds an experimental artificial sweetener to sixty laboratory rats and observes that ten of them subsequently become sick, the underlying cause could be the sweetener itself or something unrelated. Other variables, which may not be readily obvious, may interfere with the experimental design. For instance, the artificial sweetener might be mixed with a dilutant and it might be the dilutant that causes the effect. To control for the effect of the dilutant, the same test is run twice; once with the artificial sweetener in the dilutant, and another done exactly the same way but using the dilutant alone. Now the experiment is controlled for the dilutant and the experimenter can distinguish between sweetener, dilutant, and non-treatment. Controls are most often necessary where a confounding factor cannot easily be separated from the primary treatments. For example, it may be necessary to use a tractor to spread fertilizer where there is no other practicable way to spread fertilizer. The simplest solution is to have a treatment where a tractor is driven over plots without spreading fertilizer; in that way, the effects of tractor traffic are controlled. The simplest types of control are negative and positive controls, and both are found in many different types of experiments. These two controls, when both are successful, are usually sufficient to eliminate most potential confounding variables: together they establish that the experiment produces a negative result when a negative result is expected, and a positive result when a positive result is expected. Other controls include vehicle controls, sham controls and comparative controls. == Confounding == Confounding is a critical issue in observational studies because it can lead to biased or misleading conclusions about relationships between variables. A confounder is an extraneous variable that is related to both the independent variable (treatment or exposure) and the dependent variable (outcome), potentially distorting the true association. If confounding is not properly accounted for, researchers might incorrectly attribute an effect to the exposure when it is actually due to another factor. This can result in incorrect policy recommendations, ineffective interventions, or flawed scientific understanding. 
For example, in a study examining the relationship between physical activity and heart disease, failure to control for diet, a potential confounder, could lead to an overestimation or underestimation of the true effect of exercise. Falsification tests are a robustness-checking technique used in observational studies to assess whether observed associations are likely due to confounding, bias, or model misspecification rather than a true causal effect. These tests help validate findings by applying the same analytical approach to a scenario where no effect is expected. If an association still appears where none should exist, it raises concerns that the primary analysis may suffer from confounding or other biases. Negative controls are one type of falsification test. The need to use negative controls usually arises in observational studies, where the study design can be questioned because of a potential confounding mechanism. A negative-control test can reject a study design, but it cannot validate it, either because there might be another confounding mechanism or because of low statistical power. Negative controls are increasingly used in the epidemiology literature, and they also show promise in social science fields such as economics. Negative controls are divided into two main categories: Negative Control Exposures (NCEs) and Negative Control Outcomes (NCOs). Lousdal et al. examined the effect of screening participation on death from breast cancer. They hypothesized that screening participants are healthier than non-participants and, therefore, already at baseline have a lower risk of breast-cancer death. Therefore, they used proxies for better health as negative-control outcomes (NCOs) and proxies for healthier behavior as negative-control exposures (NCEs). Death from causes other than breast cancer was taken as the NCO, as it is an outcome of better health that is not affected by breast cancer screening. Dental care participation was taken as the NCE, as it is assumed to be a good proxy for health-attentive behavior. == Negative control == Negative controls are variables that are meant to help when the study design is suspected to be invalid because of unmeasured confounders that are correlated with both the treatment and the outcome. Where there are only two possible outcomes, e.g. positive or negative, if the treatment group and the negative control (non-treatment group) both produce a negative result, it can be inferred that the treatment had no effect. If the treatment group and the negative control both produce a positive result, it can be inferred that a confounding variable is involved in the phenomenon under study, and the positive results are not solely due to the treatment. In other examples, outcomes might be measured as lengths, times, percentages, and so forth. In the drug testing example, we could measure the percentage of patients cured. In this case, the treatment is inferred to have no effect when the treatment group and the negative control produce the same results. Some improvement is expected in the placebo group due to the placebo effect, and this result sets the baseline upon which the treatment must improve. Even if the treatment group shows improvement, it needs to be compared to the placebo group. If the groups show the same effect, then the treatment was not responsible for the improvement (because the same number of patients were cured in the absence of the treatment). The treatment is only effective if the treatment group shows more improvement than the placebo group. 
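A minimal simulation can illustrate the falsification logic described above: an unmeasured confounder U drives both the exposure and a negative-control variable that the exposure does not causally affect, so a naive comparison shows a spurious association with the negative control and thereby flags the confounded design. The variables and effect sizes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative simulation: an unmeasured confounder U drives both the
# exposure A and the outcome Y; the negative-control variable NC is also
# driven by U but is, by construction, not causally affected by A.
n = 100_000
u = rng.normal(size=n)                            # unmeasured confounder
a = (u + rng.normal(size=n) > 0).astype(float)    # exposure influenced by U
y = 1.0 * a + 2.0 * u + rng.normal(size=n)        # true effect of A on Y is 1.0
nc = 2.0 * u + rng.normal(size=n)                 # no causal effect of A on NC

def unadjusted_effect(outcome, exposure):
    """Difference in outcome means between exposed and unexposed units."""
    return outcome[exposure == 1].mean() - outcome[exposure == 0].mean()

# The naive estimate for Y is biased upward, and the nonzero "effect" on
# the negative control reveals that the comparison is confounded.
print(f"naive estimate of A on Y : {unadjusted_effect(y, a):.2f}")
print(f"naive estimate of A on NC: {unadjusted_effect(nc, a):.2f}")
```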
=== Negative Control Exposure (NCE) === An NCE is a variable that should not causally affect the outcome, but may suffer from the same confounding as the exposure-outcome relationship in question. A priori, there should be no statistical association between the NCE and the outcome. If an association is found, it must arise through the unmeasured confounder; and since the NCE and the treatment share the same confounding mechanism, there is an alternative path, apart from the direct path from the treatment to the outcome. In that case, the study design is invalid. For example, Yerushalmy used husband's smoking as an NCE. The exposure was maternal smoking; the outcomes were various birth factors, such as incidence of low birth weight, length of pregnancy, and neonatal mortality rates. It is assumed that the husband's smoking shares common confounders, such as household health lifestyle, with the pregnant woman's smoking, but that it does not causally affect the fetus's development. Nonetheless, Yerushalmy found a statistical association, and as a result this casts doubt on the proposition that cigarette smoking causally interferes with intrauterine development of the fetus. ==== Differences Between Negative Control Exposures and Placebo ==== The term negative control is used when the study is based on observations, while a placebo is used as the non-treatment in randomized controlled trials. === Negative Control Outcome (NCO) === Negative Control Outcomes are the more popular type of negative controls. An NCO is a variable that is not causally affected by the treatment, but is suspected to share a similar confounding mechanism with the treatment-outcome relationship. If the study design is valid, there should be no statistical association between the NCO and the treatment. Thus, an association between them suggests that the design is invalid. For example, Jackson et al. used mortality from all causes outside of influenza season as an NCO in a study examining the influenza vaccine's effect on influenza-related deaths. A possible confounding mechanism is health status and lifestyle; for example, people who are healthier in general also tend to take the influenza vaccine. Jackson et al. found a preferential receipt of vaccine by relatively healthy seniors, and that differences in health status between vaccinated and unvaccinated groups lead to bias in estimates of influenza vaccine effectiveness. In a similar example, when discussing the impact of air pollutants on asthma hospital admissions, Sheppard et al. used non-elderly appendicitis hospital admissions as an NCO. ==== Formal Conditions ==== Given a treatment A {\displaystyle A} , an outcome Y {\displaystyle Y} , a set of control variables X {\displaystyle X} , and an unmeasured confounder U {\displaystyle U} for the A − Y {\displaystyle A-Y} relationship, Shi et al. presented formal conditions for a negative control outcome Y ~ {\displaystyle {\tilde {Y}}} : Stable Unit Treatment Value Assumption (SUTVA): holds for both Y {\displaystyle {Y}} and Y ~ {\displaystyle {\tilde {Y}}} with regard to A = a {\displaystyle A=a} . Latent Exchangeability: Y A = a ⊥ A | X , U {\displaystyle Y^{A=a}\perp A|\;X,U} Given X {\displaystyle X} and U {\displaystyle U} , the potential outcome Y A = a {\displaystyle Y^{A=a}} is independent of the treatment. Irrelevancy: ensures the irrelevancy of the treatment for the NCO. 
Y ~ A = a = Y ~ A = a ′ = Y ~ | U , X {\displaystyle {\tilde {Y}}^{A=a}={\tilde {Y}}^{A=a'}={\tilde {Y}}|\;U,X} : There is no causal effect of A {\displaystyle A} on Y ~ {\displaystyle {\tilde {Y}}} given X {\displaystyle X} and U {\displaystyle U} . Y ~ ⊥ A | U , X {\displaystyle {\tilde {Y}}\perp A|\;U,X} : The NCO is independent of the treatment given X {\displaystyle X} and U {\displaystyle U} . U-Comparability: Y ~ ⧸ ⊥ U | X {\displaystyle {\tilde {Y}}\not {\perp }U|\;X} The unmeasured confounders U {\displaystyle U} of the association between A {\displaystyle A} and Y {\displaystyle Y} are the same as those of the association between A {\displaystyle A} and Y ~ {\displaystyle {\tilde {Y}}} . Given assumptions 1-4, a non-null association between A {\displaystyle A} and Y ~ {\displaystyle {\tilde {Y}}} can be explained by U {\displaystyle U} , and not by another mechanism. A possible violation of Latent Exchangeability would be when only the people who are affected by a medicine take it, even if both X {\displaystyle X} and U {\displaystyle U} are the same. For example, we would expect that, given age and medical history ( X {\displaystyle X} ) and general health awareness ( U {\displaystyle U} ), the intake of the influenza vaccine A {\displaystyle A} will be independent of the potential influenza-related deaths Y A = a {\displaystyle Y^{A=a}} . Otherwise, the Latent Exchangeability assumption is violated, and no identification can be made. A violation of Irrelevancy occurs when there is a causal effect of A {\displaystyle A} on Y ~ {\displaystyle {\tilde {Y}}} . For example, we would expect that given X {\displaystyle X} and U {\displaystyle U} , the influenza vaccine does not influence all-cause mortality. If, however, during the influenza vaccine medical visit the physician also performs a general physical examination, recommends good health habits, and prescribes vitamins and essential drugs, then there is likely a causal effect of A {\displaystyle A} on Y ~ {\displaystyle {\tilde {Y}}} (conditional on X {\displaystyle X} and U {\displaystyle U} ). Therefore, Y ~ {\displaystyle {\tilde {Y}}} cannot be used as an NCO, as the test might fail even if the causal design is valid. U-Comparability is violated when Y ~ ⊥ U {\displaystyle {\tilde {Y}}{\perp }U} , in which case the lack of association between A {\displaystyle A} and Y ~ {\displaystyle {\tilde {Y}}} does not provide any evidence about the validity of the design. This violation occurs when we choose a poor NCO that is uncorrelated, or only very weakly correlated, with the unmeasured confounders. == Positive control == Positive controls are often used to assess test validity. For example, to assess a new test's ability to detect a disease (its sensitivity), we can compare it against a different test that is already known to work. The well-established test is a positive control since we already know that the answer to the question (whether the test works) is yes. Similarly, in an enzyme assay to measure the amount of an enzyme in a set of extracts, a positive control would be an assay containing a known quantity of the purified enzyme (while a negative control would contain no enzyme). The positive control should give a large amount of enzyme activity, while the negative control should give very low to no activity. 
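As a small illustration of how such controls might gate the interpretation of an assay run, the sketch below accepts a run only if the positive control shows clear activity and the negative control shows essentially none. The readings and thresholds are invented for the example:

```python
# A minimal sketch of using control readings to validate an assay run
# before interpreting the samples; the values and thresholds are invented.
def run_is_valid(positive_reading, negative_reading,
                 min_positive=1.0, max_negative=0.1):
    """Accept the run only if both controls behaved as expected."""
    return positive_reading >= min_positive and negative_reading <= max_negative

readings = {"positive_control": 1.42, "negative_control": 0.03,
            "sample_1": 0.87, "sample_2": 0.05}

if run_is_valid(readings["positive_control"], readings["negative_control"]):
    print("Controls passed; sample readings can be interpreted.")
else:
    print("Controls failed; repeat the experiment before interpreting samples.")
```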
If the positive control does not produce the expected result, there may be something wrong with the experimental procedure, and the experiment is repeated. For difficult or complicated experiments, the result from the positive control can also help in comparison to previous experimental results. For example, if the well-established disease test was determined to have the same effect as found by previous experimenters, this indicates that the experiment is being performed in the same way as the previous experimenters did. When possible, multiple positive controls may be used—if there is more than one disease test that is known to be effective, more than one might be tested. Multiple positive controls also allow finer comparisons of the results (calibration, or standardization) if the expected results from the positive controls have different sizes. For example, in the enzyme assay discussed above, a standard curve may be produced by making many different samples with different quantities of the enzyme. == Randomization == In randomization, the groups that receive different experimental treatments are determined randomly. While this does not ensure that there are no differences between the groups, it ensures that the differences are distributed equally, thus correcting for systematic errors. For example, in experiments where crop yield is affected by uncontrolled variables (e.g. soil fertility), the experiment can be controlled by assigning the treatments to randomly selected plots of land. This mitigates the effect of variations in soil composition on the yield. == Blind experiments == Blinding is the practice of withholding information that may bias an experiment. For example, participants may not know who received an active treatment and who received a placebo. If this information were to become available to trial participants, patients could receive a larger placebo effect, researchers could influence the experiment to meet their expectations (the observer effect), and evaluators could be subject to confirmation bias. A blind can be imposed on any participant of an experiment, including subjects, researchers, technicians, data analysts, and evaluators. In some cases, sham surgery may be necessary to achieve blinding. During the course of an experiment, a participant becomes unblinded if they deduce or otherwise obtain information that has been masked to them. Unblinding that occurs before the conclusion of a study is a source of experimental error, as the bias that was eliminated by blinding is re-introduced. Unblinding is common in blind experiments and must be measured and reported. Meta-research has revealed high levels of unblinding in pharmacological trials. In particular, antidepressant trials are poorly blinded. Reporting guidelines recommend that all studies assess and report unblinding. In practice, very few studies assess unblinding. Blinding is an important tool of the scientific method, and is used in many fields of research. In some fields, such as medicine, it is considered essential. In clinical research, a trial that is not blinded is called an open trial. == See also == False positives and false negatives Designed experiment Controlling for a variable James Lind cured scurvy using a controlled experiment that has been described as the first clinical trial. Randomized controlled trial Wait list control group == References == == External links == "Control". Encyclopædia Britannica. Vol. 7 (11th ed.). 1911.
Wikipedia/Scientific_control