In mathematics, the Laplace transform, named after Pierre-Simon Laplace, is an integral transform that converts a function of a real variable (usually $t$, in the time domain) to a function of a complex variable $s$ (in the complex-valued frequency domain, also known as the s-domain, or s-plane). The transform is useful for converting differentiation and integration in the time domain into much easier multiplication and division in the Laplace domain (analogous to how logarithms are useful for simplifying multiplication and division into addition and subtraction). This gives the transform many applications in science and engineering, mostly as a tool for solving linear differential equations and dynamical systems by simplifying ordinary differential equations and integral equations into algebraic polynomial equations, and by simplifying convolution into multiplication. Once solved, the inverse Laplace transform converts the result back to the original domain. The Laplace transform is defined (for suitable functions $f$) by the integral $${\mathcal {L}}\{f\}(s)=\int _{0}^{\infty }f(t)e^{-st}\,dt,$$ where $s$ is a complex number. It is related to many other transforms, most notably the Fourier transform and the Mellin transform. Formally, the Laplace transform is converted into a Fourier transform by the substitution $s=i\omega$, where $\omega$ is real. However, unlike the Fourier transform, which gives the decomposition of a function into its components in each frequency, the Laplace transform of a function with suitable decay is an analytic function, and so has a convergent power series, the coefficients of which give the decomposition of a function into its moments. Also unlike the Fourier transform, when regarded in this way as an analytic function, the techniques of complex analysis, and especially contour integrals, can be used for calculations.

== History ==

The Laplace transform is named after mathematician and astronomer Pierre-Simon, Marquis de Laplace, who used a similar transform in his work on probability theory. Laplace wrote extensively about the use of generating functions (1814), and the integral form of the Laplace transform evolved naturally as a result. Laplace's use of generating functions was similar to what is now known as the z-transform, and he gave little attention to the continuous variable case, which was discussed by Niels Henrik Abel. From 1744, Leonhard Euler investigated integrals of the form $$z=\int X(x)e^{ax}\,dx\quad {\text{ and }}\quad z=\int X(x)x^{A}\,dx$$ as solutions of differential equations, introducing in particular the gamma function. Joseph-Louis Lagrange was an admirer of Euler and, in his work on integrating probability density functions, investigated expressions of the form $$\int X(x)e^{-ax}a^{x}\,dx,$$ which resembles a Laplace transform. These types of integrals seem first to have attracted Laplace's attention in 1782, where he was following in the spirit of Euler in using the integrals themselves as solutions of equations. However, in 1785, Laplace took the critical step forward when, rather than simply looking for a solution in the form of an integral, he started to apply the transforms in the sense that was later to become popular.
He used an integral of the form $$\int x^{s}\varphi (x)\,dx,$$ akin to a Mellin transform, to transform the whole of a difference equation, in order to look for solutions of the transformed equation. He then went on to apply the Laplace transform in the same way and started to derive some of its properties, beginning to appreciate its potential power. Laplace also recognised that Joseph Fourier's method of Fourier series for solving the diffusion equation could only apply to a limited region of space, because those solutions were periodic. In 1809, Laplace applied his transform to find solutions that diffused indefinitely in space. In 1821, Cauchy developed an operational calculus for the Laplace transform that could be used to study linear differential equations in much the same way the transform is now used in basic engineering. This method was popularized, and perhaps rediscovered, by Oliver Heaviside around the turn of the century. Bernhard Riemann used the Laplace transform in his 1859 paper On the Number of Primes Less Than a Given Magnitude, in which he also developed the inversion theorem. Riemann used the Laplace transform to develop the functional equation of the Riemann zeta function, and this method is still used to relate the modular transformation law of the Jacobi theta function to the functional equation. Hjalmar Mellin was among the first to study the Laplace transform rigorously, in the Karl Weierstrass school of analysis, and to apply it to the study of differential equations and special functions, at the turn of the 20th century. At around the same time, Heaviside was busy with his operational calculus. Thomas Joannes Stieltjes considered a generalization of the Laplace transform connected to his work on moments. Other contributors in this time period included Mathias Lerch, Oliver Heaviside, and Thomas Bromwich. In 1929, Vannevar Bush and Norbert Wiener published Operational Circuit Analysis as a text for engineering analysis of electrical circuits, applying both Fourier transforms and operational calculus, and in which they included one of the first predecessors of the modern table of Laplace transforms. In 1934, Raymond Paley and Norbert Wiener published the important work Fourier Transforms in the Complex Domain, about what is now called the Laplace transform (see below). Also during the 1930s, the Laplace transform was instrumental in G. H. Hardy and John Edensor Littlewood's study of Tauberian theorems, and this application was later expounded on by Widder (1941), who developed other aspects of the theory, such as a new method for inversion. Edward Charles Titchmarsh wrote the influential Introduction to the Theory of the Fourier Integral (1937). The current widespread use of the transform (mainly in engineering) came about during and soon after World War II, replacing the earlier Heaviside operational calculus. The advantages of the Laplace transform had been emphasized by Gustav Doetsch, to whom the name Laplace transform is apparently due.

== Formal definition ==

The Laplace transform of a function f(t), defined for all real numbers t ≥ 0, is the function F(s), a unilateral transform defined by $$F(s)=\int _{0}^{\infty }f(t)e^{-st}\,dt,$$ where s is a complex frequency-domain parameter $s=\sigma +i\omega$, with real numbers σ and ω.
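The defining integral is easy to evaluate in a computer algebra system. As a minimal sketch (using Python's SymPy library; the example functions and variable names are chosen here only for illustration), the following computes the transform of two simple functions:

    from sympy import symbols, laplace_transform, exp, sin

    t = symbols('t', positive=True)
    s = symbols('s')
    a = symbols('a', positive=True)

    # L{exp(-a t)} = 1/(s + a); SymPy also reports the abscissa of convergence
    print(laplace_transform(exp(-a*t), t, s))   # (1/(a + s), -a, True)

    # L{sin(t)} = 1/(s**2 + 1)
    print(laplace_transform(sin(t), t, s))      # (1/(s**2 + 1), 0, True)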
An alternate notation for the Laplace transform is ${\mathcal {L}}\{f\}$ instead of F, often written as $F(s)={\mathcal {L}}\{f(t)\}$ in an abuse of notation. The meaning of the integral depends on the types of functions of interest. A necessary condition for the existence of the integral is that f must be locally integrable on [0, ∞). For locally integrable functions that decay at infinity or are of exponential type ($|f(t)|\leq Ae^{B|t|}$), the integral can be understood to be a (proper) Lebesgue integral. However, for many applications it is necessary to regard it as a conditionally convergent improper integral at ∞. Still more generally, the integral can be understood in a weak sense, and this is dealt with below. One can define the Laplace transform of a finite Borel measure μ by the Lebesgue integral $${\mathcal {L}}\{\mu \}(s)=\int _{[0,\infty )}e^{-st}\,d\mu (t).$$ An important special case is where μ is a probability measure, for example, the Dirac delta function. In operational calculus, the Laplace transform of a measure is often treated as though the measure came from a probability density function f. In that case, to avoid potential confusion, one often writes $${\mathcal {L}}\{f\}(s)=\int _{0^{-}}^{\infty }f(t)e^{-st}\,dt,$$ where the lower limit of 0− is shorthand notation for $$\lim _{\varepsilon \to 0^{+}}\int _{-\varepsilon }^{\infty }.$$ This limit emphasizes that any point mass located at 0 is entirely captured by the Laplace transform. Although with the Lebesgue integral it is not necessary to take such a limit, it does appear more naturally in connection with the Laplace–Stieltjes transform.

=== Bilateral Laplace transform ===

When one says "the Laplace transform" without qualification, the unilateral or one-sided transform is usually intended. The Laplace transform can be alternatively defined as the bilateral Laplace transform, or two-sided Laplace transform, by extending the limits of integration to be the entire real axis. If that is done, the common unilateral transform simply becomes a special case of the bilateral transform, where the definition of the function being transformed is multiplied by the Heaviside step function. The bilateral Laplace transform F(s) is defined as follows: $$F(s)=\int _{-\infty }^{\infty }e^{-st}f(t)\,dt.$$ An alternate notation for the bilateral Laplace transform is ${\mathcal {B}}\{f\}$, instead of F.

=== Inverse Laplace transform ===

Two integrable functions have the same Laplace transform only if they differ on a set of Lebesgue measure zero. This means that, on the range of the transform, there is an inverse transform. In fact, besides integrable functions, the Laplace transform is a one-to-one mapping from one function space into another for many other function spaces as well, although there is usually no easy characterization of the range. Typical function spaces in which this is true include the spaces of bounded continuous functions, the space L∞(0, ∞), or more generally tempered distributions on (0, ∞). The Laplace transform is also defined and injective for suitable spaces of tempered distributions. In these cases, the image of the Laplace transform lives in a space of analytic functions in the region of convergence.
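Computer algebra systems can invert transforms on this class of functions directly. A small illustration (again SymPy; the rational function chosen is arbitrary):

    from sympy import symbols, inverse_laplace_transform

    t = symbols('t')
    s = symbols('s')

    # The inverse transform of 1/(s + 1) recovers exp(-t), times the
    # Heaviside step, since the unilateral transform only sees t >= 0
    print(inverse_laplace_transform(1/(s + 1), s, t))   # exp(-t)*Heaviside(t)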
The inverse Laplace transform is given by the following complex integral, which is known by various names (the Bromwich integral, the Fourier–Mellin integral, and Mellin's inverse formula): $$f(t)={\mathcal {L}}^{-1}\{F\}(t)={\frac {1}{2\pi i}}\lim _{T\to \infty }\int _{\gamma -iT}^{\gamma +iT}e^{st}F(s)\,ds,$$ where γ is a real number so that the contour path of integration is in the region of convergence of F(s). In most applications, the contour can be closed, allowing the use of the residue theorem. An alternative formula for the inverse Laplace transform is given by Post's inversion formula. The limit here is interpreted in the weak-* topology. In practice, it is typically more convenient to decompose a Laplace transform into known transforms of functions obtained from a table and construct the inverse by inspection.

=== Probability theory ===

In pure and applied probability, the Laplace transform is defined as an expected value. If X is a random variable with probability density function f, then the Laplace transform of f is given by the expectation $${\mathcal {L}}\{f\}(s)=\operatorname {E} \left[e^{-sX}\right],$$ where $\operatorname {E} [r]$ is the expectation of the random variable $r$. By convention, this is referred to as the Laplace transform of the random variable X itself. Here, replacing s by −t gives the moment generating function of X. The Laplace transform has applications throughout probability theory, including first passage times of stochastic processes such as Markov chains, and renewal theory. Of particular use is the ability to recover the cumulative distribution function of a continuous random variable X by means of the Laplace transform as follows: $$F_{X}(x)={\mathcal {L}}^{-1}\left\{{\frac {1}{s}}\operatorname {E} \left[e^{-sX}\right]\right\}(x)={\mathcal {L}}^{-1}\left\{{\frac {1}{s}}{\mathcal {L}}\{f\}(s)\right\}(x).$$

=== Algebraic construction ===

The Laplace transform can be alternatively defined in a purely algebraic manner by applying a field of fractions construction to the convolution ring of functions on the positive half-line. The resulting space of abstract operators is exactly equivalent to Laplace space, but in this construction the forward and reverse transforms never need to be explicitly defined (avoiding the related difficulties with proving convergence).

== Region of convergence ==

If f is a locally integrable function (or more generally a Borel measure locally of bounded variation), then the Laplace transform F(s) of f converges provided that the limit $$\lim _{R\to \infty }\int _{0}^{R}f(t)e^{-st}\,dt$$ exists. The Laplace transform converges absolutely if the integral $$\int _{0}^{\infty }\left|f(t)e^{-st}\right|\,dt$$ exists as a proper Lebesgue integral. The Laplace transform is usually understood as conditionally convergent, meaning that it converges in the former but not in the latter sense. The set of values for which F(s) converges absolutely is either of the form Re(s) > a or Re(s) ≥ a, where a is an extended real constant with −∞ ≤ a ≤ ∞ (a consequence of the dominated convergence theorem). The constant a is known as the abscissa of absolute convergence, and depends on the growth behavior of f(t). Analogously, the two-sided transform converges absolutely in a strip of the form a < Re(s) < b, possibly including the lines Re(s) = a or Re(s) = b.
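The abscissa of absolute convergence can be seen concretely in a computer algebra computation; a small SymPy sketch (the growing exponential is chosen to make the abscissa nontrivial):

    from sympy import symbols, laplace_transform, exp

    t, s = symbols('t s')

    # L{exp(2t)} = 1/(s - 2), with abscissa of absolute convergence a = 2,
    # i.e. the transform converges for Re(s) > 2
    F, a, cond = laplace_transform(exp(2*t), t, s)
    print(F, a)   # 1/(s - 2), 2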
The subset of values of s for which the Laplace transform converges absolutely is called the region of absolute convergence, or the domain of absolute convergence. In the two-sided case, it is sometimes called the strip of absolute convergence. The Laplace transform is analytic in the region of absolute convergence: this is a consequence of Fubini's theorem and Morera's theorem. Similarly, the set of values for which F(s) converges (conditionally or absolutely) is known as the region of conditional convergence, or simply the region of convergence (ROC). If the Laplace transform converges (conditionally) at s = s0, then it automatically converges for all s with Re(s) > Re(s0). Therefore, the region of convergence is a half-plane of the form Re(s) > a, possibly including some points of the boundary line Re(s) = a. In the region of convergence Re(s) > Re(s0), the Laplace transform of f can be expressed by integrating by parts as the integral $$F(s)=(s-s_{0})\int _{0}^{\infty }e^{-(s-s_{0})t}\beta (t)\,dt,\quad \beta (u)=\int _{0}^{u}e^{-s_{0}t}f(t)\,dt.$$ That is, F(s) can effectively be expressed, in the region of convergence, as the absolutely convergent Laplace transform of some other function. In particular, it is analytic. There are several Paley–Wiener theorems concerning the relationship between the decay properties of f and the properties of the Laplace transform within the region of convergence. In engineering applications, a function corresponding to a linear time-invariant (LTI) system is stable if every bounded input produces a bounded output. This is equivalent to the absolute convergence of the Laplace transform of the impulse response function in the region Re(s) ≥ 0. As a result, LTI systems are stable provided that the poles of the Laplace transform of the impulse response function have negative real part. The ROC is used in determining the causality and stability of a system.

== Properties and theorems ==

The Laplace transform's key property is that it converts differentiation and integration in the time domain into multiplication and division by s in the Laplace domain. Thus, the Laplace variable s is also known as an operator variable in the Laplace domain: either the derivative operator or (for s−1) the integration operator. Given the functions f(t) and g(t), and their respective Laplace transforms F(s) and G(s), $$f(t)={\mathcal {L}}^{-1}\{F(s)\},\qquad g(t)={\mathcal {L}}^{-1}\{G(s)\},$$ the unilateral Laplace transform satisfies a number of standard properties; two of the most important are the initial and final value theorems.

Initial value theorem: $$f(0^{+})=\lim _{s\to \infty }sF(s).$$

Final value theorem: $$f(\infty )=\lim _{s\to 0}sF(s),$$ if all poles of $sF(s)$ are in the left half-plane. The final value theorem is useful because it gives the long-term behaviour without having to perform partial fraction decompositions (or other difficult algebra). If F(s) has a pole in the right-hand plane or poles on the imaginary axis (e.g., if $f(t)=e^{t}$ or $f(t)=\sin(t)$), then the behaviour of this formula is undefined.
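The final value theorem is easy to check symbolically; a sketch with SymPy (the example function f(t) = 1 − e^{−t} is chosen arbitrarily, and sF(s) has its only pole at s = −1, so the theorem applies):

    from sympy import symbols, limit, exp, laplace_transform

    t = symbols('t', positive=True)
    s = symbols('s')

    # f(t) = 1 - exp(-t) tends to 1 as t -> oo; its transform is
    # F(s) = 1/s - 1/(s + 1), and s*F(s) -> 1 as s -> 0, as predicted
    F = laplace_transform(1 - exp(-t), t, s, noconds=True)
    print(limit(s*F, s, 0))   # 1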
=== Relation to power series ===

The Laplace transform can be viewed as a continuous analogue of a power series. If a(n) is a discrete function of a positive integer n, then the power series associated to a(n) is the series $$\sum _{n=0}^{\infty }a(n)x^{n},$$ where x is a real variable (see Z-transform). Replacing summation over n with integration over t, a continuous version of the power series becomes $$\int _{0}^{\infty }f(t)x^{t}\,dt,$$ where the discrete function a(n) is replaced by the continuous one f(t). Changing the base of the power from x to e gives $$\int _{0}^{\infty }f(t)\left(e^{\ln x}\right)^{t}\,dt.$$ For this to converge for, say, all bounded functions f, it is necessary to require that ln x < 0. Making the substitution −s = ln x gives just the Laplace transform: $$\int _{0}^{\infty }f(t)e^{-st}\,dt.$$ In other words, the Laplace transform is a continuous analog of a power series, in which the discrete parameter n is replaced by the continuous parameter t, and x is replaced by e−s.

=== Relation to moments ===

The quantities $$\mu _{n}=\int _{0}^{\infty }t^{n}f(t)\,dt$$ are the moments of the function f. If the first n moments of f converge absolutely, then by repeated differentiation under the integral, $$(-1)^{n}({\mathcal {L}}f)^{(n)}(0)=\mu _{n}.$$ This is of special significance in probability theory, where the moments of a random variable X are given by the expectation values $\mu _{n}=\operatorname {E} [X^{n}]$. Then the relation $$\mu _{n}=(-1)^{n}{\frac {d^{n}}{ds^{n}}}\operatorname {E} \left[e^{-sX}\right](0)$$ holds.

=== Transform of a function's derivative ===

It is often convenient to use the differentiation property of the Laplace transform to find the transform of a function's derivative. This can be derived from the basic expression for a Laplace transform as follows: $${\begin{aligned}{\mathcal {L}}\left\{f(t)\right\}&=\int _{0^{-}}^{\infty }e^{-st}f(t)\,dt\\&=\left[{\frac {f(t)e^{-st}}{-s}}\right]_{0^{-}}^{\infty }-\int _{0^{-}}^{\infty }{\frac {e^{-st}}{-s}}f'(t)\,dt\quad {\text{(by parts)}}\\&=\left[-{\frac {f(0^{-})}{-s}}\right]+{\frac {1}{s}}{\mathcal {L}}\left\{f'(t)\right\},\end{aligned}}$$ yielding $${\mathcal {L}}\{f'(t)\}=s\cdot {\mathcal {L}}\{f(t)\}-f(0^{-}),$$ and in the bilateral case, $${\mathcal {L}}\{f'(t)\}=s\int _{-\infty }^{\infty }e^{-st}f(t)\,dt=s\cdot {\mathcal {L}}\{f(t)\}.$$ The general result $${\mathcal {L}}\left\{f^{(n)}(t)\right\}=s^{n}\cdot {\mathcal {L}}\{f(t)\}-s^{n-1}f(0^{-})-\cdots -f^{(n-1)}(0^{-}),$$ where $f^{(n)}$ denotes the nth derivative of f, can then be established with an inductive argument.
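The differentiation rule can be verified symbolically for a specific function; a small SymPy sketch (the choice f(t) = sin(t), with f(0) = 0, is arbitrary):

    from sympy import symbols, laplace_transform, sin, cos, simplify

    t = symbols('t', positive=True)
    s = symbols('s')

    # For f(t) = sin(t): L{f'} should equal s*L{f} - f(0) = s/(s**2 + 1)
    Lf = laplace_transform(sin(t), t, s, noconds=True)       # 1/(s**2 + 1)
    Lfp = laplace_transform(cos(t), t, s, noconds=True)      # s/(s**2 + 1)
    print(simplify(Lfp - s*Lf))   # 0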
=== Evaluating integrals over the positive real axis ===

A useful property of the Laplace transform is the following: $$\int _{0}^{\infty }f(x)g(x)\,dx=\int _{0}^{\infty }({\mathcal {L}}f)(s)\cdot ({\mathcal {L}}^{-1}g)(s)\,ds,$$ under suitable assumptions on the behaviour of $f,g$ in a right neighbourhood of $0$ and on the decay rate of $f,g$ in a left neighbourhood of $\infty$. The above formula is a variation of integration by parts, with the operators $\frac{d}{dx}$ and $\int \,dx$ being replaced by $\mathcal {L}$ and $\mathcal {L}^{-1}$. Let us prove the equivalent formulation: $$\int _{0}^{\infty }({\mathcal {L}}f)(x)g(x)\,dx=\int _{0}^{\infty }f(s)({\mathcal {L}}g)(s)\,ds.$$ By plugging in $$({\mathcal {L}}f)(x)=\int _{0}^{\infty }f(s)e^{-sx}\,ds,$$ the left-hand side turns into $$\int _{0}^{\infty }\int _{0}^{\infty }f(s)g(x)e^{-sx}\,ds\,dx,$$ but assuming Fubini's theorem holds, by reversing the order of integration we get the desired right-hand side. This method can be used to compute integrals that would otherwise be difficult to compute using elementary methods of real calculus. For example, $$\int _{0}^{\infty }{\frac {\sin x}{x}}\,dx=\int _{0}^{\infty }{\mathcal {L}}(1)(x)\sin x\,dx=\int _{0}^{\infty }1\cdot {\mathcal {L}}(\sin )(x)\,dx=\int _{0}^{\infty }{\frac {dx}{x^{2}+1}}={\frac {\pi }{2}}.$$
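Both sides of this example can be checked independently in a computer algebra system; a brief SymPy sketch:

    from sympy import symbols, integrate, sin, oo

    x = symbols('x', positive=True)

    # Left-hand side: the Dirichlet integral itself
    print(integrate(sin(x)/x, (x, 0, oo)))        # pi/2

    # Right-hand side: integral of L{sin}(x) = 1/(x**2 + 1)
    print(integrate(1/(x**2 + 1), (x, 0, oo)))    # pi/2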
== Relationship to other transforms ==

=== Laplace–Stieltjes transform ===

The (unilateral) Laplace–Stieltjes transform of a function g : ℝ → ℝ is defined by the Lebesgue–Stieltjes integral $$\{{\mathcal {L}}^{*}g\}(s)=\int _{0}^{\infty }e^{-st}\,dg(t).$$ The function g is assumed to be of bounded variation. If g is the antiderivative of f, $$g(x)=\int _{0}^{x}f(t)\,dt,$$ then the Laplace–Stieltjes transform of g and the Laplace transform of f coincide. In general, the Laplace–Stieltjes transform is the Laplace transform of the Stieltjes measure associated to g. So in practice, the only distinction between the two transforms is that the Laplace transform is thought of as operating on the density function of the measure, whereas the Laplace–Stieltjes transform is thought of as operating on its cumulative distribution function.

=== Fourier transform ===

Let $f$ be a complex-valued Lebesgue integrable function supported on $[0,\infty )$, and let $F(s)={\mathcal {L}}f(s)$ be its Laplace transform. Then, within the region of convergence, we have $$F(\sigma +i\tau )=\int _{0}^{\infty }f(t)e^{-\sigma t}e^{-i\tau t}\,dt,$$ which is the Fourier transform of the function $f(t)e^{-\sigma t}$. Indeed, the Fourier transform is a special case (under certain conditions) of the bilateral Laplace transform. The main difference is that the Fourier transform of a function is a complex function of a real variable (frequency), while the Laplace transform of a function is a complex function of a complex variable. The Laplace transform is usually restricted to transformation of functions of t with t ≥ 0. A consequence of this restriction is that the Laplace transform of a function is a holomorphic function of the variable s. Unlike the Fourier transform, the Laplace transform of a distribution is generally a well-behaved function. Techniques of complex variables can also be used to directly study Laplace transforms. As a holomorphic function, the Laplace transform has a power series representation. This power series expresses a function as a linear superposition of moments of the function. This perspective has applications in probability theory. Formally, the Fourier transform is equivalent to evaluating the bilateral Laplace transform with imaginary argument s = iω when the condition explained below is fulfilled: $${\hat {f}}(\omega )={\mathcal {F}}\{f(t)\}={\mathcal {L}}\{f(t)\}|_{s=i\omega }=F(s)|_{s=i\omega }=\int _{-\infty }^{\infty }e^{-i\omega t}f(t)\,dt.$$ This convention of the Fourier transform (${\hat {f}}_{3}(\omega )$ in Fourier transform § Other conventions) requires a factor of 1/2π on the inverse Fourier transform. This relationship between the Laplace and Fourier transforms is often used to determine the frequency spectrum of a signal or dynamical system. The above relation is valid as stated if and only if the region of convergence (ROC) of F(s) contains the imaginary axis, σ = 0. For example, the function f(t) = cos(ω0t) has a Laplace transform $F(s)=s/(s^{2}+\omega _{0}^{2})$ whose ROC is Re(s) > 0. As s = iω0 is a pole of F(s), substituting s = iω in F(s) does not yield the Fourier transform of f(t)u(t), which contains terms proportional to the Dirac delta functions δ(ω ± ω0). However, a relation of the form $$\lim _{\sigma \to 0^{+}}F(\sigma +i\omega )={\hat {f}}(\omega )$$ holds under much weaker conditions. For instance, this holds for the above example provided that the limit is understood as a weak limit of measures (see vague topology). General conditions relating the limit of the Laplace transform of a function on the boundary to the Fourier transform take the form of Paley–Wiener theorems.

=== Mellin transform ===

The Mellin transform and its inverse are related to the two-sided Laplace transform by a simple change of variables. If in the Mellin transform $$G(s)={\mathcal {M}}\{g(\theta )\}=\int _{0}^{\infty }\theta ^{s}g(\theta )\,{\frac {d\theta }{\theta }}$$ we set θ = e−t, we get a two-sided Laplace transform.

=== Z-transform ===

The unilateral or one-sided Z-transform is simply the Laplace transform of an ideally sampled signal with the substitution of $$z{\stackrel {\mathrm {def} }{{}={}}}e^{sT},$$ where T = 1/fs is the sampling interval (in units of time, e.g., seconds) and fs is the sampling rate (in samples per second or hertz).
Let $$\Delta _{T}(t)\ {\stackrel {\mathrm {def} }{=}}\ \sum _{n=0}^{\infty }\delta (t-nT)$$ be a sampling impulse train (also called a Dirac comb) and $$x_{q}(t)\ {\stackrel {\mathrm {def} }{=}}\ x(t)\Delta _{T}(t)=x(t)\sum _{n=0}^{\infty }\delta (t-nT)=\sum _{n=0}^{\infty }x(nT)\delta (t-nT)=\sum _{n=0}^{\infty }x[n]\delta (t-nT)$$ be the sampled representation of the continuous-time x(t), where $$x[n]\ {\stackrel {\mathrm {def} }{=}}\ x(nT).$$ The Laplace transform of the sampled signal xq(t) is $${\begin{aligned}X_{q}(s)&=\int _{0^{-}}^{\infty }x_{q}(t)e^{-st}\,dt\\&=\int _{0^{-}}^{\infty }\sum _{n=0}^{\infty }x[n]\delta (t-nT)e^{-st}\,dt\\&=\sum _{n=0}^{\infty }x[n]\int _{0^{-}}^{\infty }\delta (t-nT)e^{-st}\,dt\\&=\sum _{n=0}^{\infty }x[n]e^{-nsT}.\end{aligned}}$$ This is the precise definition of the unilateral Z-transform of the discrete function x[n], $$X(z)=\sum _{n=0}^{\infty }x[n]z^{-n},$$ with the substitution of z → esT. Comparing the last two equations, we find the relationship between the unilateral Z-transform and the Laplace transform of the sampled signal, $$X_{q}(s)=X(z){\Big |}_{z=e^{sT}}.$$ The similarity between the Z- and Laplace transforms is expanded upon in the theory of time scale calculus.
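This correspondence can be spot-checked on a concrete sequence; a brief SymPy sketch (the geometric sequence x[n] = (1/2)^n and the values T = 1, s = 1 are chosen only for illustration):

    from sympy import Sum, symbols, exp, oo, Rational

    n = symbols('n', integer=True, nonnegative=True)

    # Concrete spot check: x[n] = (1/2)**n, sampling interval T = 1, s = 1
    a, T, s_val = Rational(1, 2), 1, 1
    Xq = Sum(a**n * exp(-n*s_val*T), (n, 0, oo)).doit()  # transform of the sampled signal

    # Z-transform of x[n] = a**n is z/(z - a); substitute z = exp(s*T)
    z = exp(s_val*T)
    print(Xq.equals(z/(z - a)))   # True: X_q(s) = X(z) at z = exp(s*T)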
=== Borel transform ===

The integral form of the Borel transform $$F(s)=\int _{0}^{\infty }f(z)e^{-sz}\,dz$$ is a special case of the Laplace transform for f an entire function of exponential type, meaning that $$|f(z)|\leq Ae^{B|z|}$$ for some constants A and B. The generalized Borel transform allows a different weighting function to be used, rather than the exponential function, to transform functions not of exponential type. Nachbin's theorem gives necessary and sufficient conditions for the Borel transform to be well defined.

=== Fundamental relationships ===

Since an ordinary Laplace transform can be written as a special case of a two-sided transform, and since the two-sided transform can be written as the sum of two one-sided transforms, the theories of the Laplace, Fourier, Mellin, and Z-transforms are at bottom the same subject. However, a different point of view and different characteristic problems are associated with each of these four major integral transforms.

== Table of selected Laplace transforms ==

The following table provides Laplace transforms for many common functions of a single variable. For definitions and explanations, see the Explanatory Notes at the end of the table. Because the Laplace transform is a linear operator: the Laplace transform of a sum is the sum of the Laplace transforms of each term, $${\mathcal {L}}\{f(t)+g(t)\}={\mathcal {L}}\{f(t)\}+{\mathcal {L}}\{g(t)\},$$ and the Laplace transform of a multiple of a function is that multiple times the Laplace transform of that function, $${\mathcal {L}}\{af(t)\}=a{\mathcal {L}}\{f(t)\}.$$ Using this linearity, and various trigonometric, hyperbolic, and complex number (etc.) properties and/or identities, some Laplace transforms can be obtained from others more quickly than by using the definition directly. The unilateral Laplace transform takes as input a function whose time domain is the non-negative reals, which is why all of the time domain functions in the table below are multiples of the Heaviside step function, u(t). The entries of the table that involve a time delay τ are required to be causal (meaning that τ > 0). A causal system is a system where the impulse response h(t) is zero for all time t prior to t = 0. In general, the region of convergence for causal systems is not the same as that of anticausal systems.

== s-domain equivalent circuits and impedances ==

The Laplace transform is often used in circuit analysis, and simple conversions to the s-domain of circuit elements can be made. Circuit elements can be transformed into impedances, very similar to phasor impedances. In summary: a resistor R keeps the impedance R, an inductor L has impedance sL, and a capacitor C has impedance 1/(sC). Note that the resistor is exactly the same in the time domain and the s-domain. The sources are put in if there are initial conditions on the circuit elements. For example, if a capacitor has an initial voltage across it, or if the inductor has an initial current through it, the sources inserted in the s-domain account for that. The equivalents for current and voltage sources are simply derived from the transformations in the table above.

== Examples and applications ==

The Laplace transform is used frequently in engineering and physics; the output of a linear time-invariant system can be calculated by convolving its unit impulse response with the input signal. Performing this calculation in Laplace space turns the convolution into a multiplication, the latter being easier to solve because of its algebraic form. For more information, see control theory. The Laplace transform is invertible on a large class of functions. Given a simple mathematical or functional description of an input or output to a system, the Laplace transform provides an alternative functional description that often simplifies the process of analyzing the behavior of the system, or of synthesizing a new system based on a set of specifications. The Laplace transform can also be used to solve differential equations and is used extensively in mechanical engineering and electrical engineering. The Laplace transform reduces a linear differential equation to an algebraic equation, which can then be solved by the formal rules of algebra. The original differential equation can then be solved by applying the inverse Laplace transform. English electrical engineer Oliver Heaviside first proposed a similar scheme, although without using the Laplace transform; the resulting operational calculus is credited as the Heaviside calculus.
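To illustrate the reduction of a differential equation to algebra, here is a compact SymPy sketch (the initial value problem y'' + y = 0 with y(0) = 0, y'(0) = 1 is chosen arbitrarily; Y stands in for the unknown transform):

    from sympy import symbols, Eq, solve, inverse_laplace_transform

    t = symbols('t', positive=True)
    s = symbols('s')
    Y = symbols('Y')   # stands for L{y}

    # Transforming y'' + y = 0 with y(0) = 0, y'(0) = 1 gives
    # (s**2*Y - s*0 - 1) + Y = 0, an algebraic equation for Y
    Ysol = solve(Eq(s**2*Y - 1 + Y, 0), Y)[0]      # 1/(s**2 + 1)
    print(inverse_laplace_transform(Ysol, s, t))   # sin(t)*Heaviside(t)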
=== Evaluating improper integrals ===

Let ${\mathcal {L}}\{f(t)\}=F(s)$. Then (see the table above) $$\partial _{s}{\mathcal {L}}\left\{{\frac {f(t)}{t}}\right\}=\partial _{s}\int _{0}^{\infty }{\frac {f(t)}{t}}e^{-st}\,dt=-\int _{0}^{\infty }f(t)e^{-st}\,dt=-F(s),$$ from which one gets $${\mathcal {L}}\left\{{\frac {f(t)}{t}}\right\}=\int _{s}^{\infty }F(p)\,dp.$$ In the limit $s\rightarrow 0$, one gets $$\int _{0}^{\infty }{\frac {f(t)}{t}}\,dt=\int _{0}^{\infty }F(p)\,dp,$$ provided that the interchange of limits can be justified. This is often possible as a consequence of the final value theorem. Even when the interchange cannot be justified, the calculation can be suggestive. For example, with a ≠ 0 ≠ b, proceeding formally one has $${\begin{aligned}\int _{0}^{\infty }{\frac {\cos(at)-\cos(bt)}{t}}\,dt&=\int _{0}^{\infty }\left({\frac {p}{p^{2}+a^{2}}}-{\frac {p}{p^{2}+b^{2}}}\right)\,dp\\&=\left[{\frac {1}{2}}\ln {\frac {p^{2}+a^{2}}{p^{2}+b^{2}}}\right]_{0}^{\infty }={\frac {1}{2}}\ln {\frac {b^{2}}{a^{2}}}=\ln \left|{\frac {b}{a}}\right|.\end{aligned}}$$ The validity of this identity can be proved by other means. It is an example of a Frullani integral. Another example is the Dirichlet integral.

=== Complex impedance of a capacitor ===

In the theory of electrical circuits, the current flow in a capacitor is proportional to the capacitance and the rate of change of the electrical potential (with equations as for the SI unit system). Symbolically, this is expressed by the differential equation $$i=C{\frac {dv}{dt}},$$ where C is the capacitance of the capacitor, i = i(t) is the electric current through the capacitor as a function of time, and v = v(t) is the voltage across the terminals of the capacitor, also as a function of time. Taking the Laplace transform of this equation, we obtain $$I(s)=C(sV(s)-V_{0}),$$ where $I(s)={\mathcal {L}}\{i(t)\}$, $V(s)={\mathcal {L}}\{v(t)\}$, and $V_{0}=v(0)$. Solving for V(s) we have $$V(s)={\frac {I(s)}{sC}}+{\frac {V_{0}}{s}}.$$ The definition of the complex impedance Z (in ohms) is the ratio of the complex voltage V divided by the complex current I while holding the initial state V0 at zero: $$Z(s)=\left.{\frac {V(s)}{I(s)}}\right|_{V_{0}=0}.$$ Using this definition and the previous equation, we find $$Z(s)={\frac {1}{sC}},$$ which is the correct expression for the complex impedance of a capacitor. In addition, the Laplace transform has large applications in control theory.

=== Impulse response ===

Consider a linear time-invariant system with transfer function $$H(s)={\frac {1}{(s+\alpha )(s+\beta )}}.$$ The impulse response is simply the inverse Laplace transform of this transfer function: $$h(t)={\mathcal {L}}^{-1}\{H(s)\}.$$

Partial fraction expansion

To evaluate this inverse transform, we begin by expanding H(s) using the method of partial fraction expansion, $${\frac {1}{(s+\alpha )(s+\beta )}}={\frac {P}{s+\alpha }}+{\frac {R}{s+\beta }}.$$ The unknown constants P and R are the residues located at the corresponding poles of the transfer function.
Each residue represents the relative contribution of that singularity to the transfer function's overall shape. By the residue theorem, the inverse Laplace transform depends only upon the poles and their residues. To find the residue P, we multiply both sides of the equation by s + α to get $${\frac {1}{s+\beta }}=P+{\frac {R(s+\alpha )}{s+\beta }}.$$ Then by letting s = −α, the contribution from R vanishes and all that is left is $$P=\left.{\frac {1}{s+\beta }}\right|_{s=-\alpha }={\frac {1}{\beta -\alpha }}.$$ Similarly, the residue R is given by $$R=\left.{\frac {1}{s+\alpha }}\right|_{s=-\beta }={\frac {1}{\alpha -\beta }}.$$ Note that $$R={\frac {-1}{\beta -\alpha }}=-P,$$ and so the substitution of R and P into the expanded expression for H(s) gives $$H(s)=\left({\frac {1}{\beta -\alpha }}\right)\cdot \left({\frac {1}{s+\alpha }}-{\frac {1}{s+\beta }}\right).$$ Finally, using the linearity property and the known transform for exponential decay (see Item #3 in the Table of Laplace Transforms, above), we can take the inverse Laplace transform of H(s) to obtain $$h(t)={\mathcal {L}}^{-1}\{H(s)\}={\frac {1}{\beta -\alpha }}\left(e^{-\alpha t}-e^{-\beta t}\right),$$ which is the impulse response of the system.

Convolution

The same result can be achieved using the convolution property, as if the system is a series of filters with transfer functions 1/(s + α) and 1/(s + β). That is, the inverse of $$H(s)={\frac {1}{(s+\alpha )(s+\beta )}}={\frac {1}{s+\alpha }}\cdot {\frac {1}{s+\beta }}$$ is $${\mathcal {L}}^{-1}\!\left\{{\frac {1}{s+\alpha }}\right\}*{\mathcal {L}}^{-1}\!\left\{{\frac {1}{s+\beta }}\right\}=e^{-\alpha t}*e^{-\beta t}=\int _{0}^{t}e^{-\alpha x}e^{-\beta (t-x)}\,dx={\frac {e^{-\alpha t}-e^{-\beta t}}{\beta -\alpha }}.$$

=== Phase delay ===

Starting with the Laplace transform $$X(s)={\frac {s\sin(\varphi )+\omega \cos(\varphi )}{s^{2}+\omega ^{2}}},$$ we find the inverse by first rearranging terms in the fraction: $${\begin{aligned}X(s)&={\frac {s\sin(\varphi )}{s^{2}+\omega ^{2}}}+{\frac {\omega \cos(\varphi )}{s^{2}+\omega ^{2}}}\\&=\sin(\varphi )\left({\frac {s}{s^{2}+\omega ^{2}}}\right)+\cos(\varphi )\left({\frac {\omega }{s^{2}+\omega ^{2}}}\right).\end{aligned}}$$ We are now able to take the inverse Laplace transform of our terms: $${\begin{aligned}x(t)&=\sin(\varphi ){\mathcal {L}}^{-1}\left\{{\frac {s}{s^{2}+\omega ^{2}}}\right\}+\cos(\varphi ){\mathcal {L}}^{-1}\left\{{\frac {\omega }{s^{2}+\omega ^{2}}}\right\}\\&=\sin(\varphi )\cos(\omega t)+\cos(\varphi )\sin(\omega t).\end{aligned}}$$ This is just the sine of the sum of the arguments, yielding $$x(t)=\sin(\omega t+\varphi ).$$ We can apply similar logic to find that $${\mathcal {L}}^{-1}\left\{{\frac {s\cos \varphi -\omega \sin \varphi }{s^{2}+\omega ^{2}}}\right\}=\cos(\omega t+\varphi ).$$
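The partial fraction step and the inverse transform can be reproduced mechanically; a short SymPy sketch (α = 1 and β = 2 are arbitrary concrete poles):

    from sympy import symbols, apart, inverse_laplace_transform

    t = symbols('t', positive=True)
    s = symbols('s')

    H = 1/((s + 1)*(s + 2))
    print(apart(H, s))                         # 1/(s + 1) - 1/(s + 2)
    # h(t) = exp(-t) - exp(-2t), possibly printed in an equivalent factored form
    print(inverse_laplace_transform(H, s, t))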
=== Statistical mechanics ===

In statistical mechanics, the Laplace transform of the density of states $g(E)$ defines the partition function. That is, the canonical partition function $Z(\beta )$ is given by $$Z(\beta )=\int _{0}^{\infty }e^{-\beta E}g(E)\,dE,$$ and the inverse is given by $$g(E)={\frac {1}{2\pi i}}\int _{\beta _{0}-i\infty }^{\beta _{0}+i\infty }e^{\beta E}Z(\beta )\,d\beta .$$

=== Spatial (not time) structure from astronomical spectrum ===

The wide and general applicability of the Laplace transform and its inverse is illustrated by an application in astronomy which provides some information on the spatial distribution of matter of an astronomical source of radiofrequency thermal radiation too distant to resolve as more than a point, given its flux density spectrum, rather than relating the time domain with the spectrum (frequency domain). Assuming certain properties of the object, e.g. spherical shape and constant temperature, calculations based on carrying out an inverse Laplace transformation on the spectrum of the object can produce the only possible model of the distribution of matter in it (density as a function of distance from the center) consistent with the spectrum. When independent information on the structure of an object is available, the inverse Laplace transform method has been found to be in good agreement.

=== Birth and death processes ===

Consider a random walk, with steps $\{+1,-1\}$ occurring with probabilities $p,\,q=1-p$. Suppose also that the time step is a Poisson process, with parameter $\lambda$. Then the probability of the walk being at the lattice point $n$ at time $t$ is $$P_{n}(t)=\int _{0}^{t}\lambda e^{-\lambda (t-s)}(pP_{n-1}(s)+qP_{n+1}(s))\,ds\quad (+\,e^{-\lambda t}\ {\text{when }}n=0).$$ This leads to a system of integral equations (or equivalently a system of differential equations). However, because it is a system of convolution equations, the Laplace transform converts it into a system of linear equations for $$\pi _{n}(s)={\mathcal {L}}(P_{n})(s),$$ namely $$\pi _{n}(s)={\frac {\lambda }{\lambda +s}}(p\pi _{n-1}(s)+q\pi _{n+1}(s))\quad (+\,{\frac {1}{\lambda +s}}\ {\text{when }}n=0),$$ which may now be solved by standard methods.

=== Tauberian theory ===

The Laplace transform of the measure $\mu$ on $[0,\infty )$ is given by $${\mathcal {L}}\mu (s)=\int _{0}^{\infty }e^{-st}\,d\mu (t).$$ It is intuitively clear that, for small $s>0$, the exponentially decaying integrand will become more sensitive to the concentration of the measure $\mu$ on larger subsets of the domain.
To make this more precise, introduce the distribution function $$M(t)=\mu ([0,t)).$$ Formally, we expect a limit of the following kind: $$\lim _{s\to 0^{+}}{\mathcal {L}}\mu (s)=\lim _{t\to \infty }M(t).$$ Tauberian theorems are theorems relating the asymptotics of the Laplace transform, as $s\to 0^{+}$, to those of the distribution of $\mu$ as $t\to \infty$. They are thus of importance in asymptotic formulae of probability and statistics, where often the spectral side has asymptotics that are simpler to infer. Two Tauberian theorems of note are the Hardy–Littlewood Tauberian theorem and Wiener's Tauberian theorem. The Wiener theorem generalizes the Ikehara Tauberian theorem, which is the following statement: Let A(x) be a non-negative, monotonic nondecreasing function of x, defined for 0 ≤ x < ∞. Suppose that $$f(s)=\int _{0}^{\infty }A(x)e^{-xs}\,dx$$ converges for Re(s) > 1 to the function f(s) and that, for some non-negative number c, $$f(s)-{\frac {c}{s-1}}$$ has an extension as a continuous function for Re(s) ≥ 1. Then the limit as x goes to infinity of $e^{-x}A(x)$ is equal to c. This statement can be applied in particular to the logarithmic derivative of the Riemann zeta function, and thus provides an extremely short way to prove the prime number theorem.

== See also ==

== Notes ==

== References ==

=== Modern ===

Bracewell, Ronald N. (1978), The Fourier Transform and its Applications (2nd ed.), McGraw-Hill Kogakusha, ISBN 978-0-07-007013-4
Bracewell, R. N. (2000), The Fourier Transform and Its Applications (3rd ed.), Boston: McGraw-Hill, ISBN 978-0-07-116043-8
Feller, William (1971), An Introduction to Probability Theory and Its Applications, Vol. II (2nd ed.), New York: John Wiley & Sons, MR 0270403
Korn, G. A.; Korn, T. M. (1967), Mathematical Handbook for Scientists and Engineers (2nd ed.), McGraw-Hill Companies, ISBN 978-0-07-035370-1
Widder, David Vernon (1941), The Laplace Transform, Princeton Mathematical Series, v. 6, Princeton University Press, MR 0005923
Williams, J. (1973), Laplace Transforms, Problem Solvers, George Allen & Unwin, ISBN 978-0-04-512021-5
Takacs, J. (1953), "Fourier amplitudok meghatarozasa operatorszamitassal", Magyar Hiradastechnika (in Hungarian), IV (7–8): 93–96

=== Historical ===

Euler, L. (1744), "De constructione aequationum" [The Construction of Equations], Opera Omnia, 1st series (in Latin), 22: 150–161
Euler, L. (1753), "Methodus aequationes differentiales" [A Method for Solving Differential Equations], Opera Omnia, 1st series (in Latin), 22: 181–213
Euler, L. (1992) [1769], "Institutiones calculi integralis, Volume 2" [Institutions of Integral Calculus], Opera Omnia, 1st series (in Latin), 12, Basel: Birkhäuser, ISBN 978-3764314743, Chapters 3–5
Euler, Leonhard (1769), Institutiones calculi integralis [Institutions of Integral Calculus] (in Latin), vol. II, Paris: Petropoli, ch. 3–5, pp. 57–153
Grattan-Guinness, I. (1997), "Laplace's integral solutions to partial differential equations", in Gillispie, C. C. (ed.), Pierre Simon Laplace 1749–1827: A Life in Exact Science, Princeton: Princeton University Press, ISBN 978-0-691-01185-1
Lagrange, J. L. (1773), Mémoire sur l'utilité de la méthode, Œuvres de Lagrange, vol. 2, pp. 171–234
== Further reading ==

Arendt, Wolfgang; Batty, Charles J. K.; Hieber, Matthias; Neubrander, Frank (2002), Vector-Valued Laplace Transforms and Cauchy Problems, Birkhäuser Basel, ISBN 978-3-7643-6549-3
Davies, Brian (2002), Integral Transforms and Their Applications (3rd ed.), New York: Springer, ISBN 978-0-387-95314-4
Deakin, M. A. B. (1981), "The development of the Laplace transform", Archive for History of Exact Sciences, 25 (4): 343–390, doi:10.1007/BF01395660, S2CID 117913073
Deakin, M. A. B. (1982), "The development of the Laplace transform", Archive for History of Exact Sciences, 26 (4): 351–381, doi:10.1007/BF00418754, S2CID 123071842
Doetsch, Gustav (1974), Introduction to the Theory and Application of the Laplace Transformation, Springer, ISBN 978-0-387-06407-9
Mathews, Jon; Walker, Robert L. (1970), Mathematical Methods of Physics (2nd ed.), New York: W. A. Benjamin, ISBN 0-8053-7002-1
Polyanin, A. D.; Manzhirov, A. V. (1998), Handbook of Integral Equations, Boca Raton: CRC Press, ISBN 978-0-8493-2876-3
Schwartz, Laurent (1952), "Transformation de Laplace des distributions" [Laplace transformation of distributions], Comm. Sém. Math. Univ. Lund [Medd. Lunds Univ. Mat. Sem.] (in French), 1952: 196–206, MR 0052555
Schwartz, Laurent (2008) [1966], Mathematics for the Physical Sciences, Dover Books on Mathematics, New York: Dover Publications, pp. 215–241, ISBN 978-0-486-46662-0. See Chapter VI, "The Laplace transform".
Siebert, William McC. (1986), Circuits, Signals, and Systems, Cambridge, Massachusetts: MIT Press, ISBN 978-0-262-19229-3
Widder, David Vernon (1945), "What is the Laplace transform?", The American Mathematical Monthly, 52 (8): 419–425, doi:10.2307/2305640, ISSN 0002-9890, JSTOR 2305640, MR 0013447
Weideman, J. A. C.; Fornberg, Bengt (2023), "Fully numerical Laplace transform methods", Numerical Algorithms, 92: 985–1006, doi:10.1007/s11075-022-01368-x

== External links ==

"Laplace transform", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Online computation of the transform or inverse transform, wims.unice.fr
Tables of Integral Transforms at EqWorld: The World of Mathematical Equations
Weisstein, Eric W., "Laplace Transform", MathWorld
Good explanations of the initial and final value theorems, archived 2009-01-08 at the Wayback Machine
Laplace Transforms at MathPages
Computational Knowledge Engine: compute Laplace transforms and their inverses
Laplace Calculator: compute Laplace transforms online
Code to visualize Laplace transforms, with many example videos
Wikipedia/Laplace_transform
In physics, Newtonian dynamics (also known as Newtonian mechanics) is the study of the dynamics of a particle or a small body according to Newton's laws of motion.

== Mathematical generalizations ==

Typically, Newtonian dynamics occurs in a three-dimensional Euclidean space, which is flat. However, in mathematics Newton's laws of motion can be generalized to multidimensional and curved spaces. Often the term Newtonian dynamics is narrowed to Newton's second law $m\,\mathbf {a} =\mathbf {F}$.

== Newton's second law in a multidimensional space ==

Consider $N$ particles with masses $m_{1},\,\ldots ,\,m_{N}$ in the regular three-dimensional Euclidean space. Let $\mathbf {r} _{1},\,\ldots ,\,\mathbf {r} _{N}$ be their radius-vectors in some inertial coordinate system. Then the motion of these particles is governed by Newton's second law applied to each of them: $$m_{i}\,{\ddot {\mathbf {r} }}_{i}=\mathbf {F} _{i}(\mathbf {r} _{1},\ldots ,\mathbf {r} _{N},\mathbf {v} _{1},\ldots ,\mathbf {v} _{N}),\qquad i=1,\,\ldots ,\,N.\qquad (1)$$ The three-dimensional radius-vectors $\mathbf {r} _{1},\,\ldots ,\,\mathbf {r} _{N}$ can be built into a single $n=3N$-dimensional radius-vector. Similarly, the three-dimensional velocity vectors $\mathbf {v} _{1},\,\ldots ,\,\mathbf {v} _{N}$ can be built into a single $n=3N$-dimensional velocity vector: $$\mathbf {r} =(\mathbf {r} _{1},\ldots ,\mathbf {r} _{N}),\qquad \mathbf {v} =(\mathbf {v} _{1},\ldots ,\mathbf {v} _{N}).\qquad (2)$$ In terms of the multidimensional vectors (2), the equations (1) are written as $${\dot {\mathbf {r} }}=\mathbf {v} ,\qquad {\dot {\mathbf {v} }}=\mathbf {F} (\mathbf {r} ,\mathbf {v} ),\qquad (3)$$ i.e. they take the form of Newton's second law applied to a single particle with the unit mass $m=1$.

Definition. The equations (3) are called the equations of a Newtonian dynamical system in a flat multidimensional Euclidean space, which is called the configuration space of this system. Its points are marked by the radius-vector $\mathbf {r}$. The space whose points are marked by the pair of vectors $(\mathbf {r} ,\mathbf {v} )$ is called the phase space of the dynamical system (3).

== Euclidean structure ==

The configuration space and the phase space of the dynamical system (3) are both Euclidean spaces, i.e. they are equipped with a Euclidean structure. The Euclidean structure is defined so that the kinetic energy of the single multidimensional particle with the unit mass $m=1$ is equal to the sum of the kinetic energies of the three-dimensional particles with the masses $m_{1},\,\ldots ,\,m_{N}$: $$T={\frac {|\mathbf {v} |^{2}}{2}}=\sum _{i=1}^{N}{\frac {m_{i}\,|\mathbf {v} _{i}|^{2}}{2}}.\qquad (4)$$

== Constraints and internal coordinates ==

In some cases the motion of the particles with the masses $m_{1},\,\ldots ,\,m_{N}$ can be constrained. Typical constraints look like scalar equations of the form $$\varphi _{j}(\mathbf {r} _{1},\ldots ,\mathbf {r} _{N})=0,\qquad j=1,\,\ldots ,\,K.\qquad (5)$$ Constraints of the form (5) are called holonomic and scleronomic. In terms of the radius-vector $\mathbf {r}$ of the Newtonian dynamical system (3), they are written as $$\varphi _{j}(\mathbf {r} )=0,\qquad j=1,\,\ldots ,\,K.\qquad (6)$$ Each such constraint reduces by one the number of degrees of freedom of the Newtonian dynamical system (3). Therefore, the constrained system has $n=3N-K$ degrees of freedom.

Definition. The constraint equations (6) define an $n$-dimensional manifold $M$ within the configuration space of the Newtonian dynamical system (3).
This manifold $M$ is called the configuration space of the constrained system. Its tangent bundle $TM$ is called the phase space of the constrained system. Let $q^{1},\,\ldots ,\,q^{n}$ be the internal coordinates of a point of $M$. Their usage is typical for Lagrangian mechanics. The radius-vector $\mathbf {r}$ is expressed as some definite function of $q^{1},\,\ldots ,\,q^{n}$: $$\mathbf {r} =\mathbf {r} (q^{1},\ldots ,q^{n}).\qquad (7)$$ The vector-function (7) resolves the constraint equations (6) in the sense that upon substituting (7) into (6) the equations (6) are fulfilled identically in $q^{1},\,\ldots ,\,q^{n}$.

== Internal presentation of the velocity vector ==

The velocity vector of the constrained Newtonian dynamical system is expressed in terms of the partial derivatives of the vector-function (7): $$\mathbf {v} ={\dot {\mathbf {r} }}=\sum _{i=1}^{n}{\frac {\partial \mathbf {r} }{\partial q^{i}}}\,{\dot {q}}^{i}.\qquad (8)$$ The quantities ${\dot {q}}^{1},\,\ldots ,\,{\dot {q}}^{n}$ are called internal components of the velocity vector. Sometimes they are denoted with the use of a separate symbol, $$w^{i}={\dot {q}}^{i},\qquad i=1,\,\ldots ,\,n,\qquad (9)$$ and then treated as independent variables. The quantities $$q^{1},\,\ldots ,\,q^{n},\,w^{1},\,\ldots ,\,w^{n}\qquad (10)$$ are used as internal coordinates of a point of the phase space $TM$ of the constrained Newtonian dynamical system.

== Embedding and the induced Riemannian metric ==

Geometrically, the vector-function (7) implements an embedding of the configuration space $M$ of the constrained Newtonian dynamical system into the $3N$-dimensional flat configuration space of the unconstrained Newtonian dynamical system (3). Due to this embedding, the Euclidean structure of the ambient space induces a Riemannian metric on the manifold $M$. The components of the metric tensor of this induced metric are given by the formula $$g_{ij}=\left({\frac {\partial \mathbf {r} }{\partial q^{i}}},{\frac {\partial \mathbf {r} }{\partial q^{j}}}\right),\qquad (11)$$ where $(\ ,\ )$ is the scalar product associated with the Euclidean structure (4).

== Kinetic energy of a constrained Newtonian dynamical system ==

Since the Euclidean structure of an unconstrained system of $N$ particles is introduced through their kinetic energy, the induced Riemannian structure on the configuration space $M$ of a constrained system preserves this relation to the kinetic energy: $$T={\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}g_{ij}\,w^{i}w^{j}.\qquad (12)$$ The formula (12) is derived by substituting (8) into (4) and taking into account (11).
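As a concrete illustration, consider the planar pendulum (my choice of example, not taken from the text above): the constraint x² + y² = l² with internal coordinate q¹ = θ. A sketch in SymPy computes the induced metric (11) and kinetic energy (12), with the mass weighting coming from the Euclidean structure (4):

    from sympy import symbols, sin, cos, Matrix, simplify

    theta, w, l, m = symbols('theta w l m', positive=True)

    # Embedding (7): r(theta) = (l*sin(theta), -l*cos(theta))
    r = Matrix([l*sin(theta), -l*cos(theta)])

    # Induced metric (11): g_11 = (dr/dtheta, dr/dtheta), mass-weighted per (4)
    dr = r.diff(theta)
    g11 = simplify(m * dr.dot(dr))   # m*l**2

    # Kinetic energy (12): T = (1/2) g_11 * w**2, with w the internal velocity
    T = g11 * w**2 / 2               # m*l**2*w**2/2
    print(g11, T)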
Like the velocity vector (8), the tangent force F ∥ {\displaystyle \displaystyle \mathbf {F} _{\parallel }} has its internal presentation {\displaystyle \mathbf {F} _{\parallel }=\sum _{i=1}^{n}F^{i}\,{\frac {\partial \mathbf {r} }{\partial q^{i}}}.\qquad (14)} The quantities F 1 , … , F n {\displaystyle F^{1},\,\ldots ,\,F^{n}} in (14) are called the internal components of the force vector. == Newton's second law in a curved space == The Newtonian dynamical system (3) constrained to the configuration manifold M {\displaystyle \displaystyle M} by the constraint equations (6) is described by the differential equations {\displaystyle {\ddot {q}}^{s}+\sum _{i=1}^{n}\sum _{j=1}^{n}\Gamma _{ij}^{s}\,{\dot {q}}^{i}\,{\dot {q}}^{j}=F^{s},\qquad s=1,\,\ldots ,\,n,\qquad (15)} where Γ i j s {\displaystyle \Gamma _{ij}^{s}} are Christoffel symbols of the metric connection produced by the Riemannian metric (11). == Relation to Lagrange equations == Mechanical systems with constraints are usually described by Lagrange equations: {\displaystyle {\frac {d}{dt}}\left({\frac {\partial T}{\partial w^{i}}}\right)-{\frac {\partial T}{\partial q^{i}}}=Q_{i},\qquad w^{i}={\dot {q}}^{i},\qquad i=1,\,\ldots ,\,n,\qquad (16)} where T = T ( q 1 , … , q n , w 1 , … , w n ) {\displaystyle T=T(q^{1},\ldots ,q^{n},w^{1},\ldots ,w^{n})} is the kinetic energy of the constrained dynamical system given by the formula (12). The quantities Q 1 , … , Q n {\displaystyle Q_{1},\,\ldots ,\,Q_{n}} in (16) are the inner covariant components of the tangent force vector F ∥ {\displaystyle \mathbf {F} _{\parallel }} (see (13) and (14)). They are produced from the inner contravariant components F 1 , … , F n {\displaystyle F^{1},\,\ldots ,\,F^{n}} of the vector F ∥ {\displaystyle \mathbf {F} _{\parallel }} by means of the standard index lowering procedure using the metric (11): {\displaystyle Q_{i}=\sum _{j=1}^{n}g_{ij}\,F^{j}.\qquad (17)} The equations (16) are equivalent to the equations (15). However, the metric (11) and other geometric features of the configuration manifold M {\displaystyle \displaystyle M} are not explicit in (16). The metric (11) can be recovered from the kinetic energy T {\displaystyle \displaystyle T} by means of the formula {\displaystyle g_{ij}={\frac {\partial ^{2}T}{\partial w^{i}\,\partial w^{j}}}.\qquad (18)} A worked symbolic example of the induced metric and kinetic energy is sketched below, after the references. == See also == Modified Newtonian dynamics == References ==
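For a concrete illustration of the constructions above, consider a single particle of mass m constrained to a sphere of radius R, with internal coordinates q¹ = θ and q² = φ. The following sketch (a hypothetical worked example, assuming Python with SymPy; the spherical setup and all names are choices made for this illustration, not details from the article) computes the embedding (7), the induced metric (11), the kinetic energy (12), and checks the recovery formula (18):

import sympy as sp

# Internal coordinates q^1 = theta, q^2 = phi on the constraint manifold M
theta, phi, m, R = sp.symbols('theta phi m R', positive=True)
q = [theta, phi]

# Vector-function (7): the embedding r(q^1, q^2) of M into the flat
# configuration space. With a single particle, the Euclidean structure (4)
# weights the scalar product by the mass m.
r = sp.Matrix([R*sp.sin(theta)*sp.cos(phi),
               R*sp.sin(theta)*sp.sin(phi),
               R*sp.cos(theta)])

# Components (11) of the induced Riemannian metric,
# g_ij = ( dr/dq^i , dr/dq^j ), with the mass-weighted scalar product.
g = sp.simplify(sp.Matrix(2, 2, lambda i, j: m * r.diff(q[i]).dot(r.diff(q[j]))))
print(g)  # expect diag(m*R**2, m*R**2*sin(theta)**2)

# Kinetic energy (12): T = (1/2) sum_ij g_ij w^i w^j, with w^i = dq^i/dt
w1, w2 = sp.symbols('w1 w2')
w = sp.Matrix([w1, w2])
T = sp.simplify((w.T * g * w)[0] / 2)
print(T)  # expect m*R**2*(w1**2 + sin(theta)**2*w2**2)/2

# Recovery formula (18): the metric equals the Hessian of T in the w's.
g_recovered = sp.Matrix(2, 2, lambda i, j: sp.diff(T, w[i], w[j]))
assert sp.simplify(g_recovered - g) == sp.zeros(2, 2)

In this example, inserting the tangent components (14) of gravity into (15) would yield the usual spherical-pendulum equations of motion.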
Wikipedia/Newtonian_dynamics
Common integrals in quantum field theory are all variations and generalizations of Gaussian integrals to the complex plane and to multiple dimensions.: 13–15  Other integrals can be approximated by versions of the Gaussian integral. Fourier integrals are also considered. == Variations on a simple Gaussian integral == === Gaussian integral === The first integral, with broad application outside of quantum field theory, is the Gaussian integral. G ≡ ∫ − ∞ ∞ e − 1 2 x 2 d x {\displaystyle G\equiv \int _{-\infty }^{\infty }e^{-{1 \over 2}x^{2}}\,dx} In physics the factor of 1/2 in the argument of the exponential is common. Note that, if we let r = x 2 + y 2 {\displaystyle r={\sqrt {x^{2}+y^{2}}}} be the radius, then we can use the usual polar coordinate change of variables (which in particular renders d x d y = r d r d θ {\displaystyle dx\,dy=r\,dr\,d\theta } ) to get G 2 = ( ∫ − ∞ ∞ e − 1 2 x 2 d x ) ⋅ ( ∫ − ∞ ∞ e − 1 2 y 2 d y ) = 2 π ∫ 0 ∞ r e − 1 2 r 2 d r = 2 π ∫ 0 ∞ e − w d w = 2 π . {\displaystyle G^{2}=\left(\int _{-\infty }^{\infty }e^{-{1 \over 2}x^{2}}\,dx\right)\cdot \left(\int _{-\infty }^{\infty }e^{-{1 \over 2}y^{2}}\,dy\right)=2\pi \int _{0}^{\infty }re^{-{1 \over 2}r^{2}}\,dr=2\pi \int _{0}^{\infty }e^{-w}\,dw=2\pi .} Thus we obtain ∫ − ∞ ∞ e − 1 2 x 2 d x = 2 π . {\displaystyle \int _{-\infty }^{\infty }e^{-{1 \over 2}x^{2}}\,dx={\sqrt {2\pi }}.} === Slight generalization of the Gaussian integral === ∫ − ∞ ∞ e − 1 2 a x 2 d x = 2 π a {\displaystyle \int _{-\infty }^{\infty }e^{-{1 \over 2}ax^{2}}\,dx={\sqrt {2\pi \over a}}} where we have scaled x → x a . {\displaystyle x\to {x \over {\sqrt {a}}}.} === Integrals of exponents and even powers of x === ∫ − ∞ ∞ x 2 e − 1 2 a x 2 d x = − 2 d d a ∫ − ∞ ∞ e − 1 2 a x 2 d x = − 2 d d a ( 2 π a ) 1 2 = ( 2 π a ) 1 2 1 a {\displaystyle \int _{-\infty }^{\infty }x^{2}e^{-{1 \over 2}ax^{2}}\,dx=-2{d \over da}\int _{-\infty }^{\infty }e^{-{1 \over 2}ax^{2}}\,dx=-2{d \over da}\left({2\pi \over a}\right)^{1 \over 2}=\left({2\pi \over a}\right)^{1 \over 2}{1 \over a}} and ∫ − ∞ ∞ x 4 e − 1 2 a x 2 d x = ( − 2 d d a ) ( − 2 d d a ) ∫ − ∞ ∞ e − 1 2 a x 2 d x = ( − 2 d d a ) ( − 2 d d a ) ( 2 π a ) 1 2 = ( 2 π a ) 1 2 3 a 2 {\displaystyle \int _{-\infty }^{\infty }x^{4}e^{-{1 \over 2}ax^{2}}\,dx=\left(-2{d \over da}\right)\left(-2{d \over da}\right)\int _{-\infty }^{\infty }e^{-{1 \over 2}ax^{2}}\,dx=\left(-2{d \over da}\right)\left(-2{d \over da}\right)\left({2\pi \over a}\right)^{1 \over 2}=\left({2\pi \over a}\right)^{1 \over 2}{3 \over a^{2}}} In general ∫ − ∞ ∞ x 2 n e − 1 2 a x 2 d x = ( 2 π a ) 1 2 1 a n ( 2 n − 1 ) ( 2 n − 3 ) ⋯ 5 ⋅ 3 ⋅ 1 = ( 2 π a ) 1 2 1 a n ( 2 n − 1 ) ! ! {\displaystyle \int _{-\infty }^{\infty }x^{2n}e^{-{1 \over 2}ax^{2}}\,dx=\left({2\pi \over a}\right)^{1 \over {2}}{1 \over a^{n}}\left(2n-1\right)\left(2n-3\right)\cdots 5\cdot 3\cdot 1=\left({2\pi \over a}\right)^{1 \over {2}}{1 \over a^{n}}\left(2n-1\right)!!} Note that the integrals of exponents and odd powers of x are 0, due to odd symmetry. 
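These closed forms are easy to spot-check numerically. The following minimal sketch (assuming Python with NumPy and SciPy; the test value a = 1.7 and the loop range are arbitrary choices) verifies the even-moment formula, including the (2n − 1)!! factor:

import numpy as np
from scipy.integrate import quad

def double_factorial(k):
    # (k)!!, with the convention (-1)!! = 1 so that the n = 0 case works
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

a = 1.7  # arbitrary positive constant
for n in range(5):
    numeric, _ = quad(lambda x: x**(2*n) * np.exp(-0.5*a*x**2), -np.inf, np.inf)
    exact = np.sqrt(2*np.pi/a) * double_factorial(2*n - 1) / a**n
    assert np.isclose(numeric, exact), (n, numeric, exact)

The n = 0 case is the slightly generalized Gaussian integral itself, and each higher n corresponds to one more application of −2 d/da.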
=== Integrals with a linear term in the argument of the exponent === ∫ − ∞ ∞ exp ⁡ ( − 1 2 a x 2 + J x ) d x {\displaystyle \int _{-\infty }^{\infty }\exp \left(-{\frac {1}{2}}ax^{2}+Jx\right)dx} This integral can be performed by completing the square: ( − 1 2 a x 2 + J x ) = − 1 2 a ( x 2 − 2 J x a + J 2 a 2 − J 2 a 2 ) = − 1 2 a ( x − J a ) 2 + J 2 2 a {\displaystyle \left(-{1 \over 2}ax^{2}+Jx\right)=-{1 \over 2}a\left(x^{2}-{2Jx \over a}+{J^{2} \over a^{2}}-{J^{2} \over a^{2}}\right)=-{1 \over 2}a\left(x-{J \over a}\right)^{2}+{J^{2} \over 2a}} Therefore: ∫ − ∞ ∞ exp ⁡ ( − 1 2 a x 2 + J x ) d x = exp ⁡ ( J 2 2 a ) ∫ − ∞ ∞ exp ⁡ [ − 1 2 a ( x − J a ) 2 ] d x = exp ⁡ ( J 2 2 a ) ∫ − ∞ ∞ exp ⁡ ( − 1 2 a w 2 ) d w = ( 2 π a ) 1 2 exp ⁡ ( J 2 2 a ) {\displaystyle {\begin{aligned}\int _{-\infty }^{\infty }\exp \left(-{1 \over 2}ax^{2}+Jx\right)\,dx&=\exp \left({J^{2} \over 2a}\right)\int _{-\infty }^{\infty }\exp \left[-{1 \over 2}a\left(x-{J \over a}\right)^{2}\right]\,dx\\[8pt]&=\exp \left({J^{2} \over 2a}\right)\int _{-\infty }^{\infty }\exp \left(-{1 \over 2}aw^{2}\right)\,dw\\[8pt]&=\left({2\pi \over a}\right)^{1 \over 2}\exp \left({J^{2} \over 2a}\right)\end{aligned}}} === Integrals with an imaginary linear term in the argument of the exponent === The integral ∫ − ∞ ∞ exp ⁡ ( − 1 2 a x 2 + i J x ) d x = ( 2 π a ) 1 2 exp ⁡ ( − J 2 2 a ) {\displaystyle \int _{-\infty }^{\infty }\exp \left(-{1 \over 2}ax^{2}+iJx\right)dx=\left({2\pi \over a}\right)^{1 \over 2}\exp \left(-{J^{2} \over 2a}\right)} is proportional to the Fourier transform of the Gaussian where J is the conjugate variable of x. By again completing the square we see that the Fourier transform of a Gaussian is also a Gaussian, but in the conjugate variable. The larger a is, the narrower the Gaussian in x and the wider the Gaussian in J. This is a demonstration of the uncertainty principle. This integral is also known as the Hubbard–Stratonovich transformation used in field theory. === Integrals with a complex argument of the exponent === The integral of interest is (for an example of an application see Relation between Schrödinger's equation and the path integral formulation of quantum mechanics) ∫ − ∞ ∞ exp ⁡ ( 1 2 i a x 2 + i J x ) d x . {\displaystyle \int _{-\infty }^{\infty }\exp \left({1 \over 2}iax^{2}+iJx\right)dx.} We now assume that a and J may be complex. Completing the square ( 1 2 i a x 2 + i J x ) = 1 2 i a ( x 2 + 2 J x a + ( J a ) 2 − ( J a ) 2 ) = − 1 2 a i ( x + J a ) 2 − i J 2 2 a . {\displaystyle \left({1 \over 2}iax^{2}+iJx\right)={1 \over 2}ia\left(x^{2}+{2Jx \over a}+\left({J \over a}\right)^{2}-\left({J \over a}\right)^{2}\right)=-{1 \over 2}{a \over i}\left(x+{J \over a}\right)^{2}-{iJ^{2} \over 2a}.} By analogy with the previous integrals ∫ − ∞ ∞ exp ⁡ ( 1 2 i a x 2 + i J x ) d x = ( 2 π i a ) 1 2 exp ⁡ ( − i J 2 2 a ) . {\displaystyle \int _{-\infty }^{\infty }\exp \left({1 \over 2}iax^{2}+iJx\right)dx=\left({2\pi i \over a}\right)^{1 \over 2}\exp \left({-iJ^{2} \over 2a}\right).} This result is valid as an integration in the complex plane as long as a is non-zero and has a semi-positive imaginary part. See Fresnel integral. == Gaussian integrals in higher dimensions == The one-dimensional integrals can be generalized to multiple dimensions. 
∫ exp ⁡ ( − 1 2 x ⋅ A ⋅ x + J ⋅ x ) d n x = ( 2 π ) n det A exp ⁡ ( 1 2 J ⋅ A − 1 ⋅ J ) {\displaystyle \int \exp \left(-{\frac {1}{2}}x\cdot A\cdot x+J\cdot x\right)d^{n}x={\sqrt {\frac {(2\pi )^{n}}{\det A}}}\exp \left({1 \over 2}J\cdot A^{-1}\cdot J\right)} Here A is a real positive definite symmetric matrix. This integral is performed by diagonalization of A with an orthogonal transformation D = O − 1 A O = O T A O {\displaystyle D=O^{-1}AO=O^{\text{T}}AO} where D is a diagonal matrix and O is an orthogonal matrix. This decouples the variables and allows the integration to be performed as n one-dimensional integrations. This is best illustrated with a two-dimensional example. === Example: Simple Gaussian integration in two dimensions === The Gaussian integral in two dimensions is ∫ exp ⁡ ( − 1 2 A i j x i x j ) d 2 x = ( 2 π ) 2 det A {\displaystyle \int \exp \left(-{\frac {1}{2}}A_{ij}x^{i}x^{j}\right)d^{2}x={\sqrt {\frac {(2\pi )^{2}}{\det A}}}} where A is a two-dimensional symmetric matrix with components specified as A = [ a c c b ] {\displaystyle A={\begin{bmatrix}a&c\\c&b\end{bmatrix}}} and we have used the Einstein summation convention. ==== Diagonalize the matrix ==== The first step is to diagonalize the matrix. Note that A i j x i x j ≡ x T A x = x T ( O O T ) A ( O O T ) x = ( x T O ) ( O T A O ) ( O T x ) {\displaystyle A_{ij}x^{i}x^{j}\equiv x^{\text{T}}Ax=x^{\text{T}}\left(OO^{\text{T}}\right)A\left(OO^{\text{T}}\right)x=\left(x^{\text{T}}O\right)\left(O^{\text{T}}AO\right)\left(O^{\text{T}}x\right)} where, since A is a real symmetric matrix, we can choose O to be orthogonal, and hence also a unitary matrix. O can be obtained from the eigenvectors of A. We choose O such that: D ≡ OTAO is diagonal. ===== Eigenvalues of A ===== To find the eigenvectors of A one first finds the eigenvalues λ of A given by [ a c c b ] [ u v ] = λ [ u v ] . {\displaystyle {\begin{bmatrix}a&c\\c&b\end{bmatrix}}{\begin{bmatrix}u\\v\end{bmatrix}}=\lambda {\begin{bmatrix}u\\v\end{bmatrix}}.} The eigenvalues are solutions of the characteristic polynomial ( a − λ ) ( b − λ ) − c 2 = 0 {\displaystyle (a-\lambda )(b-\lambda )-c^{2}=0} λ 2 − λ ( a + b ) + a b − c 2 = 0 , {\displaystyle \lambda ^{2}-\lambda (a+b)+ab-c^{2}=0,} which are found using the quadratic equation: λ ± = 1 2 ( a + b ) ± 1 2 ( a + b ) 2 − 4 ( a b − c 2 ) . = 1 2 ( a + b ) ± 1 2 a 2 + 2 a b + b 2 − 4 a b + 4 c 2 . = 1 2 ( a + b ) ± 1 2 ( a − b ) 2 + 4 c 2 . {\displaystyle {\begin{aligned}\lambda _{\pm }&={\tfrac {1}{2}}(a+b)\pm {\tfrac {1}{2}}{\sqrt {(a+b)^{2}-4(ab-c^{2})}}.\\&={\tfrac {1}{2}}(a+b)\pm {\tfrac {1}{2}}{\sqrt {a^{2}+2ab+b^{2}-4ab+4c^{2}}}.\\&={\tfrac {1}{2}}(a+b)\pm {\tfrac {1}{2}}{\sqrt {(a-b)^{2}+4c^{2}}}.\end{aligned}}} ===== Eigenvectors of A ===== Substitution of the eigenvalues back into the eigenvector equation yields v = − ( a − λ ± ) u c , v = − c u ( b − λ ± ) . {\displaystyle v=-{\left(a-\lambda _{\pm }\right)u \over c},\qquad v=-{cu \over \left(b-\lambda _{\pm }\right)}.} From the characteristic equation we know a − λ ± c = c b − λ ± . {\displaystyle {a-\lambda _{\pm } \over c}={c \over b-\lambda _{\pm }}.} Also note a − λ ± c = − b − λ ∓ c . 
{\displaystyle {a-\lambda _{\pm } \over c}=-{b-\lambda _{\mp } \over c}.} The eigenvectors can be written as: [ 1 η − a − λ − c η ] , [ − b − λ + c η 1 η ] {\displaystyle {\begin{bmatrix}{\frac {1}{\eta }}\\[1ex]-{\frac {a-\lambda _{-}}{c\eta }}\end{bmatrix}},\qquad {\begin{bmatrix}-{\frac {b-\lambda _{+}}{c\eta }}\\[1ex]{\frac {1}{\eta }}\end{bmatrix}}} for the two eigenvectors. Here η is a normalizing factor given by, η = 1 + ( a − λ − c ) 2 = 1 + ( b − λ + c ) 2 . {\displaystyle \eta ={\sqrt {1+\left({\frac {a-\lambda _{-}}{c}}\right)^{2}}}={\sqrt {1+\left({\frac {b-\lambda _{+}}{c}}\right)^{2}}}.} It is easily verified that the two eigenvectors are orthogonal to each other. ===== Construction of the orthogonal matrix ===== The orthogonal matrix is constructed by assigning the normalized eigenvectors as columns in the orthogonal matrix O = [ 1 η − b − λ + c η − a − λ − c η 1 η ] . {\displaystyle O={\begin{bmatrix}{\frac {1}{\eta }}&-{\frac {b-\lambda _{+}}{c\eta }}\\-{\frac {a-\lambda _{-}}{c\eta }}&{\frac {1}{\eta }}\end{bmatrix}}.} Note that det(O) = 1. If we define sin ⁡ ( θ ) = − a − λ − c η {\displaystyle \sin(\theta )=-{\frac {a-\lambda _{-}}{c\eta }}} then the orthogonal matrix can be written O = [ cos ⁡ ( θ ) − sin ⁡ ( θ ) sin ⁡ ( θ ) cos ⁡ ( θ ) ] {\displaystyle O={\begin{bmatrix}\cos(\theta )&-\sin(\theta )\\\sin(\theta )&\cos(\theta )\end{bmatrix}}} which is simply a rotation of the eigenvectors with the inverse: O − 1 = O T = [ cos ⁡ ( θ ) sin ⁡ ( θ ) − sin ⁡ ( θ ) cos ⁡ ( θ ) ] . {\displaystyle O^{-1}=O^{\text{T}}={\begin{bmatrix}\cos(\theta )&\sin(\theta )\\-\sin(\theta )&\cos(\theta )\end{bmatrix}}.} ===== Diagonal matrix ===== The diagonal matrix becomes D = O T A O = [ λ − 0 0 λ + ] {\displaystyle D=O^{\text{T}}AO={\begin{bmatrix}\lambda _{-}&0\\[1ex]0&\lambda _{+}\end{bmatrix}}} with eigenvectors [ 1 0 ] , [ 0 1 ] {\displaystyle {\begin{bmatrix}1\\0\end{bmatrix}},\qquad {\begin{bmatrix}0\\1\end{bmatrix}}} ===== Numerical example ===== A = [ 2 1 1 1 ] {\displaystyle A={\begin{bmatrix}2&1\\1&1\end{bmatrix}}} The eigenvalues are λ ± = 3 2 ± 5 2 . {\displaystyle \lambda _{\pm }={3 \over 2}\pm {{\sqrt {5}} \over 2}.} The eigenvectors are 1 η [ 1 − 1 2 − 5 2 ] , 1 η [ 1 2 + 5 2 1 ] {\displaystyle {1 \over \eta }{\begin{bmatrix}1\\[1ex]-{1 \over 2}-{{\sqrt {5}} \over 2}\end{bmatrix}},\qquad {1 \over \eta }{\begin{bmatrix}{1 \over 2}+{{\sqrt {5}} \over 2}\\[1ex]1\end{bmatrix}}} where η = 5 2 + 5 2 . 
{\displaystyle \eta ={\sqrt {{5 \over 2}+{{\sqrt {5}} \over 2}}}.} Then O = [ 1 η 1 η ( 1 2 + 5 2 ) 1 η ( − 1 2 − 5 2 ) 1 η ] O − 1 = [ 1 η 1 η ( − 1 2 − 5 2 ) 1 η ( 1 2 + 5 2 ) 1 η ] {\displaystyle {\begin{aligned}O&={\begin{bmatrix}{\frac {1}{\eta }}&{\frac {1}{\eta }}\left({1 \over 2}+{{\sqrt {5}} \over 2}\right)\\{\frac {1}{\eta }}\left(-{1 \over 2}-{{\sqrt {5}} \over 2}\right)&{1 \over \eta }\end{bmatrix}}\\O^{-1}&={\begin{bmatrix}{\frac {1}{\eta }}&{\frac {1}{\eta }}\left(-{1 \over 2}-{{\sqrt {5}} \over 2}\right)\\{\frac {1}{\eta }}\left({1 \over 2}+{{\sqrt {5}} \over 2}\right)&{\frac {1}{\eta }}\end{bmatrix}}\end{aligned}}} The diagonal matrix becomes D = O T A O = [ λ − 0 0 λ + ] = [ 3 2 − 5 2 0 0 3 2 + 5 2 ] {\displaystyle D=O^{\text{T}}AO={\begin{bmatrix}\lambda _{-}&0\\0&\lambda _{+}\end{bmatrix}}={\begin{bmatrix}{3 \over 2}-{{\sqrt {5}} \over 2}&0\\0&{3 \over 2}+{{\sqrt {5}} \over 2}\end{bmatrix}}} with eigenvectors [ 1 0 ] , [ 0 1 ] {\displaystyle {\begin{bmatrix}1\\0\end{bmatrix}},\qquad {\begin{bmatrix}0\\1\end{bmatrix}}} ==== Rescale the variables and integrate ==== With the diagonalization the integral can be written ∫ exp ⁡ ( − 1 2 x T A x ) d 2 x = ∫ exp ⁡ ( − 1 2 ∑ j = 1 2 λ j y j 2 ) d 2 y {\displaystyle \int \exp \left(-{\frac {1}{2}}x^{\text{T}}Ax\right)d^{2}x=\int \exp \left(-{\frac {1}{2}}\sum _{j=1}^{2}\lambda _{j}y_{j}^{2}\right)\,d^{2}y} where y = O T x . {\displaystyle y=O^{\text{T}}x.} Since the coordinate transformation is simply a rotation of coordinates the Jacobian determinant of the transformation is one yielding d 2 y = d 2 x {\displaystyle d^{2}y=d^{2}x} The integrations can now be performed: ∫ exp ⁡ ( − 1 2 x T A x ) d 2 x = ∫ exp ⁡ ( − 1 2 ∑ j = 1 2 λ j y j 2 ) d 2 y = ∏ j = 1 2 ( 2 π λ j ) 1 / 2 = ( ( 2 π ) 2 ∏ j = 1 2 λ j ) 1 / 2 = ( ( 2 π ) 2 det ( O − 1 A O ) ) 1 / 2 = ( ( 2 π ) 2 det ( A ) ) 1 / 2 {\displaystyle {\begin{aligned}\int \exp \left(-{\frac {1}{2}}x^{\mathsf {T}}Ax\right)d^{2}x={}&\int \exp \left(-{\frac {1}{2}}\sum _{j=1}^{2}\lambda _{j}y_{j}^{2}\right)d^{2}y\\[1ex]={}&\prod _{j=1}^{2}\left({2\pi \over \lambda _{j}}\right)^{1/2}\\={}&\left({(2\pi )^{2} \over \prod _{j=1}^{2}\lambda _{j}}\right)^{1/2}\\[1ex]={}&\left({(2\pi )^{2} \over \det {\left(O^{-1}AO\right)}}\right)^{1/2}\\[1ex]={}&\left({(2\pi )^{2} \over \det {\left(A\right)}}\right)^{1/2}\end{aligned}}} which is the advertised solution. === Integrals with complex and linear terms in multiple dimensions === With the two-dimensional example it is now easy to see the generalization to the complex plane and to multiple dimensions. 
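Before moving on, the two-dimensional result can be checked numerically. The sketch below (assuming Python with NumPy and SciPy; the integration box of ±10 is an arbitrary truncation of the rapidly decaying integrand) confirms the closed form for the numerical example A = [[2, 1], [1, 1]] above, both directly and through the eigenvalues:

import numpy as np
from scipy.integrate import dblquad

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])  # the numerical example above; det A = 1

def integrand(y, x):
    v = np.array([x, y])
    return np.exp(-0.5 * v @ A @ v)

# Direct two-dimensional quadrature (the Gaussian tails beyond |x|, |y| = 10
# are negligible at double precision)
numeric, _ = dblquad(integrand, -10, 10, lambda x: -10, lambda x: 10)
exact = np.sqrt((2*np.pi)**2 / np.linalg.det(A))
assert np.isclose(numeric, exact)

# The eigenvalue route used in the text: each decoupled coordinate
# contributes a factor sqrt(2*pi/lambda_j), and the eigenvalues multiply
# to det A.
lam = np.linalg.eigvalsh(A)   # (3 -/+ sqrt(5))/2, as computed above
assert np.isclose(np.prod(np.sqrt(2*np.pi/lam)), exact)

Here the exact value is simply 2π, because det A = 1.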
==== Integrals with a linear term in the argument ==== ∫ exp ⁡ ( − 1 2 x T ⋅ A ⋅ x + J T ⋅ x ) d x = ( 2 π ) n det A exp ⁡ ( 1 2 J T ⋅ A − 1 ⋅ J ) {\displaystyle \int \exp \left(-{\frac {1}{2}}x^{T}\cdot A\cdot x+J^{T}\cdot x\right)dx={\sqrt {\frac {(2\pi )^{n}}{\det A}}}\exp \left({1 \over 2}J^{T}\cdot A^{-1}\cdot J\right)} ==== Integrals with an imaginary linear term ==== ∫ exp ⁡ ( − 1 2 x T ⋅ A ⋅ x + i J T ⋅ x ) d x = ( 2 π ) n det A exp ⁡ ( − 1 2 J T ⋅ A − 1 ⋅ J ) {\displaystyle \int \exp \left(-{\frac {1}{2}}x^{T}\cdot A\cdot x+iJ^{T}\cdot x\right)dx={\sqrt {\frac {(2\pi )^{n}}{\det A}}}\exp \left(-{1 \over 2}J^{T}\cdot A^{-1}\cdot J\right)} ==== Integrals with a complex quadratic term ==== ∫ exp ⁡ ( i 2 x T ⋅ A ⋅ x + i J T ⋅ x ) d x = ( 2 π i ) n det A exp ⁡ ( − i 2 J T ⋅ A − 1 ⋅ J ) {\displaystyle \int \exp \left({\frac {i}{2}}x^{T}\cdot A\cdot x+iJ^{T}\cdot x\right)dx={\sqrt {\frac {(2\pi i)^{n}}{\det A}}}\exp \left(-{i \over 2}J^{T}\cdot A^{-1}\cdot J\right)} === Integrals with differential operators in the argument === As an example consider the integral: 21‒22  ∫ exp ⁡ [ ∫ d 4 x ( − 1 2 φ A ^ φ + J φ ) ] D φ {\displaystyle \int \exp \left[\int d^{4}x\left(-{\frac {1}{2}}\varphi {\hat {A}}\varphi +J\varphi \right)\right]D\varphi } where A ^ {\displaystyle {\hat {A}}} is a differential operator with φ {\displaystyle \varphi } and J functions of spacetime, and D φ {\displaystyle D\varphi } indicates integration over all possible paths. In analogy with the matrix version of this integral the solution is ∫ exp ⁡ [ ∫ d 4 x ( − 1 2 φ A ^ φ + J φ ) ] D φ ∝ exp ⁡ ( 1 2 ∫ d 4 x d 4 y J ( x ) D ( x − y ) J ( y ) ) {\displaystyle \int \exp \left[\int d^{4}x\left(-{\frac {1}{2}}\varphi {\hat {A}}\varphi +J\varphi \right)\right]D\varphi \;\propto \;\exp \left({1 \over 2}\int d^{4}x\;d^{4}yJ(x)D(x-y)J(y)\right)} where A ^ D ( x − y ) = δ 4 ( x − y ) {\displaystyle {\hat {A}}D(x-y)=\delta ^{4}(x-y)} and D(x − y), called the propagator, is the inverse of A ^ {\displaystyle {\hat {A}}} , and δ 4 ( x − y ) {\displaystyle \delta ^{4}(x-y)} is the Dirac delta function. Similar arguments yield ∫ exp ⁡ [ ∫ d 4 x ( − 1 2 φ A ^ φ + i J φ ) ] D φ ∝ exp ⁡ ( − 1 2 ∫ d 4 x d 4 y J ( x ) D ( x − y ) J ( y ) ) , {\displaystyle \int \exp \left[\int d^{4}x\left(-{\frac {1}{2}}\varphi {\hat {A}}\varphi +iJ\varphi \right)\right]D\varphi \;\propto \;\exp \left(-{1 \over 2}\int d^{4}x\;d^{4}yJ(x)D(x-y)J(y)\right),} and ∫ exp ⁡ [ i ∫ d 4 x ( 1 2 φ A ^ φ + J φ ) ] D φ ∝ exp ⁡ ( − i 2 ∫ d 4 x d 4 y J ( x ) D ( x − y ) J ( y ) ) . {\displaystyle \int \exp \left[i\int d^{4}x\left({\frac {1}{2}}\varphi {\hat {A}}\varphi +J\varphi \right)\right]D\varphi \;\propto \;\exp \left(-{i \over 2}\int d^{4}x\;d^{4}yJ(x)D(x-y)J(y)\right).} See Path-integral formulation of virtual-particle exchange for an application of this integral. == Integrals that can be approximated by the method of steepest descent == In quantum field theory n-dimensional integrals of the form ∫ − ∞ ∞ exp ⁡ ( − 1 ℏ f ( q ) ) d n q {\displaystyle \int _{-\infty }^{\infty }\exp \left(-{1 \over \hbar }f(q)\right)d^{n}q} appear often. Here ℏ {\displaystyle \hbar } is the reduced Planck constant and f is a function with a positive minimum at q = q 0 {\displaystyle q=q_{0}} . These integrals can be approximated by the method of steepest descent. For small values of the Planck constant, f can be expanded about its minimum ∫ − ∞ ∞ exp ⁡ [ − 1 ℏ ( f ( q 0 ) + 1 2 ( q − q 0 ) T f ′ ′ ( q 0 ) ( q − q 0 ) + ⋯ ) ] d n q . 
{\displaystyle \int _{-\infty }^{\infty }\exp \left[-{1 \over \hbar }\left(f\left(q_{0}\right)+{1 \over 2}\left(q-q_{0}\right)^{\text{T}}f^{\prime \prime }\left(q_{0}\right)\left(q-q_{0}\right)+\cdots \right)\right]d^{n}q.} Here f ′ ′ {\displaystyle f^{\prime \prime }} is the n by n matrix of second derivatives evaluated at the minimum of the function. If we neglect higher order terms this integral can be integrated explicitly. ∫ − ∞ ∞ exp ⁡ [ − 1 ℏ ( f ( q ) ) ] d n q ≈ exp ⁡ [ − 1 ℏ ( f ( q 0 ) ) ] ( 2 π ℏ ) n det f ′ ′ . {\displaystyle \int _{-\infty }^{\infty }\exp \left[-{1 \over \hbar }(f(q))\right]d^{n}q\approx \exp \left[-{1 \over \hbar }\left(f\left(q_{0}\right)\right)\right]{\sqrt {(2\pi \hbar )^{n} \over \det f^{\prime \prime }}}.} == Integrals that can be approximated by the method of stationary phase == A common integral is a path integral of the form ∫ exp ⁡ ( i ℏ S ( q , q ˙ ) ) D q {\displaystyle \int \exp \left({i \over \hbar }S\left(q,{\dot {q}}\right)\right)Dq} where S ( q , q ˙ ) {\displaystyle S\left(q,{\dot {q}}\right)} is the classical action and the integral is over all possible paths that a particle may take. In the limit of small ℏ {\displaystyle \hbar } the integral can be evaluated in the stationary phase approximation. In this approximation the integral is over the path in which the action is stationary. Therefore, this approximation recovers the classical limit of mechanics. == Fourier integrals == === Dirac delta distribution === The Dirac delta distribution in spacetime can be written as a Fourier transform: 23  ∫ d 4 k ( 2 π ) 4 exp ⁡ ( i k ( x − y ) ) = δ 4 ( x − y ) . {\displaystyle \int {\frac {d^{4}k}{(2\pi )^{4}}}\exp(ik(x-y))=\delta ^{4}(x-y).} In general, for any dimension N {\displaystyle N} ∫ d N k ( 2 π ) N exp ⁡ ( i k ( x − y ) ) = δ N ( x − y ) . {\displaystyle \int {\frac {d^{N}k}{(2\pi )^{N}}}\exp(ik(x-y))=\delta ^{N}(x-y).} === Fourier integrals of forms of the Coulomb potential === ==== Laplacian of 1/r ==== While not an integral, the identity in three-dimensional Euclidean space − 1 4 π ∇ 2 ( 1 r ) = δ ( r ) {\displaystyle -{1 \over 4\pi }\nabla ^{2}\left({1 \over r}\right)=\delta \left(\mathbf {r} \right)} where r 2 = r ⋅ r {\displaystyle r^{2}=\mathbf {r} \cdot \mathbf {r} } is a consequence of Gauss's theorem and can be used to derive integral identities. For an example see Longitudinal and transverse vector fields. This identity implies that the Fourier integral representation of 1/r is ∫ d 3 k ( 2 π ) 3 exp ⁡ ( i k ⋅ r ) k 2 = 1 4 π r . {\displaystyle \int {\frac {d^{3}k}{(2\pi )^{3}}}{\exp \left(i\mathbf {k} \cdot \mathbf {r} \right) \over k^{2}}={1 \over 4\pi r}.} ==== Yukawa potential: the Coulomb potential with mass ==== The Yukawa potential in three dimensions can be represented as an integral over a Fourier transform: 26, 29  ∫ d 3 k ( 2 π ) 3 exp ⁡ ( i k ⋅ r ) k 2 + m 2 = e − m r 4 π r {\displaystyle \int {\frac {d^{3}k}{(2\pi )^{3}}}{\exp \left(i\mathbf {k} \cdot \mathbf {r} \right) \over k^{2}+m^{2}}={e^{-mr} \over 4\pi r}} where r 2 = r ⋅ r , k 2 = k ⋅ k . {\displaystyle r^{2}=\mathbf {r} \cdot \mathbf {r} ,\qquad k^{2}=\mathbf {k} \cdot \mathbf {k} .} See Static forces and virtual-particle exchange for an application of this integral. In the small m limit the integral reduces to 1/4πr. 
To derive this result note: ∫ d 3 k ( 2 π ) 3 exp ⁡ ( i k ⋅ r ) k 2 + m 2 = ∫ 0 ∞ k 2 d k ( 2 π ) 2 ∫ − 1 1 d u e i k r u k 2 + m 2 = 2 r ∫ 0 ∞ k d k ( 2 π ) 2 sin ⁡ ( k r ) k 2 + m 2 = 1 i r ∫ − ∞ ∞ k d k ( 2 π ) 2 e i k r k 2 + m 2 = 1 i r ∫ − ∞ ∞ k d k ( 2 π ) 2 e i k r ( k + i m ) ( k − i m ) = 1 i r 2 π i ( 2 π ) 2 i m 2 i m e − m r = 1 4 π r e − m r {\displaystyle {\begin{aligned}\int {\frac {d^{3}k}{(2\pi )^{3}}}{\frac {\exp \left(i\mathbf {k} \cdot \mathbf {r} \right)}{k^{2}+m^{2}}}={}&\int _{0}^{\infty }{\frac {k^{2}dk}{(2\pi )^{2}}}\int _{-1}^{1}du{e^{ikru} \over k^{2}+m^{2}}\\[1ex]={}&{2 \over r}\int _{0}^{\infty }{\frac {kdk}{(2\pi )^{2}}}{\sin(kr) \over k^{2}+m^{2}}\\[1ex]={}&{1 \over ir}\int _{-\infty }^{\infty }{\frac {kdk}{(2\pi )^{2}}}{e^{ikr} \over k^{2}+m^{2}}\\[1ex]={}&{1 \over ir}\int _{-\infty }^{\infty }{\frac {kdk}{(2\pi )^{2}}}{e^{ikr} \over (k+im)(k-im)}\\[1ex]={}&{1 \over ir}{\frac {2\pi i}{(2\pi )^{2}}}{\frac {im}{2im}}e^{-mr}\\[1ex]={}&{\frac {1}{4\pi r}}e^{-mr}\end{aligned}}} ==== Modified Coulomb potential with mass ==== ∫ d 3 k ( 2 π ) 3 ( k ^ ⋅ r ^ ) 2 exp ⁡ ( i k ⋅ r ) k 2 + m 2 = e − m r 4 π r [ 1 + 2 m r − 2 ( m r ) 2 ( e m r − 1 ) ] {\displaystyle \int {\frac {d^{3}k}{(2\pi )^{3}}}\left(\mathbf {\hat {k}} \cdot \mathbf {\hat {r}} \right)^{2}{\frac {\exp \left(i\mathbf {k} \cdot \mathbf {r} \right)}{k^{2}+m^{2}}}={\frac {e^{-mr}}{4\pi r}}\left[1+{\frac {2}{mr}}-{\frac {2}{(mr)^{2}}}\left(e^{mr}-1\right)\right]} where the hat indicates a unit vector in three dimensional space. The derivation of this result is as follows: ∫ d 3 k ( 2 π ) 3 ( k ^ ⋅ r ^ ) 2 exp ⁡ ( i k ⋅ r ) k 2 + m 2 = ∫ 0 ∞ k 2 d k ( 2 π ) 2 ∫ − 1 1 d u u 2 e i k r u k 2 + m 2 = 2 ∫ 0 ∞ k 2 d k ( 2 π ) 2 1 k 2 + m 2 [ 1 k r sin ⁡ ( k r ) + 2 ( k r ) 2 cos ⁡ ( k r ) − 2 ( k r ) 3 sin ⁡ ( k r ) ] = e − m r 4 π r [ 1 + 2 m r − 2 ( m r ) 2 ( e m r − 1 ) ] {\displaystyle {\begin{aligned}&\int {\frac {d^{3}k}{(2\pi )^{3}}}\left(\mathbf {\hat {k}} \cdot \mathbf {\hat {r}} \right)^{2}{\frac {\exp \left(i\mathbf {k} \cdot \mathbf {r} \right)}{k^{2}+m^{2}}}\\[1ex]&=\int _{0}^{\infty }{\frac {k^{2}dk}{(2\pi )^{2}}}\int _{-1}^{1}du\ u^{2}{\frac {e^{ikru}}{k^{2}+m^{2}}}\\[1ex]&=2\int _{0}^{\infty }{\frac {k^{2}dk}{(2\pi )^{2}}}{\frac {1}{k^{2}+m^{2}}}\left[{\frac {1}{kr}}\sin(kr)+{\frac {2}{(kr)^{2}}}\cos(kr)-{\frac {2}{(kr)^{3}}}\sin(kr)\right]\\[1ex]&={\frac {e^{-mr}}{4\pi r}}\left[1+{\frac {2}{mr}}-{\frac {2}{(mr)^{2}}}\left(e^{mr}-1\right)\right]\end{aligned}}} Note that in the small m limit the term in the brackets behaves as − m r / 3 {\displaystyle -mr/3} , so the integral tends to − m / ( 12 π ) {\displaystyle -m/(12\pi )} and vanishes as m goes to zero; this is consistent with the small m limit of the longitudinal potential below. ==== Longitudinal potential with mass ==== ∫ d 3 k ( 2 π ) 3 k ^ k ^ exp ⁡ ( i k ⋅ r ) k 2 + m 2 = 1 2 e − m r 4 π r ( [ 1 − r ^ r ^ ] + { 1 + 2 m r − 2 ( m r ) 2 ( e m r − 1 ) } [ 1 + r ^ r ^ ] ) {\displaystyle \int {\frac {d^{3}k}{(2\pi )^{3}}}\mathbf {\hat {k}} \mathbf {\hat {k}} {\frac {\exp \left(i\mathbf {k} \cdot \mathbf {r} \right)}{k^{2}+m^{2}}}={1 \over 2}{\frac {e^{-mr}}{4\pi r}}\left(\left[\mathbf {1} -\mathbf {\hat {r}} \mathbf {\hat {r}} \right]+\left\{1+{\frac {2}{mr}}-{2 \over (mr)^{2}}\left(e^{mr}-1\right)\right\}\left[\mathbf {1} +\mathbf {\hat {r}} \mathbf {\hat {r}} \right]\right)} where the hat indicates a unit vector in three dimensional space. 
The derivation for this result is as follows: ∫ d 3 k ( 2 π ) 3 k ^ k ^ exp ⁡ ( i k ⋅ r ) k 2 + m 2 = ∫ d 3 k ( 2 π ) 3 [ ( k ^ ⋅ r ^ ) 2 r ^ r ^ + ( k ^ ⋅ θ ^ ) 2 θ ^ θ ^ + ( k ^ ⋅ ϕ ^ ) 2 ϕ ^ ϕ ^ ] exp ⁡ ( i k ⋅ r ) k 2 + m 2 = e − m r 4 π r { 1 + 2 m r − 2 ( m r ) 2 ( e m r − 1 ) } { 1 − 1 2 [ 1 − r ^ r ^ ] } + ∫ 0 ∞ k 2 d k ( 2 π ) 2 ∫ − 1 1 d u e i k r u k 2 + m 2 1 2 [ 1 − r ^ r ^ ] = 1 2 e − m r 4 π r [ 1 − r ^ r ^ ] + e − m r 4 π r { 1 + 2 m r − 2 ( m r ) 2 ( e m r − 1 ) } { 1 2 [ 1 + r ^ r ^ ] } = 1 2 e − m r 4 π r ( [ 1 − r ^ r ^ ] + { 1 + 2 m r − 2 ( m r ) 2 ( e m r − 1 ) } [ 1 + r ^ r ^ ] ) {\displaystyle {\begin{aligned}&\int {\frac {d^{3}k}{(2\pi )^{3}}}\mathbf {\hat {k}} \mathbf {\hat {k}} {\frac {\exp \left(i\mathbf {k} \cdot \mathbf {r} \right)}{k^{2}+m^{2}}}\\[1ex]&=\int {\frac {d^{3}k}{(2\pi )^{3}}}\left[\left(\mathbf {\hat {k}} \cdot \mathbf {\hat {r}} \right)^{2}\mathbf {\hat {r}} \mathbf {\hat {r}} +\left(\mathbf {\hat {k}} \cdot \mathbf {\hat {\theta }} \right)^{2}\mathbf {\hat {\theta }} \mathbf {\hat {\theta }} +\left(\mathbf {\hat {k}} \cdot \mathbf {\hat {\phi }} \right)^{2}\mathbf {\hat {\phi }} \mathbf {\hat {\phi }} \right]{\frac {\exp \left(i\mathbf {k} \cdot \mathbf {r} \right)}{k^{2}+m^{2}}}\\[1ex]&={\frac {e^{-mr}}{4\pi r}}\left\{1+{\frac {2}{mr}}-{2 \over (mr)^{2}}\left(e^{mr}-1\right)\right\}\left\{\mathbf {1} -{1 \over 2}\left[\mathbf {1} -\mathbf {\hat {r}} \mathbf {\hat {r}} \right]\right\}+\int _{0}^{\infty }{\frac {k^{2}dk}{(2\pi )^{2}}}\int _{-1}^{1}du{\frac {e^{ikru}}{k^{2}+m^{2}}}{1 \over 2}\left[\mathbf {1} -\mathbf {\hat {r}} \mathbf {\hat {r}} \right]\\[1ex]&={1 \over 2}{\frac {e^{-mr}}{4\pi r}}\left[\mathbf {1} -\mathbf {\hat {r}} \mathbf {\hat {r}} \right]+{e^{-mr} \over 4\pi r}\left\{1+{\frac {2}{mr}}-{2 \over (mr)^{2}}\left(e^{mr}-1\right)\right\}\left\{{1 \over 2}\left[\mathbf {1} +\mathbf {\hat {r}} \mathbf {\hat {r}} \right]\right\}\\[1ex]&={1 \over 2}{\frac {e^{-mr}}{4\pi r}}\left(\left[\mathbf {1} -\mathbf {\hat {r}} \mathbf {\hat {r}} \right]+\left\{1+{\frac {2}{mr}}-{2 \over (mr)^{2}}\left(e^{mr}-1\right)\right\}\left[\mathbf {1} +\mathbf {\hat {r}} \mathbf {\hat {r}} \right]\right)\end{aligned}}} Note that in the small m limit the integral reduces to 1 2 1 4 π r [ 1 − r ^ r ^ ] . {\displaystyle {1 \over 2}{1 \over 4\pi r}\left[\mathbf {1} -\mathbf {\hat {r}} \mathbf {\hat {r}} \right].} ==== Transverse potential with mass ==== ∫ d 3 k ( 2 π ) 3 [ 1 − k ^ k ^ ] exp ⁡ ( i k ⋅ r ) k 2 + m 2 = 1 2 e − m r 4 π r { 2 ( m r ) 2 ( e m r − 1 ) − 2 m r } [ 1 + r ^ r ^ ] {\displaystyle \int {\frac {d^{3}k}{(2\pi )^{3}}}\left[\mathbf {1} -\mathbf {\hat {k}} \mathbf {\hat {k}} \right]{\exp \left(i\mathbf {k} \cdot \mathbf {r} \right) \over k^{2}+m^{2}}={1 \over 2}{e^{-mr} \over 4\pi r}\left\{{2 \over (mr)^{2}}\left(e^{mr}-1\right)-{2 \over mr}\right\}\left[\mathbf {1} +\mathbf {\hat {r}} \mathbf {\hat {r}} \right]} In the small mr limit the integral goes to 1 2 1 4 π r [ 1 + r ^ r ^ ] . {\displaystyle {1 \over 2}{1 \over 4\pi r}\left[\mathbf {1} +\mathbf {\hat {r}} \mathbf {\hat {r}} \right].} For large distance, the integral falls off as the inverse cube of r 1 4 π m 2 r 3 [ 1 + r ^ r ^ ] . {\displaystyle {\frac {1}{4\pi m^{2}r^{3}}}\left[\mathbf {1} +\mathbf {\hat {r}} \mathbf {\hat {r}} \right].} For applications of this integral see Darwin Lagrangian and Darwin interaction in a vacuum. ==== Angular integration in cylindrical coordinates ==== There are two important integrals. 
The angular integration of an exponential in cylindrical coordinates can be written in terms of Bessel functions of the first kind: 113  ∫ 0 2 π d φ 2 π exp ⁡ ( i p cos ⁡ ( φ ) ) = J 0 ( p ) {\displaystyle \int _{0}^{2\pi }{d\varphi \over 2\pi }\exp \left(ip\cos(\varphi )\right)=J_{0}(p)} and ∫ 0 2 π d φ 2 π cos ⁡ ( φ ) exp ⁡ ( i p cos ⁡ ( φ ) ) = i J 1 ( p ) . {\displaystyle \int _{0}^{2\pi }{d\varphi \over 2\pi }\cos(\varphi )\exp \left(ip\cos(\varphi )\right)=iJ_{1}(p).} For applications of these integrals see Magnetic interaction between current loops in a simple plasma or electron gas. == Bessel functions == === Integration of the cylindrical propagator with mass === ==== First power of a Bessel function ==== ∫ 0 ∞ k d k k 2 + m 2 J 0 ( k r ) = K 0 ( m r ) . {\displaystyle \int _{0}^{\infty }{k\;dk \over k^{2}+m^{2}}J_{0}\left(kr\right)=K_{0}(mr).} See Abramowitz and Stegun.: §11.4.44  For m r ≪ 1 {\displaystyle mr\ll 1} , we have: 116  K 0 ( m r ) → − ln ⁡ ( m r 2 ) − 0.5772. {\displaystyle K_{0}(mr)\to -\ln \left({mr \over 2}\right)-0.5772.} For an application of this integral see Two line charges embedded in a plasma or electron gas. ==== Squares of Bessel functions ==== The integration of the propagator in cylindrical coordinates is ∫ 0 ∞ k d k k 2 + m 2 J 1 2 ( k r ) = I 1 ( m r ) K 1 ( m r ) . {\displaystyle \int _{0}^{\infty }{k\;dk \over k^{2}+m^{2}}J_{1}^{2}(kr)=I_{1}(mr)K_{1}(mr).} For small mr the integral becomes ∫ 0 ∞ k d k k 2 + m 2 J 1 2 ( k r ) → 1 2 [ 1 − 1 8 ( m r ) 2 ] . {\displaystyle \int _{0}^{\infty }{k\;dk \over k^{2}+m^{2}}J_{1}^{2}(kr)\to {1 \over 2}\left[1-{1 \over 8}(mr)^{2}\right].} For large mr the integral becomes ∫ 0 ∞ k d k k 2 + m 2 J 1 2 ( k r ) → 1 2 ( 1 m r ) . {\displaystyle \int _{0}^{\infty }{k\;dk \over k^{2}+m^{2}}J_{1}^{2}(kr)\to {1 \over 2}\left({1 \over mr}\right).} For applications of this integral see Magnetic interaction between current loops in a simple plasma or electron gas. In general, ∫ 0 ∞ k d k k 2 + m 2 J ν 2 ( k r ) = I ν ( m r ) K ν ( m r ) ℜ ( ν ) > − 1. {\displaystyle \int _{0}^{\infty }{k\;dk \over k^{2}+m^{2}}J_{\nu }^{2}(kr)=I_{\nu }(mr)K_{\nu }(mr)\qquad \Re (\nu )>-1.} === Integration over a magnetic wave function === The two-dimensional integral over a magnetic wave function is: §11.4.28  2 a 2 n + 2 n ! ∫ 0 ∞ d r r 2 n + 1 exp ⁡ ( − a 2 r 2 ) J 0 ( k r ) = M ( n + 1 , 1 , − k 2 4 a 2 ) . {\displaystyle {2a^{2n+2} \over n!}\int _{0}^{\infty }{dr}\;r^{2n+1}\exp \left(-a^{2}r^{2}\right)J_{0}(kr)=M\left(n+1,1,-{k^{2} \over 4a^{2}}\right).} Here, M is a confluent hypergeometric function. For an application of this integral see Charge density spread over a wave function. A numerical spot-check of the cylindrical-propagator identities is sketched below, after the references. == See also == Relation between Schrödinger's equation and the path integral formulation of quantum mechanics == References ==
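As promised above, the cylindrical-propagator identities can be spot-checked numerically. In the sketch below (assuming Python with NumPy and SciPy; the values m = 0.8 and r = 1.3, the cutoff at k = 400, and the tolerances are arbitrary choices) the Bessel integrands oscillate and decay slowly, so each k-integral is truncated at a large cutoff; the neglected tails stay below the stated tolerance:

import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1, k0, k1, i1

m, r = 0.8, 1.3  # arbitrary positive test values

# First power of a Bessel function: the integral equals K0(m r)
lhs, _ = quad(lambda k: k * j0(k*r) / (k**2 + m**2), 0, 400, limit=2000)
assert np.isclose(lhs, k0(m*r), atol=1e-3)

# Square of a Bessel function: the integral equals I1(m r) K1(m r)
lhs, _ = quad(lambda k: k * j1(k*r)**2 / (k**2 + m**2), 0, 400, limit=2000)
assert np.isclose(lhs, i1(m*r) * k1(m*r), atol=1e-3)

# Small-argument expansion K0(x) ~ -ln(x/2) - 0.5772 quoted above
x = 1e-3
assert np.isclose(k0(x), -np.log(x/2) - 0.5772, atol=1e-3)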
Wikipedia/Common_integrals_in_quantum_field_theory
Calculus (from Latin calculus meaning ‘pebble’, plural calculī) in its most general sense is any method or system of calculation. Calculus may refer to: == Biology == Calculus (spider), a genus of the family Oonopidae Caseolus calculus, a genus and species of small land snails == Medicine == Calculus (dental), deposits of calcium phosphate salts on teeth, also known as tartar Calculus (medicine), a stone formed in the body such as a gall stone or kidney stone == Mathematics == Infinitesimal calculus (or simply calculus), which investigates motion and rates of change Differential calculus Integral calculus Non-standard calculus, an approach to infinitesimal calculus using Robinson's infinitesimals Calculus of sums and differences (difference operator), also called the finite-difference calculus, a discrete analogue of "calculus" Functional calculus, a way to apply various types of functions to operators Schubert calculus, a branch of algebraic geometry Tensor calculus (also called tensor analysis), a generalization of vector calculus that encompasses tensor fields Vector calculus (also called vector analysis), comprising specialized notations for multivariable analysis of vectors in an inner-product space Matrix calculus, a specialized notation for multivariable calculus over spaces of matrices Numerical calculus (also called numerical analysis), the study of numerical approximations Umbral calculus, the combinatorics of certain operations on polynomials The calculus of variations, a field of study that deals with extremizing functionals Itô calculus, an extension of calculus to stochastic processes. == Logic == Logical calculus, a formal system that defines a language and rules to derive an expression from premises Propositional calculus, specifies the rules of inference governing the logic of propositions Predicate calculus, specifies the rules of inference governing the logic of predicates Proof calculus, a framework for expressing systems of logical inference Sequent calculus, a proof calculus for first-order logic Cirquent calculus, a proof calculus based on graph-style structures called cirquents Situation calculus, a framework for describing relations within a dynamic system Event calculus, a model for reasoning about events and their effects Fluent calculus, a model for describing relations within a dynamic system Calculus of relations, the manipulation of binary relations with the algebra of sets, composition of relations, and transpose relations Epsilon calculus, a logical language which replaces quantifiers with the epsilon operator Fitch-style calculus, a method for constructing formal proofs used in first-order logic Modal μ-calculus, a common temporal logic used by formal verification methods such as model checking == Physics == Bondi k-calculus, a method used in relativity theory Jones calculus, used in optics to describe polarized light Mueller calculus, used in optics to handle Stokes vectors, which describe the polarization of incoherent light Operational calculus, used to solve differential equations arising in electronics == Formal language == Lambda calculus, a formulation of the theory of reflexive functions that has deep connections to computational theory Kappa calculus, a reformulation of the first-order fragment of typed lambda calculus Rho calculus, introduced as a general means to uniformly integrate rewriting into lambda calculus Process calculus, a set of approaches to formulating formal models of concurrent systems Ambient calculus, a family of models for 
concurrent systems based on the concept of agent mobility Join calculus, a theoretical model for the design of distributed programming languages π-calculus, a formulation of the theory of concurrent, communicating processes Relational calculus, a calculus for the relational data model Domain relational calculus Tuple calculus Refinement calculus, a way of refining models of programs into efficient programs == Other meanings == a calculus (pl. calculi), a Roman counting token Battlefield calculus, military calculation of all known factors into the decision-making and action-planning process Calculus of negligence, a legal standard in U.S. tort law to determine if a duty of care has been breached Felicific calculus, a procedure to evaluate the benefit of an action, according to Bentham Professor Calculus, a fictional character in the comic-strip series The Adventures of Tintin == See also == List of calculus topics
Wikipedia/Calculus_(disambiguation)
Truncated Newton methods, which originated in a paper by Ron Dembo and Trond Steihaug and are also known as Hessian-free optimization, are a family of optimization algorithms designed for optimizing non-linear functions with large numbers of independent variables. A truncated Newton method consists of repeated application of an iterative optimization algorithm to approximately solve Newton's equations, to determine an update to the function's parameters. The inner solver is truncated, i.e., run for only a limited number of iterations. It follows that, for truncated Newton methods to work, the inner solver needs to produce a good approximation in a limited number of iterations; conjugate gradient has been suggested and evaluated as a candidate inner loop (a schematic sketch of this scheme follows the references below). Another prerequisite is good preconditioning for the inner algorithm. == References == == Further reading == Grippo, L.; Lampariello, F.; Lucidi, S. (1989). "A Truncated Newton Method with Nonmonotone Line Search for Unconstrained Optimization". J. Optimization Theory and Applications. 60 (3): 401–419. CiteSeerX 10.1.1.455.7495. doi:10.1007/BF00940345. S2CID 18990650. Nash, Stephen G.; Nocedal, Jorge (1991). "A numerical study of the limited memory BFGS method and the truncated-Newton method for large scale optimization". SIAM J. Optim. 1 (3): 358–372. CiteSeerX 10.1.1.474.3400. doi:10.1137/0801023.
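A minimal sketch of the scheme (in Python with NumPy; the function names, the fixed iteration budgets, the negative-curvature test, and the bare update without a line search are simplifying assumptions of this illustration, not details taken from the sources above). The inner conjugate-gradient loop approximately solves the Newton system H d = −g and is truncated after a fixed number of iterations; only Hessian-vector products are needed, so the Hessian is never formed explicitly, which is what makes the approach "Hessian-free":

import numpy as np

def truncated_newton(grad, hess_vec, x0, outer_iters=50, inner_iters=10, tol=1e-8):
    """grad(x) returns the gradient; hess_vec(x, v) returns H(x) @ v."""
    x = x0.astype(float)
    for _ in range(outer_iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Truncated conjugate gradient on H d = -g, starting from d = 0.
        d = np.zeros_like(x)
        res = -g.copy()            # residual of the Newton system at d = 0
        p = res.copy()
        rs = res @ res
        for _ in range(inner_iters):
            Hp = hess_vec(x, p)
            pHp = p @ Hp
            if pHp <= 0:           # negative curvature: stop the inner loop
                break
            alpha = rs / pHp
            d += alpha * p
            res -= alpha * Hp
            rs_new = res @ res
            if np.sqrt(rs_new) < tol:
                break
            p = res + (rs_new / rs) * p
            rs = rs_new
        if not d.any():
            d = -g                 # fall back to a steepest-descent step
        x = x + d                  # a line search would normally safeguard this
    return x

# Usage on a convex quadratic f(x) = x^T A x / 2 - b^T x (a toy problem,
# whose Hessian is simply A):
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x_star = truncated_newton(lambda x: A @ x - b, lambda x, v: A @ v, np.zeros(2))
assert np.allclose(A @ x_star, b)

On large problems the Hessian-vector products would instead come from automatic differentiation or a finite difference of gradients, and the limited inner iteration count (here a fixed 10) is the truncation that gives the method its name.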
Wikipedia/Truncated_Newton_method
In continuum mechanics, the infinitesimal strain theory is a mathematical approach to the description of the deformation of a solid body in which the displacements of the material particles are assumed to be much smaller (indeed, infinitesimally smaller) than any relevant dimension of the body; so that its geometry and the constitutive properties of the material (such as density and stiffness) at each point of space can be assumed to be unchanged by the deformation. With this assumption, the equations of continuum mechanics are considerably simplified. This approach may also be called small deformation theory, small displacement theory, or small displacement-gradient theory. It is contrasted with the finite strain theory, where the opposite assumption is made. The infinitesimal strain theory is commonly adopted in civil and mechanical engineering for the stress analysis of structures built from relatively stiff elastic materials like concrete and steel, since a common goal in the design of such structures is to minimize their deformation under typical loads. However, this approximation demands caution in the case of thin flexible bodies, such as rods, plates, and shells, which are susceptible to significant rotations that make the results of the linearized theory unreliable. == Infinitesimal strain tensor == For infinitesimal deformations of a continuum body, in which the displacement gradient tensor (2nd order tensor) is small compared to unity, i.e. ‖ ∇ u ‖ ≪ 1 {\displaystyle \|\nabla \mathbf {u} \|\ll 1} , it is possible to perform a geometric linearization of any one of the finite strain tensors used in finite strain theory, e.g. the Lagrangian finite strain tensor E {\displaystyle \mathbf {E} } , and the Eulerian finite strain tensor e {\displaystyle \mathbf {e} } . In such a linearization, the non-linear or second-order terms of the finite strain tensor are neglected. 
Thus we have E = 1 2 ( ∇ X u + ( ∇ X u ) T + ( ∇ X u ) T ∇ X u ) ≈ 1 2 ( ∇ X u + ( ∇ X u ) T ) {\displaystyle \mathbf {E} ={\frac {1}{2}}\left(\nabla _{\mathbf {X} }\mathbf {u} +(\nabla _{\mathbf {X} }\mathbf {u} )^{T}+(\nabla _{\mathbf {X} }\mathbf {u} )^{T}\nabla _{\mathbf {X} }\mathbf {u} \right)\approx {\frac {1}{2}}\left(\nabla _{\mathbf {X} }\mathbf {u} +(\nabla _{\mathbf {X} }\mathbf {u} )^{T}\right)} or E K L = 1 2 ( ∂ U K ∂ X L + ∂ U L ∂ X K + ∂ U M ∂ X K ∂ U M ∂ X L ) ≈ 1 2 ( ∂ U K ∂ X L + ∂ U L ∂ X K ) {\displaystyle E_{KL}={\frac {1}{2}}\left({\frac {\partial U_{K}}{\partial X_{L}}}+{\frac {\partial U_{L}}{\partial X_{K}}}+{\frac {\partial U_{M}}{\partial X_{K}}}{\frac {\partial U_{M}}{\partial X_{L}}}\right)\approx {\frac {1}{2}}\left({\frac {\partial U_{K}}{\partial X_{L}}}+{\frac {\partial U_{L}}{\partial X_{K}}}\right)} and e = 1 2 ( ∇ x u + ( ∇ x u ) T − ∇ x u ( ∇ x u ) T ) ≈ 1 2 ( ∇ x u + ( ∇ x u ) T ) {\displaystyle \mathbf {e} ={\frac {1}{2}}\left(\nabla _{\mathbf {x} }\mathbf {u} +(\nabla _{\mathbf {x} }\mathbf {u} )^{T}-\nabla _{\mathbf {x} }\mathbf {u} (\nabla _{\mathbf {x} }\mathbf {u} )^{T}\right)\approx {\frac {1}{2}}\left(\nabla _{\mathbf {x} }\mathbf {u} +(\nabla _{\mathbf {x} }\mathbf {u} )^{T}\right)} or e r s = 1 2 ( ∂ u r ∂ x s + ∂ u s ∂ x r − ∂ u k ∂ x r ∂ u k ∂ x s ) ≈ 1 2 ( ∂ u r ∂ x s + ∂ u s ∂ x r ) {\displaystyle e_{rs}={\frac {1}{2}}\left({\frac {\partial u_{r}}{\partial x_{s}}}+{\frac {\partial u_{s}}{\partial x_{r}}}-{\frac {\partial u_{k}}{\partial x_{r}}}{\frac {\partial u_{k}}{\partial x_{s}}}\right)\approx {\frac {1}{2}}\left({\frac {\partial u_{r}}{\partial x_{s}}}+{\frac {\partial u_{s}}{\partial x_{r}}}\right)} This linearization implies that the Lagrangian description and the Eulerian description are approximately the same as there is little difference in the material and spatial coordinates of a given material point in the continuum. Therefore, the material displacement gradient tensor components and the spatial displacement gradient tensor components are approximately equal. Thus we have E ≈ e ≈ ε = 1 2 ( ( ∇ u ) T + ∇ u ) {\displaystyle \mathbf {E} \approx \mathbf {e} \approx {\boldsymbol {\varepsilon }}={\frac {1}{2}}\left((\nabla \mathbf {u} )^{T}+\nabla \mathbf {u} \right)} or E K L ≈ e r s ≈ ε i j = 1 2 ( u i , j + u j , i ) {\displaystyle E_{KL}\approx e_{rs}\approx \varepsilon _{ij}={\frac {1}{2}}\left(u_{i,j}+u_{j,i}\right)} where ε i j {\displaystyle \varepsilon _{ij}} are the components of the infinitesimal strain tensor ε {\displaystyle {\boldsymbol {\varepsilon }}} , also called Cauchy's strain tensor, linear strain tensor, or small strain tensor. 
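The quality of this linearization is easy to examine numerically. In the following sketch (assuming Python with NumPy; the displacement gradient is made-up test data scaled so that ‖∇u‖ ≪ 1), the quadratic term of the Lagrangian finite strain tensor changes the answer only at second order:

import numpy as np

# A small, arbitrary displacement gradient (test data only), scaled by 1e-3
# so that the norm of grad u is much smaller than unity.
gradu = 1e-3 * np.array([[0.5, 0.3, 0.0],
                         [0.1, -0.2, 0.4],
                         [0.2, 0.0, 0.1]])

eps = 0.5 * (gradu + gradu.T)                   # infinitesimal strain tensor
E = 0.5 * (gradu + gradu.T + gradu.T @ gradu)   # Lagrangian finite strain tensor

# The difference is of order |grad u|^2, i.e. roughly a factor 1e-3 smaller
# than the strains themselves:
print(np.max(np.abs(E - eps)))  # of order 1e-7, versus strains of order 1e-3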
ε i j = 1 2 ( u i , j + u j , i ) = [ ε 11 ε 12 ε 13 ε 21 ε 22 ε 23 ε 31 ε 32 ε 33 ] = [ ∂ u 1 ∂ x 1 1 2 ( ∂ u 1 ∂ x 2 + ∂ u 2 ∂ x 1 ) 1 2 ( ∂ u 1 ∂ x 3 + ∂ u 3 ∂ x 1 ) 1 2 ( ∂ u 2 ∂ x 1 + ∂ u 1 ∂ x 2 ) ∂ u 2 ∂ x 2 1 2 ( ∂ u 2 ∂ x 3 + ∂ u 3 ∂ x 2 ) 1 2 ( ∂ u 3 ∂ x 1 + ∂ u 1 ∂ x 3 ) 1 2 ( ∂ u 3 ∂ x 2 + ∂ u 2 ∂ x 3 ) ∂ u 3 ∂ x 3 ] {\displaystyle {\begin{aligned}\varepsilon _{ij}&={\frac {1}{2}}\left(u_{i,j}+u_{j,i}\right)\\&={\begin{bmatrix}\varepsilon _{11}&\varepsilon _{12}&\varepsilon _{13}\\\varepsilon _{21}&\varepsilon _{22}&\varepsilon _{23}\\\varepsilon _{31}&\varepsilon _{32}&\varepsilon _{33}\\\end{bmatrix}}\\&={\begin{bmatrix}{\frac {\partial u_{1}}{\partial x_{1}}}&{\frac {1}{2}}\left({\frac {\partial u_{1}}{\partial x_{2}}}+{\frac {\partial u_{2}}{\partial x_{1}}}\right)&{\frac {1}{2}}\left({\frac {\partial u_{1}}{\partial x_{3}}}+{\frac {\partial u_{3}}{\partial x_{1}}}\right)\\{\frac {1}{2}}\left({\frac {\partial u_{2}}{\partial x_{1}}}+{\frac {\partial u_{1}}{\partial x_{2}}}\right)&{\frac {\partial u_{2}}{\partial x_{2}}}&{\frac {1}{2}}\left({\frac {\partial u_{2}}{\partial x_{3}}}+{\frac {\partial u_{3}}{\partial x_{2}}}\right)\\{\frac {1}{2}}\left({\frac {\partial u_{3}}{\partial x_{1}}}+{\frac {\partial u_{1}}{\partial x_{3}}}\right)&{\frac {1}{2}}\left({\frac {\partial u_{3}}{\partial x_{2}}}+{\frac {\partial u_{2}}{\partial x_{3}}}\right)&{\frac {\partial u_{3}}{\partial x_{3}}}\\\end{bmatrix}}\end{aligned}}} or using different notation: [ ε x x ε x y ε x z ε y x ε y y ε y z ε z x ε z y ε z z ] = [ ∂ u x ∂ x 1 2 ( ∂ u x ∂ y + ∂ u y ∂ x ) 1 2 ( ∂ u x ∂ z + ∂ u z ∂ x ) 1 2 ( ∂ u y ∂ x + ∂ u x ∂ y ) ∂ u y ∂ y 1 2 ( ∂ u y ∂ z + ∂ u z ∂ y ) 1 2 ( ∂ u z ∂ x + ∂ u x ∂ z ) 1 2 ( ∂ u z ∂ y + ∂ u y ∂ z ) ∂ u z ∂ z ] {\displaystyle {\begin{bmatrix}\varepsilon _{xx}&\varepsilon _{xy}&\varepsilon _{xz}\\\varepsilon _{yx}&\varepsilon _{yy}&\varepsilon _{yz}\\\varepsilon _{zx}&\varepsilon _{zy}&\varepsilon _{zz}\\\end{bmatrix}}={\begin{bmatrix}{\frac {\partial u_{x}}{\partial x}}&{\frac {1}{2}}\left({\frac {\partial u_{x}}{\partial y}}+{\frac {\partial u_{y}}{\partial x}}\right)&{\frac {1}{2}}\left({\frac {\partial u_{x}}{\partial z}}+{\frac {\partial u_{z}}{\partial x}}\right)\\{\frac {1}{2}}\left({\frac {\partial u_{y}}{\partial x}}+{\frac {\partial u_{x}}{\partial y}}\right)&{\frac {\partial u_{y}}{\partial y}}&{\frac {1}{2}}\left({\frac {\partial u_{y}}{\partial z}}+{\frac {\partial u_{z}}{\partial y}}\right)\\{\frac {1}{2}}\left({\frac {\partial u_{z}}{\partial x}}+{\frac {\partial u_{x}}{\partial z}}\right)&{\frac {1}{2}}\left({\frac {\partial u_{z}}{\partial y}}+{\frac {\partial u_{y}}{\partial z}}\right)&{\frac {\partial u_{z}}{\partial z}}\\\end{bmatrix}}} Furthermore, since the deformation gradient can be expressed as F = ∇ u + I {\displaystyle {\boldsymbol {F}}={\boldsymbol {\nabla }}\mathbf {u} +{\boldsymbol {I}}} where I {\displaystyle {\boldsymbol {I}}} is the second-order identity tensor, we have ε = 1 2 ( F T + F ) − I {\displaystyle {\boldsymbol {\varepsilon }}={\frac {1}{2}}\left({\boldsymbol {F}}^{T}+{\boldsymbol {F}}\right)-{\boldsymbol {I}}} Also, from the general expression for the Lagrangian and Eulerian finite strain tensors we have E ( m ) = 1 2 m ( U 2 m − I ) = 1 2 m [ ( F T F ) m − I ] ≈ 1 2 m [ { ∇ u + ( ∇ u ) T + I } m − I ] ≈ ε e ( m ) = 1 2 m ( V 2 m − I ) = 1 2 m [ ( F F T ) m − I ] ≈ ε {\displaystyle {\begin{aligned}\mathbf {E} _{(m)}&={\frac {1}{2m}}(\mathbf {U} ^{2m}-{\boldsymbol {I}})={\frac {1}{2m}}[({\boldsymbol {F}}^{T}{\boldsymbol 
{F}})^{m}-{\boldsymbol {I}}]\approx {\frac {1}{2m}}[\{{\boldsymbol {\nabla }}\mathbf {u} +({\boldsymbol {\nabla }}\mathbf {u} )^{T}+{\boldsymbol {I}}\}^{m}-{\boldsymbol {I}}]\approx {\boldsymbol {\varepsilon }}\\\mathbf {e} _{(m)}&={\frac {1}{2m}}(\mathbf {V} ^{2m}-{\boldsymbol {I}})={\frac {1}{2m}}[({\boldsymbol {F}}{\boldsymbol {F}}^{T})^{m}-{\boldsymbol {I}}]\approx {\boldsymbol {\varepsilon }}\end{aligned}}} === Geometric derivation === Consider a two-dimensional deformation of an infinitesimal rectangular material element with dimensions d x {\displaystyle dx} by d y {\displaystyle dy} (Figure 1), which after deformation, takes the form of a rhombus. From the geometry of Figure 1 we have a b ¯ = ( d x + ∂ u x ∂ x d x ) 2 + ( ∂ u y ∂ x d x ) 2 = d x 1 + 2 ∂ u x ∂ x + ( ∂ u x ∂ x ) 2 + ( ∂ u y ∂ x ) 2 {\displaystyle {\begin{aligned}{\overline {ab}}&={\sqrt {\left(dx+{\frac {\partial u_{x}}{\partial x}}dx\right)^{2}+\left({\frac {\partial u_{y}}{\partial x}}dx\right)^{2}}}\\&=dx{\sqrt {1+2{\frac {\partial u_{x}}{\partial x}}+\left({\frac {\partial u_{x}}{\partial x}}\right)^{2}+\left({\frac {\partial u_{y}}{\partial x}}\right)^{2}}}\\\end{aligned}}} For very small displacement gradients, i.e., ‖ ∇ u ‖ ≪ 1 {\displaystyle \|\nabla \mathbf {u} \|\ll 1} , we have a b ¯ ≈ d x + ∂ u x ∂ x d x {\displaystyle {\overline {ab}}\approx dx+{\frac {\partial u_{x}}{\partial x}}dx} The normal strain in the x {\displaystyle x} -direction of the rectangular element is defined by ε x = a b ¯ − A B ¯ A B ¯ {\displaystyle \varepsilon _{x}={\frac {{\overline {ab}}-{\overline {AB}}}{\overline {AB}}}} and knowing that A B ¯ = d x {\displaystyle {\overline {AB}}=dx} , we have ε x = ∂ u x ∂ x {\displaystyle \varepsilon _{x}={\frac {\partial u_{x}}{\partial x}}} Similarly, the normal strain in the y {\displaystyle y} -direction, and z {\displaystyle z} -direction, becomes ε y = ∂ u y ∂ y , ε z = ∂ u z ∂ z {\displaystyle \varepsilon _{y}={\frac {\partial u_{y}}{\partial y}}\quad ,\qquad \varepsilon _{z}={\frac {\partial u_{z}}{\partial z}}} The engineering shear strain, or the change in angle between two originally orthogonal material lines, in this case line A C ¯ {\displaystyle {\overline {AC}}} and A B ¯ {\displaystyle {\overline {AB}}} , is defined as γ x y = α + β {\displaystyle \gamma _{xy}=\alpha +\beta } From the geometry of Figure 1 we have tan ⁡ α = ∂ u y ∂ x d x d x + ∂ u x ∂ x d x = ∂ u y ∂ x 1 + ∂ u x ∂ x , tan ⁡ β = ∂ u x ∂ y d y d y + ∂ u y ∂ y d y = ∂ u x ∂ y 1 + ∂ u y ∂ y {\displaystyle \tan \alpha ={\frac {{\dfrac {\partial u_{y}}{\partial x}}dx}{dx+{\dfrac {\partial u_{x}}{\partial x}}dx}}={\frac {\dfrac {\partial u_{y}}{\partial x}}{1+{\dfrac {\partial u_{x}}{\partial x}}}}\quad ,\qquad \tan \beta ={\frac {{\dfrac {\partial u_{x}}{\partial y}}dy}{dy+{\dfrac {\partial u_{y}}{\partial y}}dy}}={\frac {\dfrac {\partial u_{x}}{\partial y}}{1+{\dfrac {\partial u_{y}}{\partial y}}}}} For small rotations, i.e., α {\displaystyle \alpha } and β {\displaystyle \beta } are ≪ 1 {\displaystyle \ll 1} we have tan ⁡ α ≈ α , tan ⁡ β ≈ β {\displaystyle \tan \alpha \approx \alpha \quad ,\qquad \tan \beta \approx \beta } and, again, for small displacement gradients, we have α = ∂ u y ∂ x , β = ∂ u x ∂ y {\displaystyle \alpha ={\frac {\partial u_{y}}{\partial x}}\quad ,\qquad \beta ={\frac {\partial u_{x}}{\partial y}}} thus γ x y = α + β = ∂ u y ∂ x + ∂ u x ∂ y {\displaystyle \gamma _{xy}=\alpha +\beta ={\frac {\partial u_{y}}{\partial x}}+{\frac {\partial u_{x}}{\partial y}}} By interchanging x {\displaystyle x} 
and y {\displaystyle y} and u x {\displaystyle u_{x}} and u y {\displaystyle u_{y}} , it can be shown that γ x y = γ y x {\displaystyle \gamma _{xy}=\gamma _{yx}} . Similarly, for the y {\displaystyle y} - z {\displaystyle z} and x {\displaystyle x} - z {\displaystyle z} planes, we have γ y z = γ z y = ∂ u y ∂ z + ∂ u z ∂ y , γ z x = γ x z = ∂ u z ∂ x + ∂ u x ∂ z {\displaystyle \gamma _{yz}=\gamma _{zy}={\frac {\partial u_{y}}{\partial z}}+{\frac {\partial u_{z}}{\partial y}}\quad ,\qquad \gamma _{zx}=\gamma _{xz}={\frac {\partial u_{z}}{\partial x}}+{\frac {\partial u_{x}}{\partial z}}} It can be seen that the tensorial shear strain components of the infinitesimal strain tensor can then be expressed using the engineering strain definition, γ {\displaystyle \gamma } , as [ ε x x ε x y ε x z ε y x ε y y ε y z ε z x ε z y ε z z ] = [ ε x x γ x y / 2 γ x z / 2 γ y x / 2 ε y y γ y z / 2 γ z x / 2 γ z y / 2 ε z z ] {\displaystyle {\begin{bmatrix}\varepsilon _{xx}&\varepsilon _{xy}&\varepsilon _{xz}\\\varepsilon _{yx}&\varepsilon _{yy}&\varepsilon _{yz}\\\varepsilon _{zx}&\varepsilon _{zy}&\varepsilon _{zz}\\\end{bmatrix}}={\begin{bmatrix}\varepsilon _{xx}&\gamma _{xy}/2&\gamma _{xz}/2\\\gamma _{yx}/2&\varepsilon _{yy}&\gamma _{yz}/2\\\gamma _{zx}/2&\gamma _{zy}/2&\varepsilon _{zz}\\\end{bmatrix}}} === Physical interpretation === From finite strain theory we have d x 2 − d X 2 = d X ⋅ 2 E ⋅ d X or ( d x ) 2 − ( d X ) 2 = 2 E K L d X K d X L {\displaystyle d\mathbf {x} ^{2}-d\mathbf {X} ^{2}=d\mathbf {X} \cdot 2\mathbf {E} \cdot d\mathbf {X} \quad {\text{or}}\quad (dx)^{2}-(dX)^{2}=2E_{KL}\,dX_{K}\,dX_{L}} For infinitesimal strains then we have d x 2 − d X 2 = d X ⋅ 2 ε ⋅ d X or ( d x ) 2 − ( d X ) 2 = 2 ε K L d X K d X L {\displaystyle d\mathbf {x} ^{2}-d\mathbf {X} ^{2}=d\mathbf {X} \cdot 2\mathbf {\boldsymbol {\varepsilon }} \cdot d\mathbf {X} \quad {\text{or}}\quad (dx)^{2}-(dX)^{2}=2\varepsilon _{KL}\,dX_{K}\,dX_{L}} Dividing by ( d X ) 2 {\displaystyle (dX)^{2}} we have d x − d X d X d x + d X d X = 2 ε i j d X i d X d X j d X {\displaystyle {\frac {dx-dX}{dX}}{\frac {dx+dX}{dX}}=2\varepsilon _{ij}{\frac {dX_{i}}{dX}}{\frac {dX_{j}}{dX}}} For small deformations we assume that d x ≈ d X {\displaystyle dx\approx dX} , thus the second term of the left hand side becomes: d x + d X d X ≈ 2 {\displaystyle {\frac {dx+dX}{dX}}\approx 2} . Then we have d x − d X d X = ε i j N i N j = N ⋅ ε ⋅ N {\displaystyle {\frac {dx-dX}{dX}}=\varepsilon _{ij}N_{i}N_{j}=\mathbf {N} \cdot {\boldsymbol {\varepsilon }}\cdot \mathbf {N} } where N i = d X i d X {\displaystyle N_{i}={\frac {dX_{i}}{dX}}} , is the unit vector in the direction of d X {\displaystyle d\mathbf {X} } , and the left-hand-side expression is the normal strain e ( N ) {\displaystyle e_{(\mathbf {N} )}} in the direction of N {\displaystyle \mathbf {N} } . For the particular case of N {\displaystyle \mathbf {N} } in the X 1 {\displaystyle X_{1}} direction, i.e., N = I 1 {\displaystyle \mathbf {N} =\mathbf {I} _{1}} , we have e ( I 1 ) = I 1 ⋅ ε ⋅ I 1 = ε 11 . {\displaystyle e_{(\mathbf {I} _{1})}=\mathbf {I} _{1}\cdot {\boldsymbol {\varepsilon }}\cdot \mathbf {I} _{1}=\varepsilon _{11}.} Similarly, for N = I 2 {\displaystyle \mathbf {N} =\mathbf {I} _{2}} and N = I 3 {\displaystyle \mathbf {N} =\mathbf {I} _{3}} we can find the normal strains ε 22 {\displaystyle \varepsilon _{22}} and ε 33 {\displaystyle \varepsilon _{33}} , respectively. 
Therefore, the diagonal elements of the infinitesimal strain tensor are the normal strains in the coordinate directions. === Strain transformation rules === If we choose an orthonormal coordinate system ( e 1 , e 2 , e 3 {\displaystyle \mathbf {e} _{1},\mathbf {e} _{2},\mathbf {e} _{3}} ) we can write the tensor in terms of components with respect to those base vectors as ε = ∑ i = 1 3 ∑ j = 1 3 ε i j e i ⊗ e j {\displaystyle {\boldsymbol {\varepsilon }}=\sum _{i=1}^{3}\sum _{j=1}^{3}\varepsilon _{ij}\mathbf {e} _{i}\otimes \mathbf {e} _{j}} In matrix form, ε _ _ = [ ε 11 ε 12 ε 13 ε 12 ε 22 ε 23 ε 13 ε 23 ε 33 ] {\displaystyle {\underline {\underline {\boldsymbol {\varepsilon }}}}={\begin{bmatrix}\varepsilon _{11}&\varepsilon _{12}&\varepsilon _{13}\\\varepsilon _{12}&\varepsilon _{22}&\varepsilon _{23}\\\varepsilon _{13}&\varepsilon _{23}&\varepsilon _{33}\end{bmatrix}}} We can easily choose to use another orthonormal coordinate system ( e ^ 1 , e ^ 2 , e ^ 3 {\displaystyle {\hat {\mathbf {e} }}_{1},{\hat {\mathbf {e} }}_{2},{\hat {\mathbf {e} }}_{3}} ) instead. In that case the components of the tensor are different, say ε = ∑ i = 1 3 ∑ j = 1 3 ε ^ i j e ^ i ⊗ e ^ j ⟹ ε ^ _ _ = [ ε ^ 11 ε ^ 12 ε ^ 13 ε ^ 12 ε ^ 22 ε ^ 23 ε ^ 13 ε ^ 23 ε ^ 33 ] {\displaystyle {\boldsymbol {\varepsilon }}=\sum _{i=1}^{3}\sum _{j=1}^{3}{\hat {\varepsilon }}_{ij}{\hat {\mathbf {e} }}_{i}\otimes {\hat {\mathbf {e} }}_{j}\quad \implies \quad {\underline {\underline {\hat {\boldsymbol {\varepsilon }}}}}={\begin{bmatrix}{\hat {\varepsilon }}_{11}&{\hat {\varepsilon }}_{12}&{\hat {\varepsilon }}_{13}\\{\hat {\varepsilon }}_{12}&{\hat {\varepsilon }}_{22}&{\hat {\varepsilon }}_{23}\\{\hat {\varepsilon }}_{13}&{\hat {\varepsilon }}_{23}&{\hat {\varepsilon }}_{33}\end{bmatrix}}} The components of the strain in the two coordinate systems are related by ε ^ i j = ℓ i p ℓ j q ε p q {\displaystyle {\hat {\varepsilon }}_{ij}=\ell _{ip}~\ell _{jq}~\varepsilon _{pq}} where the Einstein summation convention for repeated indices has been used and ℓ i j = e ^ i ⋅ e j {\displaystyle \ell _{ij}={\hat {\mathbf {e} }}_{i}\cdot {\mathbf {e} }_{j}} . 
In matrix form ε ^ _ _ = L _ _ ε _ _ L _ _ T {\displaystyle {\underline {\underline {\hat {\boldsymbol {\varepsilon }}}}}={\underline {\underline {\mathbf {L} }}}~{\underline {\underline {\boldsymbol {\varepsilon }}}}~{\underline {\underline {\mathbf {L} }}}^{T}} or [ ε ^ 11 ε ^ 12 ε ^ 13 ε ^ 21 ε ^ 22 ε ^ 23 ε ^ 31 ε ^ 32 ε ^ 33 ] = [ ℓ 11 ℓ 12 ℓ 13 ℓ 21 ℓ 22 ℓ 23 ℓ 31 ℓ 32 ℓ 33 ] [ ε 11 ε 12 ε 13 ε 21 ε 22 ε 23 ε 31 ε 32 ε 33 ] [ ℓ 11 ℓ 12 ℓ 13 ℓ 21 ℓ 22 ℓ 23 ℓ 31 ℓ 32 ℓ 33 ] T {\displaystyle {\begin{bmatrix}{\hat {\varepsilon }}_{11}&{\hat {\varepsilon }}_{12}&{\hat {\varepsilon }}_{13}\\{\hat {\varepsilon }}_{21}&{\hat {\varepsilon }}_{22}&{\hat {\varepsilon }}_{23}\\{\hat {\varepsilon }}_{31}&{\hat {\varepsilon }}_{32}&{\hat {\varepsilon }}_{33}\end{bmatrix}}={\begin{bmatrix}\ell _{11}&\ell _{12}&\ell _{13}\\\ell _{21}&\ell _{22}&\ell _{23}\\\ell _{31}&\ell _{32}&\ell _{33}\end{bmatrix}}{\begin{bmatrix}\varepsilon _{11}&\varepsilon _{12}&\varepsilon _{13}\\\varepsilon _{21}&\varepsilon _{22}&\varepsilon _{23}\\\varepsilon _{31}&\varepsilon _{32}&\varepsilon _{33}\end{bmatrix}}{\begin{bmatrix}\ell _{11}&\ell _{12}&\ell _{13}\\\ell _{21}&\ell _{22}&\ell _{23}\\\ell _{31}&\ell _{32}&\ell _{33}\end{bmatrix}}^{T}} === Strain invariants === Certain operations on the strain tensor give the same result without regard to which orthonormal coordinate system is used to represent the components of strain. The results of these operations are called strain invariants. The most commonly used strain invariants are I 1 = t r ( ε ) I 2 = 1 2 { [ t r ( ε ) ] 2 − t r ( ε 2 ) } I 3 = det ( ε ) {\displaystyle {\begin{aligned}I_{1}&=\mathrm {tr} ({\boldsymbol {\varepsilon }})\\I_{2}&={\tfrac {1}{2}}\{[\mathrm {tr} ({\boldsymbol {\varepsilon }})]^{2}-\mathrm {tr} ({\boldsymbol {\varepsilon }}^{2})\}\\I_{3}&=\det({\boldsymbol {\varepsilon }})\end{aligned}}} In terms of components I 1 = ε 11 + ε 22 + ε 33 I 2 = ε 11 ε 22 + ε 22 ε 33 + ε 33 ε 11 − ε 12 2 − ε 23 2 − ε 31 2 I 3 = ε 11 ( ε 22 ε 33 − ε 23 2 ) − ε 12 ( ε 21 ε 33 − ε 23 ε 31 ) + ε 13 ( ε 21 ε 32 − ε 22 ε 31 ) {\displaystyle {\begin{aligned}I_{1}&=\varepsilon _{11}+\varepsilon _{22}+\varepsilon _{33}\\I_{2}&=\varepsilon _{11}\varepsilon _{22}+\varepsilon _{22}\varepsilon _{33}+\varepsilon _{33}\varepsilon _{11}-\varepsilon _{12}^{2}-\varepsilon _{23}^{2}-\varepsilon _{31}^{2}\\I_{3}&=\varepsilon _{11}(\varepsilon _{22}\varepsilon _{33}-\varepsilon _{23}^{2})-\varepsilon _{12}(\varepsilon _{21}\varepsilon _{33}-\varepsilon _{23}\varepsilon _{31})+\varepsilon _{13}(\varepsilon _{21}\varepsilon _{32}-\varepsilon _{22}\varepsilon _{31})\end{aligned}}} === Principal strains === It can be shown that it is possible to find a coordinate system ( n 1 , n 2 , n 3 {\displaystyle \mathbf {n} _{1},\mathbf {n} _{2},\mathbf {n} _{3}} ) in which the components of the strain tensor are ε _ _ = [ ε 1 0 0 0 ε 2 0 0 0 ε 3 ] ⟹ ε = ε 1 n 1 ⊗ n 1 + ε 2 n 2 ⊗ n 2 + ε 3 n 3 ⊗ n 3 {\displaystyle {\underline {\underline {\boldsymbol {\varepsilon }}}}={\begin{bmatrix}\varepsilon _{1}&0&0\\0&\varepsilon _{2}&0\\0&0&\varepsilon _{3}\end{bmatrix}}\quad \implies \quad {\boldsymbol {\varepsilon }}=\varepsilon _{1}\mathbf {n} _{1}\otimes \mathbf {n} _{1}+\varepsilon _{2}\mathbf {n} _{2}\otimes \mathbf {n} _{2}+\varepsilon _{3}\mathbf {n} _{3}\otimes \mathbf {n} _{3}} The components of the strain tensor in the ( n 1 , n 2 , n 3 {\displaystyle \mathbf {n} _{1},\mathbf {n} _{2},\mathbf {n} _{3}} ) coordinate system are called the principal strains and the directions n i {\displaystyle 
\mathbf {n} _{i}} are called the directions of principal strain. Since there are no shear strain components in this coordinate system, the principal strains represent the maximum and minimum stretches of an elemental volume. If we are given the components of the strain tensor in an arbitrary orthonormal coordinate system, we can find the principal strains using an eigenvalue decomposition determined by solving the system of equations ( ε _ _ − ε i I _ _ ) n i = 0 _ {\displaystyle ({\underline {\underline {\boldsymbol {\varepsilon }}}}-\varepsilon _{i}~{\underline {\underline {\mathbf {I} }}})~\mathbf {n} _{i}={\underline {\mathbf {0} }}} This system of equations is equivalent to finding the vector n i {\displaystyle \mathbf {n} _{i}} along which the strain tensor becomes a pure stretch with no shear component. === Volumetric strain === The volumetric strain, also called bulk strain, is the relative variation of the volume, as arising from dilation or compression; it is the first strain invariant or trace of the tensor: δ = Δ V V 0 = I 1 = ε 11 + ε 22 + ε 33 {\displaystyle \delta ={\frac {\Delta V}{V_{0}}}=I_{1}=\varepsilon _{11}+\varepsilon _{22}+\varepsilon _{33}} Indeed, if we consider a cube with edge length a, after the deformation it is a quasi-cube (to first order, the variations of the angles do not change the volume) with dimensions a ⋅ ( 1 + ε 11 ) × a ⋅ ( 1 + ε 22 ) × a ⋅ ( 1 + ε 33 ) {\displaystyle a\cdot (1+\varepsilon _{11})\times a\cdot (1+\varepsilon _{22})\times a\cdot (1+\varepsilon _{33})} and V0 = a3, thus Δ V V 0 = ( 1 + ε 11 + ε 22 + ε 33 + ε 11 ⋅ ε 22 + ε 11 ⋅ ε 33 + ε 22 ⋅ ε 33 + ε 11 ⋅ ε 22 ⋅ ε 33 ) ⋅ a 3 − a 3 a 3 {\displaystyle {\frac {\Delta V}{V_{0}}}={\frac {\left(1+\varepsilon _{11}+\varepsilon _{22}+\varepsilon _{33}+\varepsilon _{11}\cdot \varepsilon _{22}+\varepsilon _{11}\cdot \varepsilon _{33}+\varepsilon _{22}\cdot \varepsilon _{33}+\varepsilon _{11}\cdot \varepsilon _{22}\cdot \varepsilon _{33}\right)\cdot a^{3}-a^{3}}{a^{3}}}} Since we consider small deformations, 1 ≫ ε i i ≫ ε i i ⋅ ε j j ≫ ε 11 ⋅ ε 22 ⋅ ε 33 {\displaystyle 1\gg \varepsilon _{ii}\gg \varepsilon _{ii}\cdot \varepsilon _{jj}\gg \varepsilon _{11}\cdot \varepsilon _{22}\cdot \varepsilon _{33}} so the higher-order products are negligible, which yields the formula above. In the case of pure shear, we can see that there is no change of volume. === Strain deviator tensor === The infinitesimal strain tensor ε i j {\displaystyle \varepsilon _{ij}} , similarly to the Cauchy stress tensor, can be expressed as the sum of two other tensors: a mean strain tensor or volumetric strain tensor or spherical strain tensor, ε M δ i j {\displaystyle \varepsilon _{M}\delta _{ij}} , related to dilation or volume change; and a deviatoric component called the strain deviator tensor, ε i j ′ {\displaystyle \varepsilon '_{ij}} , related to distortion. 
ε i j = ε i j ′ + ε M δ i j {\displaystyle \varepsilon _{ij}=\varepsilon '_{ij}+\varepsilon _{M}\delta _{ij}} where ε M {\displaystyle \varepsilon _{M}} is the mean strain given by ε M = ε k k 3 = ε 11 + ε 22 + ε 33 3 = 1 3 I 1 e {\displaystyle \varepsilon _{M}={\frac {\varepsilon _{kk}}{3}}={\frac {\varepsilon _{11}+\varepsilon _{22}+\varepsilon _{33}}{3}}={\tfrac {1}{3}}I_{1}^{e}} The deviatoric strain tensor can be obtained by subtracting the mean strain tensor from the infinitesimal strain tensor: ε i j ′ = ε i j − ε k k 3 δ i j [ ε 11 ′ ε 12 ′ ε 13 ′ ε 21 ′ ε 22 ′ ε 23 ′ ε 31 ′ ε 32 ′ ε 33 ′ ] = [ ε 11 ε 12 ε 13 ε 21 ε 22 ε 23 ε 31 ε 32 ε 33 ] − [ ε M 0 0 0 ε M 0 0 0 ε M ] = [ ε 11 − ε M ε 12 ε 13 ε 21 ε 22 − ε M ε 23 ε 31 ε 32 ε 33 − ε M ] {\displaystyle {\begin{aligned}\ \varepsilon '_{ij}&=\varepsilon _{ij}-{\frac {\varepsilon _{kk}}{3}}\delta _{ij}\\{\begin{bmatrix}\varepsilon '_{11}&\varepsilon '_{12}&\varepsilon '_{13}\\\varepsilon '_{21}&\varepsilon '_{22}&\varepsilon '_{23}\\\varepsilon '_{31}&\varepsilon '_{32}&\varepsilon '_{33}\\\end{bmatrix}}&={\begin{bmatrix}\varepsilon _{11}&\varepsilon _{12}&\varepsilon _{13}\\\varepsilon _{21}&\varepsilon _{22}&\varepsilon _{23}\\\varepsilon _{31}&\varepsilon _{32}&\varepsilon _{33}\\\end{bmatrix}}-{\begin{bmatrix}\varepsilon _{M}&0&0\\0&\varepsilon _{M}&0\\0&0&\varepsilon _{M}\\\end{bmatrix}}\\&={\begin{bmatrix}\varepsilon _{11}-\varepsilon _{M}&\varepsilon _{12}&\varepsilon _{13}\\\varepsilon _{21}&\varepsilon _{22}-\varepsilon _{M}&\varepsilon _{23}\\\varepsilon _{31}&\varepsilon _{32}&\varepsilon _{33}-\varepsilon _{M}\\\end{bmatrix}}\\\end{aligned}}} === Octahedral strains === Let ( n 1 , n 2 , n 3 {\displaystyle \mathbf {n} _{1},\mathbf {n} _{2},\mathbf {n} _{3}} ) be the directions of the three principal strains. An octahedral plane is one whose normal makes equal angles with the three principal directions. The engineering shear strain on an octahedral plane is called the octahedral shear strain and is given by γ o c t = 2 3 ( ε 1 − ε 2 ) 2 + ( ε 2 − ε 3 ) 2 + ( ε 3 − ε 1 ) 2 {\displaystyle \gamma _{\mathrm {oct} }={\tfrac {2}{3}}{\sqrt {(\varepsilon _{1}-\varepsilon _{2})^{2}+(\varepsilon _{2}-\varepsilon _{3})^{2}+(\varepsilon _{3}-\varepsilon _{1})^{2}}}} where ε 1 , ε 2 , ε 3 {\displaystyle \varepsilon _{1},\varepsilon _{2},\varepsilon _{3}} are the principal strains. The normal strain on an octahedral plane is given by ε o c t = 1 3 ( ε 1 + ε 2 + ε 3 ) {\displaystyle \varepsilon _{\mathrm {oct} }={\tfrac {1}{3}}(\varepsilon _{1}+\varepsilon _{2}+\varepsilon _{3})} === Equivalent strain === A scalar quantity called the equivalent strain, or the von Mises equivalent strain, is often used to describe the state of strain in solids. Several definitions of equivalent strain can be found in the literature. 
A definition that is commonly used in the literature on plasticity is ε e q = 2 3 ε d e v : ε d e v = 2 3 ε i j d e v ε i j d e v ; ε d e v = ε − 1 3 t r ( ε ) I {\displaystyle \varepsilon _{\mathrm {eq} }={\sqrt {{\tfrac {2}{3}}{\boldsymbol {\varepsilon }}^{\mathrm {dev} }:{\boldsymbol {\varepsilon }}^{\mathrm {dev} }}}={\sqrt {{\tfrac {2}{3}}\varepsilon _{ij}^{\mathrm {dev} }\varepsilon _{ij}^{\mathrm {dev} }}}~;~~{\boldsymbol {\varepsilon }}^{\mathrm {dev} }={\boldsymbol {\varepsilon }}-{\tfrac {1}{3}}\mathrm {tr} ({\boldsymbol {\varepsilon }})~{\boldsymbol {I}}} This quantity is work conjugate to the equivalent stress defined as σ e q = 3 2 σ d e v : σ d e v {\displaystyle \sigma _{\mathrm {eq} }={\sqrt {{\tfrac {3}{2}}{\boldsymbol {\sigma }}^{\mathrm {dev} }:{\boldsymbol {\sigma }}^{\mathrm {dev} }}}} == Compatibility equations == For prescribed strain components ε i j {\displaystyle \varepsilon _{ij}} the strain tensor equation u i , j + u j , i = 2 ε i j {\displaystyle u_{i,j}+u_{j,i}=2\varepsilon _{ij}} represents a system of six differential equations for the determination of three displacement components u i {\displaystyle u_{i}} , giving an over-determined system. Thus, a solution does not generally exist for an arbitrary choice of strain components. Therefore, some restrictions, named compatibility equations, are imposed upon the strain components. With the addition of the compatibility equations, the number of independent equations is reduced to three, matching the number of unknown displacement components. These constraints on the strain tensor were discovered by Saint-Venant, and are called the "Saint-Venant compatibility equations". The compatibility equations serve to ensure a single-valued continuous displacement function u i {\displaystyle u_{i}} . If the elastic medium is visualised as a set of infinitesimal cubes in the unstrained state, after the medium is strained, an arbitrary strain tensor may not yield a situation in which the distorted cubes still fit together without overlapping. 
In index notation, the compatibility equations are expressed as ε i j , k m + ε k m , i j − ε i k , j m − ε j m , i k = 0 {\displaystyle \varepsilon _{ij,km}+\varepsilon _{km,ij}-\varepsilon _{ik,jm}-\varepsilon _{jm,ik}=0} In engineering notation, ∂ 2 ϵ x ∂ y 2 + ∂ 2 ϵ y ∂ x 2 = 2 ∂ 2 ϵ x y ∂ x ∂ y {\displaystyle {\frac {\partial ^{2}\epsilon _{x}}{\partial y^{2}}}+{\frac {\partial ^{2}\epsilon _{y}}{\partial x^{2}}}=2{\frac {\partial ^{2}\epsilon _{xy}}{\partial x\partial y}}} ∂ 2 ϵ y ∂ z 2 + ∂ 2 ϵ z ∂ y 2 = 2 ∂ 2 ϵ y z ∂ y ∂ z {\displaystyle {\frac {\partial ^{2}\epsilon _{y}}{\partial z^{2}}}+{\frac {\partial ^{2}\epsilon _{z}}{\partial y^{2}}}=2{\frac {\partial ^{2}\epsilon _{yz}}{\partial y\partial z}}} ∂ 2 ϵ x ∂ z 2 + ∂ 2 ϵ z ∂ x 2 = 2 ∂ 2 ϵ z x ∂ z ∂ x {\displaystyle {\frac {\partial ^{2}\epsilon _{x}}{\partial z^{2}}}+{\frac {\partial ^{2}\epsilon _{z}}{\partial x^{2}}}=2{\frac {\partial ^{2}\epsilon _{zx}}{\partial z\partial x}}} ∂ 2 ϵ x ∂ y ∂ z = ∂ ∂ x ( − ∂ ϵ y z ∂ x + ∂ ϵ z x ∂ y + ∂ ϵ x y ∂ z ) {\displaystyle {\frac {\partial ^{2}\epsilon _{x}}{\partial y\partial z}}={\frac {\partial }{\partial x}}\left(-{\frac {\partial \epsilon _{yz}}{\partial x}}+{\frac {\partial \epsilon _{zx}}{\partial y}}+{\frac {\partial \epsilon _{xy}}{\partial z}}\right)} ∂ 2 ϵ y ∂ z ∂ x = ∂ ∂ y ( ∂ ϵ y z ∂ x − ∂ ϵ z x ∂ y + ∂ ϵ x y ∂ z ) {\displaystyle {\frac {\partial ^{2}\epsilon _{y}}{\partial z\partial x}}={\frac {\partial }{\partial y}}\left({\frac {\partial \epsilon _{yz}}{\partial x}}-{\frac {\partial \epsilon _{zx}}{\partial y}}+{\frac {\partial \epsilon _{xy}}{\partial z}}\right)} ∂ 2 ϵ z ∂ x ∂ y = ∂ ∂ z ( ∂ ϵ y z ∂ x + ∂ ϵ z x ∂ y − ∂ ϵ x y ∂ z ) {\displaystyle {\frac {\partial ^{2}\epsilon _{z}}{\partial x\partial y}}={\frac {\partial }{\partial z}}\left({\frac {\partial \epsilon _{yz}}{\partial x}}+{\frac {\partial \epsilon _{zx}}{\partial y}}-{\frac {\partial \epsilon _{xy}}{\partial z}}\right)} == Special cases == === Plane strain === In real engineering components, stress (and strain) are 3-D tensors but in prismatic structures such as a long metal billet, the length of the structure is much greater than the other two dimensions. The strains associated with length, i.e., the normal strain ε 33 {\displaystyle \varepsilon _{33}} and the shear strains ε 13 {\displaystyle \varepsilon _{13}} and ε 23 {\displaystyle \varepsilon _{23}} (if the length is the 3-direction) are constrained by nearby material and are small compared to the cross-sectional strains. Plane strain is then an acceptable approximation. The strain tensor for plane strain is written as: ε _ _ = [ ε 11 ε 12 0 ε 21 ε 22 0 0 0 0 ] {\displaystyle {\underline {\underline {\boldsymbol {\varepsilon }}}}={\begin{bmatrix}\varepsilon _{11}&\varepsilon _{12}&0\\\varepsilon _{21}&\varepsilon _{22}&0\\0&0&0\end{bmatrix}}} in which the double underline indicates a second order tensor. This strain state is called plane strain. The corresponding stress tensor is: σ _ _ = [ σ 11 σ 12 0 σ 21 σ 22 0 0 0 σ 33 ] {\displaystyle {\underline {\underline {\boldsymbol {\sigma }}}}={\begin{bmatrix}\sigma _{11}&\sigma _{12}&0\\\sigma _{21}&\sigma _{22}&0\\0&0&\sigma _{33}\end{bmatrix}}} in which the non-zero σ 33 {\displaystyle \sigma _{33}} is needed to maintain the constraint ϵ 33 = 0 {\displaystyle \epsilon _{33}=0} . This stress term can be temporarily removed from the analysis to leave only the in-plane terms, effectively reducing the 3-D problem to a much simpler 2-D problem. 
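The quantities introduced in the preceding subsections are straightforward to evaluate numerically. The following Python sketch (using NumPy) computes the invariants, principal strains, volumetric strain, strain deviator, octahedral strains and equivalent strain for an assumed plane-strain state; the numerical strain values and variable names are illustrative assumptions, not data from this article.

```python
import numpy as np

# Illustrative plane-strain state; the values are arbitrary small strains
# assumed purely for demonstration.
eps = np.array([[ 2.0e-3,  5.0e-4, 0.0],
                [ 5.0e-4, -1.0e-3, 0.0],
                [ 0.0,     0.0,    0.0]])

# Strain invariants I1, I2, I3.
I1 = np.trace(eps)
I2 = 0.5 * (np.trace(eps)**2 - np.trace(eps @ eps))
I3 = np.linalg.det(eps)

# Principal strains and directions: eigendecomposition of the symmetric tensor.
principal_strains, principal_dirs = np.linalg.eigh(eps)

# Volumetric strain (= I1), mean strain, and strain deviator.
delta = I1
eps_M = I1 / 3.0
eps_dev = eps - eps_M * np.eye(3)

# Octahedral shear strain and octahedral normal strain.
e1, e2, e3 = principal_strains
gamma_oct = (2.0 / 3.0) * np.sqrt((e1 - e2)**2 + (e2 - e3)**2 + (e3 - e1)**2)
eps_oct = (e1 + e2 + e3) / 3.0

# Von Mises equivalent strain.
eps_eq = np.sqrt((2.0 / 3.0) * np.sum(eps_dev * eps_dev))

# The invariants are unchanged under the transformation rule
# eps_hat = L @ eps @ L.T for any orthonormal change of basis L.
theta = 0.3
L = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
eps_hat = L @ eps @ L.T
assert np.isclose(np.trace(eps_hat), I1)
assert np.isclose(np.linalg.det(eps_hat), I3)
```

Because np.linalg.eigh is designed for symmetric matrices, it returns real eigenvalues and orthonormal eigenvectors, mirroring the principal-strain eigenvalue problem stated earlier.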
=== Antiplane strain === Antiplane strain is another special state of strain that can occur in a body, for instance in a region close to a screw dislocation. The strain tensor for antiplane strain is given by ε _ _ = [ 0 0 ε 13 0 0 ε 23 ε 13 ε 23 0 ] {\displaystyle {\underline {\underline {\boldsymbol {\varepsilon }}}}={\begin{bmatrix}0&0&\varepsilon _{13}\\0&0&\varepsilon _{23}\\\varepsilon _{13}&\varepsilon _{23}&0\end{bmatrix}}} == Relation to infinitesimal rotation tensor == The infinitesimal strain tensor is defined as ε = 1 2 [ ∇ u + ( ∇ u ) T ] {\displaystyle {\boldsymbol {\varepsilon }}={\frac {1}{2}}[{\boldsymbol {\nabla }}\mathbf {u} +({\boldsymbol {\nabla }}\mathbf {u} )^{T}]} Therefore the displacement gradient can be expressed as ∇ u = ε + W {\displaystyle {\boldsymbol {\nabla }}\mathbf {u} ={\boldsymbol {\varepsilon }}+{\boldsymbol {W}}} where W := 1 2 [ ∇ u − ( ∇ u ) T ] {\displaystyle {\boldsymbol {W}}:={\frac {1}{2}}[{\boldsymbol {\nabla }}\mathbf {u} -({\boldsymbol {\nabla }}\mathbf {u} )^{T}]} The quantity W {\displaystyle {\boldsymbol {W}}} is the infinitesimal rotation tensor or infinitesimal angular displacement tensor (related to the infinitesimal rotation matrix). This tensor is skew symmetric. For infinitesimal deformations the scalar components of W {\displaystyle {\boldsymbol {W}}} satisfy the condition | W i j | ≪ 1 {\displaystyle |W_{ij}|\ll 1} . Note that the displacement gradient is small only if both the strain tensor and the rotation tensor are infinitesimal. === The axial vector === A skew symmetric second-order tensor has three independent scalar components. These three components are used to define an axial vector, w {\displaystyle \mathbf {w} } , as follows W i j = − ϵ i j k w k ; w i = − 1 2 ϵ i j k W j k {\displaystyle W_{ij}=-\epsilon _{ijk}~w_{k}~;~~w_{i}=-{\tfrac {1}{2}}~\epsilon _{ijk}~W_{jk}} where ϵ i j k {\displaystyle \epsilon _{ijk}} is the permutation symbol. In matrix form W _ _ = [ 0 − w 3 w 2 w 3 0 − w 1 − w 2 w 1 0 ] ; w _ = [ w 1 w 2 w 3 ] {\displaystyle {\underline {\underline {\boldsymbol {W}}}}={\begin{bmatrix}0&-w_{3}&w_{2}\\w_{3}&0&-w_{1}\\-w_{2}&w_{1}&0\end{bmatrix}}~;~~{\underline {\mathbf {w} }}={\begin{bmatrix}w_{1}\\w_{2}\\w_{3}\end{bmatrix}}} The axial vector is also called the infinitesimal rotation vector. The rotation vector is related to the displacement gradient by the relation w = 1 2 ∇ × u {\displaystyle \mathbf {w} ={\tfrac {1}{2}}~{\boldsymbol {\nabla }}\times \mathbf {u} } In index notation w i = 1 2 ϵ i j k u k , j {\displaystyle w_{i}={\tfrac {1}{2}}~\epsilon _{ijk}~u_{k,j}} If ‖ W ‖ ≪ 1 {\displaystyle \lVert {\boldsymbol {W}}\rVert \ll 1} and ε = 0 {\displaystyle {\boldsymbol {\varepsilon }}={\boldsymbol {0}}} then the material undergoes an approximate rigid body rotation of magnitude | w | {\displaystyle |\mathbf {w} |} around the vector w {\displaystyle \mathbf {w} } . 
=== Relation between the strain tensor and the rotation vector === Given a continuous, single-valued displacement field u {\displaystyle \mathbf {u} } and the corresponding infinitesimal strain tensor ε {\displaystyle {\boldsymbol {\varepsilon }}} , we have (see Tensor derivative (continuum mechanics)) ∇ × ε = e i j k ε l j , i e k ⊗ e l = 1 2 e i j k [ u l , j i + u j , l i ] e k ⊗ e l {\displaystyle {\boldsymbol {\nabla }}\times {\boldsymbol {\varepsilon }}=e_{ijk}~\varepsilon _{lj,i}~\mathbf {e} _{k}\otimes \mathbf {e} _{l}={\tfrac {1}{2}}~e_{ijk}~[u_{l,ji}+u_{j,li}]~\mathbf {e} _{k}\otimes \mathbf {e} _{l}} Since a change in the order of differentiation does not change the result, u l , j i = u l , i j {\displaystyle u_{l,ji}=u_{l,ij}} . Therefore e i j k u l , j i = ( e 12 k + e 21 k ) u l , 12 + ( e 13 k + e 31 k ) u l , 13 + ( e 23 k + e 32 k ) u l , 32 = 0 {\displaystyle e_{ijk}u_{l,ji}=(e_{12k}+e_{21k})u_{l,12}+(e_{13k}+e_{31k})u_{l,13}+(e_{23k}+e_{32k})u_{l,32}=0} Also 1 2 e i j k u j , l i = ( 1 2 e i j k u j , i ) , l = ( 1 2 e k i j u j , i ) , l = w k , l {\displaystyle {\tfrac {1}{2}}~e_{ijk}~u_{j,li}=\left({\tfrac {1}{2}}~e_{ijk}~u_{j,i}\right)_{,l}=\left({\tfrac {1}{2}}~e_{kij}~u_{j,i}\right)_{,l}=w_{k,l}} Hence ∇ × ε = w k , l e k ⊗ e l = ∇ w {\displaystyle {\boldsymbol {\nabla }}\times {\boldsymbol {\varepsilon }}=w_{k,l}~\mathbf {e} _{k}\otimes \mathbf {e} _{l}={\boldsymbol {\nabla }}\mathbf {w} } === Relation between rotation tensor and rotation vector === From an important identity regarding the curl of a tensor we know that for a continuous, single-valued displacement field u {\displaystyle \mathbf {u} } , ∇ × ( ∇ u ) = 0 . {\displaystyle {\boldsymbol {\nabla }}\times ({\boldsymbol {\nabla }}\mathbf {u} )={\boldsymbol {0}}.} Since ∇ u = ε + W {\displaystyle {\boldsymbol {\nabla }}\mathbf {u} ={\boldsymbol {\varepsilon }}+{\boldsymbol {W}}} we have ∇ × W = − ∇ × ε = − ∇ w . 
{\displaystyle {\boldsymbol {\nabla }}\times {\boldsymbol {W}}=-{\boldsymbol {\nabla }}\times {\boldsymbol {\varepsilon }}=-{\boldsymbol {\nabla }}\mathbf {w} .} == Strain tensor in non-Cartesian coordinates == === Strain tensor in cylindrical coordinates === In cylindrical polar coordinates ( r , θ , z {\displaystyle r,\theta ,z} ), the displacement vector can be written as u = u r e r + u θ e θ + u z e z {\displaystyle \mathbf {u} =u_{r}~\mathbf {e} _{r}+u_{\theta }~\mathbf {e} _{\theta }+u_{z}~\mathbf {e} _{z}} The components of the strain tensor in a cylindrical coordinate system are given by: ε r r = ∂ u r ∂ r ε θ θ = 1 r ( ∂ u θ ∂ θ + u r ) ε z z = ∂ u z ∂ z ε r θ = 1 2 ( 1 r ∂ u r ∂ θ + ∂ u θ ∂ r − u θ r ) ε θ z = 1 2 ( ∂ u θ ∂ z + 1 r ∂ u z ∂ θ ) ε z r = 1 2 ( ∂ u r ∂ z + ∂ u z ∂ r ) {\displaystyle {\begin{aligned}\varepsilon _{rr}&={\cfrac {\partial u_{r}}{\partial r}}\\\varepsilon _{\theta \theta }&={\cfrac {1}{r}}\left({\cfrac {\partial u_{\theta }}{\partial \theta }}+u_{r}\right)\\\varepsilon _{zz}&={\cfrac {\partial u_{z}}{\partial z}}\\\varepsilon _{r\theta }&={\cfrac {1}{2}}\left({\cfrac {1}{r}}{\cfrac {\partial u_{r}}{\partial \theta }}+{\cfrac {\partial u_{\theta }}{\partial r}}-{\cfrac {u_{\theta }}{r}}\right)\\\varepsilon _{\theta z}&={\cfrac {1}{2}}\left({\cfrac {\partial u_{\theta }}{\partial z}}+{\cfrac {1}{r}}{\cfrac {\partial u_{z}}{\partial \theta }}\right)\\\varepsilon _{zr}&={\cfrac {1}{2}}\left({\cfrac {\partial u_{r}}{\partial z}}+{\cfrac {\partial u_{z}}{\partial r}}\right)\end{aligned}}} === Strain tensor in spherical coordinates === In spherical coordinates ( r , θ , ϕ {\displaystyle r,\theta ,\phi } ), the displacement vector can be written as u = u r e r + u θ e θ + u ϕ e ϕ {\displaystyle \mathbf {u} =u_{r}~\mathbf {e} _{r}+u_{\theta }~\mathbf {e} _{\theta }+u_{\phi }~\mathbf {e} _{\phi }} The components of the strain tensor in a spherical coordinate system are given by ε r r = ∂ u r ∂ r ε θ θ = 1 r ( ∂ u θ ∂ θ + u r ) ε ϕ ϕ = 1 r sin ⁡ θ ( ∂ u ϕ ∂ ϕ + u r sin ⁡ θ + u θ cos ⁡ θ ) ε r θ = 1 2 ( 1 r ∂ u r ∂ θ + ∂ u θ ∂ r − u θ r ) ε θ ϕ = 1 2 r ( 1 sin ⁡ θ ∂ u θ ∂ ϕ + ∂ u ϕ ∂ θ − u ϕ cot ⁡ θ ) ε ϕ r = 1 2 ( 1 r sin ⁡ θ ∂ u r ∂ ϕ + ∂ u ϕ ∂ r − u ϕ r ) {\displaystyle {\begin{aligned}\varepsilon _{rr}&={\cfrac {\partial u_{r}}{\partial r}}\\\varepsilon _{\theta \theta }&={\cfrac {1}{r}}\left({\cfrac {\partial u_{\theta }}{\partial \theta }}+u_{r}\right)\\\varepsilon _{\phi \phi }&={\cfrac {1}{r\sin \theta }}\left({\cfrac {\partial u_{\phi }}{\partial \phi }}+u_{r}\sin \theta +u_{\theta }\cos \theta \right)\\\varepsilon _{r\theta }&={\cfrac {1}{2}}\left({\cfrac {1}{r}}{\cfrac {\partial u_{r}}{\partial \theta }}+{\cfrac {\partial u_{\theta }}{\partial r}}-{\cfrac {u_{\theta }}{r}}\right)\\\varepsilon _{\theta \phi }&={\cfrac {1}{2r}}\left({\cfrac {1}{\sin \theta }}{\cfrac {\partial u_{\theta }}{\partial \phi }}+{\cfrac {\partial u_{\phi }}{\partial \theta }}-u_{\phi }\cot \theta \right)\\\varepsilon _{\phi r}&={\cfrac {1}{2}}\left({\cfrac {1}{r\sin \theta }}{\cfrac {\partial u_{r}}{\partial \phi }}+{\cfrac {\partial u_{\phi }}{\partial r}}-{\cfrac {u_{\phi }}{r}}\right)\end{aligned}}} == See also == Deformation (mechanics) Compatibility (mechanics) Stress tensor Strain gauge Elasticity tensor Stress–strain curve Hooke's law Poisson's ratio Finite strain theory Strain rate Plane stress Digital image correlation == References == == External links ==
Wikipedia/Infinitesimal_strain_theory
In nonstandard analysis, the standard part function is a function from the limited (finite) hyperreal numbers to the real numbers. Briefly, the standard part function "rounds off" a finite hyperreal to the nearest real. It associates to every such hyperreal x {\displaystyle x} , the unique real x 0 {\displaystyle x_{0}} infinitely close to it, i.e. x − x 0 {\displaystyle x-x_{0}} is infinitesimal. As such, it is a mathematical implementation of the historical concept of adequality introduced by Pierre de Fermat, as well as Leibniz's Transcendental law of homogeneity. The standard part function was first defined by Abraham Robinson who used the notation ∘ x {\displaystyle {}^{\circ }x} for the standard part of a hyperreal x {\displaystyle x} (see Robinson 1974). This concept plays a key role in defining the concepts of the calculus, such as continuity, the derivative, and the integral, in nonstandard analysis. The latter theory is a rigorous formalization of calculations with infinitesimals. The standard part of x is sometimes referred to as its shadow. == Definition == Nonstandard analysis deals primarily with the pair R ⊆ ∗ R {\displaystyle \mathbb {R} \subseteq {}^{*}\mathbb {R} } , where the hyperreals ∗ R {\displaystyle {}^{*}\mathbb {R} } are an ordered field extension of the reals R {\displaystyle \mathbb {R} } , and contain infinitesimals, in addition to the reals. In the hyperreal line every real number has a collection of numbers (called a monad, or halo) of hyperreals infinitely close to it. The standard part function associates to a finite hyperreal x, the unique standard real number x0 that is infinitely close to it. The relationship is expressed symbolically by writing st ⁡ ( x ) = x 0 . {\displaystyle \operatorname {st} (x)=x_{0}.} The standard part of any infinitesimal is 0. Thus if N is an infinite hypernatural, then 1/N is infinitesimal, and st(1/N) = 0. If a hyperreal u {\displaystyle u} is represented by a Cauchy sequence ⟨ u n : n ∈ N ⟩ {\displaystyle \langle u_{n}:n\in \mathbb {N} \rangle } in the ultrapower construction, then st ⁡ ( u ) = lim n → ∞ u n . {\displaystyle \operatorname {st} (u)=\lim _{n\to \infty }u_{n}.} More generally, each finite u ∈ ∗ R {\displaystyle u\in {}^{*}\mathbb {R} } defines a Dedekind cut on the subset R ⊆ ∗ R {\displaystyle \mathbb {R} \subseteq {}^{*}\mathbb {R} } (via the total order on ∗ R {\displaystyle {}^{\ast }\mathbb {R} } ) and the corresponding real number is the standard part of u. == Not internal == The standard part function "st" is not defined by an internal set. There are several ways of explaining this. Perhaps the simplest is that its domain L, which is the collection of limited (i.e. finite) hyperreals, is not an internal set. Namely, since L is bounded (by any infinite hypernatural, for instance), L would have to have a least upper bound if L were internal, but L doesn't have a least upper bound. Alternatively, the range of "st" is R ⊆ ∗ R {\displaystyle \mathbb {R} \subseteq {}^{*}\mathbb {R} } , which is not internal; in fact every internal set in ∗ R {\displaystyle {}^{*}\mathbb {R} } that is a subset of R {\displaystyle \mathbb {R} } is necessarily finite. == Applications == All the traditional notions of calculus can be expressed in terms of the standard part function, as follows. === Derivative === The standard part function is used to define the derivative of a function f. If f is a real function, and h is infinitesimal, and if f′(x) exists, then f ′ ( x ) = st ⁡ ( f ( x + h ) − f ( x ) h ) . 
{\displaystyle f'(x)=\operatorname {st} \left({\frac {f(x+h)-f(x)}{h}}\right).} Alternatively, if y = f ( x ) {\displaystyle y=f(x)} , one takes an infinitesimal increment Δ x {\displaystyle \Delta x} , and computes the corresponding Δ y = f ( x + Δ x ) − f ( x ) {\displaystyle \Delta y=f(x+\Delta x)-f(x)} . One forms the ratio Δ y Δ x {\textstyle {\frac {\Delta y}{\Delta x}}} . The derivative is then defined as the standard part of the ratio: d y d x = st ⁡ ( Δ y Δ x ) . {\displaystyle {\frac {dy}{dx}}=\operatorname {st} \left({\frac {\Delta y}{\Delta x}}\right).} === Integral === Given a function f {\displaystyle f} on [ a , b ] {\displaystyle [a,b]} , one defines the integral ∫ a b f ( x ) d x {\textstyle \int _{a}^{b}f(x)\,dx} as the standard part of an infinite Riemann sum S ( f , a , b , Δ x ) {\displaystyle S(f,a,b,\Delta x)} when the value of Δ x {\displaystyle \Delta x} is taken to be infinitesimal, exploiting a hyperfinite partition of the interval [a,b]. === Limit === Given a sequence ( u n ) {\displaystyle (u_{n})} , its limit is defined by lim n → ∞ u n = st ⁡ ( u H ) {\textstyle \lim _{n\to \infty }u_{n}=\operatorname {st} (u_{H})} where H ∈ ∗ N ∖ N {\displaystyle H\in {}^{*}\mathbb {N} \setminus \mathbb {N} } is an infinite index. Here the limit is said to exist if the standard part is the same regardless of the infinite index chosen. === Continuity === A real function f {\displaystyle f} is continuous at a real point x {\displaystyle x} if and only if the composition st ∘ f {\displaystyle \operatorname {st} \circ f} is constant on the halo of x {\displaystyle x} . See microcontinuity for more details. == See also == Adequality Nonstandard calculus == References == == Further reading == H. Jerome Keisler. Elementary Calculus: An Infinitesimal Approach. First edition 1976; 2nd edition 1986. (This book is now out of print. The publisher has reverted the copyright to the author, who has made the 2nd edition available for downloading in .pdf format at http://www.math.wisc.edu/~keisler/calc.html.) Abraham Robinson. Non-standard analysis. Reprint of the second (1974) edition. With a foreword by Wilhelmus A. J. Luxemburg. Princeton Landmarks in Mathematics. Princeton University Press, Princeton, NJ, 1996. xx+293 pp. ISBN 0-691-04490-2
Wikipedia/Standard_part_function
In computer science, the process calculi (or process algebras) are a diverse family of related approaches for formally modelling concurrent systems. Process calculi provide a tool for the high-level description of interactions, communications, and synchronizations between a collection of independent agents or processes. They also provide algebraic laws that allow process descriptions to be manipulated and analyzed, and permit formal reasoning about equivalences between processes (e.g., using bisimulation). Leading examples of process calculi include CSP, CCS, ACP, and LOTOS. More recent additions to the family include the π-calculus, the ambient calculus, PEPA, the fusion calculus and the join-calculus. == Essential features == While the variety of existing process calculi is very large (including variants that incorporate stochastic behaviour, timing information, and specializations for studying molecular interactions), there are several features that all process calculi have in common: Representing interactions between independent processes as communication (message-passing), rather than as modification of shared variables. Describing processes and systems using a small collection of primitives, and operators for combining those primitives. Defining algebraic laws for the process operators, which allow process expressions to be manipulated using equational reasoning. == Mathematics of processes == To define a process calculus, one starts with a set of names (or channels) whose purpose is to provide means of communication. In many implementations, channels have rich internal structure to improve efficiency, but this is abstracted away in most theoretic models. In addition to names, one needs a means to form new processes from old ones. The basic operators, always present in some form or other, allow: parallel composition of processes specification of which channels to use for sending and receiving data sequentialization of interactions hiding of interaction points recursion or process replication === Parallel composition === Parallel composition of two processes P {\displaystyle {\mathit {P}}} and Q {\displaystyle {\mathit {Q}}} , usually written P | Q {\displaystyle P\vert Q} , is the key primitive distinguishing the process calculi from sequential models of computation. Parallel composition allows computation in P {\displaystyle {\mathit {P}}} and Q {\displaystyle {\mathit {Q}}} to proceed simultaneously and independently. But it also allows interaction, that is synchronisation and flow of information from P {\displaystyle {\mathit {P}}} to Q {\displaystyle {\mathit {Q}}} (or vice versa) on a channel shared by both. Crucially, an agent or process can be connected to more than one channel at a time. Channels may be synchronous or asynchronous. In the case of a synchronous channel, the agent sending a message waits until another agent has received the message. Asynchronous channels do not require any such synchronization. In some process calculi (notably the π-calculus) channels themselves can be sent in messages through (other) channels, allowing the topology of process interconnections to change. Some process calculi also allow channels to be created during the execution of a computation. === Communication === Interaction can be (but isn't always) a directed flow of information. That is, input and output can be distinguished as dual interaction primitives. Process calculi that make such distinctions typically define an input operator (e.g. 
x ( v ) {\displaystyle x(v)} ) and an output operator (e.g. x ⟨ y ⟩ {\displaystyle x\langle y\rangle } ), both of which name an interaction point (here x {\displaystyle {\mathit {x}}} ) that is used to synchronise with a dual interaction primitive. Should information be exchanged, it will flow from the outputting to the inputting process. The output primitive will specify the data to be sent. In x ⟨ y ⟩ {\displaystyle x\langle y\rangle } , this data is y {\displaystyle y} . Similarly, if an input expects to receive data, one or more bound variables will act as place-holders to be substituted by data, when it arrives. In x ( v ) {\displaystyle x(v)} , v {\displaystyle v} plays that role. The choice of the kind of data that can be exchanged in an interaction is one of the key features that distinguishes different process calculi. === Sequential composition === Sometimes interactions must be temporally ordered. For example, it might be desirable to specify algorithms such as: first receive some data on x {\displaystyle {\mathit {x}}} and then send that data on y {\displaystyle {\mathit {y}}} . Sequential composition can be used for such purposes. It is well known from other models of computation. In process calculi, the sequentialisation operator is usually integrated with input or output, or both. For example, the process x ( v ) ⋅ P {\displaystyle x(v)\cdot P} will wait for an input on x {\displaystyle {\mathit {x}}} . Only when this input has occurred will the process P {\displaystyle {\mathit {P}}} be activated, with the received data through x {\displaystyle {\mathit {x}}} substituted for identifier v {\displaystyle {\mathit {v}}} . === Reduction semantics === The key operational reduction rule, containing the computational essence of process calculi, can be given solely in terms of parallel composition, sequentialization, input, and output. The details of this reduction vary among the calculi, but the essence remains roughly the same. The reduction rule is: x ⟨ y ⟩ ⋅ P | x ( v ) ⋅ Q ⟶ P | Q [ y / v ] {\displaystyle x\langle y\rangle \cdot P\;\vert \;x(v)\cdot Q\longrightarrow P\;\vert \;Q[^{y}\!/\!_{v}]} The interpretation of this reduction rule is: The process x ⟨ y ⟩ ⋅ P {\displaystyle x\langle y\rangle \cdot P} sends a message, here y {\displaystyle {\mathit {y}}} , along the channel x {\displaystyle {\mathit {x}}} . Dually, the process x ( v ) ⋅ Q {\displaystyle x(v)\cdot Q} receives that message on channel x {\displaystyle {\mathit {x}}} . Once the message has been sent, x ⟨ y ⟩ ⋅ P {\displaystyle x\langle y\rangle \cdot P} becomes the process P {\displaystyle {\mathit {P}}} , while x ( v ) ⋅ Q {\displaystyle x(v)\cdot Q} becomes the process Q [ y / v ] {\displaystyle Q[^{y}\!/\!_{v}]} , which is Q {\displaystyle {\mathit {Q}}} with the place-holder v {\displaystyle {\mathit {v}}} substituted by y {\displaystyle {\mathit {y}}} , the data received on x {\displaystyle {\mathit {x}}} . The class of processes that P {\displaystyle {\mathit {P}}} is allowed to range over as the continuation of the output operation substantially influences the properties of the calculus. === Hiding === Processes do not limit the number of connections that can be made at a given interaction point. But interaction points allow interference (i.e. interaction). For the synthesis of compact, minimal and compositional systems, the ability to restrict interference is crucial. Hiding operations allow control of the connections made between interaction points when composing agents in parallel. 
Hiding can be denoted in a variety of ways. For example, in the π-calculus the hiding of a name x {\displaystyle {\mathit {x}}} in P {\displaystyle {\mathit {P}}} can be expressed as ( ν x ) P {\displaystyle (\nu \;x)P} , while in CSP it might be written as P ∖ { x } {\displaystyle P\setminus \{x\}} . === Recursion and replication === The operations presented so far describe only finite interaction and are consequently insufficient for full computability, which includes non-terminating behaviour. Recursion and replication are operations that allow finite descriptions of infinite behaviour. Recursion is well known from the sequential world. Replication ! P {\displaystyle !P} can be understood as abbreviating the parallel composition of a countably infinite number of P {\displaystyle {\mathit {P}}} processes: ! P = P ∣ ! P {\displaystyle !P=P\mid !P} === Null process === Process calculi generally also include a null process (variously denoted as n i l {\displaystyle {\mathit {nil}}} , 0 {\displaystyle 0} , S T O P {\displaystyle {\mathit {STOP}}} , δ {\displaystyle \delta } , or some other appropriate symbol) which has no interaction points. It is utterly inactive and its sole purpose is to act as the inductive anchor on top of which more interesting processes can be generated. == Discrete and continuous process algebra == Process algebra has been studied for discrete time and continuous time (real time or dense time). == History == In the first half of the 20th century, various formalisms were proposed to capture the informal concept of a computable function, with μ-recursive functions, Turing machines and the lambda calculus possibly being the best-known examples today. The surprising fact that they are essentially equivalent, in the sense that they are all encodable into each other, supports the Church-Turing thesis. Another shared feature is more rarely commented on: they all are most readily understood as models of sequential computation. The subsequent consolidation of computer science required a more subtle formulation of the notion of computation, in particular explicit representations of concurrency and communication. Models of concurrency such as the process calculi, Petri nets in 1962, and the actor model in 1973 emerged from this line of inquiry. Research on process calculi began in earnest with Robin Milner's seminal work on the Calculus of Communicating Systems (CCS) during the period from 1973 to 1980. C.A.R. Hoare's Communicating Sequential Processes (CSP) first appeared in 1978, and was subsequently developed into a full-fledged process calculus during the early 1980s. There was much cross-fertilization of ideas between CCS and CSP as they developed. In 1982 Jan Bergstra and Jan Willem Klop began work on what came to be known as the Algebra of Communicating Processes (ACP), and introduced the term process algebra to describe their work. CCS, CSP, and ACP constitute the three major branches of the process calculi family: the majority of the other process calculi can trace their roots to one of these three calculi. == Current research == Various process calculi have been studied and not all of them fit the paradigm sketched here. The most prominent example may be the ambient calculus. This is to be expected as process calculi are an active field of study. Currently research on process calculi focuses on the following problems. Developing new process calculi for better modeling of computational phenomena. Finding well-behaved subcalculi of a given process calculus. 
This is valuable because (1) most calculi are fairly wild in the sense that they are rather general and not much can be said about arbitrary processes; and (2) computational applications rarely exhaust the whole of a calculus. Rather, they use only processes that are very constrained in form. Constraining the shape of processes is mostly studied by way of type systems. Logics for processes that allow one to reason about (essentially) arbitrary properties of processes, following the ideas of Hoare logic. Behavioural theory: what does it mean for two processes to be the same? How can we decide whether two processes are different or not? Can we find representatives for equivalence classes of processes? Generally, processes are considered to be the same if no context, that is other processes running in parallel, can detect a difference. Unfortunately, making this intuition precise is subtle and mostly yields unwieldy characterisations of equality (which in most cases must also be undecidable, as a consequence of the halting problem). Bisimulations are a technical tool that aids reasoning about process equivalences. Expressivity of calculi. Programming experience shows that certain problems are easier to solve in some languages than in others. This phenomenon calls for a more precise characterisation of the expressivity of calculi modeling computation than that afforded by the Church–Turing thesis. One way of doing this is to consider encodings between two formalisms and see what properties encodings can potentially preserve. The more properties that can be preserved, the more expressive the target of the encoding is said to be. For process calculi, the celebrated results are that the synchronous π-calculus is more expressive than its asynchronous variant, has the same expressive power as the higher-order π-calculus, but is strictly less expressive than the ambient calculus. Using process calculus to model biological systems (stochastic π-calculus, BioAmbients, Beta Binders, BioPEPA, Brane calculus). It is thought by some that the compositionality offered by process-theoretic tools can help biologists to organise their knowledge more formally. == Software implementations == The ideas behind process algebra have given rise to several tools including: CADP Concurrency Workbench mCRL2 toolset == Relationship to other models of concurrency == The history monoid is the free object that is generically able to represent the histories of individual communicating processes. A process calculus is then a formal language imposed on a history monoid in a consistent fashion. That is, a history monoid can only record a sequence of events, with synchronization, but does not specify the allowed state transitions. Thus, a process calculus is to a history monoid what a formal language is to a free monoid (a formal language is a subset of the set of all possible finite-length strings of an alphabet generated by the Kleene star). The use of channels for communication is one of the features distinguishing the process calculi from other models of concurrency, such as Petri nets and the actor model (see Actor model and process calculi). One of the fundamental motivations for including channels in the process calculi was to enable certain algebraic techniques, thereby making it easier to reason about processes algebraically. 
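To make the reduction rule from the reduction semantics section above concrete, the following toy Python sketch implements the communication step for a minimal calculus. The tuple encoding of processes is an ad-hoc assumption made here for illustration; it is not the input language of CADP, mCRL2 or any other tool, and it deliberately ignores alpha-renaming, restriction and structural congruence.

```python
# Toy illustration of the reduction rule
#   x<y>.P | x(v).Q  -->  P | Q[y/v]
# Processes are nested tuples (an ad-hoc encoding, not a standard API):
#   ("out", channel, datum, continuation)   output prefix  x<y>.P
#   ("in",  channel, var,   continuation)   input prefix   x(v).Q
#   ("par", left, right)                    parallel composition P | Q
#   ("nil",)                                the null process

def substitute(proc, var, datum):
    """Replace free occurrences of `var` by `datum` (no alpha-renaming:
    adequate for this toy example, unsafe in general)."""
    tag = proc[0]
    if tag == "nil":
        return proc
    if tag == "par":
        return ("par", substitute(proc[1], var, datum),
                       substitute(proc[2], var, datum))
    if tag == "out":
        _, ch, d, cont = proc
        return ("out", datum if ch == var else ch,
                       datum if d == var else d,
                       substitute(cont, var, datum))
    if tag == "in":
        _, ch, v, cont = proc
        ch = datum if ch == var else ch
        if v == var:            # the binder shadows the substituted variable
            return ("in", ch, v, cont)
        return ("in", ch, v, substitute(cont, var, datum))

def reduce_once(proc):
    """Apply the communication rule once at the top level, if possible."""
    if proc[0] == "par":
        l, r = proc[1], proc[2]
        if l[0] == "out" and r[0] == "in" and l[1] == r[1]:
            return ("par", l[3], substitute(r[3], r[2], l[2]))
        if l[0] == "in" and r[0] == "out" and l[1] == r[1]:
            return ("par", substitute(l[3], l[2], r[2]), r[3])
    return None  # no communication possible at the top level

# x<y>.0 | x(v).v<z>.0  reduces to  0 | y<z>.0
p = ("par", ("out", "x", "y", ("nil",)),
            ("in",  "x", "v", ("out", "v", "z", ("nil",))))
print(reduce_once(p))   # ('par', ('nil',), ('out', 'y', 'z', ('nil',)))
```

Running the example reduces x⟨y⟩.0 | x(v).v⟨z⟩.0 to 0 | y⟨z⟩.0: the received name y is substituted for the placeholder v in the continuation, which is exactly the mobility mechanism the π-calculus builds on.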
== See also == Communicating sequential processes ProVerif Stochastic probe Tamarin Prover Temporal Process Language π-calculus == References == == Further reading == Matthew Hennessy: Algebraic Theory of Processes, The MIT Press, ISBN 0-262-08171-7. C. A. R. Hoare: Communicating Sequential Processes, Prentice Hall, ISBN 0-13-153289-8. This book has been updated by Jim Davies at the Oxford University Computing Laboratory and the new edition is available for download as a PDF file at the Using CSP website. Robin Milner: A Calculus of Communicating Systems, Springer Verlag, ISBN 0-387-10235-3. Robin Milner: Communicating and Mobile Systems: the Pi-Calculus, Cambridge University Press, ISBN 0-521-65869-1. Valk, Rüdiger; Moldt, Daniel; Köhler-Bußmeier, Michael, eds. (2011). "Chapter 5: Prozessalgebra - Parallele und kommunizierende Prozesse" (PDF). Formale Grundlagen der Informatik II: Modellierung und Analyse von Informatiksystemen (in German). Vol. Part 2. University of Hamburg. FGI2. Archived (PDF) from the original on 2019-07-09. Retrieved 2019-07-13.
Wikipedia/Process_calculus
In mathematics, nonstandard calculus is the modern application of infinitesimals, in the sense of nonstandard analysis, to infinitesimal calculus. It provides a rigorous justification for some arguments in calculus that were previously considered merely heuristic. Non-rigorous calculations with infinitesimals were widely used before Karl Weierstrass sought to replace them with the (ε, δ)-definition of limit starting in the 1870s. For almost one hundred years thereafter, mathematicians such as Richard Courant viewed infinitesimals as being naive and vague or meaningless. Contrary to such views, Abraham Robinson showed in 1960 that infinitesimals are precise, clear, and meaningful, building upon work by Edwin Hewitt and Jerzy Łoś. According to Howard Keisler, "Robinson solved a three hundred year old problem by giving a precise treatment of infinitesimals. Robinson's achievement will probably rank as one of the major mathematical advances of the twentieth century." == History == The history of nonstandard calculus began with the use of infinitely small quantities, called infinitesimals in calculus. The use of infinitesimals can be found in the foundations of calculus independently developed by Gottfried Leibniz and Isaac Newton starting in the 1660s. John Wallis refined earlier techniques of indivisibles of Cavalieri and others by exploiting an infinitesimal quantity he denoted 1 ∞ {\displaystyle {\tfrac {1}{\infty }}} in area calculations, preparing the ground for integral calculus. They drew on the work of such mathematicians as Pierre de Fermat, Isaac Barrow and René Descartes. In early calculus the use of infinitesimal quantities was criticized by a number of authors, most notably Michel Rolle and Bishop Berkeley in his book The Analyst. Several mathematicians, including Maclaurin and d'Alembert, advocated the use of limits. Augustin Louis Cauchy developed a versatile spectrum of foundational approaches, including a definition of continuity in terms of infinitesimals and a (somewhat imprecise) prototype of an ε, δ argument in working with differentiation. Karl Weierstrass formalized the concept of limit in the context of a (real) number system without infinitesimals. Following the work of Weierstrass, it eventually became common to base calculus on ε, δ arguments instead of infinitesimals. This approach formalized by Weierstrass came to be known as the standard calculus. After many years of the infinitesimal approach to calculus having fallen into disuse other than as an introductory pedagogical tool, use of infinitesimal quantities was finally given a rigorous foundation by Abraham Robinson in the 1960s. Robinson's approach is called nonstandard analysis to distinguish it from the standard use of limits. This approach used technical machinery from mathematical logic to create a theory of hyperreal numbers that interpret infinitesimals in a manner that allows a Leibniz-like development of the usual rules of calculus. An alternative approach, developed by Edward Nelson, finds infinitesimals on the ordinary real line itself, and involves a modification of the foundational setting by extending ZFC through the introduction of a new unary predicate "standard". 
== Motivation == To calculate the derivative f ′ {\displaystyle f'} of the function y = f ( x ) = x 2 {\displaystyle y=f(x)=x^{2}} at x, both approaches agree on the algebraic manipulations: Δ y Δ x = ( x + Δ x ) 2 − x 2 Δ x = 2 x Δ x + ( Δ x ) 2 Δ x = 2 x + Δ x ≈ 2 x {\displaystyle {\frac {\Delta y}{\Delta x}}={\frac {(x+\Delta x)^{2}-x^{2}}{\Delta x}}={\frac {2x\Delta x+(\Delta x)^{2}}{\Delta x}}=2x+\Delta x\approx 2x} This becomes a computation of the derivatives using the hyperreals if Δ x {\displaystyle \Delta x} is interpreted as an infinitesimal and the symbol " ≈ {\displaystyle \approx } " is the relation "is infinitely close to". In order to make f ' a real-valued function, the final term Δ x {\displaystyle \Delta x} is dispensed with. In the standard approach using only real numbers, that is done by taking the limit as Δ x {\displaystyle \Delta x} tends to zero. In the hyperreal approach, the quantity Δ x {\displaystyle \Delta x} is taken to be an infinitesimal, a nonzero number that is closer to 0 than to any nonzero real. The manipulations displayed above then show that Δ y / Δ x {\displaystyle \Delta y/\Delta x} is infinitely close to 2x, so the derivative of f at x is then 2x. Discarding the "error term" is accomplished by an application of the standard part function. Dispensing with infinitesimal error terms was historically considered paradoxical by some writers, most notably George Berkeley. Once the hyperreal number system (an infinitesimal-enriched continuum) is in place, one has successfully incorporated a large part of the technical difficulties at the foundational level. Thus, the epsilon, delta techniques that some believe to be the essence of analysis can be implemented once and for all at the foundational level, and the students needn't be "dressed to perform multiple-quantifier logical stunts on pretense of being taught infinitesimal calculus", to quote a recent study. More specifically, the basic concepts of calculus such as continuity, derivative, and integral can be defined using infinitesimals without reference to epsilon, delta. == Keisler's textbook == Keisler's Elementary Calculus: An Infinitesimal Approach defines continuity on page 125 in terms of infinitesimals, to the exclusion of epsilon, delta methods. The derivative is defined on page 45 using infinitesimals rather than an epsilon-delta approach. The integral is defined on page 183 in terms of infinitesimals. Epsilon, delta definitions are introduced on page 282. == Definition of derivative == The hyperreals can be constructed in the framework of Zermelo–Fraenkel set theory, the standard axiomatisation of set theory used elsewhere in mathematics. To give an intuitive idea for the hyperreal approach, note that, naively speaking, nonstandard analysis postulates the existence of positive numbers ε which are infinitely small, meaning that ε is smaller than any standard positive real, yet greater than zero. Every real number x is surrounded by an infinitesimal "cloud" of hyperreal numbers infinitely close to it. To define the derivative of f at a standard real number x in this approach, one no longer needs an infinite limiting process as in standard calculus. 
Instead, one sets f ′ ( x ) = s t ( f ∗ ( x + ε ) − f ∗ ( x ) ε ) , {\displaystyle f'(x)=\mathrm {st} \left({\frac {f^{*}(x+\varepsilon )-f^{*}(x)}{\varepsilon }}\right),} where st is the standard part function, yielding the real number infinitely close to the hyperreal argument of st, and f ∗ {\displaystyle f^{*}} is the natural extension of f {\displaystyle f} to the hyperreals. == Continuity == A real function f is continuous at a standard real number x if for every hyperreal x' infinitely close to x, the value f(x' ) is also infinitely close to f(x). This captures Cauchy's definition of continuity as presented in his 1821 textbook Cours d'Analyse, p. 34. Here to be precise, f would have to be replaced by its natural hyperreal extension usually denoted f*. Using the notation ≈ {\displaystyle \approx } for the relation of being infinitely close as above, the definition can be extended to arbitrary (standard or nonstandard) points as follows: A function f is microcontinuous at x if whenever x ′ ≈ x {\displaystyle x'\approx x} , one has f ∗ ( x ′ ) ≈ f ∗ ( x ) {\displaystyle f^{*}(x')\approx f^{*}(x)} Here the point x' is assumed to be in the domain of (the natural extension of) f. The above requires fewer quantifiers than the (ε, δ)-definition familiar from standard elementary calculus: f is continuous at x if for every ε > 0, there exists a δ > 0 such that for every x' , whenever |x − x' | < δ, one has |f(x) − f(x' )| < ε. == Uniform continuity == A function f on an interval I is uniformly continuous if its natural extension f* in I* has the following property: for every pair of hyperreals x and y in I*, if x ≈ y {\displaystyle x\approx y} then f ∗ ( x ) ≈ f ∗ ( y ) {\displaystyle f^{*}(x)\approx f^{*}(y)} . In terms of microcontinuity defined in the previous section, this can be stated as follows: a real function is uniformly continuous if its natural extension f* is microcontinuous at every point of the domain of f*. This definition has a reduced quantifier complexity when compared with the standard (ε, δ)-definition. Namely, the epsilon-delta definition of uniform continuity requires four quantifiers, while the infinitesimal definition requires only two quantifiers. It has the same quantifier complexity as the definition of uniform continuity in terms of sequences in standard calculus, which however is not expressible in the first-order language of the real numbers. The hyperreal definition can be illustrated by the following three examples. Example 1: a function f is uniformly continuous on the semi-open interval (0,1], if and only if its natural extension f* is microcontinuous (in the sense of the formula above) at every positive infinitesimal, in addition to continuity at the standard points of the interval. Example 2: a function f is uniformly continuous on the semi-open interval [0,∞) if and only if it is continuous at the standard points of the interval, and in addition, the natural extension f* is microcontinuous at every positive infinite hyperreal point. Example 3: similarly, the failure of uniform continuity for the squaring function x 2 {\displaystyle x^{2}} is due to the absence of microcontinuity at a single infinite hyperreal point. Concerning quantifier complexity, the following remarks were made by Kevin Houston: The number of quantifiers in a mathematical statement gives a rough measure of the statement’s complexity. Statements involving three or more quantifiers can be difficult to understand. 
This is the main reason why it is hard to understand the rigorous definitions of limit, convergence, continuity and differentiability in analysis as they have many quantifiers. In fact, it is the alternation of the ∀ {\displaystyle \forall } and ∃ {\displaystyle \exists } that causes the complexity. Andreas Blass wrote as follows: Often ... the nonstandard definition of a concept is simpler than the standard definition (both intuitively simpler and simpler in a technical sense, such as quantifiers over lower types or fewer alternations of quantifiers). == Compactness == A set A is compact if and only if its natural extension A* has the following property: every point in A* is infinitely close to a point of A. Thus, the open interval (0,1) is not compact because its natural extension contains positive infinitesimals which are not infinitely close to any positive real number. == Heine–Cantor theorem == The fact that a continuous function on a compact interval I is necessarily uniformly continuous (the Heine–Cantor theorem) admits a succinct hyperreal proof. Let x, y be hyperreals in the natural extension I* of I. Since I is compact, both st(x) and st(y) belong to I. If x and y were infinitely close, then by the triangle inequality, they would have the same standard part c = st ⁡ ( x ) = st ⁡ ( y ) . {\displaystyle c=\operatorname {st} (x)=\operatorname {st} (y).} Since the function is assumed continuous at c, f ( x ) ≈ f ( c ) ≈ f ( y ) , {\displaystyle f(x)\approx f(c)\approx f(y),} and therefore f(x) and f(y) are infinitely close, proving uniform continuity of f. == Why is the squaring function not uniformly continuous? == Let f(x) = x2 defined on R {\displaystyle \mathbb {R} } . Let N ∈ R ∗ {\displaystyle N\in \mathbb {R} ^{*}} be an infinite hyperreal. The hyperreal number N + 1 N {\displaystyle N+{\tfrac {1}{N}}} is infinitely close to N. Meanwhile, the difference f ( N + 1 N ) − f ( N ) = N 2 + 2 + 1 N 2 − N 2 = 2 + 1 N 2 {\displaystyle f(N+{\tfrac {1}{N}})-f(N)=N^{2}+2+{\tfrac {1}{N^{2}}}-N^{2}=2+{\tfrac {1}{N^{2}}}} is not infinitesimal. Therefore, f* fails to be microcontinuous at the hyperreal point N. Thus, the squaring function is not uniformly continuous, according to the definition in uniform continuity above. A similar proof may be given in the standard setting (Fitzpatrick 2006, Example 3.15). == Example: Dirichlet function == Consider the Dirichlet function I Q ( x ) := { 1 if x is rational , 0 if x is irrational . {\displaystyle I_{Q}(x):={\begin{cases}1&{\text{ if }}x{\text{ is rational}},\\0&{\text{ if }}x{\text{ is irrational}}.\end{cases}}} It is well known that, under the standard definition of continuity, the function is discontinuous at every point. Let us check this in terms of the hyperreal definition of continuity above, for instance let us show that the Dirichlet function is not continuous at π. Consider the continued fraction approximation an of π. Now let the index n be an infinite hypernatural number. By the transfer principle, the natural extension of the Dirichlet function takes the value 1 at an. Note that the hyperrational point an is infinitely close to π. Thus the natural extension of the Dirichlet function takes different values (0 and 1) at these two infinitely close points, and therefore the Dirichlet function is not continuous at π. 
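Floating-point arithmetic contains no genuine infinitesimals, so the failure of microcontinuity at infinite points can only be caricatured numerically; with that caveat, the following Python sketch mirrors the two calculations above by letting the increment 1/N shrink while the base point N grows.

```python
# Numerical caricature of the hyperreal argument: as N grows, the increment
# h = 1/N shrinks toward an "infinitesimal", yet at the base point N the
# difference f(N + h) - f(N) = 2 + 1/N**2 stays near 2 instead of vanishing.
def f(x):
    return x * x

for N in (1e1, 1e3, 1e6):
    h = 1.0 / N
    print(f(N + h) - f(N))      # approximately 2, however large N is

# At a fixed finite point the same increment produces a vanishing change,
# mirroring microcontinuity at standard points.
for N in (1e1, 1e3, 1e6):
    h = 1.0 / N
    print(f(1.0 + h) - f(1.0))  # = 2h + h**2: shrinks with h
```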
== Limit == While the thrust of Robinson's approach is that one can dispense with the approach using multiple quantifiers, the notion of limit can be easily recaptured in terms of the standard part function st, namely lim x → a f ( x ) = L {\displaystyle \lim _{x\to a}f(x)=L} if and only if whenever the difference x − a is infinitesimal, the difference f(x) − L is infinitesimal, as well, or in formulas: if st(x) = a then st(f(x)) = L, cf. (ε, δ)-definition of limit. == Limit of sequence == Given a sequence of real numbers { x n ∣ n ∈ N } {\displaystyle \{x_{n}\mid n\in \mathbb {N} \}} , a real number L ∈ R {\displaystyle L\in \mathbb {R} } is its limit, L = lim n → ∞ x n {\displaystyle L=\lim _{n\to \infty }x_{n}} , if and only if for every infinite hypernatural n, st(xn) = L (here the extension principle is used to define xn for every hyperinteger n). This definition has no quantifier alternations. The standard (ε, δ)-style definition, on the other hand, does have quantifier alternations: L = lim n → ∞ x n ⟺ ∀ ε > 0 , ∃ N ∈ N , ∀ n ∈ N : n > N → | x n − L | < ε . {\displaystyle L=\lim _{n\to \infty }x_{n}\Longleftrightarrow \forall \varepsilon >0\;,\exists N\in \mathbb {N} \;,\forall n\in \mathbb {N} :n>N\rightarrow |x_{n}-L|<\varepsilon .} == Extreme value theorem == To show that a real continuous function f on [0,1] has a maximum, let N be an infinite hyperinteger. The interval [0, 1] has a natural hyperreal extension. The function f is also naturally extended to hyperreals between 0 and 1. Consider the partition of the hyperreal interval [0,1] into N subintervals of equal infinitesimal length 1/N, with partition points xi = i /N as i "runs" from 0 to N. In the standard setting (when N is finite), a point with the maximal value of f can always be chosen among the N+1 points xi, by induction. Hence, by the transfer principle, there is a hyperinteger i0 such that 0 ≤ i0 ≤ N and f ( x i 0 ) ≥ f ( x i ) {\displaystyle f(x_{i_{0}})\geq f(x_{i})} for all i = 0, …, N (an alternative explanation is that every hyperfinite set admits a maximum). Consider the real point c = s t ( x i 0 ) {\displaystyle c={\rm {st}}(x_{i_{0}})} where st is the standard part function. An arbitrary real point x lies in a suitable sub-interval of the partition, namely x ∈ [ x i , x i + 1 ] {\displaystyle x\in [x_{i},x_{i+1}]} , so that st(xi) = x. Applying st to the inequality f ( x i 0 ) ≥ f ( x i ) {\displaystyle f(x_{i_{0}})\geq f(x_{i})} , s t ( f ( x i 0 ) ) ≥ s t ( f ( x i ) ) {\displaystyle {\rm {st}}(f(x_{i_{0}}))\geq {\rm {st}}(f(x_{i}))} . By continuity of f, s t ( f ( x i 0 ) ) = f ( s t ( x i 0 ) ) = f ( c ) {\displaystyle {\rm {st}}(f(x_{i_{0}}))=f({\rm {st}}(x_{i_{0}}))=f(c)} . Hence f(c) ≥ f(x), for all x, proving c to be a maximum of the real function f. == Intermediate value theorem == As another illustration of the power of Robinson's approach, a short proof of the intermediate value theorem (Bolzano's theorem) using infinitesimals can be given as follows. Let f be a continuous function on [a,b] such that f(a)<0 while f(b)>0. Then there exists a point c in [a,b] such that f(c)=0. The proof proceeds as follows. Let N be an infinite hyperinteger. Consider a partition of [a,b] into N intervals of equal length, with partition points xi as i runs from 0 to N. Consider the collection I of indices such that f(xi)>0. Let i0 be the least element in I (such an element exists by the transfer principle, as I is a hyperfinite set). 
Then the real number c = s t ( x i 0 ) {\displaystyle c=\mathrm {st} (x_{i_{0}})} is the desired zero of f. Such a proof reduces the quantifier complexity of a standard proof of the IVT. == Basic theorems == If f is a real valued function defined on an interval [a, b], then the transfer operator applied to f, denoted by *f, is an internal, hyperreal-valued function defined on the hyperreal interval [*a, *b]. Theorem: Let f be a real-valued function defined on an interval [a, b]. Then f is differentiable at a < x < b if and only if for every non-zero infinitesimal h, the value Δ h f := st ⁡ [ ∗ f ] ( x + h ) − [ ∗ f ] ( x ) h {\displaystyle \Delta _{h}f:=\operatorname {st} {\frac {[{}^{*}\!f](x+h)-[{}^{*}\!f](x)}{h}}} is independent of h. In that case, the common value is the derivative of f at x. This fact follows from the transfer principle of nonstandard analysis and overspill. Note that a similar result holds for differentiability at the endpoints a, b provided the sign of the infinitesimal h is suitably restricted. For the second theorem, the Riemann integral is defined as the limit, if it exists, of a directed family of Riemann sums; these are sums of the form ∑ k = 0 n − 1 f ( ξ k ) ( x k + 1 − x k ) {\displaystyle \sum _{k=0}^{n-1}f(\xi _{k})(x_{k+1}-x_{k})} where a = x 0 ≤ ξ 0 ≤ x 1 ≤ … x n − 1 ≤ ξ n − 1 ≤ x n = b . {\displaystyle a=x_{0}\leq \xi _{0}\leq x_{1}\leq \ldots x_{n-1}\leq \xi _{n-1}\leq x_{n}=b.} Such a sequence of values is called a partition or mesh and sup k ( x k + 1 − x k ) {\displaystyle \sup _{k}(x_{k+1}-x_{k})} the width of the mesh. In the definition of the Riemann integral, the limit of the Riemann sums is taken as the width of the mesh goes to 0. Theorem: Let f be a real-valued function defined on an interval [a, b]. Then f is Riemann-integrable on [a, b] if and only if for every internal mesh of infinitesimal width, the quantity S M = st ⁡ ∑ k = 0 n − 1 [ ∗ f ] ( ξ k ) ( x k + 1 − x k ) {\displaystyle S_{M}=\operatorname {st} \sum _{k=0}^{n-1}[*f](\xi _{k})(x_{k+1}-x_{k})} is independent of the mesh. In this case, the common value is the Riemann integral of f over [a, b]. == Applications == One immediate application is an extension of the standard definitions of differentiation and integration to internal functions on intervals of hyperreal numbers. An internal hyperreal-valued function f on [a, b] is S-differentiable at x, provided Δ h f = st ⁡ f ( x + h ) − f ( x ) h {\displaystyle \Delta _{h}f=\operatorname {st} {\frac {f(x+h)-f(x)}{h}}} exists and is independent of the infinitesimal h. The value is the S derivative at x. Theorem: Suppose f is S-differentiable at every point of [a, b] where b − a is a bounded hyperreal. Suppose furthermore that | f ′ ( x ) | ≤ M a ≤ x ≤ b . {\displaystyle |f'(x)|\leq M\quad a\leq x\leq b.} Then for some infinitesimal ε | f ( b ) − f ( a ) | ≤ M ( b − a ) + ϵ . {\displaystyle |f(b)-f(a)|\leq M(b-a)+\epsilon .} To prove this, let N be a nonstandard natural number. Divide the interval [a, b] into N subintervals by placing N − 1 equally spaced intermediate points: a = x 0 < x 1 < ⋯ < x N − 1 < x N = b {\displaystyle a=x_{0}<x_{1}<\cdots <x_{N-1}<x_{N}=b} Then | f ( b ) − f ( a ) | ≤ ∑ k = 1 N − 1 | f ( x k + 1 ) − f ( x k ) | ≤ ∑ k = 1 N − 1 { | f ′ ( x k ) | + ϵ k } | x k + 1 − x k | . {\displaystyle |f(b)-f(a)|\leq \sum _{k=1}^{N-1}|f(x_{k+1})-f(x_{k})|\leq \sum _{k=1}^{N-1}\left\{|f'(x_{k})|+\epsilon _{k}\right\}|x_{k+1}-x_{k}|.} Now the maximum of any internal set of infinitesimals is infinitesimal. 
Thus all the εk's are dominated by a single infinitesimal ε. Therefore, | f ( b ) − f ( a ) | ≤ ∑ k = 1 N − 1 ( M + ϵ ) ( x k + 1 − x k ) = M ( b − a ) + ϵ ( b − a ) {\displaystyle |f(b)-f(a)|\leq \sum _{k=1}^{N-1}(M+\epsilon )(x_{k+1}-x_{k})=M(b-a)+\epsilon (b-a)} from which the result follows, since b − a is bounded and hence ε(b − a) is itself infinitesimal. == See also == Adequality Archimedes' use of infinitesimals Criticism of nonstandard analysis Differential (mathematics) Elementary Calculus: An Infinitesimal Approach Non-classical analysis History of calculus == Notes == == References == Fitzpatrick, Patrick (2006), Advanced Calculus, Brooks/Cole H. Jerome Keisler: Elementary Calculus: An Approach Using Infinitesimals. First edition 1976; 2nd edition 1986. (This book is now out of print. The publisher has reverted the copyright to the author, who has made the 2nd edition available for downloading in .pdf format at http://www.math.wisc.edu/~keisler/calc.html.) H. Jerome Keisler: Foundations of Infinitesimal Calculus, available for downloading at http://www.math.wisc.edu/~keisler/foundations.html (10 January 2007) Blass, Andreas (1978), "Review: Martin Davis, Applied nonstandard analysis, and K. D. Stroyan and W. A. J. Luxemburg, Introduction to the theory of infinitesimals, and H. Jerome Keisler, Foundations of infinitesimal calculus", Bull. Amer. Math. Soc., 84 (1): 34–41, doi:10.1090/S0002-9904-1978-14401-2 Baron, Margaret E.: The origins of the infinitesimal calculus. Pergamon Press, Oxford-Edinburgh-New York 1969. Dover Publications, Inc., New York, 1987. (A new edition of Baron's book appeared in 2004) "Infinitesimal calculus", Encyclopedia of Mathematics, EMS Press, 2001 [1994] == External links == Keisler, H. Jerome (2007). Elementary Calculus: An Infinitesimal Approach. Dover Publications. ISBN 978-0-48-648452-5. On-line version (2022) Henle, James M.; Kleinberg, Eugene M. (1979). Infinitesimal Calculus. Dover Publications. ISBN 978-0-48-642886-4. Infinitesimal Calculus at the Internet Archive Brief Calculus (2005, rev. 2015) by Benjamin Crowell. This short text is designed more for self-study or review than for classroom use. Infinitesimals are used when appropriate, and are treated more rigorously than in old books like Thompson's Calculus Made Easy, but in less detail than in Keisler's Elementary Calculus: An Approach Using Infinitesimals.
Wikipedia/Nonstandard_calculus
Fractional calculus is a branch of mathematical analysis that studies the several different possibilities of defining real number powers or complex number powers of the differentiation operator D {\displaystyle D} D f ( x ) = d d x f ( x ) , {\displaystyle Df(x)={\frac {d}{dx}}f(x)\,,} and of the integration operator J {\displaystyle J} J f ( x ) = ∫ 0 x f ( s ) d s , {\displaystyle Jf(x)=\int _{0}^{x}f(s)\,ds\,,} and developing a calculus for such operators generalizing the classical one. In this context, the term powers refers to iterative application of a linear operator D {\displaystyle D} to a function f {\displaystyle f} , that is, repeatedly composing D {\displaystyle D} with itself, as in D n ( f ) = ( D ∘ D ∘ D ∘ ⋯ ∘ D ⏟ n ) ( f ) = D ( D ( D ( ⋯ D ⏟ n ( f ) ⋯ ) ) ) . {\displaystyle {\begin{aligned}D^{n}(f)&=(\underbrace {D\circ D\circ D\circ \cdots \circ D} _{n})(f)\\&=\underbrace {D(D(D(\cdots D} _{n}(f)\cdots ))).\end{aligned}}} For example, one may ask for a meaningful interpretation of D = D 1 2 {\displaystyle {\sqrt {D}}=D^{\scriptstyle {\frac {1}{2}}}} as an analogue of the functional square root for the differentiation operator, that is, an expression for some linear operator that, when applied twice to any function, will have the same effect as differentiation. More generally, one can look at the question of defining a linear operator D a {\displaystyle D^{a}} for every real number a {\displaystyle a} in such a way that, when a {\displaystyle a} takes an integer value n ∈ Z {\displaystyle n\in \mathbb {Z} } , it coincides with the usual n {\displaystyle n} -fold differentiation D {\displaystyle D} if n > 0 {\displaystyle n>0} , and with the n {\displaystyle n} -th power of J {\displaystyle J} when n < 0 {\displaystyle n<0} . One of the motivations behind the introduction and study of these sorts of extensions of the differentiation operator D {\displaystyle D} is that the sets of operator powers { D a ∣ a ∈ R } {\displaystyle \{D^{a}\mid a\in \mathbb {R} \}} defined in this way are continuous semigroups with parameter a {\displaystyle a} , of which the original discrete semigroup of { D n ∣ n ∈ Z } {\displaystyle \{D^{n}\mid n\in \mathbb {Z} \}} for integer n {\displaystyle n} is a denumerable subgroup: since continuous semigroups have a well developed mathematical theory, they can be applied to other branches of mathematics. Fractional differential equations, also known as extraordinary differential equations, are a generalization of differential equations through the application of fractional calculus. == Historical notes == In applied mathematics and mathematical analysis, a fractional derivative is a derivative of any arbitrary order, real or complex. Its first appearance is in a letter written to Guillaume de l'Hôpital by Gottfried Wilhelm Leibniz in 1695. Around the same time, Leibniz wrote to Johann Bernoulli about derivatives of "general order". In the correspondence between Leibniz and John Wallis in 1697, Wallis's infinite product for π / 2 {\displaystyle \pi /2} is discussed. Leibniz suggested using differential calculus to achieve this result. Leibniz further used the notation d 1 / 2 y {\displaystyle {d}^{1/2}{y}} to denote the derivative of order ⁠1/2⁠. 
Fractional calculus was introduced in one of Niels Henrik Abel's early papers where all the elements can be found: the idea of fractional-order integration and differentiation, the mutually inverse relationship between them, the understanding that fractional-order differentiation and integration can be considered as the same generalized operation, and the unified notation for differentiation and integration of arbitrary real order. Independently, the foundations of the subject were laid by Liouville in a paper from 1832. Oliver Heaviside introduced the practical use of fractional differential operators in electrical transmission line analysis circa 1890. The theory and applications of fractional calculus expanded greatly over the 19th and 20th centuries, and numerous contributors have given different definitions for fractional derivatives and integrals. == Computing the fractional integral == Let f ( x ) {\displaystyle f(x)} be a function defined for x > 0 {\displaystyle x>0} . Form the definite integral from 0 to x {\displaystyle x} . Call this ( J f ) ( x ) = ∫ 0 x f ( t ) d t . {\displaystyle (Jf)(x)=\int _{0}^{x}f(t)\,dt\,.} Repeating this process gives ( J 2 f ) ( x ) = ∫ 0 x ( J f ) ( t ) d t = ∫ 0 x ( ∫ 0 t f ( s ) d s ) d t , {\displaystyle {\begin{aligned}\left(J^{2}f\right)(x)&=\int _{0}^{x}(Jf)(t)\,dt\\&=\int _{0}^{x}\left(\int _{0}^{t}f(s)\,ds\right)dt\,,\end{aligned}}} and this can be extended arbitrarily. The Cauchy formula for repeated integration, namely ( J n f ) ( x ) = 1 ( n − 1 ) ! ∫ 0 x ( x − t ) n − 1 f ( t ) d t , {\displaystyle \left(J^{n}f\right)(x)={\frac {1}{(n-1)!}}\int _{0}^{x}\left(x-t\right)^{n-1}f(t)\,dt\,,} leads in a straightforward way to a generalization for real n: using the gamma function to remove the discrete nature of the factorial function gives us a natural candidate for applications of the fractional integral operator as ( J α f ) ( x ) = 1 Γ ( α ) ∫ 0 x ( x − t ) α − 1 f ( t ) d t . {\displaystyle \left(J^{\alpha }f\right)(x)={\frac {1}{\Gamma (\alpha )}}\int _{0}^{x}\left(x-t\right)^{\alpha -1}f(t)\,dt\,.} This is in fact a well-defined operator. It is straightforward to show that the J operator satisfies ( J α ) ( J β f ) ( x ) = ( J β ) ( J α f ) ( x ) = ( J α + β f ) ( x ) = 1 Γ ( α + β ) ∫ 0 x ( x − t ) α + β − 1 f ( t ) d t . {\displaystyle {\begin{aligned}\left(J^{\alpha }\right)\left(J^{\beta }f\right)(x)&=\left(J^{\beta }\right)\left(J^{\alpha }f\right)(x)\\&=\left(J^{\alpha +\beta }f\right)(x)\\&={\frac {1}{\Gamma (\alpha +\beta )}}\int _{0}^{x}\left(x-t\right)^{\alpha +\beta -1}f(t)\,dt\,.\end{aligned}}} This relationship is called the semigroup property of fractional differintegral operators. === Riemann–Liouville fractional integral === The classical form of fractional calculus is given by the Riemann–Liouville integral, which is essentially what has been described above. The theory of fractional integration for periodic functions (therefore including the "boundary condition" of repeating after a period) is given by the Weyl integral. It is defined on Fourier series, and requires the constant Fourier coefficient to vanish (thus, it applies to functions on the unit circle whose integrals evaluate to zero). The Riemann–Liouville integral exists in two forms, upper and lower. 
Considering the interval [a,b], the integrals are defined as D a D t − α ⁡ f ( t ) = I a I t α ⁡ f ( t ) = 1 Γ ( α ) ∫ a t ( t − τ ) α − 1 f ( τ ) d τ D t D b − α ⁡ f ( t ) = I t I b α ⁡ f ( t ) = 1 Γ ( α ) ∫ t b ( τ − t ) α − 1 f ( τ ) d τ {\displaystyle {\begin{aligned}\sideset {_{a}}{_{t}^{-\alpha }}Df(t)&=\sideset {_{a}}{_{t}^{\alpha }}If(t)\\&={\frac {1}{\Gamma (\alpha )}}\int _{a}^{t}\left(t-\tau \right)^{\alpha -1}f(\tau )\,d\tau \\\sideset {_{t}}{_{b}^{-\alpha }}Df(t)&=\sideset {_{t}}{_{b}^{\alpha }}If(t)\\&={\frac {1}{\Gamma (\alpha )}}\int _{t}^{b}\left(\tau -t\right)^{\alpha -1}f(\tau )\,d\tau \end{aligned}}} Where the former is valid for t > a and the latter is valid for t < b. It has been suggested that the integral on the positive real axis (i.e. a = 0 {\displaystyle a=0} ) would be more appropriately named the Abel–Riemann integral, on the basis of history of discovery and use, and in the same vein the integral over the entire real line be named Liouville–Weyl integral. By contrast the Grünwald–Letnikov derivative starts with the derivative instead of the integral. === Hadamard fractional integral === The Hadamard fractional integral was introduced by Jacques Hadamard and is given by the following formula, D a D t − α ⁡ f ( t ) = 1 Γ ( α ) ∫ a t ( log ⁡ t τ ) α − 1 f ( τ ) d τ τ , t > a . {\displaystyle \sideset {_{a}}{_{t}^{-\alpha }}{\mathbf {D} }f(t)={\frac {1}{\Gamma (\alpha )}}\int _{a}^{t}\left(\log {\frac {t}{\tau }}\right)^{\alpha -1}f(\tau ){\frac {d\tau }{\tau }},\qquad t>a\,.} === Atangana–Baleanu fractional integral (AB fractional integral) === The Atangana–Baleanu fractional integral of a continuous function is defined as: I A a AB I t α ⁡ f ( t ) = 1 − α AB ⁡ ( α ) f ( t ) + α AB ⁡ ( α ) Γ ( α ) ∫ a t ( t − τ ) α − 1 f ( τ ) d τ {\displaystyle \sideset {_{{\hphantom {A}}a}^{\operatorname {AB} }}{_{t}^{\alpha }}If(t)={\frac {1-\alpha }{\operatorname {AB} (\alpha )}}f(t)+{\frac {\alpha }{\operatorname {AB} (\alpha )\Gamma (\alpha )}}\int _{a}^{t}\left(t-\tau \right)^{\alpha -1}f(\tau )\,d\tau } == Fractional derivatives == Unfortunately, the comparable process for the derivative operator D is significantly more complex, but it can be shown that D is neither commutative nor additive in general. Unlike classical Newtonian derivatives, fractional derivatives can be defined in a variety of different ways that often do not all lead to the same result even for smooth functions. Some of these are defined via a fractional integral. Because of the incompatibility of definitions, it is frequently necessary to be explicit about which definition is used. === Riemann–Liouville fractional derivative === The corresponding derivative is calculated using Lagrange's rule for differential operators. To find the αth order derivative, the nth order derivative of the integral of order (n − α) is computed, where n is the smallest integer greater than α (that is, n = ⌈α⌉). The Riemann–Liouville fractional derivative and integral has multiple applications such as in case of solutions to the equation in the case of multiple systems such as the tokamak systems, and Variable order fractional parameter. Similar to the definitions for the Riemann–Liouville integral, the derivative has upper and lower variants. 
D a D t α ⁡ f ( t ) = d n d t n D a D t − ( n − α ) ⁡ f ( t ) = d n d t n I a I t n − α ⁡ f ( t ) D t D b α ⁡ f ( t ) = d n d t n D t D b − ( n − α ) ⁡ f ( t ) = d n d t n I t I b n − α ⁡ f ( t ) {\displaystyle {\begin{aligned}\sideset {_{a}}{_{t}^{\alpha }}Df(t)&={\frac {d^{n}}{dt^{n}}}\sideset {_{a}}{_{t}^{-(n-\alpha )}}Df(t)\\&={\frac {d^{n}}{dt^{n}}}\sideset {_{a}}{_{t}^{n-\alpha }}If(t)\\\sideset {_{t}}{_{b}^{\alpha }}Df(t)&={\frac {d^{n}}{dt^{n}}}\sideset {_{t}}{_{b}^{-(n-\alpha )}}Df(t)\\&={\frac {d^{n}}{dt^{n}}}\sideset {_{t}}{_{b}^{n-\alpha }}If(t)\end{aligned}}} === Caputo fractional derivative === Another option for computing fractional derivatives is the Caputo fractional derivative. It was introduced by Michele Caputo in his 1967 paper. In contrast to the Riemann–Liouville fractional derivative, when solving differential equations using Caputo's definition, it is not necessary to define the fractional order initial conditions. Caputo's definition is illustrated as follows, where again n = ⌈α⌉: D C D t α ⁡ f ( t ) = 1 Γ ( n − α ) ∫ 0 t f ( n ) ( τ ) ( t − τ ) α + 1 − n d τ . {\displaystyle \sideset {^{C}}{_{t}^{\alpha }}Df(t)={\frac {1}{\Gamma (n-\alpha )}}\int _{0}^{t}{\frac {f^{(n)}(\tau )}{\left(t-\tau \right)^{\alpha +1-n}}}\,d\tau .} There is the Caputo fractional derivative defined as: D ν f ( t ) = 1 Γ ( n − ν ) ∫ 0 t ( t − u ) ( n − ν − 1 ) f ( n ) ( u ) d u ( n − 1 ) < ν < n {\displaystyle D^{\nu }f(t)={\frac {1}{\Gamma (n-\nu )}}\int _{0}^{t}(t-u)^{(n-\nu -1)}f^{(n)}(u)\,du\qquad (n-1)<\nu <n} which has the advantage that it is zero when f(t) is constant and its Laplace Transform is expressed by means of the initial values of the function and its derivative. Moreover, there is the Caputo fractional derivative of distributed order defined as D a b D n ⁡ u ⁡ f ( t ) = ∫ a b ϕ ( ν ) [ D ( ν ) f ( t ) ] d ν = ∫ a b [ ϕ ( ν ) Γ ( 1 − ν ) ∫ 0 t ( t − u ) − ν f ′ ( u ) d u ] d ν {\displaystyle {\begin{aligned}\sideset {_{a}^{b}}{^{n}u}Df(t)&=\int _{a}^{b}\phi (\nu )\left[D^{(\nu )}f(t)\right]\,d\nu \\&=\int _{a}^{b}\left[{\frac {\phi (\nu )}{\Gamma (1-\nu )}}\int _{0}^{t}\left(t-u\right)^{-\nu }f'(u)\,du\right]\,d\nu \end{aligned}}} where ϕ(ν) is a weight function and which is used to represent mathematically the presence of multiple memory formalisms. === Caputo–Fabrizio fractional derivative === In a paper of 2015, M. Caputo and M. Fabrizio presented a definition of fractional derivative with a non singular kernel, for a function f ( t ) {\displaystyle f(t)} of C 1 {\displaystyle C^{1}} given by: D C a CF D t α ⁡ f ( t ) = 1 1 − α ∫ a t f ′ ( τ ) e ( − α t − τ 1 − α ) d τ , {\displaystyle \sideset {_{{\hphantom {C}}a}^{\text{CF}}}{_{t}^{\alpha }}Df(t)={\frac {1}{1-\alpha }}\int _{a}^{t}f'(\tau )\ e^{\left(-\alpha {\frac {t-\tau }{1-\alpha }}\right)}\ d\tau ,} where a < 0 , α ∈ ( 0 , 1 ] {\displaystyle a<0,\alpha \in (0,1]} . === Atangana–Baleanu fractional derivative === In 2016, Atangana and Baleanu suggested differential operators based on the generalized Mittag-Leffler function E α {\displaystyle E_{\alpha }} . The aim was to introduce fractional differential operators with non-singular nonlocal kernel. Their fractional differential operators are given below in Riemann–Liouville sense and Caputo sense respectively. 
For a function f ( t ) {\displaystyle f(t)} of class C 1 {\displaystyle C^{1}} , the Atangana–Baleanu derivative in the Caputo sense is given by D A B a ABC D t α ⁡ f ( t ) = AB ⁡ ( α ) 1 − α ∫ a t f ′ ( τ ) E α ( − α ( t − τ ) α 1 − α ) d τ , {\displaystyle \sideset {_{{\hphantom {AB}}a}^{\text{ABC}}}{_{t}^{\alpha }}Df(t)={\frac {\operatorname {AB} (\alpha )}{1-\alpha }}\int _{a}^{t}f'(\tau )E_{\alpha }\left(-\alpha {\frac {(t-\tau )^{\alpha }}{1-\alpha }}\right)d\tau ,} If the function is merely continuous, the Atangana–Baleanu derivative in the Riemann–Liouville sense is given by: D A B a ABR D t α ⁡ f ( t ) = AB ⁡ ( α ) 1 − α d d t ∫ a t f ( τ ) E α ( − α ( t − τ ) α 1 − α ) d τ , {\displaystyle \sideset {_{{\hphantom {AB}}a}^{\text{ABR}}}{_{t}^{\alpha }}Df(t)={\frac {\operatorname {AB} (\alpha )}{1-\alpha }}{\frac {d}{dt}}\int _{a}^{t}f(\tau )E_{\alpha }\left(-\alpha {\frac {(t-\tau )^{\alpha }}{1-\alpha }}\right)d\tau ,} The kernel used in the Atangana–Baleanu fractional derivative has some properties of a cumulative distribution function. For example, for all α ∈ ( 0 , 1 ] {\displaystyle \alpha \in (0,1]} , the function E α {\displaystyle E_{\alpha }} is increasing on the real line, converges to 0 {\displaystyle 0} at − ∞ {\displaystyle -\infty } , and satisfies E α ( 0 ) = 1 {\displaystyle E_{\alpha }(0)=1} . Therefore, the function x ↦ 1 − E α ( − x α ) {\displaystyle x\mapsto 1-E_{\alpha }(-x^{\alpha })} is the cumulative distribution function of a probability measure on the positive real numbers; this distribution, and any of its rescalings, is called a Mittag-Leffler distribution of order α {\displaystyle \alpha } . It is also well known that all these probability distributions are absolutely continuous. In particular, the Mittag-Leffler function has the particular case E 1 {\displaystyle E_{1}} , which is the exponential function, so the Mittag-Leffler distribution of order 1 {\displaystyle 1} is an exponential distribution. However, for α ∈ ( 0 , 1 ) {\displaystyle \alpha \in (0,1)} , the Mittag-Leffler distributions are heavy-tailed. Their Laplace transform is given by: E ( e − λ X α ) = 1 1 + λ α . {\displaystyle \mathbb {E} (e^{-\lambda X_{\alpha }})={\frac {1}{1+\lambda ^{\alpha }}}.} This directly implies that, for α ∈ ( 0 , 1 ) {\displaystyle \alpha \in (0,1)} , the expectation is infinite. In addition, these distributions are geometric stable distributions. === Riesz derivative === The Riesz derivative is defined as F { ∂ α u ∂ | x | α } ( k ) = − | k | α F { u } ( k ) , {\displaystyle {\mathcal {F}}\left\{{\frac {\partial ^{\alpha }u}{\partial \left|x\right|^{\alpha }}}\right\}(k)=-\left|k\right|^{\alpha }{\mathcal {F}}\{u\}(k),} where F {\displaystyle {\mathcal {F}}} denotes the Fourier transform. === Conformable fractional derivative === The conformable fractional derivative of a function f {\displaystyle f} of order α {\displaystyle \alpha } is given by T α ( f ) ( t ) = lim ϵ → 0 f ( t + ϵ t 1 − α ) − f ( t ) ϵ {\displaystyle T_{\alpha }(f)(t)=\lim _{\epsilon \rightarrow 0}{\frac {f\left(t+\epsilon t^{1-\alpha }\right)-f(t)}{\epsilon }}} Unlike other definitions of the fractional derivative, the conformable fractional derivative obeys the product and quotient rules, and has analogs of Rolle's theorem and the mean value theorem. However, it produces significantly different results from the Riemann–Liouville and Caputo fractional derivatives.
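Since the conformable derivative is defined by an ordinary difference quotient, it can be probed directly: for a differentiable function f, the definition above reduces to t^(1-α) f′(t). A minimal numerical sketch (the helper names are our own, and the finite ε only approximates the limit):

```python
# Conformable fractional derivative via its limit definition, compared with the
# closed form t**(1 - alpha) * f'(t), which holds whenever f is differentiable.
def conformable(f, t, alpha, eps=1e-8):
    return (f(t + eps * t ** (1 - alpha)) - f(t)) / eps

f = lambda t: t ** 3
fprime = lambda t: 3 * t ** 2

alpha, t = 0.5, 2.0
numeric = conformable(f, t, alpha)
closed_form = t ** (1 - alpha) * fprime(t)
print(numeric, closed_form)   # both approximately 3 * 2**2.5 = 16.97...
```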
In 2020, Feng Gao and Chunmei Chi defined the improved Caputo-type conformable fractional derivative, which more closely approximates the behavior of the Caputo fractional derivative: a C T ~ a ( f ) ( t ) = lim ϵ → 0 [ ( 1 − α ) ( f ( t ) − f ( a ) ) + α f ( t + ϵ ( t − a ) 1 − α ) − f ( t ) ϵ ] {\displaystyle _{a}^{C}{\widetilde {T}}_{a}(f)(t)=\lim _{\epsilon \rightarrow 0}\left[(1-\alpha )(f(t)-f(a))+\alpha {\frac {f\left(t+\epsilon (t-a)^{1-\alpha }\right)-f(t)}{\epsilon }}\right]} where a {\displaystyle a} and t {\displaystyle t} are real numbers and a < t {\displaystyle a<t} . They also defined the improved Riemann–Liouville-type conformable fractional derivative to similarly approximate the Riemann–Liouville fractional derivative: a R L T ~ a ( f ) ( t ) = lim ϵ → 0 [ ( 1 − α ) f ( t ) + α f ( t + ϵ ( t − a ) 1 − α ) − f ( t ) ϵ ] {\displaystyle _{a}^{RL}{\widetilde {T}}_{a}(f)(t)=\lim _{\epsilon \rightarrow 0}\left[(1-\alpha )f(t)+\alpha {\frac {f\left(t+\epsilon (t-a)^{1-\alpha }\right)-f(t)}{\epsilon }}\right]} where a {\displaystyle a} and t {\displaystyle t} are real numbers and a < t {\displaystyle a<t} . Both improved conformable fractional derivatives have analogs of Rolle's theorem and the interior extremum theorem. === Other types === Classical fractional derivatives include: Grünwald–Letnikov derivative Sonin–Letnikov derivative Liouville derivative Caputo derivative Hadamard derivative Marchaud derivative Riesz derivative Miller–Ross derivative Weyl derivative Erdélyi–Kober derivative F α {\displaystyle F^{\alpha }} -derivative New fractional derivatives include: Coimbra derivative Katugampola derivative Hilfer derivative Davidson derivative Chen derivative Caputo–Fabrizio derivative Atangana–Baleanu derivative ==== Coimbra derivative ==== The Coimbra derivative is used for physical modeling: a number of applications in both mechanics and optics can be found in the works of Coimbra and collaborators, and additional applications to physical problems and numerical implementations have been studied in a number of works by other authors. For q ( t ) < 1 {\displaystyle q(t)<1} , a C D q ( t ) f ( t ) = 1 Γ [ 1 − q ( t ) ] ∫ 0 + t ( t − τ ) − q ( t ) d f ( τ ) d τ d τ + ( f ( 0 + ) − f ( 0 − ) ) t − q ( t ) Γ ( 1 − q ( t ) ) , {\displaystyle {\begin{aligned}^{\mathbb {C} }_{a}\mathbb {D} ^{q(t)}f(t)={\frac {1}{\Gamma [1-q(t)]}}\int _{0^{+}}^{t}(t-\tau )^{-q(t)}{\frac {d\,f(\tau )}{d\tau }}d\tau \,+\,{\frac {(f(0^{+})-f(0^{-}))\,t^{-q(t)}}{\Gamma (1-q(t))}},\end{aligned}}} where the lower limit a {\displaystyle a} can be taken as either 0 − {\displaystyle 0^{-}} or − ∞ {\displaystyle -\infty } as long as f ( t ) {\displaystyle f(t)} is identically zero from − ∞ {\displaystyle -\infty } to 0 − {\displaystyle 0^{-}} . Note that this operator returns the correct fractional derivatives for all values of t {\displaystyle t} and can be applied to either the dependent function itself f ( t ) {\displaystyle f(t)} with a variable order of the form q ( f ( t ) ) {\displaystyle q(f(t))} or to the independent variable with a variable order of the form q ( t ) {\displaystyle q(t)} .
The Coimbra derivative can be generalized to any order, leading to the Coimbra Generalized Order Differintegration Operator (GODO). For q ( t ) < m {\displaystyle q(t)<m} , − ∞ C D q ( t ) f ( t ) = 1 Γ [ m − q ( t ) ] ∫ 0 + t ( t − τ ) m − 1 − q ( t ) d m f ( τ ) d τ m d τ + ∑ n = 0 m − 1 ( d n f ( t ) d t n | 0 + − d n f ( t ) d t n | 0 − ) t n − q ( t ) Γ [ n + 1 − q ( t ) ] , {\displaystyle {\begin{aligned}^{\mathbb {\quad C} }_{\,\,-\infty }\mathbb {D} ^{q(t)}f(t)={\frac {1}{\Gamma [m-q(t)]}}\int _{0^{+}}^{t}(t-\tau )^{m-1-q(t)}{\frac {d^{m}f(\tau )}{d\tau ^{m}}}d\tau \,+\,\sum _{n=0}^{m-1}{\frac {({\frac {d^{n}f(t)}{dt^{n}}}|_{0^{+}}-{\frac {d^{n}f(t)}{dt^{n}}}|_{0^{-}})\,t^{n-q(t)}}{\Gamma [n+1-q(t)]}},\end{aligned}}} where m {\displaystyle m} is an integer larger than the largest value of q ( t ) {\displaystyle q(t)} over all values of t {\displaystyle t} . Note that the second (summation) term on the right side of the definition above can be expressed as 1 Γ [ m − q ( t ) ] ∑ n = 0 m − 1 { [ d n f ( t ) d t n | 0 + − d n f ( t ) d t n | 0 − ] t n − q ( t ) ∏ j = n + 1 m − 1 [ j − q ( t ) ] } {\displaystyle {\begin{aligned}{\frac {1}{\Gamma [m-q(t)]}}\sum _{n=0}^{m-1}\{[{\frac {d^{n}\!f(t)}{dt^{n}}}|_{0^{+}}-{\frac {d^{n}\!f(t)}{dt^{n}}}|_{0^{-}}]\,t^{n-q(t)}\prod _{j=n+1}^{m-1}[j-q(t)]\}\end{aligned}}} so as to keep the denominator on the positive branch of the Gamma ( Γ {\displaystyle \Gamma } ) function and for ease of numerical calculation. === Nature of the fractional derivative === The a {\displaystyle a} -th derivative of a function f {\displaystyle f} at a point x {\displaystyle x} is a local property only when a {\displaystyle a} is an integer; this is not the case for non-integer power derivatives. In other words, a non-integer fractional derivative of f {\displaystyle f} at x = c {\displaystyle x=c} depends on all values of f {\displaystyle f} , even those far away from c {\displaystyle c} . Therefore, it is expected that the fractional derivative operation involves some sort of boundary conditions, involving information on the function further out. The fractional derivative of a function of order a {\displaystyle a} is nowadays often defined by means of the Fourier or Mellin integral transforms. == Generalizations == === Erdélyi–Kober operator === The Erdélyi–Kober operator is an integral operator introduced by Arthur Erdélyi (1940) and Hermann Kober (1940), and is given by x − ν − α + 1 Γ ( α ) ∫ 0 x ( t − x ) α − 1 t − α − ν f ( t ) d t , {\displaystyle {\frac {x^{-\nu -\alpha +1}}{\Gamma (\alpha )}}\int _{0}^{x}\left(t-x\right)^{\alpha -1}t^{-\alpha -\nu }f(t)\,dt\,,} which generalizes the Riemann–Liouville fractional integral and the Weyl integral. == Functional calculus == In the context of functional analysis, functions f(D) more general than powers are studied in the functional calculus of spectral theory. The theory of pseudo-differential operators also allows one to consider powers of D. The operators arising are examples of singular integral operators; and the generalisation of the classical theory to higher dimensions is called the theory of Riesz potentials. So there are a number of contemporary theories available, within which fractional calculus can be discussed. See also Erdélyi–Kober operator, important in special function theory (Kober 1940), (Erdélyi 1950–1951).
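Before turning to applications, the Riemann–Liouville integral J^α and the Caputo derivative defined above are straightforward to evaluate numerically. In the sketch below (plain Python with the standard math module; the helper J and the quadrature parameters are our own choices), the substitution u = (t − τ)^α removes the endpoint singularity of the kernel, and the results are checked against the closed form J^α t = t^(1+α)/Γ(2+α), the semigroup property, and the identity, valid for 0 < α < 1, that the Caputo derivative of f is J^(1−α) applied to f′:

```python
import math

def J(f, alpha, t, n=400):
    """Riemann-Liouville fractional integral (J^alpha f)(t) by the midpoint rule,
    after substituting u = (t - tau)**alpha to remove the kernel's singularity:
    (J^alpha f)(t) = (1 / Gamma(alpha + 1)) * integral of f(t - u**(1/alpha))
    for u from 0 to t**alpha."""
    if t == 0.0:
        return 0.0
    top = t ** alpha
    h = top / n
    total = sum(f(t - ((k + 0.5) * h) ** (1.0 / alpha)) for k in range(n))
    return total * h / math.gamma(alpha + 1.0)

f = lambda tau: tau          # test function f(t) = t
alpha, t = 0.5, 1.0

# Closed form: J^alpha t = t**(1 + alpha) / Gamma(2 + alpha)
print(J(f, alpha, t), t ** (1 + alpha) / math.gamma(2 + alpha))

# Semigroup property: applying J^(1/2) twice should agree with J^1 f = t**2 / 2
half = lambda s: J(f, 0.5, s)
print(J(half, 0.5, t), t ** 2 / 2)

# Caputo derivative for 0 < alpha < 1 is J^(1 - alpha) applied to f'; for
# f(t) = t**2 the known result is Gamma(3) / Gamma(3 - alpha) * t**(2 - alpha).
g = lambda tau: 2 * tau      # derivative of t**2
print(J(g, 1 - alpha, t), math.gamma(3) / math.gamma(3 - alpha) * t ** (2 - alpha))
```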
== Applications == === Fractional conservation of mass === As described by Wheatcraft and Meerschaert (2008), a fractional conservation of mass equation is needed to model fluid flow when the control volume is not large enough compared to the scale of heterogeneity and when the flux within the control volume is non-linear. In the referenced paper, the fractional conservation of mass equation for fluid flow is: − ρ ( ∇ α ⋅ u → ) = Γ ( α + 1 ) Δ x 1 − α ρ ( β s + ϕ β w ) ∂ p ∂ t {\displaystyle -\rho \left(\nabla ^{\alpha }\cdot {\vec {u}}\right)=\Gamma (\alpha +1)\Delta x^{1-\alpha }\rho \left(\beta _{s}+\phi \beta _{w}\right){\frac {\partial p}{\partial t}}} === Electrochemical analysis === When studying the redox behavior of a substrate in solution, a voltage is applied at an electrode surface to force electron transfer between electrode and substrate. The resulting electron transfer is measured as a current. The current depends upon the concentration of substrate at the electrode surface. As substrate is consumed, fresh substrate diffuses to the electrode as described by Fick's laws of diffusion. Taking the Laplace transform of Fick's second law yields an ordinary second-order differential equation (here in dimensionless form): d 2 d x 2 C ( x , s ) = s C ( x , s ) {\displaystyle {\frac {d^{2}}{dx^{2}}}C(x,s)=sC(x,s)} whose solution C(x,s) contains a one-half power dependence on s. Taking the derivative of C(x,s) and then the inverse Laplace transform yields the following relationship: d d x C ( x , t ) = d 1 2 d t 1 2 C ( x , t ) {\displaystyle {\frac {d}{dx}}C(x,t)={\frac {d^{\scriptstyle {\frac {1}{2}}}}{dt^{\scriptstyle {\frac {1}{2}}}}}C(x,t)} which relates the concentration of substrate at the electrode surface to the current. This relationship is applied in electrochemical kinetics to elucidate mechanistic behavior. For example, it has been used to study the rate of dimerization of substrates upon electrochemical reduction. === Groundwater flow problem === In 2013–2014, Atangana et al. described some groundwater flow problems using the concept of a derivative with fractional order. In these works, the classical Darcy law is generalized by regarding the water flow as a function of a non-integer order derivative of the piezometric head. This generalized law and the law of conservation of mass are then used to derive a new equation for groundwater flow. === Fractional advection dispersion equation === This equation has been shown to be useful for modeling contaminant flow in heterogeneous porous media. Atangana and Kilicman extended the fractional advection dispersion equation to a variable order equation. In their work, the hydrodynamic dispersion equation was generalized using the concept of a variable-order derivative. The modified equation was numerically solved via the Crank–Nicolson method. The stability and convergence in numerical simulations showed that the modified equation is more reliable in predicting the movement of pollution in deformable aquifers than equations with constant fractional and integer derivatives. === Time-space fractional diffusion equation models === Anomalous diffusion processes in complex media can be well characterized by using fractional-order diffusion equation models. The time derivative term corresponds to long-time heavy-tailed decay, and the spatial derivative to diffusion nonlocality. The time-space fractional diffusion governing equation can be written as ∂ α u ∂ t α = − K ( − Δ ) β u .
{\displaystyle {\frac {\partial ^{\alpha }u}{\partial t^{\alpha }}}=-K(-\Delta )^{\beta }u.} A simple extension of the fractional derivative is the variable-order fractional derivative, in which α and β are changed into α(x, t) and β(x, t). Its applications in anomalous diffusion modeling can be found in the references. === Structural damping models === Fractional derivatives are used to model viscoelastic damping in certain types of materials like polymers. === PID controllers === Generalizing PID controllers to use fractional orders can increase their degrees of freedom. The new equation relating the control variable u(t) in terms of a measured error value e(t) can be written as u ( t ) = K p e ( t ) + K i D t − α e ( t ) + K d D t β e ( t ) {\displaystyle u(t)=K_{\mathrm {p} }e(t)+K_{\mathrm {i} }D_{t}^{-\alpha }e(t)+K_{\mathrm {d} }D_{t}^{\beta }e(t)} where α and β are positive fractional orders and Kp, Ki, and Kd, all non-negative, denote the coefficients for the proportional, integral, and derivative terms, respectively (sometimes denoted P, I, and D). === Acoustic wave equations for complex media === The propagation of acoustical waves in complex media, such as in biological tissue, commonly implies attenuation obeying a frequency power-law. This kind of phenomenon may be described using a causal wave equation which incorporates fractional time derivatives: ∇ 2 u − 1 c 0 2 ∂ 2 u ∂ t 2 + τ σ α ∂ α ∂ t α ∇ 2 u − τ ϵ β c 0 2 ∂ β + 2 u ∂ t β + 2 = 0 . {\displaystyle \nabla ^{2}u-{\dfrac {1}{c_{0}^{2}}}{\frac {\partial ^{2}u}{\partial t^{2}}}+\tau _{\sigma }^{\alpha }{\dfrac {\partial ^{\alpha }}{\partial t^{\alpha }}}\nabla ^{2}u-{\dfrac {\tau _{\epsilon }^{\beta }}{c_{0}^{2}}}{\dfrac {\partial ^{\beta +2}u}{\partial t^{\beta +2}}}=0\,.} See also Holm & Näsholm (2011) and the references therein. Such models are linked to the commonly recognized hypothesis that multiple relaxation phenomena give rise to the attenuation measured in complex media. This link is further described in Näsholm & Holm (2011b) and in the survey paper, as well as the Acoustic attenuation article. See Holm & Näsholm (2013) for a paper which compares fractional wave equations that model power-law attenuation. A book-length treatment of power-law attenuation also covers the topic in more detail. Pandey and Holm gave a physical meaning to fractional differential equations by deriving them from physical principles and interpreting the fractional order in terms of the parameters of the acoustical medium, for example in fluid-saturated granular unconsolidated marine sediments. Interestingly, Pandey and Holm derived Lomnitz's law in seismology and Nutting's law in non-Newtonian rheology using the framework of fractional calculus. Nutting's law was used to model wave propagation in marine sediments using fractional derivatives. === Fractional Schrödinger equation in quantum theory === The fractional Schrödinger equation, a fundamental equation of fractional quantum mechanics, has the following form: i ℏ ∂ ψ ( r , t ) ∂ t = D α ( − ℏ 2 Δ ) α 2 ψ ( r , t ) + V ( r , t ) ψ ( r , t ) . {\displaystyle i\hbar {\frac {\partial \psi (\mathbf {r} ,t)}{\partial t}}=D_{\alpha }\left(-\hbar ^{2}\Delta \right)^{\frac {\alpha }{2}}\psi (\mathbf {r} ,t)+V(\mathbf {r} ,t)\psi (\mathbf {r} ,t)\,.} where the solution of the equation is the wavefunction ψ(r, t) – the quantum mechanical probability amplitude for the particle to have a given position vector r at any given time t, and ħ is the reduced Planck constant.
The potential energy function V(r, t) depends on the system. Further, Δ = ∂ 2 ∂ r 2 {\textstyle \Delta ={\frac {\partial ^{2}}{\partial \mathbf {r} ^{2}}}} is the Laplace operator, and Dα is a scale constant with physical dimension [Dα] = J1 − α·mα·s−α = kg1 − α·m2 − α·sα − 2, (at α = 2, D 2 = 1 2 m {\textstyle D_{2}={\frac {1}{2m}}} for a particle of mass m), and the operator (−ħ2Δ)α/2 is the 3-dimensional fractional quantum Riesz derivative defined by ( − ℏ 2 Δ ) α 2 ψ ( r , t ) = 1 ( 2 π ℏ ) 3 ∫ d 3 p e i ℏ p ⋅ r | p | α φ ( p , t ) . {\displaystyle (-\hbar ^{2}\Delta )^{\frac {\alpha }{2}}\psi (\mathbf {r} ,t)={\frac {1}{(2\pi \hbar )^{3}}}\int d^{3}pe^{{\frac {i}{\hbar }}\mathbf {p} \cdot \mathbf {r} }|\mathbf {p} |^{\alpha }\varphi (\mathbf {p} ,t)\,.} The index α in the fractional Schrödinger equation is the Lévy index, 1 < α ≤ 2. ==== Variable-order fractional Schrödinger equation ==== As a natural generalization of the fractional Schrödinger equation, the variable-order fractional Schrödinger equation has been exploited to study fractional quantum phenomena: i ℏ ∂ ψ α ( r ) ( r , t ) ∂ t α ( r ) = ( − ℏ 2 Δ ) β ( t ) 2 ψ ( r , t ) + V ( r , t ) ψ ( r , t ) , {\displaystyle i\hbar {\frac {\partial \psi ^{\alpha (\mathbf {r} )}(\mathbf {r} ,t)}{\partial t^{\alpha (\mathbf {r} )}}}=\left(-\hbar ^{2}\Delta \right)^{\frac {\beta (t)}{2}}\psi (\mathbf {r} ,t)+V(\mathbf {r} ,t)\psi (\mathbf {r} ,t),} where Δ = ∂ 2 ∂ r 2 {\textstyle \Delta ={\frac {\partial ^{2}}{\partial \mathbf {r} ^{2}}}} is the Laplace operator and the operator (−ħ2Δ)β(t)/2 is the variable-order fractional quantum Riesz derivative. == See also == Acoustic attenuation Autoregressive fractionally integrated moving average Initialized fractional calculus Nonlocal operator === Other fractional theories === Fractional-order system Fractional Fourier transform Prabhakar function == Notes == == References == == Further reading == === Articles regarding the history of fractional calculus === Debnath, L. (2004). "A brief historical introduction to fractional calculus". International Journal of Mathematical Education in Science and Technology. 35 (4): 487–501. doi:10.1080/00207390410001686571. S2CID 122198977. === Books === Miller, Kenneth S.; Ross, Bertram, eds. (1993). An Introduction to the Fractional Calculus and Fractional Differential Equations. John Wiley & Sons. ISBN 978-0-471-58884-9. Samko, S.; Kilbas, A.A.; Marichev, O. (1993). Fractional Integrals and Derivatives: Theory and Applications. Taylor & Francis Books. ISBN 978-2-88124-864-1. Carpinteri, A.; Mainardi, F., eds. (1998). Fractals and Fractional Calculus in Continuum Mechanics. Springer-Verlag Telos. ISBN 978-3-211-82913-4. Igor Podlubny (27 October 1998). Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications. Elsevier. ISBN 978-0-08-053198-4. Tarasov, V.E. (2010). Fractional Dynamics: Applications of Fractional Calculus to Dynamics of Particles, Fields and Media. Nonlinear Physical Science. Springer. doi:10.1007/978-3-642-14003-7. ISBN 978-3-642-14003-7. Li, Changpin; Cai, Min (2019). Theory and Numerical Approximations of Fractional Integrals and Derivatives. SIAM. doi:10.1137/1.9781611975888. ISBN 978-1-61197-587-1. == External links ==
Wikipedia/Fractional_calculus
In mathematics, generalized functions are objects extending the notion of functions on real or complex numbers. There is more than one recognized theory, for example the theory of distributions. Generalized functions are especially useful for treating discontinuous functions more like smooth functions, and describing discrete physical phenomena such as point charges. They are applied extensively, especially in physics and engineering. Important motivations have been the technical requirements of theories of partial differential equations and group representations. A common feature of some of the approaches is that they build on operator aspects of everyday, numerical functions. The early history is connected with some ideas on operational calculus, and some contemporary developments are closely related to Mikio Sato's algebraic analysis. == Some early history == In the mathematics of the nineteenth century, aspects of generalized function theory appeared, for example in the definition of the Green's function, in the Laplace transform, and in Riemann's theory of trigonometric series, which were not necessarily the Fourier series of an integrable function. These were disconnected aspects of mathematical analysis at the time. The intensive use of the Laplace transform in engineering led to the heuristic use of symbolic methods, called operational calculus. Since justifications were given that used divergent series, these methods were questionable from the point of view of pure mathematics. They are typical of later application of generalized function methods. An influential book on operational calculus was Oliver Heaviside's Electromagnetic Theory of 1899. When the Lebesgue integral was introduced, there was for the first time a notion of generalized function central to mathematics. An integrable function, in Lebesgue's theory, is equivalent to any other which is the same almost everywhere. That means its value at each point is (in a sense) not its most important feature. In functional analysis a clear formulation is given of the essential feature of an integrable function, namely the way it defines a linear functional on other functions. This allows a definition of weak derivative. During the late 1920s and 1930s further basic steps were taken. The Dirac delta function was boldly defined by Paul Dirac (an aspect of his scientific formalism); this was to treat measures, thought of as densities (such as charge density) like genuine functions. Sergei Sobolev, working in partial differential equation theory, defined the first rigorous theory of generalized functions in order to define weak solutions of partial differential equations (i.e. solutions which are generalized functions, but may not be ordinary functions). Others proposing related theories at the time were Salomon Bochner and Kurt Friedrichs. Sobolev's work was extended by Laurent Schwartz. == Schwartz distributions == The most definitive development was the theory of distributions developed by Laurent Schwartz, systematically working out the principle of duality for topological vector spaces. Its main rival in applied mathematics is mollifier theory, which uses sequences of smooth approximations (the 'James Lighthill' explanation). This theory was very successful and is still widely used, but suffers from the main drawback that distributions cannot usually be multiplied: unlike most classical function spaces, they do not form an algebra. For example, it is meaningless to square the Dirac delta function. 
Work of Schwartz from around 1954 showed this to be an intrinsic difficulty. == Algebras of generalized functions == Some solutions to the multiplication problem have been proposed. One is based on a simple definition of generalized functions given by Yu. V. Egorov (see also his article in Demidov's book in the book list below) that allows arbitrary operations on, and between, generalized functions. Another solution allowing multiplication is suggested by the path integral formulation of quantum mechanics. Since this is required to be equivalent to the Schrödinger theory of quantum mechanics, which is invariant under coordinate transformations, this property must be shared by path integrals. This fixes all products of generalized functions, as shown by H. Kleinert and A. Chervyakov. The result is equivalent to what can be derived from dimensional regularization. Several constructions of algebras of generalized functions have been proposed, among others those by Yu. M. Shirokov and those by E. Rosinger, Y. Egorov, and R. Robinson. In the first case, the multiplication is determined with some regularization of the generalized function. In the second case, the algebra is constructed as multiplication of distributions. Both cases are discussed below. === Non-commutative algebra of generalized functions === The algebra of generalized functions can be built up with an appropriate procedure of projection of a function F = F ( x ) {\displaystyle F=F(x)} to its smooth F s m o o t h {\displaystyle F_{\rm {smooth}}} and its singular F s i n g u l a r {\displaystyle F_{\rm {singular}}} parts. The product of generalized functions F {\displaystyle F} and G {\displaystyle G} appears as F G = F s m o o t h G + F G s m o o t h − F s m o o t h G s m o o t h . ( 1 ) {\displaystyle FG=F_{\rm {smooth}}\,G+F\,G_{\rm {smooth}}-F_{\rm {smooth}}\,G_{\rm {smooth}}.\qquad (1)} Such a rule applies to both the space of main functions and the space of operators which act on the space of the main functions. The associativity of multiplication is achieved, and the function signum is defined in such a way that its square is unity everywhere (including the origin of coordinates). Note that the product of singular parts does not appear in the right-hand side of (1); in particular, δ ( x ) 2 = 0 {\displaystyle \delta (x)^{2}=0} . Such a formalism includes the conventional theory of generalized functions (without their product) as a special case. However, the resulting algebra is non-commutative: the generalized functions signum and delta anticommute. Only a few applications of the algebra have been suggested. === Multiplication of distributions === The problem of multiplication of distributions, a limitation of the Schwartz distribution theory, becomes serious for non-linear problems. Various approaches are used today. The simplest one is based on the definition of generalized function given by Yu. V. Egorov. Another approach to constructing associative differential algebras is based on J.-F. Colombeau's construction: see Colombeau algebra. These are factor spaces G = M / N {\displaystyle G=M/N} of "moderate" modulo "negligible" nets of functions, where "moderateness" and "negligibility" refer to growth with respect to the index of the family. === Example: Colombeau algebra === A simple example is obtained by using the polynomial scale on N, s = { a m : N → R , n ↦ n m ; m ∈ Z } {\displaystyle s=\{a_{m}:\mathbb {N} \to \mathbb {R} ,n\mapsto n^{m};~m\in \mathbb {Z} \}} . Then for any semi normed algebra (E,P), the factor space will be G s ( E , P ) = { f ∈ E N ∣ ∀ p ∈ P , ∃ m ∈ Z : p ( f n ) = o ( n m ) } { f ∈ E N ∣ ∀ p ∈ P , ∀ m ∈ Z : p ( f n ) = o ( n m ) } .
{\displaystyle G_{s}(E,P)={\frac {\{f\in E^{\mathbb {N} }\mid \forall p\in P,\exists m\in \mathbb {Z} :p(f_{n})=o(n^{m})\}}{\{f\in E^{\mathbb {N} }\mid \forall p\in P,\forall m\in \mathbb {Z} :p(f_{n})=o(n^{m})\}}}.} In particular, for (E, P)=(C,|.|) one gets (Colombeau's) generalized complex numbers (which can be "infinitely large" and "infinitesimally small" and still allow for rigorous arithmetic, very similar to nonstandard numbers). For (E, P) = (C∞(R),{pk}) (where pk is the supremum of all derivatives of order less than or equal to k on the ball of radius k) one gets Colombeau's simplified algebra. === Injection of Schwartz distributions === This algebra "contains" all distributions T of D' via the injection j(T) = (φn ∗ T)n + N, where ∗ is the convolution operation, and φn(x) = n φ(nx). This injection is non-canonical in the sense that it depends on the choice of the mollifier φ, which should be C∞, of integral one and have all its derivatives at 0 vanishing. To obtain a canonical injection, the indexing set can be modified to be N × D(R), with a convenient filter base on D(R) (functions of vanishing moments up to order q). === Sheaf structure === If (E,P) is a (pre-)sheaf of semi normed algebras on some topological space X, then Gs(E, P) will also have this property. This means that the notion of restriction will be defined, which allows one to define the support of a generalized function with respect to a subsheaf, in particular: For the subsheaf {0}, one gets the usual support (complement of the largest open subset where the function is zero). For the subsheaf E (embedded using the canonical (constant) injection), one gets what is called the singular support, i.e., roughly speaking, the closure of the set where the generalized function is not a smooth function (for E = C∞). === Microlocal analysis === The Fourier transformation being (well-)defined for compactly supported generalized functions (component-wise), one can apply the same construction as for distributions, and define Lars Hörmander's wave front set also for generalized functions. This has an especially important application in the analysis of propagation of singularities. == Other theories == These include: the convolution quotient theory of Jan Mikusinski, based on the field of fractions of convolution algebras that are integral domains; and the theories of hyperfunctions, based (in their initial conception) on boundary values of analytic functions, and now making use of sheaf theory. == Topological groups == Bruhat introduced a class of test functions, the Schwartz–Bruhat functions, on a class of locally compact groups that goes beyond the manifolds that are the typical function domains. The applications are mostly in number theory, particularly to adelic algebraic groups. André Weil rewrote Tate's thesis in this language, characterizing the zeta distribution on the idele group; he also applied it to the explicit formula of an L-function. == Generalized section == A further way in which the theory has been extended is as generalized sections of a smooth vector bundle. This is on the Schwartz pattern, constructing objects dual to the test objects, smooth sections of a bundle that have compact support. The most developed theory is that of De Rham currents, dual to differential forms. These are homological in nature, in the way that differential forms give rise to De Rham cohomology. They can be used to formulate a very general Stokes' theorem.
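The mollifier construction just described can be imitated numerically. The sketch below is illustrative only: a Gaussian is used for convenience even though a genuine mollifier should be compactly supported, and the test function ψ is an arbitrary choice. It pairs the weak derivative of the Heaviside step H with ψ via ⟨H′, ψ⟩ = −⟨H, ψ′⟩, and compares the result with a mollified delta φn(x) = n φ(nx); both recover ψ(0), the defining action of the Dirac delta discussed earlier:

```python
import math

# A smooth, rapidly decaying test function and its derivative.
psi  = lambda x: math.exp(-x * x)
dpsi = lambda x: -2.0 * x * math.exp(-x * x)

def integrate(g, a=-10.0, b=10.0, n=20000):
    """Plain midpoint rule, adequate for these smooth, decaying integrands."""
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

# The Heaviside step is an ordinary (locally integrable) function ...
H = lambda x: 1.0 if x >= 0.0 else 0.0

# ... whose weak derivative is defined by <H', psi> = -<H, psi'> = psi(0).
print(-integrate(lambda x: H(x) * dpsi(x)), psi(0.0))   # both about 1.0

# The same value from a mollified delta phi_n(x) = n * phi(n * x) with n large,
# mirroring the injection j(T) = (phi_n * T)_n described above.
phi = lambda x: math.exp(-x * x) / math.sqrt(math.pi)    # integral one
n = 50.0
print(integrate(lambda x: n * phi(n * x) * psi(x)))      # again close to psi(0)
```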
== See also == Beppo-Levi space Dirac delta function Generalized eigenfunction Distribution (mathematics) Hyperfunction Laplacian of the indicator Rigged Hilbert space Limit of a distribution Generalized space Ultradistribution == Books == Schwartz, L. (1950). Théorie des distributions. Vol. 1. Paris: Hermann. OCLC 889264730. Vol. 2. OCLC 889391733 Beurling, A. (1961). On quasianalyticity and general distributions (multigraphed lectures). Summer Institute, Stanford University. OCLC 679033904. Gelʹfand, Izrailʹ Moiseevič; Vilenkin, Naum Jakovlevič (1964). Generalized Functions. Vol. I–VI. Academic Press. OCLC 728079644. Hörmander, L. (2015) [1990]. The Analysis of Linear Partial Differential Operators (2nd ed.). Springer. ISBN 978-3-642-61497-2. H. Komatsu, Introduction to the theory of distributions, Second edition, Iwanami Shoten, Tokyo, 1983. Colombeau, J.-F. (2000) [1983]. New Generalized Functions and Multiplication of Distributions. Elsevier. ISBN 978-0-08-087195-0. Vladimirov, V.S.; Drozhzhinov, Yu. N.; Zav’yalov, B.I. (2012) [1988]. Tauberian theorems for generalized functions. Springer. ISBN 978-94-009-2831-2. Oberguggenberger, M. (1992). Multiplication of distributions and applications to partial differential equations. Longman. ISBN 978-0-582-08733-0. OCLC 682138968. Morimoto, M. (1993). An introduction to Sato's hyperfunctions. American Mathematical Society. ISBN 978-0-8218-8767-7. Demidov, A.S. (2001). Generalized Functions in Mathematical Physics: Main Ideas and Concepts. Nova Science. ISBN 9781560729051. Grosser, M.; Kunzinger, M.; Oberguggenberger, Michael; Steinbauer, R. (2013) [2001]. Geometric theory of generalized functions with applications to general relativity. Springer. ISBN 978-94-015-9845-3. Estrada, R.; Kanwal, R. (2012). A distributional approach to asymptotics. Theory and applications (2nd ed.). Birkhäuser Boston. ISBN 978-0-8176-8130-2. Vladimirov, V.S. (2002). Methods of the theory of generalized functions. Taylor & Francis. ISBN 978-0-415-27356-5. Kleinert, H. (2009). Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets (5th ed.). World Scientific. ISBN 9789814273572. (online here). See Chapter 11 for products of generalized functions. Pilipovi, S.; Stankovic, B.; Vindas, J. (2012). Asymptotic behavior of generalized functions. World Scientific. ISBN 9789814366847. == References ==
Wikipedia/Generalized_function
The method of exhaustion (Latin: methodus exhaustionis) is a method of finding the area of a shape by inscribing inside it a sequence of polygons (one at a time) whose areas converge to the area of the containing shape. If the sequence is correctly constructed, the difference in area between the nth polygon and the containing shape will become arbitrarily small as n becomes large. As this difference becomes arbitrarily small, the possible values for the area of the shape are systematically "exhausted" by the lower bound areas successively established by the sequence members. The method of exhaustion typically required a form of proof by contradiction, known as reductio ad absurdum. This amounts to finding an area of a region by first comparing it to the area of a second region, which can be "exhausted" so that its area becomes arbitrarily close to the true area. The proof involves assuming that the true area is greater than the second area, proving that assertion false, assuming it is less than the second area, then proving that assertion false, too. == History == The idea originated in the late 5th century BC with Antiphon, although it is not entirely clear how well he understood it. The theory was made rigorous a few decades later by Eudoxus of Cnidus, who used it to calculate areas and volumes. It was later reinvented in China by Liu Hui in the 3rd century AD in order to find the area of a circle. The first use of the term was in 1647 by Gregory of Saint Vincent in Opus geometricum quadraturae circuli et sectionum. The method of exhaustion is seen as a precursor to the methods of calculus. The development of analytical geometry and rigorous integral calculus in the 17th-19th centuries subsumed the method of exhaustion so that it is no longer explicitly used to solve problems. An important alternative approach was Cavalieri's principle, also termed the method of indivisibles which eventually evolved into the infinitesimal calculus of Roberval, Torricelli, Wallis, Leibniz, and others. === Euclid === Euclid used the method of exhaustion to prove the following six propositions in the 12th book of his Elements. Proposition 2: The area of circles is proportional to the square of their diameters. Proposition 5: The volumes of two tetrahedra of the same height are proportional to the areas of their triangular bases. Proposition 10: The volume of a cone is a third of the volume of the corresponding cylinder which has the same base and height. Proposition 11: The volume of a cone (or cylinder) of the same height is proportional to the area of the base. Proposition 12: The volume of a cone (or cylinder) that is similar to another is proportional to the cube of the ratio of the diameters of the bases. Proposition 18: The volume of a sphere is proportional to the cube of its diameter. === Archimedes === Archimedes used the method of exhaustion as a way to compute the area inside a circle by filling the circle with a sequence of polygons with an increasing number of sides and a corresponding increase in area. The quotients formed by the area of these polygons divided by the square of the circle radius can be made arbitrarily close to π as the number of polygon sides becomes large, proving that the area inside the circle of radius r is πr2, π being defined as the ratio of the circumference to the diameter (C/d). 
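The convergence Archimedes exploited is easy to replay in modern notation: a regular n-gon inscribed in a circle of radius r has area (1/2) n r² sin(2π/n), which increases toward πr² as the number of sides doubles. A short illustrative sketch, using modern trigonometry rather than Archimedes' own chord computations:

```python
import math

def inscribed_polygon_area(n, r=1.0):
    """Area of a regular n-gon inscribed in a circle of radius r:
    n congruent isosceles triangles, each with apex angle 2*pi/n."""
    return 0.5 * n * r * r * math.sin(2.0 * math.pi / n)

# Doubling the number of sides "exhausts" the unit disk from below,
# following Archimedes' sequence of 6-, 12-, 24-, 48- and 96-gons.
n = 6
while n <= 96:
    area = inscribed_polygon_area(n)
    print(f"{n:3d}-gon: area = {area:.6f}   (pi - area = {math.pi - area:.6f})")
    n *= 2
```

Each doubling leaves less of the disk unaccounted for, mirroring the successively tighter lower bounds established by the inscribed sequence.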
Archimedes also provided the bounds 3 + 10/71 < π < 3 + 10/70 (giving a range of 1/497) by comparing the perimeters of the circle with the perimeters of the inscribed and circumscribed 96-sided regular polygons. Other results he obtained with the method of exhaustion included: the area bounded by the intersection of a line and a parabola is 4/3 that of the triangle having the same base and height (the quadrature of the parabola); the area of an ellipse is proportional to a rectangle having sides equal to its major and minor axes; the volume of a sphere is 4 times that of a cone having a base of the same radius and height equal to this radius; the volume of a cylinder having a height equal to its diameter is 3/2 that of a sphere having the same diameter; and the area bounded by one spiral rotation and a line is 1/3 that of the circle having a radius equal to the line segment length. Use of the method of exhaustion also led to the successful evaluation of an infinite geometric series (for the first time). == See also == The Method of Mechanical Theorems The Quadrature of the Parabola Trapezoidal rule Pythagorean Theorem == References ==
Wikipedia/Method_of_exhaustion
Classical mechanics is a physical theory describing the motion of objects such as projectiles, parts of machinery, spacecraft, planets, stars, and galaxies. The development of classical mechanics involved substantial change in the methods and philosophy of physics. The qualifier classical distinguishes this type of mechanics from physics developed after the revolutions in physics of the early 20th century, all of which revealed limitations in classical mechanics. The earliest formulation of classical mechanics is often referred to as Newtonian mechanics. It consists of the physical concepts based on the 17th century foundational works of Sir Isaac Newton, and the mathematical methods invented by Newton, Gottfried Wilhelm Leibniz, Leonhard Euler and others to describe the motion of bodies under the influence of forces. Later, methods based on energy were developed by Euler, Joseph-Louis Lagrange, William Rowan Hamilton and others, leading to the development of analytical mechanics (which includes Lagrangian mechanics and Hamiltonian mechanics). These advances, made predominantly in the 18th and 19th centuries, extended beyond earlier works; they are, with some modification, used in all areas of modern physics. If the present state of an object that obeys the laws of classical mechanics is known, it is possible to determine how it will move in the future, and how it has moved in the past. Chaos theory shows that the long term predictions of classical mechanics are not reliable. Classical mechanics provides accurate results when studying objects that are not extremely massive and have speeds not approaching the speed of light. With objects about the size of an atom's diameter, it becomes necessary to use quantum mechanics. To describe velocities approaching the speed of light, special relativity is needed. In cases where objects become extremely massive, general relativity becomes applicable. Some modern sources include relativistic mechanics in classical physics, as representing the field in its most developed and accurate form. == Branches == === Traditional division === Classical mechanics was traditionally divided into three main branches. Statics is the branch of classical mechanics that is concerned with the analysis of force and torque acting on a physical system that does not experience an acceleration, but rather is in equilibrium with its environment. Kinematics describes the motion of points, bodies (objects), and systems of bodies (groups of objects) without considering the forces that cause them to move. Kinematics, as a field of study, is often referred to as the "geometry of motion" and is occasionally seen as a branch of mathematics. Dynamics goes beyond merely describing objects' behavior and also considers the forces which explain it. Some authors (for example, Taylor (2005) and Greenwood (1997)) include special relativity within classical dynamics. === Forces vs. energy === Another division is based on the choice of mathematical formalism. Classical mechanics can be mathematically presented in multiple different ways. The physical content of these different formulations is the same, but they provide different insights and facilitate different types of calculations. While the term "Newtonian mechanics" is sometimes used as a synonym for non-relativistic classical physics, it can also refer to a particular formalism based on Newton's laws of motion. Newtonian mechanics in this sense emphasizes force as a vector quantity. 
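As a concrete illustration of this force-as-vector viewpoint, the following minimal sketch (our own example, not drawn from any particular source) integrates Newton's second law F = ma componentwise for a projectile under constant gravity, using a simple time-stepping scheme:

```python
# Newton's second law as an update rule: a = F/m, then step velocity and position.
g = 9.81           # gravitational acceleration, m/s^2
m = 1.0            # mass, kg
dt = 0.001         # time step, s

r = [0.0, 0.0]     # position (x, y), m
v = [20.0, 20.0]   # velocity, m/s

t = 0.0
while r[1] >= 0.0:
    F = [0.0, -m * g]                      # the only force here: gravity
    a = [F[0] / m, F[1] / m]               # Newton's second law, componentwise
    v = [v[0] + a[0] * dt, v[1] + a[1] * dt]
    r = [r[0] + v[0] * dt, r[1] + v[1] * dt]
    t += dt

print(f"range = {r[0]:.1f} m after {t:.2f} s")  # analytic: 2*vx*vy/g = 81.5 m
```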
In contrast, analytical mechanics uses scalar properties of motion representing the system as a whole, usually its kinetic energy and potential energy. The equations of motion are derived from the scalar quantity by some underlying principle about the scalar's variation. Two dominant branches of analytical mechanics are Lagrangian mechanics, which uses generalized coordinates and corresponding generalized velocities on the tangent bundle of the configuration space (sometimes called "state space"), and Hamiltonian mechanics, which uses coordinates and corresponding momenta in phase space (the cotangent bundle of the configuration space). Both formulations are equivalent by a Legendre transformation on the generalized coordinates, velocities and momenta; therefore, both contain the same information for describing the dynamics of a system. There are other formulations such as Hamilton–Jacobi theory, Routhian mechanics, and Appell's equation of motion. All equations of motion for particles and fields, in any formalism, can be derived from the widely applicable result called the principle of least action. One result is Noether's theorem, a statement which connects conservation laws to their associated symmetries. === By region of application === Alternatively, a division can be made by region of application:

Celestial mechanics, relating to stars, planets and other celestial bodies;
Continuum mechanics, for materials modelled as a continuum, e.g., solids and fluids (i.e., liquids and gases);
Relativistic mechanics (i.e., including the special and general theories of relativity), for bodies whose speed is close to the speed of light;
Statistical mechanics, which provides a framework for relating the microscopic properties of individual atoms and molecules to the macroscopic or bulk thermodynamic properties of materials.

== Description of objects and their motion == For simplicity, classical mechanics often models real-world objects as point particles, that is, objects with negligible size. The motion of a point particle is determined by a small number of parameters: its position, mass, and the forces applied to it. Classical mechanics also describes the more complex motions of extended non-pointlike objects. Euler's laws provide extensions to Newton's laws in this area. The concept of angular momentum relies on the same calculus used to describe one-dimensional motion. The rocket equation extends the notion of rate of change of an object's momentum to include the effects of an object "losing mass". (These generalizations and extensions can be derived from Newton's laws, for example by decomposing a solid body into a collection of point masses.) In reality, the kind of objects that classical mechanics can describe always have a non-zero size. (The behavior of very small particles, such as the electron, is more accurately described by quantum mechanics.) Objects with non-zero size have more complicated behavior than hypothetical point particles, because of the additional degrees of freedom, e.g., a baseball can spin while it is moving. However, the results for point particles can be used to study such objects by treating them as composite objects, made of a large number of collectively acting point particles. The center of mass of a composite object behaves like a point particle. Classical mechanics assumes that matter and energy have definite, knowable attributes such as location in space and speed. 
Non-relativistic mechanics also assumes that forces act instantaneously (see also Action at a distance). === Kinematics === The position of a point particle is defined in relation to a coordinate system centered on an arbitrary fixed reference point in space called the origin O. A simple coordinate system might describe the position of a particle P with a vector notated by an arrow labeled r that points from the origin O to point P. In general, the point particle does not need to be stationary relative to O. In cases where P is moving relative to O, r is defined as a function of time t. In pre-Einstein relativity (known as Galilean relativity), time is considered an absolute, i.e., the time interval that is observed to elapse between any given pair of events is the same for all observers. In addition to relying on absolute time, classical mechanics assumes Euclidean geometry for the structure of space. ==== Velocity and speed ==== The velocity, or the rate of change of displacement with time, is defined as the derivative of the position with respect to time: v = d r d t {\displaystyle \mathbf {v} ={\mathrm {d} \mathbf {r} \over \mathrm {d} t}\,\!} . In classical mechanics, velocities are directly additive and subtractive. For example, if one car travels east at 60 km/h and passes another car traveling in the same direction at 50 km/h, the slower car perceives the faster car as traveling east at 60 − 50 = 10 km/h. However, from the perspective of the faster car, the slower car is moving 10 km/h to the west, often denoted as −10 km/h, where the sign implies the opposite direction. Velocities are directly additive as vector quantities; they must be dealt with using vector analysis. Mathematically, if the velocity of the first object in the previous discussion is denoted by the vector u = ud and the velocity of the second object by the vector v = ve, where u is the speed of the first object, v is the speed of the second object, and d and e are unit vectors in the directions of motion of each object respectively, then the velocity of the first object as seen by the second object is: u ′ = u − v . {\displaystyle \mathbf {u} '=\mathbf {u} -\mathbf {v} \,.} Similarly, the first object sees the velocity of the second object as: v ′ = v − u . {\displaystyle \mathbf {v'} =\mathbf {v} -\mathbf {u} \,.} When both objects are moving in the same direction, this equation can be simplified to: u ′ = ( u − v ) d . {\displaystyle \mathbf {u} '=(u-v)\mathbf {d} \,.} Or, by ignoring direction, the difference can be given in terms of speed only: u ′ = u − v . {\displaystyle u'=u-v\,.} ==== Acceleration ==== The acceleration, or rate of change of velocity, is the derivative of the velocity with respect to time (the second derivative of the position with respect to time): a = d v d t = d 2 r d t 2 . {\displaystyle \mathbf {a} ={\mathrm {d} \mathbf {v} \over \mathrm {d} t}={\mathrm {d^{2}} \mathbf {r} \over \mathrm {d} t^{2}}.} Acceleration represents the velocity's change over time. Velocity can change in magnitude, direction, or both. Occasionally, a decrease in the magnitude of the velocity is referred to as deceleration, but generally any change in the velocity over time, including deceleration, is referred to as acceleration. 
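The relative-velocity relations above reduce to componentwise arithmetic once the velocities are written as vectors. Here is a minimal sketch in Python (NumPy is assumed purely for the vector arithmetic; the numbers are the two-car example from the text):

```python
import numpy as np

# Velocities in km/h, written as (east, north) components.
u = np.array([60.0, 0.0])  # first car (faster), heading east
v = np.array([50.0, 0.0])  # second car (slower), heading east

# Velocity of the faster car as seen from the slower car: u' = u - v
u_rel = u - v
# Velocity of the slower car as seen from the faster car: v' = v - u
v_rel = v - u

print(u_rel)  # [10.  0.]  -> 10 km/h east
print(v_rel)  # [-10.  0.] -> 10 km/h west, i.e. -10 km/h east
```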
==== Frames of reference ==== While the position, velocity and acceleration of a particle can be described with respect to any observer in any state of motion, classical mechanics assumes the existence of a special family of reference frames in which the mechanical laws of nature take a comparatively simple form. These special reference frames are called inertial frames. An inertial frame is an idealized frame of reference within which an object with zero net force acting upon it moves with a constant velocity; that is, it is either at rest or moving uniformly in a straight line. In an inertial frame, Newton's law of motion, F = m a {\displaystyle F=ma} , is valid.: 185  Non-inertial reference frames accelerate in relation to inertial frames. A frame rotating with respect to an inertial frame is not an inertial frame. When motion is described in a non-inertial frame, particles appear to move in ways not explained by forces from existing fields in the reference frame. Hence, it appears that there are other forces that enter the equations of motion solely as a result of the relative acceleration. These forces are referred to as fictitious forces, inertia forces, or pseudo-forces. Consider two reference frames S and S'. For observers in each of the reference frames an event has space-time coordinates of (x,y,z,t) in frame S and (x',y',z',t') in frame S'. Assuming time is measured the same in all reference frames, if we require x = x' when t = 0, then the relation between the space-time coordinates of the same event observed from the reference frames S' and S, which are moving at a relative velocity u in the x direction, is: x ′ = x − t u , y ′ = y , z ′ = z , t ′ = t . {\displaystyle {\begin{aligned}x'&=x-tu,\\y'&=y,\\z'&=z,\\t'&=t.\end{aligned}}} This set of formulas defines a group transformation known as the Galilean transformation (informally, the Galilean transform). This group is a limiting case of the Poincaré group used in special relativity. The limiting case applies when the velocity u is very small compared to c, the speed of light. The transformations have the following consequences:

v′ = v − u (the velocity v′ of a particle from the perspective of S′ differs by u from its velocity v from the perspective of S);
a′ = a (the acceleration of a particle is the same in any inertial reference frame);
F′ = F (the force on a particle is the same in any inertial reference frame);
the speed of light is not a constant in classical mechanics, nor does the special position given to the speed of light in relativistic mechanics have a counterpart in classical mechanics.

For some problems, it is convenient to use rotating coordinates (reference frames). One can then either keep a mapping to a convenient inertial frame, or additionally introduce a fictitious centrifugal force and a Coriolis force. == Newtonian mechanics == A force in physics is any action that causes an object's velocity to change; that is, to accelerate. A force originates from within a field, such as an electrostatic field (caused by static electrical charges), an electromagnetic field (caused by moving charges), or a gravitational field (caused by mass), among others. Newton was the first to mathematically express the relationship between force and momentum. Some physicists interpret Newton's second law of motion as a definition of force and mass, while others consider it a fundamental postulate, a law of nature. 
Either interpretation has the same mathematical consequences, historically known as "Newton's second law": F = d p d t = d ( m v ) d t . {\displaystyle \mathbf {F} ={\mathrm {d} \mathbf {p} \over \mathrm {d} t}={\mathrm {d} (m\mathbf {v} ) \over \mathrm {d} t}.} The quantity mv is called the (canonical) momentum. The net force on a particle is thus equal to the rate of change of the momentum of the particle with time. For constant mass, since the definition of acceleration is a = dv/dt, the second law can be written in the simplified and more familiar form: F = m a . {\displaystyle \mathbf {F} =m\mathbf {a} \,.} So long as the force acting on a particle is known, Newton's second law is sufficient to describe the motion of a particle. Once independent relations for each force acting on a particle are available, they can be substituted into Newton's second law to obtain an ordinary differential equation, which is called the equation of motion. As an example, assume that friction is the only force acting on the particle, and that it may be modeled as a function of the velocity of the particle, for example: F R = − λ v , {\displaystyle \mathbf {F} _{\rm {R}}=-\lambda \mathbf {v} \,,} where λ is a positive constant and the negative sign indicates that the force is opposite the sense of the velocity. Then the equation of motion is − λ v = m a = m d v d t . {\displaystyle -\lambda \mathbf {v} =m\mathbf {a} =m{\mathrm {d} \mathbf {v} \over \mathrm {d} t}\,.} This can be integrated to obtain v = v 0 e − λ t / m {\displaystyle \mathbf {v} =\mathbf {v} _{0}e^{{-\lambda t}/{m}}} where v0 is the initial velocity. This means that the velocity of this particle decays exponentially to zero as time progresses. In this case, an equivalent viewpoint is that the kinetic energy of the particle is absorbed by friction (which converts it to heat energy in accordance with the conservation of energy), and the particle is slowing down. This expression can be further integrated to obtain the position r of the particle as a function of time. Important forces include the gravitational force and the Lorentz force for electromagnetism. In addition, Newton's third law can sometimes be used to deduce the forces acting on a particle: if it is known that particle A exerts a force F on another particle B, it follows that B must exert an equal and opposite reaction force, −F, on A. The strong form of Newton's third law requires that F and −F act along the line connecting A and B, while the weak form does not. Illustrations of the weak form of Newton's third law are often found for magnetic forces. === Work and energy === If a constant force F is applied to a particle that makes a displacement Δr, the work done by the force is defined as the scalar product of the force and displacement vectors: W = F ⋅ Δ r . {\displaystyle W=\mathbf {F} \cdot \Delta \mathbf {r} \,.} More generally, if the force varies as a function of position as the particle moves from r1 to r2 along a path C, the work done on the particle is given by the line integral W = ∫ C F ( r ) ⋅ d r . {\displaystyle W=\int _{C}\mathbf {F} (\mathbf {r} )\cdot \mathrm {d} \mathbf {r} \,.} If the work done in moving the particle from r1 to r2 is the same no matter what path is taken, the force is said to be conservative. Gravity is a conservative force, as is the force due to an idealized spring, as given by Hooke's law. The force due to friction is non-conservative. The kinetic energy Ek of a particle of mass m travelling at speed v is given by E k = 1 2 m v 2 . 
{\displaystyle E_{\mathrm {k} }={\tfrac {1}{2}}mv^{2}\,.} For extended objects composed of many particles, the kinetic energy of the composite body is the sum of the kinetic energies of the particles. The work–energy theorem states that for a particle of constant mass m, the total work W done on the particle as it moves from position r1 to r2 is equal to the change in kinetic energy Ek of the particle: W = Δ E k = E k 2 − E k 1 = 1 2 m ( v 2 2 − v 1 2 ) . {\displaystyle W=\Delta E_{\mathrm {k} }=E_{\mathrm {k_{2}} }-E_{\mathrm {k_{1}} }={\tfrac {1}{2}}m\left(v_{2}^{\,2}-v_{1}^{\,2}\right).} Conservative forces can be expressed as the gradient of a scalar function, known as the potential energy and denoted Ep: F = − ∇ E p . {\displaystyle \mathbf {F} =-\mathbf {\nabla } E_{\mathrm {p} }\,.} If all the forces acting on a particle are conservative, and Ep is the total potential energy (defined as the work done by the forces involved in rearranging the mutual positions of the bodies), obtained by summing the potential energies corresponding to each force, then F ⋅ Δ r = − ∇ E p ⋅ Δ r = − Δ E p . {\displaystyle \mathbf {F} \cdot \Delta \mathbf {r} =-\mathbf {\nabla } E_{\mathrm {p} }\cdot \Delta \mathbf {r} =-\Delta E_{\mathrm {p} }\,.} The decrease in the potential energy is equal to the increase in the kinetic energy − Δ E p = Δ E k ⇒ Δ ( E k + E p ) = 0 . {\displaystyle -\Delta E_{\mathrm {p} }=\Delta E_{\mathrm {k} }\Rightarrow \Delta (E_{\mathrm {k} }+E_{\mathrm {p} })=0\,.} This result is known as conservation of energy and states that the total energy, ∑ E = E k + E p , {\displaystyle \sum E=E_{\mathrm {k} }+E_{\mathrm {p} }\,,} is constant in time. This is often useful, because many commonly encountered forces are conservative. == Lagrangian mechanics == Lagrangian mechanics is a formulation of classical mechanics founded on the stationary-action principle (also known as the principle of least action). It was introduced by the Italian-French mathematician and astronomer Joseph-Louis Lagrange in his presentation to the Turin Academy of Science in 1760, culminating in his 1788 grand opus, Mécanique analytique. Lagrangian mechanics describes a mechanical system as a pair ( M , L ) {\textstyle (M,L)} consisting of a configuration space M {\textstyle M} and a smooth function L {\textstyle L} defined on that space, called the Lagrangian. For many systems, L = T − V , {\textstyle L=T-V,} where T {\textstyle T} and V {\displaystyle V} are the kinetic and potential energy of the system, respectively. The stationary action principle requires that the action functional of the system derived from L {\textstyle L} must remain at a stationary point (a maximum, minimum, or saddle) throughout the time evolution of the system. This constraint allows the calculation of the equations of motion of the system using Lagrange's equations. == Hamiltonian mechanics == Hamiltonian mechanics emerged in 1833 as a reformulation of Lagrangian mechanics. Introduced by Sir William Rowan Hamilton, Hamiltonian mechanics replaces (generalized) velocities q ˙ i {\displaystyle {\dot {q}}^{i}} used in Lagrangian mechanics with (generalized) momenta. Both theories provide interpretations of classical mechanics and describe the same physical phenomena. Hamiltonian mechanics has a close relationship with geometry (notably, symplectic geometry and Poisson structures) and serves as a link between classical and quantum mechanics. 
In this formalism, the dynamics of a system are governed by Hamilton's equations, which express the time derivatives of position and momentum variables in terms of partial derivatives of a function called the Hamiltonian: d q d t = ∂ H ∂ p , d p d t = − ∂ H ∂ q . {\displaystyle {\frac {\mathrm {d} {\boldsymbol {q}}}{\mathrm {d} t}}={\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {p}}}},\quad {\frac {\mathrm {d} {\boldsymbol {p}}}{\mathrm {d} t}}=-{\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {q}}}}.} The Hamiltonian is the Legendre transform of the Lagrangian, and in many situations of physical interest it is equal to the total energy of the system. == Limits of validity == Many branches of classical mechanics are simplifications or approximations of more accurate forms; two of the most accurate being general relativity and relativistic statistical mechanics. Geometric optics is an approximation to the quantum theory of light, and does not have a superior "classical" form. When neither quantum mechanics nor classical mechanics applies, such as at the quantum level with many degrees of freedom, quantum field theory (QFT) is of use. QFT deals with small distances and large speeds with many degrees of freedom, as well as the possibility of a change in the number of particles during an interaction. When treating large numbers of degrees of freedom at the macroscopic level, statistical mechanics becomes useful. Statistical mechanics describes the behavior of large (but countable) numbers of particles and their interactions as a whole at the macroscopic level. Statistical mechanics is mainly used in thermodynamics for systems that lie outside the bounds of the assumptions of classical thermodynamics. In the case of high-velocity objects approaching the speed of light, classical mechanics is enhanced by special relativity. When objects become extremely heavy (i.e., their Schwarzschild radius is not negligibly small for a given application), deviations from Newtonian mechanics become apparent and can be quantified by using the parameterized post-Newtonian formalism. In that case, general relativity (GR) becomes applicable. However, to date there is no theory of quantum gravity unifying GR and QFT in the sense that it could be used when objects become both extremely small and extremely heavy.[4][5] === Newtonian approximation to special relativity === In special relativity, the momentum of a particle is given by p = m v 1 − v 2 c 2 , {\displaystyle \mathbf {p} ={\frac {m\mathbf {v} }{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}\,,} where m is the particle's rest mass, v its velocity, v the magnitude of v, and c the speed of light. If v is very small compared to c, v2/c2 is approximately zero, and so p ≈ m v . {\displaystyle \mathbf {p} \approx m\mathbf {v} \,.} Thus the Newtonian equation p = mv is an approximation of the relativistic equation for bodies moving with low speeds compared to the speed of light. For example, the relativistic cyclotron frequency of a cyclotron, gyrotron, or high voltage magnetron is given by f = f c m 0 m 0 + T c 2 , {\displaystyle f=f_{\mathrm {c} }{\frac {m_{0}}{m_{0}+{\frac {T}{c^{2}}}}}\,,} where fc is the classical frequency of an electron (or other charged particle) with kinetic energy T and (rest) mass m0 circling in a magnetic field. The rest energy of an electron is 511 keV. So the frequency correction is 1% for a magnetic vacuum tube with a 5.11 kV direct current accelerating voltage. 
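These low-speed estimates are easy to check numerically. The following minimal sketch (in Python; the numerical values are the ones quoted above, and the helper name gamma is our own) computes the ratio of relativistic to Newtonian momentum and the quoted 1% cyclotron frequency correction:

```python
import math

c = 299_792_458.0  # speed of light in m/s

def gamma(v):
    """Lorentz factor: ratio of relativistic momentum to the Newtonian m*v."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# At 1% of the speed of light the Newtonian p = mv is already excellent:
print(gamma(0.01 * c))  # ~1.00005, i.e. a 0.005% correction

# Cyclotron frequency correction f/fc = m0 / (m0 + T/c^2), with both the
# rest energy m0*c^2 and the kinetic energy T expressed in keV:
rest_energy_keV = 511.0
T_keV = 5.11  # electron accelerated through 5.11 kV
print(rest_energy_keV / (rest_energy_keV + T_keV))  # ~0.990, a 1% correction
```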
=== Classical approximation to quantum mechanics === The ray approximation of classical mechanics breaks down when the de Broglie wavelength is not much smaller than other dimensions of the system. For non-relativistic particles, this wavelength is λ = h p {\displaystyle \lambda ={\frac {h}{p}}} where h is the Planck constant and p is the momentum. Again, this happens with electrons before it happens with heavier particles. For example, the electrons used by Clinton Davisson and Lester Germer in 1927, accelerated by 54 V, had a wavelength of 0.167 nm, which was long enough to exhibit a single diffraction side lobe when reflecting from the face of a nickel crystal with atomic spacing of 0.215 nm. With a larger vacuum chamber, it would seem relatively easy to increase the angular resolution from around a radian to a milliradian and see quantum diffraction from the periodic patterns of integrated circuit computer memory. More practical examples of the failure of classical mechanics on an engineering scale are conduction by quantum tunneling in tunnel diodes and very narrow transistor gates in integrated circuits. Classical mechanics is the same extreme high-frequency approximation as geometric optics. It is more often accurate because it describes particles and bodies with rest mass. These have more momentum and therefore shorter de Broglie wavelengths than massless particles, such as light, with the same kinetic energies. == History == The study of the motion of bodies is an ancient one, making classical mechanics one of the oldest and largest subjects in science, engineering, and technology. The development of classical mechanics led to the development of many areas of mathematics.: 54  Some Greek philosophers of antiquity, among them Aristotle, founder of Aristotelian physics, may have been the first to maintain the idea that "everything happens for a reason" and that theoretical principles can assist in the understanding of nature. While, to a modern reader, many of these preserved ideas appear eminently reasonable, there is a conspicuous lack of both mathematical theory and controlled experiment as we know them. These later became decisive factors in forming modern science, and their early application came to be known as classical mechanics. In his Elementa super demonstrationem ponderum, medieval mathematician Jordanus de Nemore introduced the concept of "positional gravity" and the use of component forces. The first published causal explanation of the motions of planets was Johannes Kepler's Astronomia nova, published in 1609. He concluded, based on Tycho Brahe's observations of the orbit of Mars, that planetary orbits are ellipses. This break with ancient thought was happening around the same time that Galileo was proposing abstract mathematical laws for the motion of objects. He may (or may not) have performed the famous experiment of dropping two cannonballs of different weights from the Tower of Pisa, showing that they both hit the ground at the same time. The reality of that particular experiment is disputed, but he did carry out quantitative experiments by rolling balls on an inclined plane. His theory of accelerated motion was derived from the results of such experiments and forms a cornerstone of classical mechanics. In 1673, Christiaan Huygens described the first two laws of motion in his Horologium Oscillatorium. 
The work is also the first modern treatise in which a physical problem (the accelerated motion of a falling body) is idealized by a set of parameters and then analyzed mathematically, and it constitutes one of the seminal works of applied mathematics. Newton founded his principles of natural philosophy on three proposed laws of motion: the law of inertia, his second law of acceleration (mentioned above), and the law of action and reaction; and hence laid the foundations for classical mechanics. Both Newton's second and third laws were given the proper scientific and mathematical treatment in Newton's Philosophiæ Naturalis Principia Mathematica. Here they are distinguished from earlier attempts at explaining similar phenomena, which were either incomplete, incorrect, or given little accurate mathematical expression. Newton also enunciated the principles of conservation of momentum and angular momentum. In mechanics, Newton was also the first to provide a correct scientific and mathematical formulation of gravity, in Newton's law of universal gravitation. The combination of Newton's laws of motion and gravitation provides the fullest and most accurate description of classical mechanics. He demonstrated that these laws apply to everyday objects as well as to celestial objects. In particular, he obtained a theoretical explanation of Kepler's laws of motion of the planets. Newton had previously invented the calculus; however, the Principia was formulated entirely in terms of long-established geometric methods in emulation of Euclid. Newton, and most of his contemporaries, with the notable exception of Huygens, worked on the assumption that classical mechanics would be able to explain all phenomena, including light, in the form of geometric optics. Even when discovering the so-called Newton's rings (a wave interference phenomenon), he maintained his own corpuscular theory of light. After Newton, classical mechanics became a principal field of study in mathematics as well as physics. Mathematical formulations progressively allowed finding solutions to a far greater number of problems. The first notable mathematical treatment was in 1788 by Joseph-Louis Lagrange. Lagrangian mechanics was in turn re-formulated in 1833 by William Rowan Hamilton. Some difficulties were discovered in the late 19th century that could only be resolved by more modern physics. Some of these difficulties related to compatibility with electromagnetic theory, and the famous Michelson–Morley experiment. The resolution of these problems led to the special theory of relativity, often still considered a part of classical mechanics. A second set of difficulties was related to thermodynamics. When combined with thermodynamics, classical mechanics leads to the Gibbs paradox of classical statistical mechanics, in which entropy is not a well-defined quantity. Black-body radiation could not be explained without the introduction of quanta. As experiments reached the atomic level, classical mechanics failed to explain, even approximately, such basic things as the energy levels and sizes of atoms and the photoelectric effect. The effort at resolving these problems led to the development of quantum mechanics. Since the end of the 20th century, classical mechanics in physics has no longer been an independent theory. Instead, classical mechanics is now considered an approximate theory to the more general quantum mechanics. 
Emphasis has shifted to understanding the fundamental forces of nature as in the Standard Model and its more modern extensions into a unified theory of everything. Classical mechanics is a theory useful for the study of the motion of non-quantum mechanical, low-energy particles in weak gravitational fields.

== See also ==

== Notes ==

== References ==

== Further reading ==
Alonso, M.; Finn, J. (1992). Fundamental University Physics. Addison-Wesley.
Feynman, Richard (1999). The Feynman Lectures on Physics. Perseus Publishing. ISBN 978-0-7382-0092-7.
Feynman, Richard; Phillips, Richard (1998). Six Easy Pieces. Perseus Publishing. ISBN 978-0-201-32841-7.
Goldstein, Herbert; Poole, Charles P.; Safko, John L. (2002). Classical Mechanics (3rd ed.). Addison Wesley. ISBN 978-0-201-65702-9.
Kibble, Tom W.B.; Berkshire, Frank H. (2004). Classical Mechanics (5th ed.). Imperial College Press. ISBN 978-1-86094-424-6.
Kleppner, D.; Kolenkow, R.J. (1973). An Introduction to Mechanics. McGraw-Hill. ISBN 978-0-07-035048-9.
Landau, L.D.; Lifshitz, E.M. (1972). Course of Theoretical Physics, Vol. 1 – Mechanics. Franklin Book Company. ISBN 978-0-08-016739-8.
Morin, David (2008). Introduction to Classical Mechanics: With Problems and Solutions (1st ed.). Cambridge: Cambridge University Press. ISBN 978-0-521-87622-3.
O'Donnell, Peter J. (2015). Essential Dynamics and Relativity. CRC Press. ISBN 978-1-4665-8839-4.
Sussman, Gerald Jay; Wisdom, Jack (2001). Structure and Interpretation of Classical Mechanics. MIT Press. ISBN 978-0-262-19455-6.
Thornton, Stephen T.; Marion, Jerry B. (2003). Classical Dynamics of Particles and Systems (5th ed.). Brooks Cole. ISBN 978-0-534-40896-1.

== External links ==
Crowell, Benjamin. Light and Matter (an introductory text, uses algebra with optional sections involving calculus)
Fitzpatrick, Richard. Classical Mechanics (uses calculus)
Hoiland, Paul (2004). Preferred Frames of Reference & Relativity
Horbatsch, Marko, "Classical Mechanics Course Notes".
Rosu, Haret C., "Classical Mechanics". Physics Education. 1999. [arxiv.org : physics/9909035]
Shapiro, Joel A. (2003). Classical Mechanics
Sussman, Gerald Jay; Wisdom, Jack; Mayer, Meinhard E. (2001). Structure and Interpretation of Classical Mechanics
Tong, David. Classical Dynamics (Cambridge lecture notes on Lagrangian and Hamiltonian formalism)
Kinematic Models for Design Digital Library (KMODDL). Movies and photos of hundreds of working mechanical-systems models at Cornell University. Also includes an e-book library of classic texts on mechanical design and engineering.
MIT OpenCourseWare 8.01: Classical Mechanics. Free videos of actual course lectures with links to lecture notes, assignments and exams.
Torassa, Alejandro A. On Classical Mechanics
Wikipedia/Classical_mechanics
In physics, a force is an influence that can cause an object to change its velocity unless counterbalanced by other forces. In mechanics, force makes ideas like 'pushing' or 'pulling' mathematically precise. Because the magnitude and direction of a force are both important, force is a vector quantity. The SI unit of force is the newton (N), and force is often represented by the symbol F. Force plays an important role in classical mechanics. The concept of force is central to all three of Newton's laws of motion. Types of forces often encountered in classical mechanics include elastic, frictional, contact or "normal" forces, and gravitational. The rotational version of force is torque, which produces changes in the rotational speed of an object. In an extended body, each part applies forces on the adjacent parts; the distribution of such forces through the body is the internal mechanical stress. In the case of multiple forces, if the net force on an extended body is zero, the body is in equilibrium. In modern physics, which includes relativity and quantum mechanics, the laws governing motion are revised to rely on fundamental interactions as the ultimate origin of force. However, the understanding of force provided by classical mechanics is useful for practical purposes. == Development of the concept == Philosophers in antiquity used the concept of force in the study of stationary and moving objects and simple machines, but thinkers such as Aristotle and Archimedes retained fundamental errors in understanding force. In part, this was due to an incomplete understanding of the sometimes non-obvious force of friction and a consequently inadequate view of the nature of natural motion. A fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about motion and force were eventually corrected by Galileo Galilei and Sir Isaac Newton. With his mathematical insight, Newton formulated laws of motion that were not improved upon for over two hundred years. By the early 20th century, Einstein developed a theory of relativity that correctly predicted the action of forces on objects with increasing momenta near the speed of light and also provided insight into the forces produced by gravitation and inertia. With modern insights into quantum mechanics and technology that can accelerate particles close to the speed of light, particle physics has devised a Standard Model to describe forces between particles smaller than atoms. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known; in order of decreasing strength, they are the strong, electromagnetic, weak, and gravitational interactions.: 2–10 : 79  High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction. == Pre-Newtonian concepts == Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines. The mechanical advantage given by a simple machine allowed for less force to be used in exchange for that force acting over a greater distance for the same amount of work. Analysis of the characteristics of forces ultimately culminated in the work of Archimedes, who was especially famous for formulating a treatment of buoyant forces inherent in fluids. 
Aristotle provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the terrestrial sphere contained four elements that come to rest at different "natural places" therein. Aristotle believed that motionless objects on Earth, those composed mostly of the elements earth and water, were in their natural place when on the ground, and that they stay that way if left alone. He distinguished between the innate tendency of objects to find their "natural place" (e.g., for heavy bodies to fall), which led to "natural motion", and unnatural or forced motion, which required continued application of a force. This theory, based on the everyday experience of how objects move, such as the constant application of a force needed to keep a cart moving, had conceptual trouble accounting for the behavior of projectiles, such as the flight of arrows. An archer causes the arrow to move at the start of the flight, and it then sails through the air even though no discernible efficient cause acts upon it. Aristotle was aware of this problem and proposed that the air displaced through the projectile's path carries the projectile to its target. This explanation requires a continuous medium such as air to sustain the motion. Though Aristotelian physics was criticized as early as the 6th century, its shortcomings would not be corrected until the 17th-century work of Galileo Galilei, who was influenced by the late medieval idea that objects in forced motion carried an innate force of impetus. Galileo constructed an experiment in which stones and cannonballs were both rolled down an incline to disprove the Aristotelian theory of motion. He showed that the bodies were accelerated by gravity to an extent that was independent of their mass and argued that objects retain their velocity unless acted on by a force, for example friction. Galileo's idea that force is needed to change motion rather than to sustain it, further improved upon by Isaac Beeckman, René Descartes, and Pierre Gassendi, became a key principle of Newtonian physics. In the early 17th century, before Newton's Principia, the term "force" (Latin: vis) was applied to many physical and non-physical phenomena, e.g., to the acceleration of a point. The product of a point mass and the square of its velocity was named vis viva (live force) by Leibniz. The modern concept of force corresponds to Newton's vis motrix (accelerating force). == Newtonian mechanics == Sir Isaac Newton described the motion of all objects using the concepts of inertia and force. In 1687, Newton published his magnum opus, Philosophiæ Naturalis Principia Mathematica. In this work, Newton set out three laws of motion that have dominated the way forces are described in physics to this day. The precise ways in which Newton's laws are expressed have evolved in step with new mathematical approaches. === First law === Newton's first law of motion states that the natural behavior of an object at rest is to continue being at rest, and the natural behavior of an object moving at constant speed in a straight line is to continue moving at that constant speed along that straight line. The latter follows from the former because of the principle that the laws of physics are the same for all inertial observers, i.e., all observers who do not feel themselves to be in motion. An observer moving in tandem with an object will see it as being at rest. 
So, its natural behavior will be to remain at rest with respect to that observer, which means that an observer who sees it moving at constant speed in a straight line will see it continuing to do so.: 1–7  === Second law === According to the first law, motion at constant speed in a straight line does not need a cause. It is change in motion that requires a cause, and Newton's second law gives the quantitative relationship between force and change of motion. Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. If the mass of the object is constant, this law implies that the acceleration of an object is directly proportional to the net force acting on the object, is in the direction of the net force, and is inversely proportional to the mass of the object.: 204–207  A modern statement of Newton's second law is a vector equation: F = d p d t , {\displaystyle \mathbf {F} ={\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}},} where p {\displaystyle \mathbf {p} } is the momentum of the system, and F {\displaystyle \mathbf {F} } is the net (vector sum) force.: 399  If a body is in equilibrium, there is zero net force by definition (balanced forces may be present nevertheless). In contrast, the second law states that if there is an unbalanced force acting on an object it will result in the object's momentum changing over time. In common engineering applications, the mass in a system remains constant, allowing a simple algebraic form for the second law. By the definition of momentum, F = d p d t = d ( m v ) d t , {\displaystyle \mathbf {F} ={\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}}={\frac {\mathrm {d} \left(m\mathbf {v} \right)}{\mathrm {d} t}},} where m is the mass and v {\displaystyle \mathbf {v} } is the velocity.: 9-1,9-2  If Newton's second law is applied to a system of constant mass, m may be moved outside the derivative operator. The equation then becomes F = m d v d t . {\displaystyle \mathbf {F} =m{\frac {\mathrm {d} \mathbf {v} }{\mathrm {d} t}}.} By substituting the definition of acceleration, the algebraic version of Newton's second law is derived: F = m a . {\displaystyle \mathbf {F} =m\mathbf {a} .} === Third law === Whenever one body exerts a force on another, the latter simultaneously exerts an equal and opposite force on the first. In vector form, if F 1 , 2 {\displaystyle \mathbf {F} _{1,2}} is the force of body 1 on body 2 and F 2 , 1 {\displaystyle \mathbf {F} _{2,1}} that of body 2 on body 1, then F 1 , 2 = − F 2 , 1 . {\displaystyle \mathbf {F} _{1,2}=-\mathbf {F} _{2,1}.} This law is sometimes referred to as the action-reaction law, with F 1 , 2 {\displaystyle \mathbf {F} _{1,2}} called the action and − F 2 , 1 {\displaystyle -\mathbf {F} _{2,1}} the reaction. Newton's third law is a result of applying symmetry to situations where forces can be attributed to the presence of different objects. The third law means that all forces are interactions between different bodies, and thus that there is no such thing as a unidirectional force or a force that acts on only one body. In a system composed of object 1 and object 2, the net force on the system due to their mutual interactions is zero: F 1 , 2 + F 2 , 1 = 0. {\displaystyle \mathbf {F} _{1,2}+\mathbf {F} _{2,1}=0.} More generally, in a closed system of particles, all internal forces are balanced. The particles may accelerate with respect to each other but the center of mass of the system will not accelerate. 
If an external force acts on the system, it will make the center of mass accelerate in proportion to the magnitude of the external force divided by the mass of the system.: 19-1  Combining Newton's second and third laws, it is possible to show that the linear momentum of a system is conserved in any closed system. In a system of two particles, if p 1 {\displaystyle \mathbf {p} _{1}} is the momentum of object 1 and p 2 {\displaystyle \mathbf {p} _{2}} the momentum of object 2, then d p 1 d t + d p 2 d t = F 1 , 2 + F 2 , 1 = 0. {\displaystyle {\frac {\mathrm {d} \mathbf {p} _{1}}{\mathrm {d} t}}+{\frac {\mathrm {d} \mathbf {p} _{2}}{\mathrm {d} t}}=\mathbf {F} _{1,2}+\mathbf {F} _{2,1}=0.} Using similar arguments, this can be generalized to a system with an arbitrary number of particles. In general, as long as all forces are due to the interaction of objects with mass, it is possible to define a system such that net momentum is neither lost nor gained.: ch.12  === Defining "force" === Some textbooks use Newton's second law as a definition of force. However, for the equation F = m a {\displaystyle \mathbf {F} =m\mathbf {a} } for a constant mass m {\displaystyle m} to then have any predictive content, it must be combined with further information.: 12-1  Moreover, inferring that a force is present because a body is accelerating is only valid in an inertial frame of reference.: 59  The question of which aspects of Newton's laws to take as definitions and which to regard as holding physical content has been answered in various ways,: vii  which ultimately do not affect how the theory is used in practice. Notable physicists, philosophers and mathematicians who have sought a more explicit definition of the concept of force include Ernst Mach and Walter Noll. == Combining forces == Forces act in a particular direction and have sizes dependent upon how strong the push or pull is. Because of these characteristics, forces are classified as "vector quantities". This means that forces follow a different set of mathematical rules than physical quantities that do not have direction (denoted scalar quantities). For example, when determining what happens when two forces act on the same object, it is necessary to know both the magnitude and the direction of both forces to calculate the result. If either of these pieces of information is not known for each force, the situation is ambiguous.: 197  Historically, forces were first quantitatively investigated in conditions of static equilibrium where several forces canceled each other out. Such experiments demonstrate the crucial properties that forces are additive vector quantities: they have magnitude and direction. When two forces act on a point particle, the resulting force, the resultant (also called the net force), can be determined by following the parallelogram rule of vector addition: the addition of two vectors represented by sides of a parallelogram gives an equivalent resultant vector that is equal in magnitude and direction to the diagonal of the parallelogram. The magnitude of the resultant varies from the difference of the magnitudes of the two forces to their sum, depending on the angle between their lines of action.: ch.12  Free-body diagrams can be used as a convenient way to keep track of forces acting on a system. Ideally, these diagrams are drawn with the angles and relative magnitudes of the force vectors preserved so that graphical vector addition can be done to determine the net force. 
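The dependence of the resultant's magnitude on the angle between the two forces is easy to verify numerically. Here is a minimal sketch in Python (NumPy is assumed for the vector arithmetic; the magnitudes 3 N and 4 N are illustrative choices, not taken from the text):

```python
import numpy as np

# Resultant of two forces of fixed magnitudes as the angle between
# their lines of action varies, per the parallelogram rule.
F1, F2 = 3.0, 4.0  # magnitudes in newtons

for angle_deg in (0, 60, 90, 180):
    theta = np.radians(angle_deg)
    a = np.array([F1, 0.0])                            # first force along +x
    b = F2 * np.array([np.cos(theta), np.sin(theta)])  # second force at angle theta
    print(angle_deg, round(float(np.linalg.norm(a + b)), 3))

# 0 -> 7.0 (the sum), 90 -> 5.0, 180 -> 1.0 (the difference)
```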
As well as being added, forces can also be resolved into independent components at right angles to each other. A horizontal force pointing northeast can therefore be split into two forces, one pointing north, and one pointing east. Summing these component forces using vector addition yields the original force. Resolving force vectors into components of a set of basis vectors is often a more mathematically clean way to describe forces than using magnitudes and directions. This is because, for orthogonal components, the components of the vector sum are uniquely determined by the scalar addition of the components of the individual vectors. Orthogonal components are independent of each other because forces acting at ninety degrees to each other have no effect on the magnitude or direction of the other. Choosing a set of orthogonal basis vectors is often done by considering what set of basis vectors will make the mathematics most convenient. Choosing a basis vector that is in the same direction as one of the forces is desirable, since that force would then have only one non-zero component. Orthogonal force vectors can be three-dimensional with the third component being at right angles to the other two.: ch.12  === Equilibrium === When all the forces that act upon an object are balanced, then the object is said to be in a state of equilibrium.: 566  Hence, equilibrium occurs when the resultant force acting on a point particle is zero (that is, the vector sum of all forces is zero). When dealing with an extended body, it is also necessary that the net torque be zero. A body is in static equilibrium with respect to a frame of reference if it is at rest and not accelerating, whereas a body in dynamic equilibrium is moving at a constant speed in a straight line, i.e., moving but not accelerating. What one observer sees as static equilibrium, another can see as dynamic equilibrium and vice versa.: 566  ==== Static ==== Static equilibrium was understood well before the invention of classical mechanics. Objects that are not accelerating have zero net force acting on them. The simplest case of static equilibrium occurs when two forces are equal in magnitude but opposite in direction. For example, an object on a level surface is pulled (attracted) downward toward the center of the Earth by the force of gravity. At the same time, a force is applied by the surface that resists the downward force with equal upward force (called a normal force). The situation produces zero net force and hence no acceleration. Pushing against an object that rests on a frictional surface can result in a situation where the object does not move because the applied force is opposed by static friction, generated between the object and the surface. For a situation with no movement, the static friction force exactly balances the applied force, resulting in no acceleration. The static friction increases or decreases in response to the applied force up to an upper limit determined by the characteristics of the contact between the surface and the object. A static equilibrium between two forces is the most usual way of measuring forces, using simple devices such as weighing scales and spring balances. For example, an object suspended on a vertical spring scale experiences the force of gravity acting on the object balanced by a force applied by the "spring reaction force", which equals the object's weight. 
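As a concrete check of this balance, here is a minimal sketch in Python (NumPy assumed; the 2 kg mass is an illustrative value) verifying that the spring reaction force and gravity sum to zero for a hanging object:

```python
import numpy as np

m = 2.0   # mass of the suspended object in kg (illustrative)
g = 9.81  # gravitational acceleration in m/s^2

weight = np.array([0.0, -m * g])  # gravity, directed downward
spring = np.array([0.0, m * g])   # spring reaction force, directed upward

net = weight + spring
assert np.allclose(net, 0.0)  # static equilibrium: zero net force
print(spring[1])              # 19.62 N, the scale reading (the object's weight)
```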
Using such tools, some quantitative force laws were discovered: that the force of gravity is proportional to volume for objects of constant density (widely exploited for millennia to define standard weights); Archimedes' principle for buoyancy; Archimedes' analysis of the lever; Boyle's law for gas pressure; and Hooke's law for springs. These were all formulated and experimentally verified before Isaac Newton expounded his three laws of motion.: ch.12  ==== Dynamic ==== Dynamic equilibrium was first described by Galileo, who noticed that certain assumptions of Aristotelian physics were contradicted by observations and logic. Galileo realized that simple velocity addition demands that the concept of an "absolute rest frame" could not exist. Galileo concluded that motion at a constant velocity was completely equivalent to rest. This was contrary to Aristotle's notion of a "natural state" of rest that objects with mass naturally approached. Simple experiments showed that Galileo's understanding of the equivalence of constant velocity and rest was correct. For example, if a mariner dropped a cannonball from the crow's nest of a ship moving at a constant velocity, Aristotelian physics would have the cannonball fall straight down while the ship moved beneath it. Thus, in an Aristotelian universe, the falling cannonball would land behind the foot of the mast of a moving ship. When this experiment is actually conducted, the cannonball always falls at the foot of the mast, as if the cannonball knows to travel with the ship despite being separated from it. Since there is no forward horizontal force being applied on the cannonball as it falls, the only conclusion left is that the cannonball continues to move with the same velocity as the boat as it falls. Thus, no force is required to keep the cannonball moving at the constant forward velocity. Moreover, any object traveling at a constant velocity must be subject to zero net force (resultant force). This is the definition of dynamic equilibrium: when all the forces on an object balance but it still moves at a constant velocity. A simple case of dynamic equilibrium occurs in constant velocity motion across a surface with kinetic friction. In such a situation, a force is applied in the direction of motion while the kinetic friction force exactly opposes the applied force. This results in zero net force, but since the object started with a non-zero velocity, it continues to move with a non-zero velocity. Aristotle misinterpreted this motion as being caused by the applied force. When kinetic friction is taken into consideration it is clear that there is no net force causing constant velocity motion.: ch.12  == Examples of forces in classical mechanics == Some forces are consequences of the fundamental ones. In such situations, idealized models can be used to gain physical insight. For example, a solid object may be treated as a rigid body. === Gravitational force or Gravity === What we now call gravity was not identified as a universal force until the work of Isaac Newton. Before Newton, the tendency for objects to fall towards the Earth was not understood to be related to the motions of celestial objects. Galileo was instrumental in describing the characteristics of falling objects by determining that the acceleration of every object in free-fall was constant and independent of the mass of the object. 
Today, this acceleration due to gravity towards the surface of the Earth is usually designated as g {\displaystyle \mathbf {g} } and has a magnitude of about 9.81 meters per second squared (this measurement is taken from sea level and may vary depending on location), and points toward the center of the Earth. This observation means that the force of gravity on an object at the Earth's surface is directly proportional to the object's mass. Thus an object that has a mass of m {\displaystyle m} will experience a force: F = m g . {\displaystyle \mathbf {F} =m\mathbf {g} .} For an object in free-fall, this force is unopposed and the net force on the object is its weight. For objects not in free-fall, the force of gravity is opposed by the reaction forces applied by their supports. For example, a person standing on the ground experiences zero net force, since a normal force (a reaction force) is exerted by the ground upward on the person that counterbalances the person's weight, which is directed downward.: ch.12  Newton's contribution to gravitational theory was to unify the motions of heavenly bodies, which Aristotle had assumed were in a natural state of constant motion, with falling motion observed on the Earth. He proposed a law of gravity that could account for the celestial motions that had been described earlier using Kepler's laws of planetary motion. Newton came to realize that the effects of gravity might be observed in different ways at larger distances. In particular, Newton determined that the acceleration of the Moon around the Earth could be ascribed to the same force of gravity if the acceleration due to gravity decreased as an inverse square law. Further, Newton realized that the acceleration of a body due to gravity is proportional to the mass of the other attracting body. Combining these ideas gives a formula that relates the mass ( m ⊕ {\displaystyle m_{\oplus }} ) and the radius ( R ⊕ {\displaystyle R_{\oplus }} ) of the Earth to the gravitational acceleration: g = − G m ⊕ R ⊕ 2 r ^ , {\displaystyle \mathbf {g} =-{\frac {Gm_{\oplus }}{{R_{\oplus }}^{2}}}{\hat {\mathbf {r} }},} where the vector direction is given by r ^ {\displaystyle {\hat {\mathbf {r} }}} , the unit vector directed outward from the center of the Earth. In this equation, a dimensional constant G {\displaystyle G} is used to describe the relative strength of gravity. This constant has come to be known as the Newtonian constant of gravitation, though its value was unknown in Newton's lifetime. Not until 1798 was Henry Cavendish able to make the first measurement of G {\displaystyle G} using a torsion balance; this was widely reported in the press as a measurement of the mass of the Earth, since knowing G {\displaystyle G} could allow one to solve for the Earth's mass given the above equation. Newton realized that since all celestial bodies followed the same laws of motion, his law of gravity had to be universal. Succinctly stated, Newton's law of gravitation states that the force on a spherical object of mass m 1 {\displaystyle m_{1}} due to the gravitational pull of mass m 2 {\displaystyle m_{2}} is F = − G m 1 m 2 r 2 r ^ , {\displaystyle \mathbf {F} =-{\frac {Gm_{1}m_{2}}{r^{2}}}{\hat {\mathbf {r} }},} where r {\displaystyle r} is the distance between the two objects' centers of mass and r ^ {\displaystyle {\hat {\mathbf {r} }}} is the unit vector pointed in the direction away from the center of the first object toward the center of the second object. 
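The relation g = Gm⊕/R⊕² can be checked directly against the surface value quoted above. A minimal sketch in Python (the constants below are standard values assumed here, not taken from the text):

```python
# Computing the surface gravitational acceleration g = G * m_earth / R_earth**2.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_earth = 5.972e24   # mass of the Earth, kg
R_earth = 6.371e6    # mean radius of the Earth, m

g = G * m_earth / R_earth**2
print(round(g, 2))   # ~9.82 m/s^2, close to the 9.81 m/s^2 quoted above
```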
This formula was powerful enough to stand as the basis for all subsequent descriptions of motion within the Solar System until the 20th century. During that time, sophisticated methods of perturbation analysis were invented to calculate the deviations of orbits due to the influence of multiple bodies on a planet, moon, comet, or asteroid. The formalism was exact enough to allow mathematicians to predict the existence of the planet Neptune before it was observed. === Electromagnetic === The electrostatic force was first described in 1784 by Coulomb as a force that existed intrinsically between two charges.: 519  The properties of the electrostatic force were that it varied as an inverse square law directed in the radial direction, was both attractive and repulsive (there was intrinsic polarity), was independent of the mass of the charged objects, and followed the superposition principle. Coulomb's law unifies all these observations into one succinct statement. Subsequent mathematicians and physicists found the construct of the electric field to be useful for determining the electrostatic force on an electric charge at any point in space. The electric field was based on using a hypothetical "test charge" anywhere in space and then using Coulomb's law to determine the electrostatic force.: 4-6–4-8  Thus the electric field anywhere in space is defined as E = F q , {\displaystyle \mathbf {E} ={\mathbf {F} \over {q}},} where q {\displaystyle q} is the magnitude of the hypothetical test charge. Similarly, the idea of the magnetic field was introduced to express how magnets can influence one another at a distance. The Lorentz force law gives the force upon a body with charge q {\displaystyle q} due to electric and magnetic fields: F = q ( E + v × B ) , {\displaystyle \mathbf {F} =q\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right),} where F {\displaystyle \mathbf {F} } is the electromagnetic force, E {\displaystyle \mathbf {E} } is the electric field at the body's location, B {\displaystyle \mathbf {B} } is the magnetic field, and v {\displaystyle \mathbf {v} } is the velocity of the particle. The magnetic contribution to the Lorentz force is the cross product of the velocity vector with the magnetic field.: 482  The origin of electric and magnetic fields would not be fully explained until 1864 when James Clerk Maxwell unified a number of earlier theories into a set of 20 scalar equations, which were later reformulated into 4 vector equations by Oliver Heaviside and Josiah Willard Gibbs. These "Maxwell's equations" fully described the sources of the fields as being stationary and moving charges, and the interactions of the fields themselves. This led Maxwell to discover that electric and magnetic fields could be "self-generating" through a wave that traveled at a speed that he calculated to be the speed of light. This insight united the nascent field of electromagnetic theory with optics and led directly to a complete description of the electromagnetic spectrum. === Normal === When objects are in contact, the force directly between them is called the normal force, the component of the total force in the system exerted normal to the interface between the objects.: 264  The normal force is closely related to Newton's third law. The normal force, for example, is responsible for the structural integrity of tables and floors as well as being the force that responds whenever an external force pushes on a solid object. 
An example of the normal force in action is the impact force on an object crashing into an immobile surface.: ch.12  === Friction === Friction is a force that opposes relative motion of two bodies. At the macroscopic scale, the frictional force is directly related to the normal force at the point of contact. There are two broad classifications of frictional forces: static friction and kinetic friction.: 267  The static friction force ( F s f {\displaystyle \mathbf {F} _{\mathrm {sf} }} ) will exactly oppose forces applied to an object parallel to a surface up to the limit specified by the coefficient of static friction ( μ s f {\displaystyle \mu _{\mathrm {sf} }} ) multiplied by the normal force ( F N {\displaystyle \mathbf {F} _{\text{N}}} ). In other words, the magnitude of the static friction force satisfies the inequality: 0 ≤ F s f ≤ μ s f F N . {\displaystyle 0\leq \mathbf {F} _{\mathrm {sf} }\leq \mu _{\mathrm {sf} }\mathbf {F} _{\mathrm {N} }.} The kinetic friction force ( F k f {\displaystyle F_{\mathrm {kf} }} ) is typically independent of both the forces applied and the movement of the object. Thus, the magnitude of the force equals: F k f = μ k f F N , {\displaystyle \mathbf {F} _{\mathrm {kf} }=\mu _{\mathrm {kf} }\mathbf {F} _{\mathrm {N} },} where μ k f {\displaystyle \mu _{\mathrm {kf} }} is the coefficient of kinetic friction. The coefficient of kinetic friction is normally less than the coefficient of static friction.: 267–271  === Tension === Tension forces can be modeled using ideal strings that are massless, frictionless, unbreakable, and do not stretch. They can be combined with ideal pulleys, which allow ideal strings to switch physical direction. Ideal strings transmit tension forces instantaneously in action–reaction pairs so that if two objects are connected by an ideal string, any force directed along the string by the first object is accompanied by a force directed along the string in the opposite direction by the second object. By connecting the same string multiple times to the same object through the use of a configuration that uses movable pulleys, the tension force on a load can be multiplied. For every string that acts on a load, another factor of the tension force in the string acts on the load. Such machines allow a mechanical advantage for a corresponding increase in the length of displaced string needed to move the load. These tandem effects result ultimately in the conservation of mechanical energy since the work done on the load is the same no matter how complicated the machine.: ch.12  === Spring === A simple elastic force acts to return a spring to its natural length. An ideal spring is taken to be massless, frictionless, unbreakable, and infinitely stretchable. Such springs exert forces that push when contracted, or pull when extended, in proportion to the displacement of the spring from its equilibrium position. This linear relationship was described by Robert Hooke in 1676, for whom Hooke's law is named. If Δ x {\displaystyle \Delta x} is the displacement, the force exerted by an ideal spring equals: F = − k Δ x , {\displaystyle \mathbf {F} =-k\Delta \mathbf {x} ,} where k {\displaystyle k} is the spring constant (or force constant), which is particular to the spring. 
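The friction and spring models above translate directly into code. The following Python sketch evaluates both the piecewise static/kinetic friction rule and Hooke's restoring force; the coefficients and the spring constant are assumed values chosen only for illustration.

```python
import math

# Sketch of the two force models above. The coefficients mu_s, mu_k and the
# spring constant k are assumed values, not taken from the text.

G_ACC = 9.81           # m/s^2
MU_S, MU_K = 0.6, 0.4  # assumed static and kinetic friction coefficients

def friction_force(applied: float, mass: float, moving: bool) -> float:
    """Horizontal friction on a block on level ground (1-D, signed)."""
    normal = mass * G_ACC
    if not moving and abs(applied) <= MU_S * normal:
        return -applied                            # static: cancels the push
    return -math.copysign(MU_K * normal, applied)  # kinetic: mu_k * N, opposing

def spring_force(k: float, displacement: float) -> float:
    """Hooke's law F = -k * dx; the sign opposes the displacement."""
    return -k * displacement

print(friction_force(20.0, 10.0, moving=False))  # static regime: -20.0 N
print(friction_force(80.0, 10.0, moving=False))  # slips: kinetic, about -39.2 N
print(spring_force(k=200.0, displacement=0.05))  # stretched 5 cm: -10.0 N
```

The negative return value of spring_force mirrors the sign convention of Hooke's law, which the next paragraph explains.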
The minus sign accounts for the tendency of the force to act in opposition to the applied load.: ch.12  === Centripetal === For an object in uniform circular motion, the net force acting on the object equals: F = − m v 2 r r ^ , {\displaystyle \mathbf {F} =-{\frac {mv^{2}}{r}}{\hat {\mathbf {r} }},} where m {\displaystyle m} is the mass of the object, v {\displaystyle v} is the velocity of the object, r {\displaystyle r} is the distance to the center of the circular path, and r ^ {\displaystyle {\hat {\mathbf {r} }}} is the unit vector pointing in the radial direction outwards from the center. This means that the net force felt by the object is always directed toward the center of the curving path. Such forces act perpendicular to the velocity vector associated with the motion of an object, and therefore do not change the speed of the object (magnitude of the velocity), but only the direction of the velocity vector. More generally, the net force that accelerates an object can be resolved into a component that is perpendicular to the path, and one that is tangential to the path. This yields both the tangential force, which accelerates the object by either slowing it down or speeding it up, and the radial (centripetal) force, which changes its direction.: ch.12  === Continuum mechanics === Newton's laws and Newtonian mechanics in general were first developed to describe how forces affect idealized point particles rather than three-dimensional objects. In real life, matter has extended structure, and forces that act on one part of an object might affect other parts of an object. For situations where the lattice holding together the atoms in an object is able to flow, contract, expand, or otherwise change shape, the theories of continuum mechanics describe the way forces affect the material. For example, in extended fluids, differences in pressure result in forces being directed along the pressure gradients as follows: F V = − ∇ P , {\displaystyle {\frac {\mathbf {F} }{V}}=-\mathbf {\nabla } P,} where V {\displaystyle V} is the volume of the object in the fluid and P {\displaystyle P} is the scalar function that describes the pressure at all locations in space. Pressure gradients and differentials result in the buoyant force for fluids suspended in gravitational fields, winds in atmospheric science, and the lift associated with aerodynamics and flight.: ch.12  A specific instance of such a force that is associated with dynamic pressure is fluid resistance: a body force that resists the motion of an object through a fluid due to viscosity. For so-called "Stokes' drag" the force is approximately proportional to the velocity, but opposite in direction: F d = − b v , {\displaystyle \mathbf {F} _{\mathrm {d} }=-b\mathbf {v} ,} where: b {\displaystyle b} is a constant that depends on the properties of the fluid and the dimensions of the object (usually the cross-sectional area), and v {\displaystyle \mathbf {v} } is the velocity of the object.: ch.12  More formally, forces in continuum mechanics are fully described by a stress tensor with terms that are roughly defined as σ = F A , {\displaystyle \sigma ={\frac {F}{A}},} where A {\displaystyle A} is the relevant cross-sectional area for the volume for which the stress tensor is being calculated.
This formalism includes pressure terms associated with forces that act normal to the cross-sectional area (the matrix diagonals of the tensor) as well as shear terms associated with forces that act parallel to the cross-sectional area (the off-diagonal elements). The stress tensor accounts for forces that cause all strains (deformations) including tensile stresses and compressions.: 133–134 : 38-1–38-11  === Fictitious === There are forces that are frame dependent, meaning that they appear due to the adoption of non-Newtonian (that is, non-inertial) reference frames. Such forces include the centrifugal force and the Coriolis force. These forces are considered fictitious because they do not exist in frames of reference that are not accelerating.: ch.12  Because these forces are not genuine, they are also referred to as "pseudo forces".: 12-11  In general relativity, gravity becomes a fictitious force that arises in situations where spacetime deviates from a flat geometry. == Concepts derived from force == === Rotation and torque === Forces that cause extended objects to rotate are associated with torques. Mathematically, the torque of a force F {\displaystyle \mathbf {F} } is defined relative to an arbitrary reference point as the cross product: τ = r × F , {\displaystyle {\boldsymbol {\tau }}=\mathbf {r} \times \mathbf {F} ,} where r {\displaystyle \mathbf {r} } is the position vector of the force application point relative to the reference point.: 497  Torque is the rotation equivalent of force in the same way that angle is the rotational equivalent for position, angular velocity for velocity, and angular momentum for momentum. As a consequence of Newton's first law of motion, there exists rotational inertia that ensures that all bodies maintain their angular momentum unless acted upon by an unbalanced torque. Likewise, Newton's second law of motion can be used to derive an analogous equation for the instantaneous angular acceleration of the rigid body: τ = I α , {\displaystyle {\boldsymbol {\tau }}=I{\boldsymbol {\alpha }},} where I {\displaystyle I} is the moment of inertia of the body and α {\displaystyle {\boldsymbol {\alpha }}} is the angular acceleration of the body.: 502  This provides a definition for the moment of inertia, which is the rotational equivalent for mass. In more advanced treatments of mechanics, where the rotation over a time interval is described, the moment of inertia must be substituted by the moment of inertia tensor that, when properly analyzed, fully determines the characteristics of rotations including precession and nutation.: 96–113  Equivalently, the differential form of Newton's second law provides an alternative definition of torque: τ = d L d t , {\displaystyle {\boldsymbol {\tau }}={\frac {\mathrm {d} \mathbf {L} }{\mathrm {dt} }},} where L {\displaystyle \mathbf {L} } is the angular momentum of the particle. Newton's third law of motion requires that all objects exerting torques themselves experience equal and opposite torques, and therefore also directly implies the conservation of angular momentum for closed systems that experience rotations and revolutions through the action of internal torques. === Yank === The yank is defined as the rate of change of force: 131  Y = d F d t {\displaystyle \mathbf {Y} ={\frac {\mathrm {d} \mathbf {F} }{\mathrm {d} t}}} The term is used in biomechanical analysis, athletic assessment and robotic control. The second ("tug"), third ("snatch"), fourth ("shake"), and higher derivatives are rarely used.
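Since the yank is simply the time derivative of the force, it can be estimated from sampled force data by finite differences. The Python sketch below does this for an invented sinusoidal force trace using NumPy's gradient routine and compares the result with the analytic derivative at one point.

```python
import numpy as np

# Sketch: estimating the yank Y = dF/dt from a sampled force trace using
# finite differences. The sinusoidal force profile below is invented.

t = np.linspace(0.0, 1.0, 1001)        # sample times, s
F = 50.0 * np.sin(2 * np.pi * 2 * t)   # hypothetical force signal, N

Y = np.gradient(F, t)                  # numerical yank, N/s

# Analytic check: d/dt [A sin(w t)] = A w cos(w t) = A w at t = 0.
print(Y[0], 50.0 * (2 * np.pi * 2))    # both close to ~628 N/s
```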
=== Kinematic integrals === Forces can be used to define a number of physical concepts by integrating with respect to kinematic variables. For example, integrating with respect to time gives the definition of impulse: J = ∫ t 1 t 2 F d t , {\displaystyle \mathbf {J} =\int _{t_{1}}^{t_{2}}{\mathbf {F} \,\mathrm {d} t},} which by Newton's second law must be equivalent to the change in momentum (yielding the impulse–momentum theorem). Similarly, integrating with respect to position gives a definition for the work done by a force:: 13-3  W = ∫ x 1 x 2 F ⋅ d x , {\displaystyle W=\int _{\mathbf {x} _{1}}^{\mathbf {x} _{2}}{\mathbf {F} \cdot {\mathrm {d} \mathbf {x} }},} which is equivalent to changes in kinetic energy (yielding the work–energy theorem).: 13-3  Power P is the rate of change dW/dt of the work W, as the trajectory is extended by a position change d x {\displaystyle d\mathbf {x} } in a time interval dt:: 13-2  d W = d W d x ⋅ d x = F ⋅ d x , {\displaystyle \mathrm {d} W={\frac {\mathrm {d} W}{\mathrm {d} \mathbf {x} }}\cdot \mathrm {d} \mathbf {x} =\mathbf {F} \cdot \mathrm {d} \mathbf {x} ,} so P = d W d t = d W d x ⋅ d x d t = F ⋅ v , {\displaystyle P={\frac {\mathrm {d} W}{\mathrm {d} t}}={\frac {\mathrm {d} W}{\mathrm {d} \mathbf {x} }}\cdot {\frac {\mathrm {d} \mathbf {x} }{\mathrm {d} t}}=\mathbf {F} \cdot \mathbf {v} ,} with v = d x / d t {\displaystyle \mathbf {v} =\mathrm {d} \mathbf {x} /\mathrm {d} t} the velocity. === Potential energy === Instead of a force, often the mathematically related concept of a potential energy field is used. For instance, the gravitational force acting upon an object can be seen as the action of the gravitational field that is present at the object's location. Restating mathematically the definition of energy (via the definition of work), a potential scalar field U ( r ) {\displaystyle U(\mathbf {r} )} is defined as that field whose gradient is equal and opposite to the force produced at every point: F = − ∇ U . {\displaystyle \mathbf {F} =-\mathbf {\nabla } U.} Forces can be classified as conservative or nonconservative. Conservative forces are equivalent to the gradient of a potential while nonconservative forces are not.: ch.12  === Conservation === A conservative force that acts on a closed system has an associated mechanical work that allows energy to convert only between kinetic or potential forms. This means that for a closed system, the net mechanical energy is conserved whenever a conservative force acts on the system. The force, therefore, is related directly to the difference in potential energy between two different locations in space, and can be considered to be an artifact of the potential field in the same way that the direction and amount of a flow of water can be considered to be an artifact of the contour map of the elevation of an area.: ch.12  Conservative forces include gravity, the electromagnetic force, and the spring force. Each of these forces has models that are dependent on position, often given as a radial vector r {\displaystyle \mathbf {r} } emanating from spherically symmetric potentials. Examples of this follow: For gravity: F g = − G m 1 m 2 r 2 r ^ , {\displaystyle \mathbf {F} _{\text{g}}=-{\frac {Gm_{1}m_{2}}{r^{2}}}{\hat {\mathbf {r} }},} where G {\displaystyle G} is the gravitational constant, and m n {\displaystyle m_{n}} is the mass of object n.
For electrostatic forces: F e = q 1 q 2 4 π ε 0 r 2 r ^ , {\displaystyle \mathbf {F} _{\text{e}}={\frac {q_{1}q_{2}}{4\pi \varepsilon _{0}r^{2}}}{\hat {\mathbf {r} }},} where ε 0 {\displaystyle \varepsilon _{0}} is the electric permittivity of free space, and q n {\displaystyle q_{n}} is the electric charge of object n. For spring forces: F s = − k r r ^ , {\displaystyle \mathbf {F} _{\text{s}}=-kr{\hat {\mathbf {r} }},} where k {\displaystyle k} is the spring constant.: ch.12  For certain physical scenarios, it is impossible to model forces as being due to a simple gradient of potentials. This is often due to a macroscopic statistical average of microstates. For example, static friction is caused by the gradients of numerous electrostatic potentials between the atoms, but manifests as a force model that is independent of any macroscale position vector. Nonconservative forces other than friction include other contact forces, tension, compression, and drag. For any sufficiently detailed description, all these forces are the results of conservative ones since each of these macroscopic forces is the net result of the gradients of microscopic potentials.: ch.12  The connection between macroscopic nonconservative forces and microscopic conservative forces is described by detailed treatment with statistical mechanics. In macroscopic closed systems, nonconservative forces act to change the internal energies of the system, and are often associated with the transfer of heat. According to the Second law of thermodynamics, nonconservative forces necessarily result in energy transformations within closed systems from ordered to more random conditions as entropy increases.: ch.12  == Units == The SI unit of force is the newton (symbol N), which is the force required to accelerate a one kilogram mass at a rate of one meter per second squared, or kg·m·s−2. The corresponding CGS unit is the dyne, the force required to accelerate a one gram mass by one centimeter per second squared, or g·cm·s−2. A newton is thus equal to 100,000 dynes. The gravitational foot-pound-second English unit of force is the pound-force (lbf), defined as the force exerted by gravity on a pound-mass in the standard gravitational field of 9.80665 m·s−2. The pound-force provides an alternative unit of mass: one slug is the mass that will accelerate by one foot per second squared when acted on by one pound-force. An alternative unit of force in a different foot–pound–second system, the absolute fps system, is the poundal, defined as the force required to accelerate a one-pound mass at a rate of one foot per second squared. The pound-force has a metric counterpart, less commonly used than the newton: the kilogram-force (kgf, sometimes kilopond) is the force exerted by standard gravity on one kilogram of mass. The kilogram-force leads to an alternate, but rarely used unit of mass: the metric slug (sometimes mug or hyl) is that mass that accelerates at 1 m·s−2 when subjected to a force of 1 kgf. The kilogram-force is not part of the modern SI system and is generally deprecated, though it is sometimes used for expressing aircraft weight, jet thrust, bicycle spoke tension, torque wrench settings, and engine output torque. See also Ton-force. == Revisions of the force concept == At the beginning of the 20th century, new physical ideas emerged to explain experimental results in astronomical and submicroscopic realms.
As discussed below, relativity alters the definition of momentum, and quantum mechanics reuses the concept of "force" in microscopic contexts where Newton's laws do not apply directly. === Special theory of relativity === In the special theory of relativity, mass and energy are equivalent (as can be seen by calculating the work required to accelerate an object). When an object's velocity increases, so does its energy and hence its mass equivalent (inertia). It thus requires more force to accelerate it the same amount than it did at a lower velocity. Newton's second law, F = d p d t , {\displaystyle \mathbf {F} ={\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}},} remains valid because it is a mathematical definition.: 855–876  But for momentum to be conserved at relativistic relative velocity, v {\displaystyle v} , momentum must be redefined as: p = m 0 v 1 − v 2 / c 2 , {\displaystyle \mathbf {p} ={\frac {m_{0}\mathbf {v} }{\sqrt {1-v^{2}/c^{2}}}},} where m 0 {\displaystyle m_{0}} is the rest mass and c {\displaystyle c} the speed of light. The expression relating force and acceleration for a particle with constant non-zero rest mass m {\displaystyle m} moving in the x {\displaystyle x} direction at velocity v {\displaystyle v} is:: 216  F = ( γ 3 m a x , γ m a y , γ m a z ) , {\displaystyle \mathbf {F} =\left(\gamma ^{3}ma_{x},\gamma ma_{y},\gamma ma_{z}\right),} where γ = 1 1 − v 2 / c 2 {\displaystyle \gamma ={\frac {1}{\sqrt {1-v^{2}/c^{2}}}}} is called the Lorentz factor. The Lorentz factor increases steeply as the relative velocity approaches the speed of light. Consequently, a greater and greater force must be applied to produce the same acceleration at extreme velocities. The relative velocity cannot reach c {\displaystyle c} .: 26 : §15–8  If v {\displaystyle v} is very small compared to c {\displaystyle c} , then γ {\displaystyle \gamma } is very close to 1 and F = m a {\displaystyle \mathbf {F} =m\mathbf {a} } is a close approximation. Even for use in relativity, one can restore the form of F μ = m A μ {\displaystyle F^{\mu }=mA^{\mu }} through the use of four-vectors. This relation is correct in relativity when F μ {\displaystyle F^{\mu }} is the four-force, m {\displaystyle m} is the invariant mass, and A μ {\displaystyle A^{\mu }} is the four-acceleration. The general theory of relativity incorporates a more radical departure from the Newtonian way of thinking about force, specifically gravitational force. This reimagining of the nature of gravity is described more fully below. === Quantum mechanics === Quantum mechanics is a theory of physics originally developed in order to understand microscopic phenomena: behavior at the scale of molecules, atoms or subatomic particles. Generally and loosely speaking, the smaller a system is, the more an adequate mathematical model will require understanding quantum effects. The conceptual underpinning of quantum physics is different from that of classical physics. Instead of thinking about quantities like position, momentum, and energy as properties that an object has, one considers what result might appear when a measurement of a chosen type is performed. Quantum mechanics allows the physicist to calculate the probability that a chosen measurement will elicit a particular result. The expectation value for a measurement is the average of the possible results it might yield, weighted by their probabilities of occurrence. In quantum mechanics, interactions are typically described in terms of energy rather than force.
The Ehrenfest theorem provides a connection between quantum expectation values and the classical concept of force, a connection that is necessarily inexact, as quantum physics is fundamentally different from classical physics. In quantum physics, the Born rule is used to calculate the expectation values of a position measurement or a momentum measurement. These expectation values will generally change over time; that is, depending on the time at which (for example) a position measurement is performed, the probabilities for its different possible outcomes will vary. The Ehrenfest theorem says, roughly speaking, that the equations describing how these expectation values change over time have a form reminiscent of Newton's second law, with a force defined as the negative derivative of the potential energy. However, the more pronounced quantum effects are in a given situation, the more difficult it is to derive meaningful conclusions from this resemblance. Quantum mechanics also introduces two new constraints that interact with forces at the submicroscopic scale and which are especially important for atoms. Despite the strong attraction of the nucleus, the uncertainty principle limits the minimum extent of an electron probability distribution and the Pauli exclusion principle prevents electrons from sharing the same probability distribution. This gives rise to an emergent pressure known as degeneracy pressure. The dynamic equilibrium between the degeneracy pressure and the attractive electromagnetic force gives atoms, molecules, liquids, and solids stability. === Quantum field theory === In modern particle physics, forces and the acceleration of particles are explained as a mathematical by-product of exchange of momentum-carrying gauge bosons. With the development of quantum field theory and general relativity, it was realized that force is a redundant concept arising from conservation of momentum (4-momentum in relativity and momentum of virtual particles in quantum electrodynamics). The conservation of momentum can be directly derived from the homogeneity or symmetry of space and so is usually considered more fundamental than the concept of a force. Thus the currently known fundamental forces are considered more accurately to be "fundamental interactions".: 199–128  While sophisticated mathematical descriptions are needed to predict, in full detail, the result of such interactions, there is a conceptually simple way to describe them through the use of Feynman diagrams. In a Feynman diagram, each matter particle is represented as a straight line (see world line) traveling through time, which normally increases up or to the right in the diagram. Matter and anti-matter particles are identical except for their direction of propagation through the Feynman diagram. World lines of particles intersect at interaction vertices, and the Feynman diagram represents any force arising from an interaction as occurring at the vertex with an associated instantaneous change in the direction of the particle world lines. Gauge bosons are emitted away from the vertex as wavy lines and, in the case of virtual particle exchange, are absorbed at an adjacent vertex. The utility of Feynman diagrams is that other types of physical phenomena that are part of the general picture of fundamental interactions but are conceptually separate from forces can also be described using the same rules.
For example, a Feynman diagram can describe in succinct detail how a neutron decays into an electron, proton, and antineutrino, an interaction mediated by the same gauge boson that is responsible for the weak nuclear force. == Fundamental interactions == All of the known forces of the universe are classified into four fundamental interactions. The strong and the weak forces act only at very short distances, and are responsible for the interactions between subatomic particles, including nucleons and compound nuclei. The electromagnetic force acts between electric charges, and the gravitational force acts between masses. All other forces in nature derive from these four fundamental interactions operating within quantum mechanics, including the constraints introduced by the Schrödinger equation and the Pauli exclusion principle. For example, friction is a manifestation of the electromagnetic force acting between atoms of two surfaces. The forces in springs, modeled by Hooke's law, are also the result of electromagnetic forces. Centrifugal forces are acceleration forces that arise simply from the acceleration of rotating frames of reference.: 12-11 : 359  The fundamental theories for forces developed from the unification of different ideas. For example, Newton's universal theory of gravitation showed that the force responsible for objects falling near the surface of the Earth is also the force responsible for the falling of celestial bodies about the Earth (the Moon) and around the Sun (the planets). Michael Faraday and James Clerk Maxwell demonstrated that electric and magnetic forces were unified through a theory of electromagnetism. In the 20th century, the development of quantum mechanics led to a modern understanding that the first three fundamental forces (all except gravity) are manifestations of matter (fermions) interacting by exchanging virtual particles called gauge bosons. This Standard Model of particle physics assumes a similarity between the forces and led scientists to predict the unification of the weak and electromagnetic forces in electroweak theory, which was subsequently confirmed by observation. === Gravitational === Newton's law of gravitation is an example of action at a distance: one body, like the Sun, exerts an influence upon any other body, like the Earth, no matter how far apart they are. Moreover, this action at a distance is instantaneous. According to Newton's theory, the one body shifting position changes the gravitational pulls felt by all other bodies, all at the same instant of time. Albert Einstein recognized that this was inconsistent with special relativity and its prediction that influences cannot travel faster than the speed of light. So, he sought a new theory of gravitation that would be relativistically consistent. Mercury's orbit did not match that predicted by Newton's law of gravitation. Some astrophysicists predicted the existence of an undiscovered planet (Vulcan) that could explain the discrepancies. When Einstein formulated his theory of general relativity (GR) he focused on Mercury's problematic orbit and found that his theory added a correction, which could account for the discrepancy. This was the first time that Newton's theory of gravity had been shown to be inexact. Since then, general relativity has been acknowledged as the theory that best explains gravity. 
In GR, gravitation is not viewed as a force, but rather, objects moving freely in gravitational fields travel under their own inertia in straight lines through curved spacetime – defined as the shortest spacetime path between two spacetime events. From the perspective of the object, all motion occurs as if there were no gravitation whatsoever. It is only when observing the motion in a global sense that the curvature of spacetime can be observed and the force is inferred from the object's curved path. Thus, the straight line path in spacetime is seen as a curved line in space, and it is called the ballistic trajectory of the object. For example, a basketball thrown from the ground moves in a parabola, as it is in a uniform gravitational field. Its spacetime trajectory is almost a straight line, slightly curved (with the radius of curvature of the order of a few light-years). The time derivative of the changing momentum of the object is what we label as "gravitational force". === Electromagnetic === Maxwell's equations and the set of techniques built around them adequately describe a wide range of physics involving force in electricity and magnetism. This classical theory already includes relativity effects. Understanding quantized electromagnetic interactions between elementary particles requires quantum electrodynamics (QED). In QED, photons are fundamental exchange particles, describing all interactions relating to electromagnetism including the electromagnetic force. === Strong nuclear === There are two "nuclear forces", which today are usually described as interactions that take place in quantum theories of particle physics. The strong nuclear force is the force responsible for the structural integrity of atomic nuclei, and gains its name from its ability to overpower the electromagnetic repulsion between protons.: 940  The strong force is today understood to represent the interactions between quarks and gluons as detailed by the theory of quantum chromodynamics (QCD). The strong force is the fundamental force mediated by gluons, acting upon quarks, antiquarks, and the gluons themselves. The strong force only acts directly upon elementary particles. A residual of the force is observed between hadrons (notably, the nucleons in atomic nuclei), known as the nuclear force. Here the strong force acts indirectly, transmitted as gluons that form part of the virtual pi and rho mesons, the classical transmitters of the nuclear force. The failure of many searches for free quarks has shown that the elementary particles affected are not directly observable. This phenomenon is called color confinement.: 232  === Weak nuclear === Unique among the fundamental interactions, the weak nuclear force creates no bound states. The weak force is due to the exchange of the heavy W and Z bosons. Since the weak force is mediated by two types of bosons, it can be divided into two types of interaction or "vertices": charged current, involving the electrically charged W+ and W− bosons, and neutral current, involving electrically neutral Z0 bosons. The most familiar effect of weak interaction is beta decay (of neutrons in atomic nuclei) and the associated radioactivity.: 951  This is a type of charged-current interaction. The word "weak" derives from the fact that the field strength is some 10^13 times less than that of the strong force. Still, it is stronger than gravity over short distances.
A consistent electroweak theory has also been developed, which shows that electromagnetic forces and the weak force are indistinguishable at temperatures in excess of approximately 10^15 K. Such temperatures occurred in the plasma collisions in the early moments of the Big Bang.: 201  == See also == Contact force – Force between two objects that are in physical contact Force control – Force control is given by the machine Force gauge – Instrument for measuring force Orders of magnitude (force) – Comparison of a wide range of physical forces Parallel force system – Situation in mechanical engineering Rigid body – Physical object which does not deform when forces or moments are exerted on it Specific force – Concept in physics == References == == External links == "Classical Mechanics, Week 2: Newton's Laws". MIT OpenCourseWare. Retrieved 2023-08-09. "Fundamentals of Physics I, Lecture 3: Newton's Laws of Motion". Open Yale Courses. Retrieved 2023-08-09.
Wikipedia/Force
Newton–Cartan theory (or geometrized Newtonian gravitation) is a geometrical re-formulation, as well as a generalization, of Newtonian gravity first introduced by Élie Cartan in 1923 and Kurt Friedrichs, and later developed by G. Dautcourt, W. G. Dixon, P. Havas, H. Künzle, Andrzej Trautman, and others. In this re-formulation, the structural similarities between Newton's theory and Albert Einstein's general theory of relativity are readily seen, and it has been used by Cartan and Friedrichs to give a rigorous formulation of the way in which Newtonian gravity can be seen as a specific limit of general relativity, and by Jürgen Ehlers to extend this correspondence to specific solutions of general relativity. == Classical spacetimes == In Newton–Cartan theory, one starts with a smooth four-dimensional manifold M {\displaystyle M} and defines two (degenerate) metrics: a temporal metric t a b {\displaystyle t_{ab}} with signature ( 1 , 0 , 0 , 0 ) {\displaystyle (1,0,0,0)} , used to assign temporal lengths to vectors on M {\displaystyle M} , and a spatial metric h a b {\displaystyle h^{ab}} with signature ( 0 , 1 , 1 , 1 ) {\displaystyle (0,1,1,1)} . One also requires that these two metrics satisfy a transversality (or "orthogonality") condition, h a b t b c = 0 {\displaystyle h^{ab}t_{bc}=0} . Thus, one defines a classical spacetime as an ordered quadruple ( M , t a b , h a b , ∇ ) {\displaystyle (M,t_{ab},h^{ab},\nabla )} , where t a b {\displaystyle t_{ab}} and h a b {\displaystyle h^{ab}} are as described, ∇ {\displaystyle \nabla } is a metrics-compatible covariant derivative operator, and the metrics satisfy the orthogonality condition. One might say that a classical spacetime is the analog of a relativistic spacetime ( M , g a b ) {\displaystyle (M,g_{ab})} , where g a b {\displaystyle g_{ab}} is a smooth Lorentzian metric on the manifold M {\displaystyle M} . == Geometric formulation of Poisson's equation == In Newton's theory of gravitation, Poisson's equation reads Δ U = 4 π G ρ {\displaystyle \Delta U=4\pi G\rho \,} where U {\displaystyle U} is the gravitational potential, G {\displaystyle G} is the gravitational constant, and ρ {\displaystyle \rho } is the mass density. The weak equivalence principle motivates a geometric version of the equation of motion for a point particle in the potential U {\displaystyle U} : m t x → ¨ = − m g ∇ → U {\displaystyle m_{t}\,{\ddot {\vec {x}}}=-m_{g}{\vec {\nabla }}U} where m t {\displaystyle m_{t}} is the inertial mass and m g {\displaystyle m_{g}} the gravitational mass. Since, according to the weak equivalence principle, m t = m g {\displaystyle m_{t}=m_{g}} , the corresponding equation of motion x → ¨ = − ∇ → U {\displaystyle {\ddot {\vec {x}}}=-{\vec {\nabla }}U} no longer contains a reference to the mass of the particle. Following the idea that the solution of the equation then is a property of the curvature of space, a connection is constructed so that the geodesic equation d 2 x λ d s 2 + Γ μ ν λ d x μ d s d x ν d s = 0 {\displaystyle {\frac {d^{2}x^{\lambda }}{ds^{2}}}+\Gamma _{\mu \nu }^{\lambda }{\frac {dx^{\mu }}{ds}}{\frac {dx^{\nu }}{ds}}=0} represents the equation of motion of a point particle in the potential U {\displaystyle U} .
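The mass-independence of the equation of motion can be illustrated in a few lines of code. The Python sketch below is not part of the formalism; it simply integrates x'' = -grad U for an assumed point-mass potential U = -1/|x| (units chosen so that GM = 1) with a semi-implicit Euler step, and no particle mass appears anywhere in the update.

```python
import numpy as np

# Sketch: integrating the geometrised equation of motion x'' = -grad U.
# The potential U = -1/|x| is an assumed test case; note that no particle
# mass enters the update, as the weak equivalence principle requires.

def grad_U(pos):
    r = np.linalg.norm(pos)
    return pos / r**3          # grad(-1/r) = x / r^3

pos = np.array([1.0, 0.0])     # initial position
vel = np.array([0.0, 1.0])     # speed chosen for a near-circular orbit
dt = 1e-3

for _ in range(10_000):        # semi-implicit (symplectic) Euler steps
    vel -= grad_U(pos) * dt    # x'' = -grad U
    pos += vel * dt

print(np.linalg.norm(pos))     # the orbital radius stays close to 1.0
```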
The resulting connection is Γ μ ν λ = γ λ ρ U , ρ Ψ μ Ψ ν {\displaystyle \Gamma _{\mu \nu }^{\lambda }=\gamma ^{\lambda \rho }U_{,\rho }\Psi _{\mu }\Psi _{\nu }} with Ψ μ = δ μ 0 {\displaystyle \Psi _{\mu }=\delta _{\mu }^{0}} and γ μ ν = δ A μ δ B ν δ A B {\displaystyle \gamma ^{\mu \nu }=\delta _{A}^{\mu }\delta _{B}^{\nu }\delta ^{AB}} ( A , B = 1 , 2 , 3 {\displaystyle A,B=1,2,3} ). The connection has been constructed in one inertial system but can be shown to be valid in any inertial system by showing the invariance of Ψ μ {\displaystyle \Psi _{\mu }} and γ μ ν {\displaystyle \gamma ^{\mu \nu }} under Galilei transformations. The Riemann curvature tensor in inertial system coordinates of this connection is then given by R κ μ ν λ = 2 γ λ σ U , σ [ μ Ψ ν ] Ψ κ {\displaystyle R_{\kappa \mu \nu }^{\lambda }=2\gamma ^{\lambda \sigma }U_{,\sigma [\mu }\Psi _{\nu ]}\Psi _{\kappa }} where the brackets A [ μ ν ] = 1 2 ! [ A μ ν − A ν μ ] {\displaystyle A_{[\mu \nu ]}={\frac {1}{2!}}[A_{\mu \nu }-A_{\nu \mu }]} mean the antisymmetric combination of the tensor A μ ν {\displaystyle A_{\mu \nu }} . The Ricci tensor is given by R κ ν = Δ U Ψ κ Ψ ν {\displaystyle R_{\kappa \nu }=\Delta U\Psi _{\kappa }\Psi _{\nu }\,} which leads to the following geometric formulation of Poisson's equation: R μ ν = 4 π G ρ Ψ μ Ψ ν {\displaystyle R_{\mu \nu }=4\pi G\rho \Psi _{\mu }\Psi _{\nu }} . More explicitly, if the roman indices i and j range over the spatial coordinates 1, 2, 3, then the connection is given by Γ 00 i = U , i {\displaystyle \Gamma _{00}^{i}=U_{,i}} , the Riemann curvature tensor by R 0 j 0 i = − R 00 j i = U , i j {\displaystyle R_{0j0}^{i}=-R_{00j}^{i}=U_{,ij}} and the Ricci tensor and Ricci scalar by R = R 00 = Δ U {\displaystyle R=R_{00}=\Delta U} where all components not listed equal zero. Note that this formulation does not require introducing the concept of a metric: the connection alone gives all the physical information. == Bargmann lift == It was shown that the four-dimensional Newton–Cartan theory of gravitation can be reformulated as Kaluza–Klein reduction of five-dimensional Einstein gravity along a null-like direction. This lifting is considered to be useful for non-relativistic holographic models. == References == == Bibliography == Cartan, Élie (1923), "Sur les variétés à connexion affine et la théorie de la relativité généralisée (Première partie)" (PDF), Annales Scientifiques de l'École Normale Supérieure, 40: 325, doi:10.24033/asens.751 Cartan, Élie (1924), "Sur les variétés à connexion affine, et la théorie de la relativité généralisée (Première partie) (Suite)" (PDF), Annales Scientifiques de l'École Normale Supérieure, 41: 1, doi:10.24033/asens.753 Cartan, Élie (1955), Œuvres complètes, vol. III/1, Gauthier-Villars, pp. 659, 799 Renn, Jürgen; Schemmel, Matthias, eds. (2007), The Genesis of General Relativity, vol. 4, Springer, pp. 1107–1129 (English translation of Ann. Sci. Éc. Norm. Supér. #40 paper) Chapter 1 of Ehlers, Jürgen (1973), "Survey of general relativity theory", in Israel, Werner (ed.), Relativity, Astrophysics and Cosmology, D. Reidel, pp. 1–125, ISBN 90-277-0369-8
Wikipedia/Newton–Cartan_theory
The Schrödinger–Newton equation, sometimes referred to as the Newton–Schrödinger or Schrödinger–Poisson equation, is a nonlinear modification of the Schrödinger equation with a Newtonian gravitational potential, where the gravitational potential emerges from the treatment of the wave function as a mass density, including a term that represents interaction of a particle with its own gravitational field. The inclusion of a self-interaction term represents a fundamental alteration of quantum mechanics. It can be written either as a single integro-differential equation or as a coupled system of a Schrödinger and a Poisson equation. In the latter case it is also referred to in the plural form. The Schrödinger–Newton equation was first considered by Ruffini and Bonazzola in connection with self-gravitating boson stars. In this context of classical general relativity it appears as the non-relativistic limit of either the Klein–Gordon equation or the Dirac equation in a curved space-time together with the Einstein field equations. The equation also describes fuzzy dark matter and approximates classical cold dark matter described by the Vlasov–Poisson equation in the limit that the particle mass is large. Later on it was proposed as a model to explain the quantum wave function collapse by Lajos Diósi and Roger Penrose, from whom the name "Schrödinger–Newton equation" originates. In this context, matter has quantum properties, while gravity remains classical even at the fundamental level. The Schrödinger–Newton equation was therefore also suggested as a way to test the necessity of quantum gravity. In a third context, the Schrödinger–Newton equation appears as a Hartree approximation for the mutual gravitational interaction in a system of a large number of particles. In this context, a corresponding equation for the electromagnetic Coulomb interaction was suggested by Philippe Choquard at the 1976 Symposium on Coulomb Systems in Lausanne to describe one-component plasmas. Elliott H. Lieb provided the proof for the existence and uniqueness of a stationary ground state and referred to the equation as the Choquard equation. == Overview == As a coupled system, the Schrödinger–Newton equations are the usual Schrödinger equation with a self-interaction gravitational potential i ℏ ∂ Ψ ∂ t = − ℏ 2 2 m ∇ 2 Ψ + V Ψ + m Φ Ψ , {\displaystyle i\hbar \ {\frac {\partial \Psi }{\ \partial t\ }}=-{\frac {\ \hbar ^{2}}{\ 2\ m\ }}\ \nabla ^{2}\Psi \;+\;V\ \Psi \;+\;m\ \Phi \ \Psi \ ,} where  V  is an ordinary potential, and the gravitational potential Φ , {\displaystyle \ \Phi \ ,} representing the interaction of the particle with its own gravitational field, satisfies the Poisson equation ∇ 2 Φ = 4 π G m | Ψ | 2 . {\displaystyle \ \nabla ^{2}\Phi =4\pi \ G\ m\ |\Psi |^{2}~.} Because of the back coupling of the wave-function into the potential, it is a nonlinear system. Replacing Φ {\displaystyle \ \Phi \ } with the solution to the Poisson equation produces the integro-differential form of the Schrödinger–Newton equation: i ℏ ∂ Ψ ∂ t = [ − ℏ 2 2 m ∇ 2 + V − G m 2 ∫ | Ψ ( t , y ) | 2 | x − y | d 3 y ] Ψ . {\displaystyle i\hbar \ {\frac {\ \partial \Psi \ }{\partial t}}=\left[\ -{\frac {\ \hbar ^{2}}{\ 2\ m\ }}\ \nabla ^{2}\;+\;V\;-\;G\ m^{2}\int {\frac {\ |\Psi (t,\mathbf {y} )|^{2}}{\ |\mathbf {x} -\mathbf {y} |\ }}\;\mathrm {d} ^{3}\mathbf {y} \ \right]\Psi ~.} It is obtained from the above system of equations by integration of the Poisson equation under the assumption that the potential must vanish at infinity. 
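For a spherically symmetric state, the Poisson equation above reduces to two radial quadratures: with rho = m|Psi|^2, the potential is Phi(r) = -G [ M(r)/r + 4*pi * integral from r to infinity of rho(s) s ds ], where M(r) is the mass enclosed within radius r. This reduction is a standard result for spherical sources, not something stated in the text; the Python sketch below evaluates it for a Gaussian test profile, with the particle mass and width being arbitrary illustrative values and trapezoidal sums serving as a rough numerical scheme.

```python
import numpy as np

# Sketch: the self-gravity potential of a spherically symmetric state,
#   Phi(r) = -G * [ M(r)/r + 4*pi * int_r^inf rho(s) s ds ],  rho = m|Psi|^2.
# The Gaussian profile and the parameter values are assumed for illustration.

G, m, sigma = 6.674e-11, 1e-17, 1e-6   # assumed SI values

r = np.linspace(1e-9, 10 * sigma, 4000)
psi2 = (np.pi * sigma**2) ** -1.5 * np.exp(-(r / sigma) ** 2)  # normalised |Psi|^2
rho = m * psi2

def cumtrapz0(y, x):
    """Cumulative trapezoidal integral of y over x, starting from 0."""
    seg = 0.5 * (y[1:] + y[:-1]) * np.diff(x)
    return np.concatenate(([0.0], np.cumsum(seg)))

M_inside = cumtrapz0(4 * np.pi * rho * r**2, r)  # mass enclosed within r
outer = cumtrapz0(4 * np.pi * rho * r, r)
tail = outer[-1] - outer                         # integral of 4*pi*rho*s from r outward

Phi = -G * (M_inside / r + tail)                 # solves laplacian(Phi) = 4*pi*G*rho
print(Phi[0], Phi[-1])                           # the well is deepest at the origin
```

Far from the source, Phi approaches the point-mass value -G*m/r, consistent with the boundary condition that the potential vanish at infinity.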
Mathematically, the Schrödinger–Newton equation is a special case of the Hartree equation for n = 2 . The equation retains most of the properties of the linear Schrödinger equation. In particular, it is invariant under constant phase shifts, leading to conservation of probability and exhibits full Galilei invariance. In addition to these symmetries, a simultaneous transformation m → μ m , t → μ − 5 t , x → μ − 3 x , ψ ( t , x ) → μ 9 / 2 ψ ( μ 5 t , μ 3 x ) {\displaystyle m\to \mu \ m\ ,\qquad t\to \mu ^{-5}t\ ,\qquad \mathbf {x} \to \mu ^{-3}\mathbf {x} \ ,\qquad \psi (t,\mathbf {x} )\to \mu ^{9/2}\psi (\mu ^{5}t,\mu ^{3}\mathbf {x} )} maps solutions of the Schrödinger–Newton equation to solutions. The stationary equation, which can be obtained in the usual manner via a separation of variables, possesses an infinite family of normalisable solutions of which only the stationary ground state is stable. === Relation to semi-classical and quantum gravity === The Schrödinger–Newton equation can be derived under the assumption that gravity remains classical, even at the fundamental level, and that the right way to couple quantum matter to gravity is by means of the semiclassical Einstein equations. In this case, a Newtonian gravitational potential term is added to the Schrödinger equation, where the source of this gravitational potential is the expectation value of the mass density operator or mass flux-current. In this regard, if gravity is fundamentally classical, the Schrödinger–Newton equation is a fundamental one-particle equation, which can be generalised to the case of many particles (see below). If, on the other hand, the gravitational field is quantised, the fundamental Schrödinger equation remains linear. The Schrödinger–Newton equation is then only valid as an approximation for the gravitational interaction in systems of a large number of particles and has no effect on the centre of mass. == Many-body equation and centre-of-mass motion == If the Schrödinger–Newton equation is considered as a fundamental equation, there is a corresponding N-body equation that was already given by Diósi and can be derived from semiclassical gravity in the same way as the one-particle equation: i ℏ ∂ ∂ t Ψ ( t , x 1 , … , x N ) = ( − ∑ j = 1 N ℏ 2 2 m j ∇ j 2 + ∑ j ≠ k V j k ( | x j − x k | ) − G ∑ j , k = 1 N m j m k ∫ d 3 y 1 ⋯ d 3 y N | Ψ ( t , y 1 , … , y N ) | 2 | x j − y k | ) Ψ ( t , x 1 , … , x N ) . {\displaystyle {\begin{aligned}i\hbar {\frac {\partial }{\partial t}}\Psi (t,\mathbf {x} _{1},\dots ,\mathbf {x} _{N})={\bigg (}&-\sum _{j=1}^{N}{\frac {\hbar ^{2}}{2m_{j}}}\nabla _{j}^{2}+\sum _{j\neq k}V_{jk}{\big (}|\mathbf {x} _{j}-\mathbf {x} _{k}|{\big )}\\&-G\sum _{j,k=1}^{N}m_{j}m_{k}\int \mathrm {d} ^{3}\mathbf {y} _{1}\cdots \mathrm {d} ^{3}\mathbf {y} _{N}\,{\frac {|\Psi (t,\mathbf {y} _{1},\dots ,\mathbf {y} _{N})|^{2}}{|\mathbf {x} _{j}-\mathbf {y} _{k}|}}{\bigg )}\Psi (t,\mathbf {x} _{1},\dots ,\mathbf {x} _{N}).\end{aligned}}} The potential V j k {\displaystyle V_{jk}} contains all the mutual linear interactions, e.g. electrodynamical Coulomb interactions, while the gravitational-potential term is based on the assumption that all particles perceive the same gravitational potential generated by all the marginal distributions for all the particles together. In a Born–Oppenheimer-like approximation, this N-particle equation can be separated into two equations, one describing the relative motion, the other providing the dynamics of the centre-of-mass wave-function. 
For the relative motion, the gravitational interaction does not play a role, since it is usually weak compared to the other interactions represented by V j k {\displaystyle V_{jk}} . But it has a significant influence on the centre-of-mass motion. While V j k {\displaystyle V_{jk}} only depends on relative coordinates and therefore does not contribute to the centre-of-mass dynamics at all, the nonlinear Schrödinger–Newton interaction does contribute. In the aforementioned approximation, the centre-of-mass wave-function satisfies the following nonlinear Schrödinger equation: i ℏ ∂ ψ c ( t , R ) ∂ t = ( ℏ 2 2 M ∇ 2 − G ∫ d 3 R ′ ∫ d 3 y ∫ d 3 z | ψ c ( t , R ′ ) | 2 ρ c ( y ) ρ c ( z ) | R − R ′ − y + z | ) ψ c ( t , R ) , {\displaystyle i\hbar {\frac {\partial \psi _{c}(t,\mathbf {R} )}{\partial t}}=\left({\frac {\hbar ^{2}}{2M}}\nabla ^{2}-G\int \mathrm {d} ^{3}\mathbf {R'} \,\int \mathrm {d} ^{3}\mathbf {y} \,\int \mathrm {d} ^{3}\mathbf {z} \,{\frac {|\psi _{c}(t,\mathbf {R'} )|^{2}\,\rho _{c}(\mathbf {y} )\rho _{c}(\mathbf {z} )}{\left|\mathbf {R} -\mathbf {R'} -\mathbf {y} +\mathbf {z} \right|}}\right)\psi _{c}(t,\mathbf {R} ),} where M is the total mass, R is the relative coordinate, ψ c {\displaystyle \psi _{c}} the centre-of-mass wave-function, and ρ c {\displaystyle \rho _{c}} is the mass density of the many-body system (e.g. a molecule or a rock) relative to its centre of mass. In the limiting case of a wide wave-function, i.e. where the width of the centre-of-mass distribution is large compared to the size of the considered object, the centre-of-mass motion is approximated well by the Schrödinger–Newton equation for a single particle. The opposite case of a narrow wave-function can be approximated by a harmonic-oscillator potential, where the Schrödinger–Newton dynamics leads to a rotation in phase space. In the context where the Schrödinger–Newton equation appears as a Hartree approximation, the situation is different. In this case the full N-particle wave-function is considered a product state of N single-particle wave-functions, where each of those factors obeys the Schrödinger–Newton equation. The dynamics of the centre-of-mass, however, remain strictly linear in this picture. This is true in general: nonlinear Hartree equations never have an influence on the centre of mass. == Significance of effects == A rough order-of-magnitude estimate of the regime where effects of the Schrödinger–Newton equation become relevant can be obtained by a rather simple reasoning. For a spherically symmetric Gaussian, Ψ ( t = 0 , r ) = ( π σ 2 ) − 3 / 4 exp ⁡ ( − r 2 2 σ 2 ) , {\displaystyle \Psi (t=0,r)=(\pi \sigma ^{2})^{-3/4}\exp \left(-{\frac {r^{2}}{2\sigma ^{2}}}\right),} the free linear Schrödinger equation has the solution Ψ ( t , r ) = ( π σ 2 ) − 3 / 4 ( 1 + i ℏ t m σ 2 ) − 3 / 2 exp ⁡ ( − r 2 2 σ 2 ( 1 + i ℏ t m σ 2 ) ) . {\displaystyle \Psi (t,r)=(\pi \sigma ^{2})^{-3/4}\left(1+{\frac {i\hbar t}{m\sigma ^{2}}}\right)^{-3/2}\exp \left(-{\frac {r^{2}}{2\sigma ^{2}\left(1+{\frac {i\hbar t}{m\sigma ^{2}}}\right)}}\right).} The peak of the radial probability density 4 π r 2 | Ψ | 2 {\displaystyle 4\pi r^{2}|\Psi |^{2}} can be found at r p = σ 1 + ℏ 2 t 2 m 2 σ 4 . 
{\displaystyle r_{p}=\sigma {\sqrt {1+{\frac {\hbar ^{2}t^{2}}{m^{2}\sigma ^{4}}}}}.} Now we set the acceleration r ¨ p = ℏ 2 m 2 r p 3 {\displaystyle {\ddot {r}}_{p}={\frac {\hbar ^{2}}{m^{2}r_{p}^{3}}}} of this peak probability equal to the acceleration due to Newtonian gravity: r ¨ = − G m r 2 , {\displaystyle {\ddot {r}}=-{\frac {Gm}{r^{2}}},} using that r p = σ {\displaystyle r_{p}=\sigma } at time t = 0 {\displaystyle t=0} . This yields the relation m 3 σ = ℏ 2 G ≈ 1.7 × 10 − 58 m kg 3 , {\displaystyle m^{3}\sigma ={\frac {\hbar ^{2}}{G}}\approx 1.7\times 10^{-58}~{\text{m}}\,{\text{kg}}^{3},} which allows us to determine a critical width for a given mass value and conversely. We also recognise the scaling law mentioned above. Numerical simulations show that this equation gives a rather good estimate of the mass regime above which effects of the Schrödinger–Newton equation become significant. For an atom, the critical width is around 10^22 metres, while it is already down to 10^−31 metres for a mass of one microgram. The regime where the mass is around 10^10 atomic mass units while the width is of the order of micrometers is expected to allow an experimental test of the Schrödinger–Newton equation in the future. Possible candidates are interferometry experiments with heavy molecules, which currently reach masses up to 10,000 atomic mass units. == Quantum wave function collapse == The idea that gravity causes (or somehow influences) the wavefunction collapse dates back to the 1960s and was originally proposed by Károlyházy. The Schrödinger–Newton equation was proposed in this context by Diósi. There the equation provides an estimation for the "line of demarcation" between microscopic (quantum) and macroscopic (classical) objects. The stationary ground state has a width of a 0 ≈ ℏ 2 G m 3 . {\displaystyle a_{0}\approx {\frac {\hbar ^{2}}{Gm^{3}}}.} For a well-localised homogeneous sphere, i.e. a sphere with a centre-of-mass wave-function that is narrow compared to the radius of the sphere, Diósi finds as an estimate for the width of the ground-state centre-of-mass wave-function a 0 ( R ) ≈ a 0 1 / 4 R 3 / 4 . {\displaystyle a_{0}^{(R)}\approx a_{0}^{1/4}R^{3/4}.} Assuming a usual density around 1000 kg/m^3, a critical radius can be calculated for which a 0 ( R ) ≈ R {\displaystyle a_{0}^{(R)}\approx R} . This critical radius is around a tenth of a micrometer. Roger Penrose proposed that the Schrödinger–Newton equation mathematically describes the basis states involved in a gravitationally induced wavefunction collapse scheme. Penrose suggests that a superposition of two or more quantum states having a significant amount of mass displacement ought to be unstable and reduce to one of the states within a finite time. He hypothesises that there exists a "preferred" set of states that could collapse no further, specifically, the stationary states of the Schrödinger–Newton equation. A macroscopic system can therefore never be in a spatial superposition, since the nonlinear gravitational self-interaction immediately leads to a collapse to a stationary state of the Schrödinger–Newton equation. According to Penrose's idea, when a quantum particle is measured, there is an interplay of this nonlinear collapse and environmental decoherence. The gravitational interaction leads to the reduction of the environment to one distinct state, and decoherence leads to the localisation of the particle, e.g. as a dot on a screen.
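The order-of-magnitude relation m^3 * sigma = hbar^2 / G is easy to evaluate directly. The Python sketch below solves it for the critical width at the two masses quoted above; the constants are standard approximate values, and the results are order-of-magnitude estimates only.

```python
# Sketch: solving the estimate m^3 * sigma = hbar^2 / G for the critical
# width sigma at the masses quoted in the text (order-of-magnitude only).

HBAR = 1.0546e-34   # reduced Planck constant, J*s (assumed value)
G = 6.674e-11       # gravitational constant, N*m^2/kg^2 (assumed value)
AMU = 1.6605e-27    # atomic mass unit, kg (assumed value)

def critical_width(mass_kg: float) -> float:
    """Width below which Schrodinger-Newton effects should become relevant."""
    return HBAR**2 / (G * mass_kg**3)

print(f"single atom:   {critical_width(1.0 * AMU):.1e} m")  # ~10^22 m
print(f"one microgram: {critical_width(1e-9):.1e} m")       # ~10^-31 m
```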
=== Problems and open matters === Three major problems occur with this interpretation of the Schrödinger–Newton equation as the cause of the wave-function collapse: (1) excessive residual probability far from the collapse point; (2) the lack of any apparent reason for the Born rule; and (3) the promotion of the previously strictly hypothetical wave function to an observable (real) quantity. First, numerical studies consistently find that when a wave packet "collapses" to a stationary solution, a small portion of it seems to run away to infinity. This would mean that even a completely collapsed quantum system still can be found at a distant location. Since the solutions of the linear Schrödinger equation tend towards infinity even faster, this only indicates that the Schrödinger–Newton equation alone is not sufficient to explain the wave-function collapse. If the environment is taken into account, this effect might disappear and therefore not be present in the scenario described by Penrose. A second problem, also arising in Penrose's proposal, is the origin of the Born rule: To solve the measurement problem, a mere explanation of why a wave-function collapses – e.g., to a dot on a screen – is not enough. A good model for the collapse process also has to explain why the dot appears on different positions of the screen, with probabilities that are determined by the squared absolute-value of the wave-function. It might be possible that a model based on Penrose's idea could provide such an explanation, but there is as yet no known reason that Born's rule would naturally arise from it. Thirdly, since the gravitational potential is linked to the wave-function in the picture of the Schrödinger–Newton equation, the wave-function must be interpreted as a real object. Therefore, at least in principle, it becomes a measurable quantity. Making use of the nonlocal nature of entangled quantum systems, this could be used to send signals faster than light, which is generally thought to be in contradiction with causality. It is, however, not clear whether this problem can be resolved by applying the right collapse prescription, yet to be found, consistently to the full quantum system. Also, since gravity is such a weak interaction, it is not clear that such an experiment can actually be performed within the parameters given in our universe (see the referenced discussion about a similar thought experiment proposed by Eppley & Hannah). == See also == Nonlinear Schrödinger equation Semiclassical gravity Penrose interpretation Poisson's equation == References ==
Wikipedia/Schrödinger–Newton_equation
De analysi per aequationes numero terminorum infinitas (or On analysis by infinite series, On Analysis by Equations with an infinite number of terms, or On the Analysis by means of equations of an infinite number of terms) is a mathematical work by Isaac Newton. == Creation == The work was composed in 1669, probably during the middle of that year, from ideas Newton had acquired during the period 1665–1666. Newton wrote: And whatever the common Analysis performs by Means of Equations of a finite number of Terms (provided that can be done) this new method can always perform the same by means of infinite Equations. So that I have not made any Question of giving this the name of Analysis likewise. For the Reasonings in this are no less certain than in the other, nor the Equations less exact; albeit we Mortals whose reasoning Powers are confined within narrow Limits, can neither express, nor so conceive the Terms of these Equations as to know exactly from thence the Quantities we want. To conclude, we may justly reckon that to belong to the Analytic Art, by the help of which the Areas and Lengths, etc. of Curves may be exactly and geometrically determined. The explication was written to remedy apparent weaknesses in the logarithmic series [the infinite series for log ⁡ ( 1 + x ) {\displaystyle \log(1+x)} ], which had been republished by Nicolaus Mercator, and, through the encouragement of Isaac Barrow in 1669, to establish Newton's prior authorship of a general method of infinite series. The writing was circulated amongst scholars as a manuscript in 1669, including to John Collins, a mathematics intelligencer for a group of British and continental mathematicians. His relationship with Newton in the capacity of informant proved instrumental in securing Newton recognition and contact with John Wallis at the Royal Society. Both Cambridge University Press and the Royal Society declined to publish the treatise; it was instead published in London in 1711 by William Jones, and again in 1744, as Methodus fluxionum et serierum infinitarum cum eisudem applicatione ad curvarum geometriam in Opuscula mathematica, philosophica et philologica by Marcum-Michaelem Bousquet, at that time edited by Johann Castillioneus. == Content == The exponential series, an infinite series, was discovered by Newton and is contained within the Analysis. The treatise also contains the sine, cosine, and arc series, the logarithmic series, and the binomial series. == See also == Newton's method == References == == External links == Text of De analysi (Latin) PDF version
Wikipedia/De_analysi_per_aequationes_numero_terminorum_infinitas
In mathematics, a rate is the quotient of two quantities, often represented as a fraction. If the divisor (or fraction denominator) in the rate is equal to one expressed as a single unit, and if it is assumed that this quantity can be changed systematically (i.e., is an independent variable), then the dividend (the fraction numerator) of the rate expresses the corresponding rate of change in the other (dependent) variable. In some cases, it may be regarded as a change in one value caused by a change in another value. For example, acceleration is a change in velocity with respect to time. Temporal rate is a common type of rate ("per unit of time"), such as speed, heart rate, and flux. In fact, often rate is a synonym of rhythm or frequency, a count per second (i.e., hertz); e.g., radio frequencies or sample rates. In describing the units of a rate, the word "per" is used to separate the units of the two measurements used to calculate the rate; for example, a heart rate is expressed as "beats per minute". Rates that have a non-time divisor or denominator include exchange rates, literacy rates, and electric field (in volts per meter). A rate defined using two numbers of the same units will result in a dimensionless quantity, also known as a ratio or simply as a rate (such as tax rates) or counts (such as literacy rate). Dimensionless rates can be expressed as a percentage (for example, the global literacy rate in 1998 was 80%), fraction, or multiple. == Properties and examples == Rates and ratios often vary with time, location, particular element (or subset) of a set of objects, etc. Thus they are often mathematical functions. A rate (or ratio) may often be thought of as an output-input ratio or benefit-cost ratio, all considered in the broad sense. For example, miles per hour in transportation is the output (or benefit) in terms of miles of travel, which one gets from spending an hour (a cost in time) of traveling (at this velocity). A set of sequential indices may be used to enumerate elements (or subsets) of a set of ratios under study. For example, in finance, one could define an index i by assigning consecutive integers to companies, to political subdivisions (such as states), to different investments, etc. The reason for using indices is so that a set of ratios (i = 0, ..., N) can be used in an equation to calculate a function of the rates, such as an average of a set of ratios; for example, the average velocity found from the set of vi's mentioned above. Finding averages may involve using weighted averages and possibly using the harmonic mean. A ratio r = a/b has both a numerator "a" and a denominator "b". The values of a and b may be real numbers or integers. The inverse of a ratio r is 1/r = b/a. A rate may equivalently be expressed as the inverse of its value if the ratio of its units is also inverted. For example, 5 miles (mi) per kilowatt-hour (kWh) corresponds to 1/5 kWh/mi (or 200 Wh/mi). Rates are relevant to many aspects of everyday life. For example: How fast are you driving? The speed of the car (often expressed in miles per hour) is a rate. What interest does your savings account pay you? The amount of interest paid per year is a rate. == Rate of change == Consider the case where the numerator f {\displaystyle f} of a rate is a function f ( a ) {\displaystyle f(a)} where a {\displaystyle a} happens to be the denominator of the rate δ f / δ a {\displaystyle \delta f/\delta a} .
A rate of change of f {\displaystyle f} with respect to a {\displaystyle a} (where a {\displaystyle a} is incremented by h {\displaystyle h} ) can be formally defined in two ways: Average rate of change = f ( x + h ) − f ( x ) h Instantaneous rate of change = lim h → 0 f ( x + h ) − f ( x ) h {\displaystyle {\begin{aligned}{\mbox{Average rate of change}}&={\frac {f(x+h)-f(x)}{h}}\\{\mbox{Instantaneous rate of change}}&=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}\end{aligned}}} where x plays the role of a {\displaystyle a} above, so the average is taken over the interval from x to x + h. An instantaneous rate of change is equivalent to a derivative. For example, the average speed of a car can be calculated using the total distance traveled between two points, divided by the travel time. In contrast, the instantaneous velocity can be determined by viewing a speedometer. == Temporal rates == In chemistry and physics: Speed, the rate of change of position, or the change of position per unit of time; Acceleration, the rate of change in speed, or the change in speed per unit of time; Power, the rate of doing work, or the amount of energy transferred per unit time; Frequency, the number of occurrences of a repeating event per unit of time; Angular frequency and rotation speed, the number of turns per unit of time; Reaction rate, the speed at which chemical reactions occur; Volumetric flow rate, the volume of fluid which passes through a given surface per unit of time, e.g., cubic meters per second. === Counts-per-time rates === Radioactive decay, the amount of radioactive material in which one nucleus decays per second, measured in becquerels. In computing: Bit rate, the number of bits that are conveyed or processed by a computer per unit of time; Symbol rate, the number of symbol changes (signaling events) made to the transmission medium per second; Sampling rate, the number of samples (signal measurements) per second. Miscellaneous definitions: Rate of reinforcement, the number of reinforcements per unit of time, usually per minute; Heart rate, usually measured in beats per minute. == Economics/finance rates/ratios == Exchange rate, how much one currency is worth in terms of the other; Inflation rate, the ratio of the change in the general price level during a year to the starting price level; Interest rate, the price a borrower pays for the use of money they do not own (ratio of payment to amount borrowed); Price–earnings ratio, market price per share of stock divided by annual earnings per share; Rate of return, the ratio of money gained or lost on an investment relative to the amount of money invested; Tax rate, the tax amount divided by the taxable income; Unemployment rate, the ratio of the number of people who are unemployed to the number in the labor force; Wage rate, the amount paid for working a given amount of time, or doing a standard amount of accomplished work (ratio of payment to time). == Other rates == Birth rate and mortality rate, the number of births or deaths scaled to the size of that population, per unit of time; Literacy rate, the proportion of the population over age fifteen that can read and write; Sex ratio or gender ratio, the ratio of males to females in a population. == See also == Derivative Gradient Hertz Slope == References ==
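The two definitions above can be illustrated numerically. The following sketch (with a hypothetical function f and sample values) shows the average rate of change approaching the instantaneous one as h shrinks, and also the harmonic-mean effect mentioned earlier for averaging rates:

```python
# Average vs. instantaneous rate of change for f(x) = x**2 at x = 3.
# The derivative there is 6; the average rate over [x, x+h] approaches it as h shrinks.
def f(x):
    return x**2

x = 3.0
for h in (1.0, 0.1, 0.01, 0.001):
    avg = (f(x + h) - f(x)) / h            # average rate of change over [x, x+h]
    print(f"h={h:<6} average rate = {avg}")  # 7.0, 6.1, 6.01, 6.001 -> 6

# Averaging speeds over two legs of equal distance uses the harmonic mean:
v1, v2 = 30.0, 60.0                        # speeds on each leg (hypothetical values)
print(2 / (1 / v1 + 1 / v2))               # 40.0, not the arithmetic mean 45.0
```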
Wikipedia/Rate_of_change_(mathematics)
Statistical Physics of Particles and Statistical Physics of Fields are a two-volume series of textbooks by Mehran Kardar. Each book is based on a semester-long course taught by Kardar at the Massachusetts Institute of Technology. They cover statistical physics and thermodynamics at the graduate level. == Editions == Kardar, Mehran (2007). Statistical Physics of Particles. Cambridge University Press. ISBN 978-0-521-87342-0. OCLC 860391091. Kardar, Mehran (2007). Statistical Physics of Fields. Cambridge University Press. ISBN 978-0-521-87341-3. OCLC 920137477. == External links == Statistical Mechanics I at MIT OpenCourseWare Statistical Mechanics II at MIT OpenCourseWare Publisher's website for Particles Publisher's website for Fields == References ==
Wikipedia/Statistical_Physics_of_Particles
In mathematics, calculus on Euclidean space is a generalization of calculus of functions in one or several variables to calculus of functions on Euclidean space R n {\displaystyle \mathbb {R} ^{n}} as well as a finite-dimensional real vector space. This calculus is also known as advanced calculus, especially in the United States. It is similar to multivariable calculus but is somewhat more sophisticated in that it uses linear algebra (or some functional analysis) more extensively and covers some concepts from differential geometry such as differential forms and Stokes' formula in terms of differential forms. This extensive use of linear algebra also allows a natural generalization of multivariable calculus to calculus on Banach spaces or topological vector spaces. Calculus on Euclidean space is also a local model of calculus on manifolds, a theory of functions on manifolds. == Basic notions == === Functions in one real variable === This section is a brief review of function theory in one-variable calculus. A real-valued function f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } is continuous at a {\displaystyle a} if it is approximately constant near a {\displaystyle a} ; i.e., lim h → 0 ( f ( a + h ) − f ( a ) ) = 0. {\displaystyle \lim _{h\to 0}(f(a+h)-f(a))=0.} In contrast, the function f {\displaystyle f} is differentiable at a {\displaystyle a} if it is approximately linear near a {\displaystyle a} ; i.e., there is some real number λ {\displaystyle \lambda } such that lim h → 0 f ( a + h ) − f ( a ) − λ h h = 0. {\displaystyle \lim _{h\to 0}{\frac {f(a+h)-f(a)-\lambda h}{h}}=0.} (For simplicity, suppose f ( a ) = 0 {\displaystyle f(a)=0} . Then the above means that f ( a + h ) = λ h + g ( a , h ) {\displaystyle f(a+h)=\lambda h+g(a,h)} where g ( a , h ) {\displaystyle g(a,h)} goes to 0 faster than h going to 0 and, in that sense, f ( a + h ) {\displaystyle f(a+h)} behaves like λ h {\displaystyle \lambda h} .) The number λ {\displaystyle \lambda } depends on a {\displaystyle a} and thus is denoted as f ′ ( a ) {\displaystyle f'(a)} . If f {\displaystyle f} is differentiable on an open interval U {\displaystyle U} and if f ′ {\displaystyle f'} is a continuous function on U {\displaystyle U} , then f {\displaystyle f} is called a C1 function. More generally, f {\displaystyle f} is called a Ck function if its derivative f ′ {\displaystyle f'} is Ck-1 function. Taylor's theorem states that a Ck function is precisely a function that can be approximated by a polynomial of degree k. If f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } is a C1 function and f ′ ( a ) ≠ 0 {\displaystyle f'(a)\neq 0} for some a {\displaystyle a} , then either f ′ ( a ) > 0 {\displaystyle f'(a)>0} or f ′ ( a ) < 0 {\displaystyle f'(a)<0} ; i.e., either f {\displaystyle f} is strictly increasing or strictly decreasing in some open interval containing a. In particular, f : f − 1 ( U ) → U {\displaystyle f:f^{-1}(U)\to U} is bijective for some open interval U {\displaystyle U} containing f ( a ) {\displaystyle f(a)} . The inverse function theorem then says that the inverse function f − 1 {\displaystyle f^{-1}} is differentiable on U with the derivatives: for y ∈ U {\displaystyle y\in U} ( f − 1 ) ′ ( y ) = 1 f ′ ( f − 1 ( y ) ) . 
{\displaystyle (f^{-1})'(y)={1 \over f'(f^{-1}(y))}.} === Derivative of a map and chain rule === For functions f {\displaystyle f} defined in the plane or more generally on a Euclidean space R n {\displaystyle \mathbb {R} ^{n}} , it is necessary to consider functions that are vector-valued or matrix-valued. It is also conceptually helpful to do this in an invariant manner (i.e., a coordinate-free way). Derivatives of such maps at a point are then vectors or linear maps, not real numbers. Let f : X → Y {\displaystyle f:X\to Y} be a map from an open subset X {\displaystyle X} of R n {\displaystyle \mathbb {R} ^{n}} to an open subset Y {\displaystyle Y} of R m {\displaystyle \mathbb {R} ^{m}} . Then the map f {\displaystyle f} is said to be differentiable at a point x {\displaystyle x} in X {\displaystyle X} if there exists a (necessarily unique) linear transformation f ′ ( x ) : R n → R m {\displaystyle f'(x):\mathbb {R} ^{n}\to \mathbb {R} ^{m}} , called the derivative of f {\displaystyle f} at x {\displaystyle x} , such that lim h → 0 1 | h | | f ( x + h ) − f ( x ) − f ′ ( x ) h | = 0 {\displaystyle \lim _{h\to 0}{\frac {1}{|h|}}|f(x+h)-f(x)-f'(x)h|=0} where f ′ ( x ) h {\displaystyle f'(x)h} is the application of the linear transformation f ′ ( x ) {\displaystyle f'(x)} to h {\displaystyle h} . If f {\displaystyle f} is differentiable at x {\displaystyle x} , then it is continuous at x {\displaystyle x} since | f ( x + h ) − f ( x ) | ≤ ( | h | − 1 | f ( x + h ) − f ( x ) − f ′ ( x ) h | ) | h | + | f ′ ( x ) h | → 0 {\displaystyle |f(x+h)-f(x)|\leq (|h|^{-1}|f(x+h)-f(x)-f'(x)h|)|h|+|f'(x)h|\to 0} as h → 0 {\displaystyle h\to 0} . As in the one-variable case, there is the chain rule: if f {\displaystyle f} is differentiable at x {\displaystyle x} and g {\displaystyle g} is differentiable at y = f ( x ) {\displaystyle y=f(x)} , then the composition g ∘ f {\displaystyle g\circ f} is differentiable at x {\displaystyle x} with the derivative ( g ∘ f ) ′ ( x ) = g ′ ( y ) ∘ f ′ ( x ) {\displaystyle (g\circ f)'(x)=g'(y)\circ f'(x)} . This is proved exactly as for functions in one variable. Indeed, with the notation h ~ = f ( x + h ) − f ( x ) {\displaystyle {\widetilde {h}}=f(x+h)-f(x)} , we have: 1 | h | | g ( f ( x + h ) ) − g ( y ) − g ′ ( y ) f ′ ( x ) h | ≤ 1 | h | | g ( y + h ~ ) − g ( y ) − g ′ ( y ) h ~ | + 1 | h | | g ′ ( y ) ( f ( x + h ) − f ( x ) − f ′ ( x ) h ) | . {\displaystyle {\begin{aligned}&{\frac {1}{|h|}}|g(f(x+h))-g(y)-g'(y)f'(x)h|\\&\leq {\frac {1}{|h|}}|g(y+{\widetilde {h}})-g(y)-g'(y){\widetilde {h}}|+{\frac {1}{|h|}}|g'(y)(f(x+h)-f(x)-f'(x)h)|.\end{aligned}}} Here, since f {\displaystyle f} is differentiable at x {\displaystyle x} , the second term on the right goes to zero as h → 0 {\displaystyle h\to 0} . As for the first term, it can be written as: { | h ~ | | h | | g ( y + h ~ ) − g ( y ) − g ′ ( y ) h ~ | / | h ~ | , h ~ ≠ 0 , 0 , h ~ = 0. {\displaystyle {\begin{cases}{\frac {|{\widetilde {h}}|}{|h|}}|g(y+{\widetilde {h}})-g(y)-g'(y){\widetilde {h}}|/|{\widetilde {h}}|,&{\widetilde {h}}\neq 0,\\0,&{\widetilde {h}}=0.\end{cases}}} Now, by the argument showing the continuity of f {\displaystyle f} at x {\displaystyle x} , we see | h ~ | | h | {\displaystyle {\frac {|{\widetilde {h}}|}{|h|}}} is bounded. Also, h ~ → 0 {\displaystyle {\widetilde {h}}\to 0} as h → 0 {\displaystyle h\to 0} since f {\displaystyle f} is continuous at x {\displaystyle x} . Hence, the first term also goes to zero as h → 0 {\displaystyle h\to 0} by the differentiability of g {\displaystyle g} at y {\displaystyle y} . ◻ {\displaystyle \square } The map f {\displaystyle f} as above is called continuously differentiable or C 1 {\displaystyle C^{1}} if it is differentiable on the domain and also the derivatives vary continuously; i.e., x ↦ f ′ ( x ) {\displaystyle x\mapsto f'(x)} is continuous.
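As a quick numerical sanity check of the chain rule just proved, the following sketch (with two hypothetical maps f, g : R² → R²) compares a finite-difference Jacobian of the composite with the product of the Jacobians; the helper jacobian and the tolerance are illustrative choices:

```python
# Finite-difference check of the chain rule J(g∘f)(x) = Jg(f(x)) Jf(x)
# for the hypothetical maps f, g below.
import numpy as np

def f(v):
    x, y = v
    return np.array([x * y, np.sin(x) + y**2])

def g(v):
    u, w = v
    return np.array([u + w, u * w])

def jacobian(F, v, eps=1e-6):
    v = np.asarray(v, dtype=float)
    cols = []
    for j in range(len(v)):
        e = np.zeros_like(v)
        e[j] = eps
        cols.append((F(v + e) - F(v - e)) / (2 * eps))   # central differences
    return np.column_stack(cols)

x = np.array([0.7, -0.3])
lhs = jacobian(lambda v: g(f(v)), x)          # Jacobian of the composite
rhs = jacobian(g, f(x)) @ jacobian(f, x)      # product of Jacobians
print(np.allclose(lhs, rhs, atol=1e-5))       # True
```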
As a linear transformation, f ′ ( x ) {\displaystyle f'(x)} is represented by an m × n {\displaystyle m\times n} -matrix, called the Jacobian matrix J f ( x ) {\displaystyle Jf(x)} of f {\displaystyle f} at x {\displaystyle x} and we write it as: ( J f ) ( x ) = [ ∂ f 1 ∂ x 1 ( x ) ⋯ ∂ f 1 ∂ x n ( x ) ⋮ ⋱ ⋮ ∂ f m ∂ x 1 ( x ) ⋯ ∂ f m ∂ x n ( x ) ] . {\displaystyle (Jf)(x)={\begin{bmatrix}{\frac {\partial f_{1}}{\partial x_{1}}}(x)&\cdots &{\frac {\partial f_{1}}{\partial x_{n}}}(x)\\\vdots &\ddots &\vdots \\{\frac {\partial f_{m}}{\partial x_{1}}}(x)&\cdots &{\frac {\partial f_{m}}{\partial x_{n}}}(x)\end{bmatrix}}.} Taking h {\displaystyle h} to be h e j {\displaystyle he_{j}} , h {\displaystyle h} a real number and e j = ( 0 , ⋯ , 1 , ⋯ , 0 ) {\displaystyle e_{j}=(0,\cdots ,1,\cdots ,0)} the j-th standard basis element, we see that the differentiability of f {\displaystyle f} at x {\displaystyle x} implies: lim h → 0 f i ( x + h e j ) − f i ( x ) h = ∂ f i ∂ x j ( x ) {\displaystyle \lim _{h\to 0}{\frac {f_{i}(x+he_{j})-f_{i}(x)}{h}}={\frac {\partial f_{i}}{\partial x_{j}}}(x)} where f i {\displaystyle f_{i}} denotes the i-th component of f {\displaystyle f} . That is, each component of f {\displaystyle f} is differentiable at x {\displaystyle x} in each variable with the derivative ∂ f i ∂ x j ( x ) {\displaystyle {\frac {\partial f_{i}}{\partial x_{j}}}(x)} . In terms of Jacobian matrices, the chain rule says J ( g ∘ f ) ( x ) = J g ( y ) J f ( x ) {\displaystyle J(g\circ f)(x)=Jg(y)Jf(x)} ; i.e., as ( g ∘ f ) i = g i ∘ f {\displaystyle (g\circ f)_{i}=g_{i}\circ f} , ∂ ( g i ∘ f ) ∂ x j ( x ) = ∂ g i ∂ y 1 ( y ) ∂ f 1 ∂ x j ( x ) + ⋯ + ∂ g i ∂ y m ( y ) ∂ f m ∂ x j ( x ) , {\displaystyle {\frac {\partial (g_{i}\circ f)}{\partial x_{j}}}(x)={\frac {\partial g_{i}}{\partial y_{1}}}(y){\frac {\partial f_{1}}{\partial x_{j}}}(x)+\cdots +{\frac {\partial g_{i}}{\partial y_{m}}}(y){\frac {\partial f_{m}}{\partial x_{j}}}(x),} which is the form of the chain rule that is often stated. A partial converse to the above holds. Namely, if the partial derivatives ∂ f i / ∂ x j {\displaystyle {\partial f_{i}}/{\partial x_{j}}} are all defined and continuous, then f {\displaystyle f} is continuously differentiable. This is a consequence of the mean value inequality: (This version of mean value inequality follows from mean value inequality in Mean value theorem § Mean value theorem for vector-valued functions applied to the function [ 0 , 1 ] → R m , t ↦ f ( x + t y ) − t v {\displaystyle [0,1]\to \mathbb {R} ^{m},\,t\mapsto f(x+ty)-tv} , where the proof on mean value inequality is given.) Indeed, let g ( x ) = ( J f ) ( x ) {\displaystyle g(x)=(Jf)(x)} . We note that, if y = y i e i {\displaystyle y=y_{i}e_{i}} , then d d t f ( x + t y ) = ∂ f ∂ x i ( x + t y ) y = g ( x + t y ) ( y i e i ) . {\displaystyle {\frac {d}{dt}}f(x+ty)={\frac {\partial f}{\partial x_{i}}}(x+ty)y=g(x+ty)(y_{i}e_{i}).} For simplicity, assume n = 2 {\displaystyle n=2} (the argument for the general case is similar). 
Then, by mean value inequality, with the operator norm ‖ ⋅ ‖ {\displaystyle \|\cdot \|} , | Δ y f ( x ) − g ( x ) y | ≤ | Δ y 1 e 1 f ( x 1 , x 2 + y 2 ) − g ( x ) ( y 1 e 1 ) | + | Δ y 2 e 2 f ( x 1 , x 2 ) − g ( x ) ( y 2 e 2 ) | ≤ | y 1 | sup 0 < t < 1 ‖ g ( x 1 + t y 1 , x 2 + y 2 ) − g ( x ) ‖ + | y 2 | sup 0 < t < 1 ‖ g ( x 1 , x 2 + t y 2 ) − g ( x ) ‖ , {\displaystyle {\begin{aligned}&|\Delta _{y}f(x)-g(x)y|\\&\leq |\Delta _{y_{1}e_{1}}f(x_{1},x_{2}+y_{2})-g(x)(y_{1}e_{1})|+|\Delta _{y_{2}e_{2}}f(x_{1},x_{2})-g(x)(y_{2}e_{2})|\\&\leq |y_{1}|\sup _{0<t<1}\|g(x_{1}+ty_{1},x_{2}+y_{2})-g(x)\|+|y_{2}|\sup _{0<t<1}\|g(x_{1},x_{2}+ty_{2})-g(x)\|,\end{aligned}}} which implies | Δ y f ( x ) − g ( x ) y | / | y | → 0 {\displaystyle |\Delta _{y}f(x)-g(x)y|/|y|\to 0} as required. ◻ {\displaystyle \square } Example: Let U {\displaystyle U} be the set of all invertible real square matrices of size n. Note U {\displaystyle U} can be identified as an open subset of R n 2 {\displaystyle \mathbb {R} ^{n^{2}}} with coordinates x i j , 0 ≤ i , j ≠ n {\displaystyle x_{ij},0\leq i,j\neq n} . Consider the function f ( g ) = g − 1 {\displaystyle f(g)=g^{-1}} = the inverse matrix of g {\displaystyle g} defined on U {\displaystyle U} . To guess its derivatives, assume f {\displaystyle f} is differentiable and consider the curve c ( t ) = g e t g − 1 h {\displaystyle c(t)=ge^{tg^{-1}h}} where e A {\displaystyle e^{A}} means the matrix exponential of A {\displaystyle A} . By the chain rule applied to f ( c ( t ) ) = e − t g − 1 h g − 1 {\displaystyle f(c(t))=e^{-tg^{-1}h}g^{-1}} , we have: f ′ ( c ( t ) ) ∘ c ′ ( t ) = − g − 1 h e − t g − 1 h g − 1 {\displaystyle f'(c(t))\circ c'(t)=-g^{-1}he^{-tg^{-1}h}g^{-1}} . Taking t = 0 {\displaystyle t=0} , we get: f ′ ( g ) h = − g − 1 h g − 1 {\displaystyle f'(g)h=-g^{-1}hg^{-1}} . Now, we then have: ‖ ( g + h ) − 1 − g − 1 + g − 1 h g − 1 ‖ ≤ ‖ ( g + h ) − 1 ‖ ‖ h ‖ ‖ g − 1 h g − 1 ‖ . {\displaystyle \|(g+h)^{-1}-g^{-1}+g^{-1}hg^{-1}\|\leq \|(g+h)^{-1}\|\|h\|\|g^{-1}hg^{-1}\|.} Since the operator norm is equivalent to the Euclidean norm on R n 2 {\displaystyle \mathbb {R} ^{n^{2}}} (any norms are equivalent to each other), this implies f {\displaystyle f} is differentiable. Finally, from the formula for f ′ {\displaystyle f'} , we see the partial derivatives of f {\displaystyle f} are smooth (infinitely differentiable); whence, f {\displaystyle f} is smooth too. === Higher derivatives and Taylor formula === If f : X → R m {\displaystyle f:X\to \mathbb {R} ^{m}} is differentiable where X ⊂ R n {\displaystyle X\subset \mathbb {R} ^{n}} is an open subset, then the derivatives determine the map f ′ : X → Hom ⁡ ( R n , R m ) {\displaystyle f':X\to \operatorname {Hom} (\mathbb {R} ^{n},\mathbb {R} ^{m})} , where Hom {\displaystyle \operatorname {Hom} } stands for homomorphisms between vector spaces; i.e., linear maps. If f ′ {\displaystyle f'} is differentiable, then f ″ : X → Hom ⁡ ( R n , Hom ⁡ ( R n , R m ) ) {\displaystyle f'':X\to \operatorname {Hom} (\mathbb {R} ^{n},\operatorname {Hom} (\mathbb {R} ^{n},\mathbb {R} ^{m}))} . 
Here, the codomain of f ″ {\displaystyle f''} can be identified with the space of bilinear maps by: Hom ⁡ ( R n , Hom ⁡ ( R n , R m ) ) → ∼ φ { ( R n ) 2 → R m bilinear } {\displaystyle \operatorname {Hom} (\mathbb {R} ^{n},\operatorname {Hom} (\mathbb {R} ^{n},\mathbb {R} ^{m})){\overset {\varphi }{\underset {\sim }{\to }}}\{(\mathbb {R} ^{n})^{2}\to \mathbb {R} ^{m}{\text{ bilinear}}\}} where φ ( g ) ( x , y ) = g ( x ) y {\displaystyle \varphi (g)(x,y)=g(x)y} and φ {\displaystyle \varphi } is bijective with the inverse ψ {\displaystyle \psi } given by ( ψ ( g ) x ) y = g ( x , y ) {\displaystyle (\psi (g)x)y=g(x,y)} . In general, f ( k ) = ( f ( k − 1 ) ) ′ {\displaystyle f^{(k)}=(f^{(k-1)})'} is a map from X {\displaystyle X} to the space of k {\displaystyle k} -multilinear maps ( R n ) k → R m {\displaystyle (\mathbb {R} ^{n})^{k}\to \mathbb {R} ^{m}} . Just as f ′ ( x ) {\displaystyle f'(x)} is represented by a matrix (Jacobian matrix), when m = 1 {\displaystyle m=1} (a bilinear map is a bilinear form), the bilinear form f ″ ( x ) {\displaystyle f''(x)} is represented by a matrix called the Hessian matrix of f {\displaystyle f} at x {\displaystyle x} ; namely, the square matrix H {\displaystyle H} of size n {\displaystyle n} such that f ″ ( x ) ( y , z ) = ( H y , z ) {\displaystyle f''(x)(y,z)=(Hy,z)} , where the paring refers to an inner product of R n {\displaystyle \mathbb {R} ^{n}} , and H {\displaystyle H} is none other than the Jacobian matrix of f ′ : X → ( R n ) ∗ ≃ R n {\displaystyle f':X\to (\mathbb {R} ^{n})^{*}\simeq \mathbb {R} ^{n}} . The ( i , j ) {\displaystyle (i,j)} -th entry of H {\displaystyle H} is thus given explicitly as H i j = ∂ 2 f ∂ x i ∂ x j ( x ) {\displaystyle H_{ij}={\frac {\partial ^{2}f}{\partial x_{i}\partial x_{j}}}(x)} . Moreover, if f ″ {\displaystyle f''} exists and is continuous, then the matrix H {\displaystyle H} is symmetric, the fact known as the symmetry of second derivatives. This is seen using the mean value inequality. For vectors u , v {\displaystyle u,v} in R n {\displaystyle \mathbb {R} ^{n}} , using mean value inequality twice, we have: | Δ v Δ u f ( x ) − f ″ ( x ) ( u , v ) | ≤ sup 0 < t 1 , t 2 < 1 | f ″ ( x + t 1 u + t 2 v ) ( u , v ) − f ″ ( x ) ( u , v ) | , {\displaystyle |\Delta _{v}\Delta _{u}f(x)-f''(x)(u,v)|\leq \sup _{0<t_{1},t_{2}<1}|f''(x+t_{1}u+t_{2}v)(u,v)-f''(x)(u,v)|,} which says f ″ ( x ) ( u , v ) = lim s , t → 0 ( Δ t v Δ s u f ( x ) − f ( x ) ) / ( s t ) . {\displaystyle f''(x)(u,v)=\lim _{s,t\to 0}(\Delta _{tv}\Delta _{su}f(x)-f(x))/(st).} Since the right-hand side is symmetric in u , v {\displaystyle u,v} , so is the left-hand side: f ″ ( x ) ( u , v ) = f ″ ( x ) ( v , u ) {\displaystyle f''(x)(u,v)=f''(x)(v,u)} . By induction, if f {\displaystyle f} is C k {\displaystyle C^{k}} , then the k-multilinear map f ( k ) ( x ) {\displaystyle f^{(k)}(x)} is symmetric; i.e., the order of taking partial derivatives does not matter. As in the case of one variable, the Taylor series expansion can then be proved by integration by parts: f ( z + ( h , k ) ) = ∑ a + b < n ∂ x a ∂ y b f ( z ) h a k b a ! b ! + n ∫ 0 1 ( 1 − t ) n − 1 ∑ a + b = n ∂ x a ∂ y b f ( z + t ( h , k ) ) h a k b a ! b ! d t . 
{\displaystyle f(z+(h,k))=\sum _{a+b<n}\partial _{x}^{a}\partial _{y}^{b}f(z){h^{a}k^{b} \over a!b!}+n\int _{0}^{1}(1-t)^{n-1}\sum _{a+b=n}\partial _{x}^{a}\partial _{y}^{b}f(z+t(h,k)){h^{a}k^{b} \over a!b!}\,dt.} Taylor's formula has the effect of dividing a function by the coordinate variables (that is, of factoring out linear terms), which can be illustrated by the next typical theoretical use of the formula. Example: Let T : S → S {\displaystyle T:{\mathcal {S}}\to {\mathcal {S}}} be a linear map between the vector space S {\displaystyle {\mathcal {S}}} of smooth functions on R n {\displaystyle \mathbb {R} ^{n}} with rapidly decreasing derivatives; i.e., sup | x β ∂ α φ | < ∞ {\displaystyle \sup |x^{\beta }\partial ^{\alpha }\varphi |<\infty } for any multi-index α , β {\displaystyle \alpha ,\beta } . (The space S {\displaystyle {\mathcal {S}}} is called a Schwartz space.) For each φ {\displaystyle \varphi } in S {\displaystyle {\mathcal {S}}} , Taylor's formula implies we can write: φ − ψ φ ( y ) = ∑ j = 1 n ( x j − y j ) φ j {\displaystyle \varphi -\psi \varphi (y)=\sum _{j=1}^{n}(x_{j}-y_{j})\varphi _{j}} with φ j ∈ S {\displaystyle \varphi _{j}\in {\mathcal {S}}} , where ψ {\displaystyle \psi } is a smooth function with compact support and ψ ( y ) = 1 {\displaystyle \psi (y)=1} . Now, assume T {\displaystyle T} commutes with coordinates; i.e., T ( x j φ ) = x j T φ {\displaystyle T(x_{j}\varphi )=x_{j}T\varphi } . Then T φ − φ ( y ) T ψ = ∑ j = 1 n ( x j − y j ) T φ j {\displaystyle T\varphi -\varphi (y)T\psi =\sum _{j=1}^{n}(x_{j}-y_{j})T\varphi _{j}} . Evaluating the above at y {\displaystyle y} , we get T φ ( y ) = φ ( y ) T ψ ( y ) . {\displaystyle T\varphi (y)=\varphi (y)T\psi (y).} In other words, T {\displaystyle T} is a multiplication by some function m {\displaystyle m} ; i.e., T φ = m φ {\displaystyle T\varphi =m\varphi } . Now, assume further that T {\displaystyle T} commutes with partial differentiations. We then easily see that m {\displaystyle m} is a constant; T {\displaystyle T} is a multiplication by a constant. (Aside: the above discussion almost proves the Fourier inversion formula. Indeed, let F , R : S → S {\displaystyle F,R:{\mathcal {S}}\to {\mathcal {S}}} be the Fourier transform and the reflection; i.e., ( R φ ) ( x ) = φ ( − x ) {\displaystyle (R\varphi )(x)=\varphi (-x)} . Then, dealing directly with the integral that is involved, one can see T = R F 2 {\displaystyle T=RF^{2}} commutes with coordinates and partial differentiations; hence, T {\displaystyle T} is a multiplication by a constant. This is almost a proof since one still has to compute this constant.) A partial converse to the Taylor formula also holds; see Borel's lemma and Whitney extension theorem. === Inverse function theorem and submersion theorem === The inverse function theorem states: if a map f {\displaystyle f} is C k {\displaystyle C^{k}} , k ≥ 1 {\displaystyle k\geq 1} , near a point x {\displaystyle x} and the derivative f ′ ( x ) {\displaystyle f'(x)} is invertible, then f {\displaystyle f} maps some open neighborhood of x {\displaystyle x} bijectively onto an open neighborhood of f ( x ) {\displaystyle f(x)} with an inverse that is also C k {\displaystyle C^{k}} . A C k {\displaystyle C^{k}} -map with the C k {\displaystyle C^{k}} -inverse is called a C k {\displaystyle C^{k}} -diffeomorphism. Thus, the theorem says that, for a map f {\displaystyle f} satisfying the hypothesis at a point x {\displaystyle x} , f {\displaystyle f} is a diffeomorphism near x , f ( x ) . {\displaystyle x,f(x).} For a proof, see Inverse function theorem § A proof using successive approximation.
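The successive-approximation idea behind that proof can be sketched concretely: to solve f(x) = y near a point x₀ where f′(x₀) is invertible, iterate x ↦ x + f′(x₀)⁻¹(y − f(x)), which is a contraction near x₀. The map f below is a hypothetical example chosen so that f(0) = 0 and f′(0) is the identity:

```python
# Successive approximation behind the inverse function theorem:
# solve f(x) = y near x0 = 0 by iterating x <- x + Jf(x0)^{-1} (y - f(x)).
import numpy as np

def f(v):
    x, y = v
    return np.array([x + y**2, y + x**2])   # hypothetical map; f(0) = 0, Jf(0) = I

J0 = np.array([[1.0, 0.0], [0.0, 1.0]])     # Jacobian of f at x0 = 0
J0_inv = np.linalg.inv(J0)

y_target = np.array([0.1, -0.05])           # a point near f(x0)
x = np.zeros(2)
for _ in range(30):
    x = x + J0_inv @ (y_target - f(x))      # contraction mapping near x0

print(x, f(x))                              # f(x) matches y_target to high accuracy
```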
The implicit function theorem says: given a map f : R n × R m → R m {\displaystyle f:\mathbb {R} ^{n}\times \mathbb {R} ^{m}\to \mathbb {R} ^{m}} , if f ( a , b ) = 0 {\displaystyle f(a,b)=0} , f {\displaystyle f} is C k {\displaystyle C^{k}} in a neighborhood of ( a , b ) {\displaystyle (a,b)} and the derivative of y ↦ f ( a , y ) {\displaystyle y\mapsto f(a,y)} at b {\displaystyle b} is invertible, then there exists a differentiable map g : U → V {\displaystyle g:U\to V} for some neighborhoods U , V {\displaystyle U,V} of a , b {\displaystyle a,b} such that f ( x , g ( x ) ) = 0 {\displaystyle f(x,g(x))=0} . The theorem follows from the inverse function theorem; see Inverse function theorem § Implicit function theorem. Another consequence is the submersion theorem. === Integrable functions on Euclidean spaces === A partition of an interval [ a , b ] {\displaystyle [a,b]} is a finite sequence a = t 0 ≤ t 1 ≤ ⋯ ≤ t k = b {\displaystyle a=t_{0}\leq t_{1}\leq \cdots \leq t_{k}=b} . A partition P {\displaystyle P} of a rectangle D {\displaystyle D} (product of intervals) in R n {\displaystyle \mathbb {R} ^{n}} then consists of partitions of the sides of D {\displaystyle D} ; i.e., if D = ∏ 1 n [ a i , b i ] {\displaystyle D=\prod _{1}^{n}[a_{i},b_{i}]} , then P {\displaystyle P} consists of P 1 , … , P n {\displaystyle P_{1},\dots ,P_{n}} such that P i {\displaystyle P_{i}} is a partition of [ a i , b i ] {\displaystyle [a_{i},b_{i}]} . Given a function f {\displaystyle f} on D {\displaystyle D} , we then define the upper Riemann sum of it as: U ( f , P ) = ∑ Q ∈ P ( sup Q f ) vol ⁡ ( Q ) {\displaystyle U(f,P)=\sum _{Q\in P}(\sup _{Q}f)\operatorname {vol} (Q)} where Q {\displaystyle Q} is a partition element of P {\displaystyle P} ; i.e., Q = ∏ i = 1 n [ t i , j i , t i , j i + 1 ] {\displaystyle Q=\prod _{i=1}^{n}[t_{i,j_{i}},t_{i,j_{i}+1}]} when P i : a i = t i , 0 ≤ ⋯ ≤ t i , k i = b i {\displaystyle P_{i}:a_{i}=t_{i,0}\leq \cdots \leq t_{i,k_{i}}=b_{i}} is a partition of [ a i , b i ] {\displaystyle [a_{i},b_{i}]} . The volume vol ⁡ ( Q ) {\displaystyle \operatorname {vol} (Q)} of Q {\displaystyle Q} is the usual Euclidean volume; i.e., vol ⁡ ( Q ) = ∏ 1 n ( t i , j i + 1 − t i , j i ) {\displaystyle \operatorname {vol} (Q)=\prod _{1}^{n}(t_{i,j_{i}+1}-t_{i,j_{i}})} . The lower Riemann sum L ( f , P ) {\displaystyle L(f,P)} of f {\displaystyle f} is then defined by replacing sup {\displaystyle \sup } by inf {\displaystyle \inf } . Finally, the function f {\displaystyle f} is called integrable if it is bounded and sup { L ( f , P ) ∣ P } = inf { U ( f , P ) ∣ P } {\displaystyle \sup\{L(f,P)\mid P\}=\inf\{U(f,P)\mid P\}} . In that case, the common value is denoted as ∫ D f d x {\displaystyle \int _{D}f\,dx} . A subset of R n {\displaystyle \mathbb {R} ^{n}} is said to have measure zero if for each ϵ > 0 {\displaystyle \epsilon >0} , there are some, possibly infinitely many, rectangles D 1 , D 2 , … , {\displaystyle D_{1},D_{2},\dots ,} whose union contains the set and ∑ i vol ⁡ ( D i ) < ϵ . {\displaystyle \sum _{i}\operatorname {vol} (D_{i})<\epsilon .} A key theorem is Lebesgue's criterion for Riemann integrability: a bounded function f {\displaystyle f} on a closed rectangle D {\displaystyle D} is integrable if and only if the set of its discontinuities has measure zero. The next theorem, Fubini's theorem for Riemann integrals, allows us to compute the integral of an integrable function on a rectangle as the iteration of integrals in one variable at a time. In particular, the order of integrations can be changed.
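Both definitions can be checked on a concrete rectangle. The sketch below (with an arbitrary sample integrand) approximates ∫∫_{[0,1]²} xy² dx dy = 1/6 once by a double Riemann sum over a uniform partition and once as an iterated one-variable integral, in the spirit of Fubini's theorem:

```python
# Riemann-sum vs. iterated (Fubini) evaluation of the double integral of x*y^2
# over [0,1]^2; the exact value is 1/6.
n = 200
h = 1.0 / n

# double Riemann sum over an n x n partition, sampling at cell midpoints
double_sum = sum((i + 0.5) * h * ((j + 0.5) * h) ** 2 * h * h
                 for i in range(n) for j in range(n))

# iterated evaluation: integrate in x first, then in y (the order can be swapped)
def inner(y):
    return sum((i + 0.5) * h * y**2 * h for i in range(n))   # one-variable integral in x

iterated = sum(inner((j + 0.5) * h) * h for j in range(n))

print(double_sum, iterated, 1 / 6)   # all three agree to a few decimal places
```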
Finally, if M ⊂ R n {\displaystyle M\subset \mathbb {R} ^{n}} is a bounded open subset and f {\displaystyle f} a function on M {\displaystyle M} , then we define ∫ M f d x := ∫ D χ M f d x {\displaystyle \int _{M}f\,dx:=\int _{D}\chi _{M}f\,dx} where D {\displaystyle D} is a closed rectangle containing M {\displaystyle M} and χ M {\displaystyle \chi _{M}} is the characteristic function on M {\displaystyle M} ; i.e., χ M ( x ) = 1 {\displaystyle \chi _{M}(x)=1} if x ∈ M {\displaystyle x\in M} and = 0 {\displaystyle =0} if x ∉ M , {\displaystyle x\not \in M,} provided χ M f {\displaystyle \chi _{M}f} is integrable. === Surface integral === If a bounded surface M {\displaystyle M} in R 3 {\displaystyle \mathbb {R} ^{3}} is parametrized by r = r ( u , v ) {\displaystyle {\textbf {r}}={\textbf {r}}(u,v)} with domain D {\displaystyle D} , then the surface integral of a measurable function F {\displaystyle F} on M {\displaystyle M} is defined and denoted as: ∫ M F d S := ∫ ∫ D ( F ∘ r ) | r u × r v | d u d v {\displaystyle \int _{M}F\,dS:=\int \int _{D}(F\circ {\textbf {r}})|{\textbf {r}}_{u}\times {\textbf {r}}_{v}|\,dudv} If F : M → R 3 {\displaystyle F:M\to \mathbb {R} ^{3}} is vector-valued, then we define ∫ M F ⋅ d S := ∫ M ( F ⋅ n ) d S {\displaystyle \int _{M}F\cdot dS:=\int _{M}(F\cdot {\textbf {n}})\,dS} where n {\displaystyle {\textbf {n}}} is an outward unit normal vector to M {\displaystyle M} . Since n = r u × r v | r u × r v | {\displaystyle {\textbf {n}}={\frac {{\textbf {r}}_{u}\times {\textbf {r}}_{v}}{|{\textbf {r}}_{u}\times {\textbf {r}}_{v}|}}} , we have: ∫ M F ⋅ d S = ∫ ∫ D ( F ∘ r ) ⋅ ( r u × r v ) d u d v = ∫ ∫ D det ( F ∘ r , r u , r v ) d u d v . {\displaystyle \int _{M}F\cdot dS=\int \int _{D}(F\circ {\textbf {r}})\cdot ({\textbf {r}}_{u}\times {\textbf {r}}_{v})\,dudv=\int \int _{D}\det(F\circ {\textbf {r}},{\textbf {r}}_{u},{\textbf {r}}_{v})\,dudv.} == Vector analysis == === Tangent vectors and vector fields === Let c : [ 0 , 1 ] → R n {\displaystyle c:[0,1]\to \mathbb {R} ^{n}} be a differentiable curve. Then the tangent vector to the curve c {\displaystyle c} at t {\displaystyle t} is a vector v {\displaystyle v} at the point c ( t ) {\displaystyle c(t)} whose components are given as: v = ( c 1 ′ ( t ) , … , c n ′ ( t ) ) {\displaystyle v=(c_{1}'(t),\dots ,c_{n}'(t))} . For example, if c ( t ) = ( a cos ⁡ ( t ) , a sin ⁡ ( t ) , b t ) , a > 0 , b > 0 {\displaystyle c(t)=(a\cos(t),a\sin(t),bt),a>0,b>0} is a helix, then the tangent vector at t is: c ′ ( t ) = ( − a sin ⁡ ( t ) , a cos ⁡ ( t ) , b ) . {\displaystyle c'(t)=(-a\sin(t),a\cos(t),b).} It corresponds to the intuition that a point on the helix moves up at a constant speed. If M ⊂ R n {\displaystyle M\subset \mathbb {R} ^{n}} is a differentiable curve or surface, then the tangent space to M {\displaystyle M} at a point p is the set of all tangent vectors to the differentiable curves c : [ 0 , 1 ] → M {\displaystyle c:[0,1]\to M} with c ( 0 ) = p {\displaystyle c(0)=p} . A vector field X is an assignment to each point p in M a tangent vector X p {\displaystyle X_{p}} to M at p such that the assignment varies smoothly. === Differential forms === The dual notion of a vector field is a differential form.
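A brief numeric check of the surface-integral formula above: for the vector field F(x, y, z) = (x, y, z) and the standard spherical parametrization of the unit sphere, det(F∘r, r_u, r_v) reduces to sin u, so the flux should approximate 4π. The grid resolution below is an arbitrary choice:

```python
# Flux of F(x,y,z) = (x,y,z) through the unit sphere via the parametrization formula
# ∫∫_D (F∘r)·(r_u x r_v) du dv; the exact value is 4*pi.
import numpy as np

n = 400
u = (np.arange(n) + 0.5) * np.pi / n          # u in (0, pi), midpoint grid
v = (np.arange(n) + 0.5) * 2 * np.pi / n      # v in (0, 2*pi)
U, V = np.meshgrid(u, v, indexing="ij")

r = np.stack([np.sin(U) * np.cos(V), np.sin(U) * np.sin(V), np.cos(U)])
ru = np.stack([np.cos(U) * np.cos(V), np.cos(U) * np.sin(V), -np.sin(U)])           # dr/du
rv = np.stack([-np.sin(U) * np.sin(V), np.sin(U) * np.cos(V), np.zeros_like(U)])    # dr/dv

# F∘r = r here, so the integrand is r . (r_u x r_v)
integrand = np.einsum("iab,iab->ab", r, np.cross(ru, rv, axis=0))
flux = integrand.sum() * (np.pi / n) * (2 * np.pi / n)
print(flux, 4 * np.pi)                        # ~12.566 in both cases
```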
Given an open subset M {\displaystyle M} in R n {\displaystyle \mathbb {R} ^{n}} , by definition, a differential 1-form (often just 1-form) ω {\displaystyle \omega } is an assignment to a point p {\displaystyle p} in M {\displaystyle M} a linear functional ω p {\displaystyle \omega _{p}} on the tangent space T p M {\displaystyle T_{p}M} to M {\displaystyle M} at p {\displaystyle p} such that the assignment varies smoothly. For a (real or complex-valued) smooth function f {\displaystyle f} , define the 1-form d f {\displaystyle df} by: for a tangent vector v {\displaystyle v} at p {\displaystyle p} , d f p ( v ) = v ( f ) {\displaystyle df_{p}(v)=v(f)} where v ( f ) {\displaystyle v(f)} denotes the directional derivative of f {\displaystyle f} in the direction v {\displaystyle v} at p {\displaystyle p} . For example, if x i {\displaystyle x_{i}} is the i {\displaystyle i} -th coordinate function, then d x i , p ( v ) = v i {\displaystyle dx_{i,p}(v)=v_{i}} ; i.e., d x i , p {\displaystyle dx_{i,p}} are the dual basis to the standard basis on T p M {\displaystyle T_{p}M} . Then every differential 1-form ω {\displaystyle \omega } can be written uniquely as ω = f 1 d x 1 + ⋯ + f n d x n {\displaystyle \omega =f_{1}\,dx_{1}+\cdots +f_{n}\,dx_{n}} for some smooth functions f 1 , … , f n {\displaystyle f_{1},\dots ,f_{n}} on M {\displaystyle M} (since, for every point p {\displaystyle p} , the linear functional ω p {\displaystyle \omega _{p}} is a unique linear combination of d x i {\displaystyle dx_{i}} over real numbers). More generally, a differential k-form is an assignment to a point p {\displaystyle p} in M {\displaystyle M} a vector ω p {\displaystyle \omega _{p}} in the k {\displaystyle k} -th exterior power ⋀ k T p ∗ M {\displaystyle \bigwedge ^{k}T_{p}^{*}M} of the dual space T p ∗ M {\displaystyle T_{p}^{*}M} of T p M {\displaystyle T_{p}M} such that the assignment varies smoothly. In particular, a 0-form is the same as a smooth function. Also, any k {\displaystyle k} -form ω {\displaystyle \omega } can be written uniquely as: ω = ∑ i 1 < ⋯ < i k f i 1 … i k d x i 1 ∧ ⋯ ∧ d x i k {\displaystyle \omega =\sum _{i_{1}<\cdots <i_{k}}f_{i_{1}\dots i_{k}}\,dx_{i_{1}}\wedge \cdots \wedge dx_{i_{k}}} for some smooth functions f i 1 … i k {\displaystyle f_{i_{1}\dots i_{k}}} . Like a smooth function, we can differentiate and integrate differential forms. If f {\displaystyle f} is a smooth function, then d f {\displaystyle df} can be written as: d f = ∑ i = 1 n ∂ f ∂ x i d x i {\displaystyle df=\sum _{i=1}^{n}{\frac {\partial f}{\partial x_{i}}}\,dx_{i}} since, for v = ∂ / ∂ x j | p {\displaystyle v=\partial /\partial x_{j}|_{p}} , we have: d f p ( v ) = ∂ f ∂ x j ( p ) = ∑ i = 1 n ∂ f ∂ x i ( p ) d x i ( v ) {\displaystyle df_{p}(v)={\frac {\partial f}{\partial x_{j}}}(p)=\sum _{i=1}^{n}{\frac {\partial f}{\partial x_{i}}}(p)\,dx_{i}(v)} . Note that, in the above expression, the left-hand side (whence the right-hand side) is independent of coordinates x 1 , … , x n {\displaystyle x_{1},\dots ,x_{n}} ; this property is called the invariance of differential. The operation d {\displaystyle d} is called the exterior derivative and it extends to any differential forms inductively by the requirement (Leibniz rule) d ( α ∧ β ) = d α ∧ β + ( − 1 ) p α ∧ d β . {\displaystyle d(\alpha \wedge \beta )=d\alpha \wedge \beta +(-1)^{p}\alpha \wedge d\beta .} where α , β {\displaystyle \alpha ,\beta } are a p-form and a q-form. 
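In coordinates on R³, the exterior derivative of a 1-form ω = f₁ dx + f₂ dy + f₃ dz can be computed directly from this rule. The following symbolic sketch (with a hypothetical 0-form f) computes the coefficients of dω and confirms that d(df) vanishes, anticipating the property stated next:

```python
# Symbolic exterior derivative in R^3: for omega = f1 dx + f2 dy + f3 dz,
# d(omega) has curl-like coefficients, and d(df) = 0 by equality of mixed partials.
import sympy as sp

x, y, z = sp.symbols("x y z")

def d_of_1form(f1, f2, f3):
    # coefficients of d(omega) on dy^dz, dz^dx, dx^dy respectively
    return (sp.diff(f3, y) - sp.diff(f2, z),
            sp.diff(f1, z) - sp.diff(f3, x),
            sp.diff(f2, x) - sp.diff(f1, y))

f = x**2 * sp.sin(y) + z * y                          # hypothetical smooth function
df = (sp.diff(f, x), sp.diff(f, y), sp.diff(f, z))    # coefficients of the 1-form df
print(d_of_1form(*df))                                # (0, 0, 0): d(df) = 0
```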
The exterior derivative has the important property that d ∘ d = 0 {\displaystyle d\circ d=0} ; that is, for any differential form ω {\displaystyle \omega } , the exterior derivative of d ω {\displaystyle d\omega } is zero. This property is a consequence of the symmetry of second derivatives (mixed partials are equal). === Boundary and orientation === A circle can be oriented clockwise or counterclockwise. Mathematically, we say that a subset M {\displaystyle M} of R n {\displaystyle \mathbb {R} ^{n}} is oriented if there is a consistent choice of normal vectors to M {\displaystyle M} that varies continuously. For example, a circle or, more generally, an n-sphere can be oriented; i.e., orientable. On the other hand, a Möbius strip (a surface obtained by identifying two opposite sides of a rectangle in a twisted way) cannot be oriented: if we start with a normal vector and travel around the strip, the normal vector at the end will point in the opposite direction. A subset M {\displaystyle M} is orientable precisely when it admits a nowhere-vanishing form of top degree (a volume form). This proposition is useful because it allows us to give an orientation by giving a volume form. === Integration of differential forms === If ω = f d x 1 ∧ ⋯ ∧ d x n {\displaystyle \omega =f\,dx_{1}\wedge \cdots \wedge dx_{n}} is a differential n-form on an open subset M in R n {\displaystyle \mathbb {R} ^{n}} (any n-form is of that form), then the integration of it over M {\displaystyle M} with the standard orientation is defined as: ∫ M ω = ∫ M f d x 1 ⋯ d x n . {\displaystyle \int _{M}\omega =\int _{M}f\,dx_{1}\cdots dx_{n}.} If M is given the orientation opposite to the standard one, then ∫ M ω {\displaystyle \int _{M}\omega } is defined as the negative of the right-hand side. Then we have the fundamental formula relating exterior derivative and integration, Stokes' formula: for a suitably oriented M {\displaystyle M} with boundary ∂ M {\displaystyle \partial M} and an (n−1)-form ω {\displaystyle \omega } , ∫ M d ω = ∫ ∂ M ω {\displaystyle \int _{M}d\omega =\int _{\partial M}\omega } . Here is a sketch of proof of the formula. If f {\displaystyle f} is a smooth function on R n {\displaystyle \mathbb {R} ^{n}} with compact support, then we have: ∫ d ( f ω ) = 0 {\displaystyle \int d(f\omega )=0} (since, by the fundamental theorem of calculus, the above can be evaluated on boundaries of the set containing the support.) On the other hand, ∫ d ( f ω ) = ∫ d f ∧ ω + ∫ f d ω . {\displaystyle \int d(f\omega )=\int df\wedge \omega +\int f\,d\omega .} Let f {\displaystyle f} approach the characteristic function on M {\displaystyle M} . Then the second term on the right goes to ∫ M d ω {\displaystyle \int _{M}d\omega } while the first goes to − ∫ ∂ M ω {\displaystyle -\int _{\partial M}\omega } , by an argument similar to the proof of the fundamental theorem of calculus. ◻ {\displaystyle \square } The formula generalizes the fundamental theorem of calculus as well as Stokes' theorem in multivariable calculus. Indeed, if M = [ a , b ] {\displaystyle M=[a,b]} is an interval and ω = f {\displaystyle \omega =f} , then d ω = f ′ d x {\displaystyle d\omega =f'\,dx} and the formula says: ∫ M f ′ d x = f ( b ) − f ( a ) {\displaystyle \int _{M}f'\,dx=f(b)-f(a)} . Similarly, if M {\displaystyle M} is an oriented bounded surface in R 3 {\displaystyle \mathbb {R} ^{3}} and ω = f d x + g d y + h d z {\displaystyle \omega =f\,dx+g\,dy+h\,dz} , then d ( f d x ) = d f ∧ d x = ∂ f ∂ y d y ∧ d x + ∂ f ∂ z d z ∧ d x {\displaystyle d(f\,dx)=df\wedge dx={\frac {\partial f}{\partial y}}\,dy\wedge dx+{\frac {\partial f}{\partial z}}\,dz\wedge dx} and similarly for d ( g d y ) {\displaystyle d(g\,dy)} and d ( h d z ) {\displaystyle d(h\,dz)} . Collecting the terms, we thus get: d ω = ( ∂ h ∂ y − ∂ g ∂ z ) d y ∧ d z + ( ∂ f ∂ z − ∂ h ∂ x ) d z ∧ d x + ( ∂ g ∂ x − ∂ f ∂ y ) d x ∧ d y .
{\displaystyle d\omega =\left({\frac {\partial h}{\partial y}}-{\frac {\partial g}{\partial z}}\right)dy\wedge dz+\left({\frac {\partial f}{\partial z}}-{\frac {\partial h}{\partial x}}\right)dz\wedge dx+\left({\frac {\partial g}{\partial x}}-{\frac {\partial f}{\partial y}}\right)dx\wedge dy.} Then, from the definition of the integration of ω {\displaystyle \omega } , we have ∫ M d ω = ∫ M ( ∇ × F ) ⋅ d S {\displaystyle \int _{M}d\omega =\int _{M}(\nabla \times F)\cdot dS} where F = ( f , g , h ) {\displaystyle F=(f,g,h)} is the vector-valued function and ∇ = ( ∂ ∂ x , ∂ ∂ y , ∂ ∂ z ) {\displaystyle \nabla =\left({\frac {\partial }{\partial x}},{\frac {\partial }{\partial y}},{\frac {\partial }{\partial z}}\right)} . Hence, Stokes’ formula becomes ∫ M ( ∇ × F ) ⋅ d S = ∫ ∂ M ( f d x + g d y + h d z ) , {\displaystyle \int _{M}(\nabla \times F)\cdot dS=\int _{\partial M}(f\,dx+g\,dy+h\,dz),} which is the usual form of the Stokes' theorem on surfaces. Green’s theorem is also a special case of Stokes’ formula. Stokes' formula also yields a general version of Cauchy's integral formula. To state and prove it, for the complex variable z = x + i y {\displaystyle z=x+iy} and the conjugate z ¯ {\displaystyle {\bar {z}}} , let us introduce the operators ∂ ∂ z = 1 2 ( ∂ ∂ x − i ∂ ∂ y ) , ∂ ∂ z ¯ = 1 2 ( ∂ ∂ x + i ∂ ∂ y ) . {\displaystyle {\frac {\partial }{\partial z}}={\frac {1}{2}}\left({\frac {\partial }{\partial x}}-i{\frac {\partial }{\partial y}}\right),\,{\frac {\partial }{\partial {\bar {z}}}}={\frac {1}{2}}\left({\frac {\partial }{\partial x}}+i{\frac {\partial }{\partial y}}\right).} In these notations, a function f {\displaystyle f} is holomorphic (complex-analytic) if and only if ∂ f ∂ z ¯ = 0 {\displaystyle {\frac {\partial f}{\partial {\bar {z}}}}=0} (the Cauchy–Riemann equations). Also, we have: d f = ∂ f ∂ z d z + ∂ f ∂ z ¯ d z ¯ . {\displaystyle df={\frac {\partial f}{\partial z}}dz+{\frac {\partial f}{\partial {\bar {z}}}}d{\bar {z}}.} Let D ϵ = { z ∈ C ∣ ϵ < | z − z 0 | < r } {\displaystyle D_{\epsilon }=\{z\in \mathbb {C} \mid \epsilon <|z-z_{0}|<r\}} be a punctured disk with center z 0 {\displaystyle z_{0}} . Since 1 / ( z − z 0 ) {\displaystyle 1/(z-z_{0})} is holomorphic on D ϵ {\displaystyle D_{\epsilon }} , We have: d ( f z − z 0 d z ) = ∂ f ∂ z ¯ d z ¯ ∧ d z z − z 0 {\displaystyle d\left({\frac {f}{z-z_{0}}}dz\right)={\frac {\partial f}{\partial {\bar {z}}}}{\frac {d{\bar {z}}\wedge dz}{z-z_{0}}}} . By Stokes’ formula, ∫ D ϵ ∂ f ∂ z ¯ d z ¯ ∧ d z z − z 0 = ( ∫ | z − z 0 | = r − ∫ | z − z 0 | = ϵ ) f z − z 0 d z . {\displaystyle \int _{D_{\epsilon }}{\frac {\partial f}{\partial {\bar {z}}}}{\frac {d{\bar {z}}\wedge dz}{z-z_{0}}}=\left(\int _{|z-z_{0}|=r}-\int _{|z-z_{0}|=\epsilon }\right){\frac {f}{z-z_{0}}}dz.} Letting ϵ → 0 {\displaystyle \epsilon \to 0} we then get: 2 π i f ( z 0 ) = ∫ | z − z 0 | = r f z − z 0 d z + ∫ | z − z 0 | ≤ r ∂ f ∂ z ¯ d z ∧ d z ¯ z − z 0 . {\displaystyle 2\pi i\,f(z_{0})=\int _{|z-z_{0}|=r}{\frac {f}{z-z_{0}}}dz+\int _{|z-z_{0}|\leq r}{\frac {\partial f}{\partial {\bar {z}}}}{\frac {dz\wedge d{\bar {z}}}{z-z_{0}}}.} === Winding numbers and Poincaré lemma === A differential form ω {\displaystyle \omega } is called closed if d ω = 0 {\displaystyle d\omega =0} and is called exact if ω = d η {\displaystyle \omega =d\eta } for some differential form η {\displaystyle \eta } (often called a potential). Since d ∘ d = 0 {\displaystyle d\circ d=0} , an exact form is closed. But the converse does not hold in general; there might be a non-exact closed form. 
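Before turning to the classic counterexample below, Green's theorem (the planar case of the formula above) can be verified numerically. In the sketch, the 1-form f dx + g dy with f = x²y and g = x + y³ over the unit disk is a hypothetical example; both sides should approximate 3π/4:

```python
# Green's theorem check on the unit disk: the boundary integral of f dx + g dy
# equals the area integral of (dg/dx - df/dy) = 1 - x^2. Both sides are ~3*pi/4.
import numpy as np

# boundary side: parametrize the unit circle by t and sum f dx + g dy
n = 200000
t = (np.arange(n) + 0.5) * 2 * np.pi / n
x, y = np.cos(t), np.sin(t)
boundary = np.sum((x**2 * y) * (-np.sin(t)) + (x + y**3) * np.cos(t)) * (2 * np.pi / n)

# area side: integrate 1 - x^2 over the disk in polar coordinates (Jacobian factor r)
m = 1000
r = (np.arange(m) + 0.5) / m
th = (np.arange(m) + 0.5) * 2 * np.pi / m
R, TH = np.meshgrid(r, th, indexing="ij")
area = np.sum((1 - (R * np.cos(TH))**2) * R) * (1 / m) * (2 * np.pi / m)

print(boundary, area, 3 * np.pi / 4)
```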
A classic example of such a form is: ω = − y x 2 + y 2 d x + x x 2 + y 2 d y {\displaystyle \omega ={\frac {-y}{x^{2}+y^{2}}}\,dx+{\frac {x}{x^{2}+y^{2}}}\,dy} , which is a differential form on R 2 − 0 {\displaystyle \mathbb {R} ^{2}-0} . Suppose we switch to polar coordinates: x = r cos ⁡ θ , y = r sin ⁡ θ {\displaystyle x=r\cos \theta ,y=r\sin \theta } where r = x 2 + y 2 {\displaystyle r={\sqrt {x^{2}+y^{2}}}} . Then ω = r − 2 ( − r sin ⁡ θ d x + r cos ⁡ θ d y ) = d θ . {\displaystyle \omega =r^{-2}(-r\sin \theta \,dx+r\cos \theta \,dy)=d\theta .} This does not show that ω {\displaystyle \omega } is exact: the trouble is that θ {\displaystyle \theta } is not a well-defined continuous function on R 2 − 0 {\displaystyle \mathbb {R} ^{2}-0} . Since any function f {\displaystyle f} on R 2 − 0 {\displaystyle \mathbb {R} ^{2}-0} with d f = ω {\displaystyle df=\omega } would differ from θ {\displaystyle \theta } locally by a constant, this means that ω {\displaystyle \omega } is not exact. The calculation, however, shows that ω {\displaystyle \omega } is exact, for example, on R 2 − { x = 0 } {\displaystyle \mathbb {R} ^{2}-\{x=0\}} since we can take θ = arctan ⁡ ( y / x ) {\displaystyle \theta =\arctan(y/x)} there. There is a result (Poincaré lemma) that gives a condition that guarantees closed forms are exact. To state it, we need some notions from topology. Given two continuous maps f , g : X → Y {\displaystyle f,g:X\to Y} between subsets of R m , R n {\displaystyle \mathbb {R} ^{m},\mathbb {R} ^{n}} (or more generally topological spaces), a homotopy from f {\displaystyle f} to g {\displaystyle g} is a continuous function H : X × [ 0 , 1 ] → Y {\displaystyle H:X\times [0,1]\to Y} such that f ( x ) = H ( x , 0 ) {\displaystyle f(x)=H(x,0)} and g ( x ) = H ( x , 1 ) {\displaystyle g(x)=H(x,1)} . Intuitively, a homotopy is a continuous variation of one function to another. A loop in a set X {\displaystyle X} is a curve whose starting point coincides with the end point; i.e., c : [ 0 , 1 ] → X {\displaystyle c:[0,1]\to X} such that c ( 0 ) = c ( 1 ) {\displaystyle c(0)=c(1)} . Then a subset of R n {\displaystyle \mathbb {R} ^{n}} is called simply connected if every loop is homotopic to a constant function. A typical example of a simply connected set is a disk D = { ( x , y ) ∣ x 2 + y 2 ≤ r } ⊂ R 2 {\displaystyle D=\{(x,y)\mid {\sqrt {x^{2}+y^{2}}}\leq r\}\subset \mathbb {R} ^{2}} . Indeed, given a loop c : [ 0 , 1 ] → D {\displaystyle c:[0,1]\to D} , we have the homotopy H : [ 0 , 1 ] 2 → D , H ( x , t ) = ( 1 − t ) c ( x ) + t c ( 0 ) {\displaystyle H:[0,1]^{2}\to D,\,H(x,t)=(1-t)c(x)+tc(0)} from c {\displaystyle c} to the constant function c ( 0 ) {\displaystyle c(0)} . A punctured disk, on the other hand, is not simply connected. == Geometry of curves and surfaces == === Moving frame === Vector fields E 1 , … , E 3 {\displaystyle E_{1},\dots ,E_{3}} on R 3 {\displaystyle \mathbb {R} ^{3}} are called a frame field if they are orthonormal at each point; i.e., E i ⋅ E j = δ i j {\displaystyle E_{i}\cdot E_{j}=\delta _{ij}} at each point. The basic example is the standard frame U i {\displaystyle U_{i}} ; i.e., U i ( x ) {\displaystyle U_{i}(x)} is a standard basis for each point x {\displaystyle x} in R 3 {\displaystyle \mathbb {R} ^{3}} . Another example is the cylindrical frame E 1 = cos ⁡ θ U 1 + sin ⁡ θ U 2 , E 2 = − sin ⁡ θ U 1 + cos ⁡ θ U 2 , E 3 = U 3 .
{\displaystyle E_{1}=\cos \theta U_{1}+\sin \theta U_{2},\,E_{2}=-\sin \theta U_{1}+\cos \theta U_{2},\,E_{3}=U_{3}.} For the study of the geometry of a curve, the important frame to use is a Frenet frame T , N , B {\displaystyle T,N,B} on a unit-speed curve β : I → R 3 {\displaystyle \beta :I\to \mathbb {R} ^{3}} given as: === The Gauss–Bonnet theorem === The Gauss–Bonnet theorem relates the topology of a surface and its geometry. == Calculus of variations == === Method of Lagrange multiplier === The set g − 1 ( 0 ) {\displaystyle g^{-1}(0)} is usually called a constraint. Example: Suppose we want to find the minimum distance between the circle x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} and the line x + y = 4 {\displaystyle x+y=4} . That means that we want to minimize the function f ( x , y , u , v ) = ( x − u ) 2 + ( y − v ) 2 {\displaystyle f(x,y,u,v)=(x-u)^{2}+(y-v)^{2}} , the square distance between a point ( x , y ) {\displaystyle (x,y)} on the circle and a point ( u , v ) {\displaystyle (u,v)} on the line, under the constraint g = ( x 2 + y 2 − 1 , u + v − 4 ) {\displaystyle g=(x^{2}+y^{2}-1,u+v-4)} . We have: ∇ f = ( 2 ( x − u ) , 2 ( y − v ) , − 2 ( x − u ) , − 2 ( y − v ) ) . {\displaystyle \nabla f=(2(x-u),2(y-v),-2(x-u),-2(y-v)).} ∇ g 1 = ( 2 x , 2 y , 0 , 0 ) , ∇ g 2 = ( 0 , 0 , 1 , 1 ) . {\displaystyle \nabla g_{1}=(2x,2y,0,0),\nabla g_{2}=(0,0,1,1).} Since the Jacobian matrix of g {\displaystyle g} has rank 2 everywhere on g − 1 ( 0 ) {\displaystyle g^{-1}(0)} , the Lagrange multiplier gives: x − u = λ 1 x , y − v = λ 1 y , 2 ( x − u ) = − λ 2 , 2 ( y − v ) = − λ 2 . {\displaystyle x-u=\lambda _{1}x,\,y-v=\lambda _{1}y,\,2(x-u)=-\lambda _{2},\,2(y-v)=-\lambda _{2}.} If λ 1 = 0 {\displaystyle \lambda _{1}=0} , then x = u , y = v {\displaystyle x=u,y=v} , not possible. Thus, λ 1 ≠ 0 {\displaystyle \lambda _{1}\neq 0} and x = x − u λ 1 , y = y − v λ 1 . {\displaystyle x={\frac {x-u}{\lambda _{1}}},\,y={\frac {y-v}{\lambda _{1}}}.} From this, it easily follows that x = y = 1 / 2 {\displaystyle x=y=1/{\sqrt {2}}} and u = v = 2 {\displaystyle u=v=2} . Hence, the minimum distance is 2 2 − 1 {\displaystyle 2{\sqrt {2}}-1} (as a minimum distance clearly exists). Here is an application to linear algebra. Let V {\displaystyle V} be a finite-dimensional real vector space and T : V → V {\displaystyle T:V\to V} a self-adjoint operator. We shall show V {\displaystyle V} has a basis consisting of eigenvectors of T {\displaystyle T} (i.e., T {\displaystyle T} is diagonalizable) by induction on the dimension of V {\displaystyle V} . Choosing a basis on V {\displaystyle V} we can identify V = R n {\displaystyle V=\mathbb {R} ^{n}} and T {\displaystyle T} is represented by the matrix [ a i j ] {\displaystyle [a_{ij}]} . Consider the function f ( x ) = ( T x , x ) {\displaystyle f(x)=(Tx,x)} , where the bracket means the inner product. Then ∇ f = 2 ( ∑ a 1 i x i , … , ∑ a n i x i ) {\displaystyle \nabla f=2(\sum a_{1i}x_{i},\dots ,\sum a_{ni}x_{i})} . On the other hand, for g = ∑ x i 2 − 1 {\displaystyle g=\sum x_{i}^{2}-1} , since g − 1 ( 0 ) {\displaystyle g^{-1}(0)} is compact, f {\displaystyle f} attains a maximum or minimum at a point u {\displaystyle u} in g − 1 ( 0 ) {\displaystyle g^{-1}(0)} . Since ∇ g = 2 ( x 1 , … , x n ) {\displaystyle \nabla g=2(x_{1},\dots ,x_{n})} , by Lagrange multiplier, we find a real number λ {\displaystyle \lambda } such that 2 ∑ i a j i u i = 2 λ u j , 1 ≤ j ≤ n . 
{\displaystyle 2\sum _{i}a_{ji}u_{i}=2\lambda u_{j},1\leq j\leq n.} But that means T u = λ u {\displaystyle Tu=\lambda u} . By inductive hypothesis, the self-adjoint operator T : W → W {\displaystyle T:W\to W} , W {\displaystyle W} the orthogonal complement to u {\displaystyle u} , has a basis consisting of eigenvectors. Hence, we are done. ◻ {\displaystyle \square } === Weak derivatives === Up to measure-zero sets, two functions can be determined to be equal or not by means of integration against other functions (called test functions). Namely, the following, sometimes called the fundamental lemma of calculus of variations, holds: if a locally integrable function f {\displaystyle f} satisfies ∫ f φ d x = 0 {\displaystyle \int f\varphi \,dx=0} for every test function φ {\displaystyle \varphi } with compact support, then f = 0 {\displaystyle f=0} up to a measure-zero set. Given a continuous function f {\displaystyle f} , by the lemma, a continuously differentiable function u {\displaystyle u} is such that ∂ u ∂ x i = f {\displaystyle {\frac {\partial u}{\partial x_{i}}}=f} if and only if ∫ ∂ u ∂ x i φ d x = ∫ f φ d x {\displaystyle \int {\frac {\partial u}{\partial x_{i}}}\varphi \,dx=\int f\varphi \,dx} for every φ ∈ C c ∞ ( M ) {\displaystyle \varphi \in C_{c}^{\infty }(M)} . But, by integration by parts, the partial derivative on the left-hand side of u {\displaystyle u} can be moved to that of φ {\displaystyle \varphi } ; i.e., − ∫ u ∂ φ ∂ x i d x = ∫ f φ d x {\displaystyle -\int u{\frac {\partial \varphi }{\partial x_{i}}}\,dx=\int f\varphi \,dx} where there is no boundary term since φ {\displaystyle \varphi } has compact support. Now the key point is that this expression makes sense even if u {\displaystyle u} is not necessarily differentiable and thus can be used to give sense to a derivative of such a function. Note each locally integrable function u {\displaystyle u} defines the linear functional φ ↦ ∫ u φ d x {\displaystyle \varphi \mapsto \int u\varphi \,dx} on C c ∞ ( M ) {\displaystyle C_{c}^{\infty }(M)} and, moreover, each locally integrable function can be identified with such a linear functional, by the lemma above. Hence, quite generally, if u {\displaystyle u} is a linear functional on C c ∞ ( M ) {\displaystyle C_{c}^{\infty }(M)} , then we define ∂ u ∂ x i {\displaystyle {\frac {\partial u}{\partial x_{i}}}} to be the linear functional φ ↦ − ⟨ u , ∂ φ ∂ x i ⟩ {\displaystyle \varphi \mapsto -\left\langle u,{\frac {\partial \varphi }{\partial x_{i}}}\right\rangle } where the bracket means ⟨ α , φ ⟩ = α ( φ ) {\displaystyle \langle \alpha ,\varphi \rangle =\alpha (\varphi )} . It is then called the weak derivative of u {\displaystyle u} with respect to x i {\displaystyle x_{i}} . If u {\displaystyle u} is continuously differentiable, then the weak derivative of it coincides with the usual one; i.e., the linear functional ∂ u ∂ x i {\displaystyle {\frac {\partial u}{\partial x_{i}}}} is the same as the linear functional determined by the usual partial derivative of u {\displaystyle u} with respect to x i {\displaystyle x_{i}} . A usual derivative is often then called a classical derivative. When a linear functional on C c ∞ ( M ) {\displaystyle C_{c}^{\infty }(M)} is continuous with respect to a certain topology on C c ∞ ( M ) {\displaystyle C_{c}^{\infty }(M)} , such a linear functional is called a distribution, an example of a generalized function. A classic example of a weak derivative is that of the Heaviside function H {\displaystyle H} , the characteristic function on the interval ( 0 , ∞ ) {\displaystyle (0,\infty )} . For every test function φ {\displaystyle \varphi } , we have: ⟨ H ′ , φ ⟩ = − ∫ 0 ∞ φ ′ d x = φ ( 0 ) .
{\displaystyle \langle H',\varphi \rangle =-\int _{0}^{\infty }\varphi '\,dx=\varphi (0).} Let δ a {\displaystyle \delta _{a}} denote the linear functional φ ↦ φ ( a ) {\displaystyle \varphi \mapsto \varphi (a)} , called the Dirac delta function (although not exactly a function). Then the above can be written as: H ′ = δ 0 . {\displaystyle H'=\delta _{0}.} Cauchy's integral formula has a similar interpretation in terms of weak derivatives. For the complex variable z = x + i y {\displaystyle z=x+iy} , let E z 0 ( z ) = 1 π ( z − z 0 ) {\displaystyle E_{z_{0}}(z)={\frac {1}{\pi (z-z_{0})}}} . For a test function φ {\displaystyle \varphi } , if the disk | z − z 0 | ≤ r {\displaystyle |z-z_{0}|\leq r} contains the support of φ {\displaystyle \varphi } , by Cauchy's integral formula, we have: φ ( z 0 ) = 1 2 π i ∫ ∂ φ ∂ z ¯ d z ∧ d z ¯ z − z 0 . {\displaystyle \varphi (z_{0})={1 \over 2\pi i}\int {\frac {\partial \varphi }{\partial {\bar {z}}}}{\frac {dz\wedge d{\bar {z}}}{z-z_{0}}}.} Since d z ∧ d z ¯ = − 2 i d x ∧ d y {\displaystyle dz\wedge d{\bar {z}}=-2idx\wedge dy} , this means: φ ( z 0 ) = − ∫ E z 0 ∂ φ ∂ z ¯ d x d y = ⟨ ∂ E z 0 ∂ z ¯ , φ ⟩ , {\displaystyle \varphi (z_{0})=-\int E_{z_{0}}{\frac {\partial \varphi }{\partial {\bar {z}}}}dxdy=\left\langle {\frac {\partial E_{z_{0}}}{\partial {\bar {z}}}},\varphi \right\rangle ,} or ∂ E z 0 ∂ z ¯ = δ z 0 . {\displaystyle {\frac {\partial E_{z_{0}}}{\partial {\bar {z}}}}=\delta _{z_{0}}.} In general, a generalized function is called a fundamental solution for a linear partial differential operator if the application of the operator to it is the Dirac delta. Hence, the above says E z 0 {\displaystyle E_{z_{0}}} is the fundamental solution for the differential operator ∂ / ∂ z ¯ {\displaystyle \partial /\partial {\bar {z}}} . === Hamilton–Jacobi theory === == Calculus on manifolds == === Definition of a manifold === This section requires some background in general topology. A manifold is a Hausdorff topological space that is locally modeled by an Euclidean space. By definition, an atlas of a topological space M {\displaystyle M} is a set of maps φ i : U i → R n {\displaystyle \varphi _{i}:U_{i}\to \mathbb {R} ^{n}} , called charts, such that U i {\displaystyle U_{i}} are an open cover of M {\displaystyle M} ; i.e., each U i {\displaystyle U_{i}} is open and M = ∪ i U i {\displaystyle M=\cup _{i}U_{i}} , φ i : U i → φ i ( U i ) {\displaystyle \varphi _{i}:U_{i}\to \varphi _{i}(U_{i})} is a homeomorphism and φ j ∘ φ i − 1 : φ i ( U i ∩ U j ) → φ j ( U i ∩ U j ) {\displaystyle \varphi _{j}\circ \varphi _{i}^{-1}:\varphi _{i}(U_{i}\cap U_{j})\to \varphi _{j}(U_{i}\cap U_{j})} is smooth; thus a diffeomorphism. By definition, a manifold is a second-countable Hausdorff topological space with a maximal atlas (called a differentiable structure); "maximal" means that it is not contained in strictly larger atlas. The dimension of the manifold M {\displaystyle M} is the dimension of the model Euclidean space R n {\displaystyle \mathbb {R} ^{n}} ; namely, n {\displaystyle n} and a manifold is called an n-manifold when it has dimension n. A function on a manifold M {\displaystyle M} is said to be smooth if f | U ∘ φ − 1 {\displaystyle f|_{U}\circ \varphi ^{-1}} is smooth on φ ( U ) {\displaystyle \varphi (U)} for each chart φ : U → R n {\displaystyle \varphi :U\to \mathbb {R} ^{n}} in the differentiable structure. A manifold is paracompact; this has an implication that it admits a partition of unity subordinate to a given open cover. 
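The chart and transition-map conditions can be made concrete on the simplest example: the circle S¹ covered by two stereographic charts. In the symbolic sketch below, the chart formulas are the standard stereographic projections from the north pole (0, 1) and south pole (0, −1), written out as an assumption of this illustration; the transition map simplifies to t ↦ 1/t, which is smooth on the overlap:

```python
# Two stereographic charts on the circle S^1 and their transition map.
# phi_N(x, y) = x / (1 - y) projects from the north pole; phi_S(x, y) = x / (1 + y)
# projects from the south pole.
import sympy as sp

t = sp.symbols("t", positive=True)

# inverse of the north chart: the point of S^1 with phi_N-coordinate t
x = 2 * t / (1 + t**2)
y = (t**2 - 1) / (1 + t**2)
assert sp.simplify(x**2 + y**2 - 1) == 0      # the point lies on the circle

phi_S = x / (1 + y)                           # south chart applied to that point
print(sp.simplify(phi_S))                     # 1/t : the transition map, smooth for t != 0
```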
If R n {\displaystyle \mathbb {R} ^{n}} is replaced by the upper half-space H n {\displaystyle \mathbb {H} ^{n}} , then we get the notion of a manifold-with-boundary. The set of points that map to the boundary of H n {\displaystyle \mathbb {H} ^{n}} under charts is denoted by ∂ M {\displaystyle \partial M} and is called the boundary of M {\displaystyle M} . This boundary may not be the topological boundary of M {\displaystyle M} . Since the interior of H n {\displaystyle \mathbb {H} ^{n}} is diffeomorphic to R n {\displaystyle \mathbb {R} ^{n}} , a manifold is a manifold-with-boundary with empty boundary. The next theorem (a special case of the regular value theorem) furnishes many examples of manifolds: if g : R n + 1 → R {\displaystyle g:\mathbb {R} ^{n+1}\to \mathbb {R} } is a smooth function whose derivative has rank one at every point of g − 1 ( 0 ) {\displaystyle g^{-1}(0)} , then g − 1 ( 0 ) {\displaystyle g^{-1}(0)} is an n-manifold. For example, for g ( x ) = x 1 2 + ⋯ + x n + 1 2 − 1 {\displaystyle g(x)=x_{1}^{2}+\cdots +x_{n+1}^{2}-1} , the derivative g ′ ( x ) = [ 2 x 1 2 x 2 ⋯ 2 x n + 1 ] {\displaystyle g'(x)={\begin{bmatrix}2x_{1}&2x_{2}&\cdots &2x_{n+1}\end{bmatrix}}} has rank one at every point p {\displaystyle p} in g − 1 ( 0 ) {\displaystyle g^{-1}(0)} . Hence, the n-sphere g − 1 ( 0 ) {\displaystyle g^{-1}(0)} is an n-manifold. The theorem is proved as a corollary of the inverse function theorem. Many familiar manifolds are subsets of R n {\displaystyle \mathbb {R} ^{n}} . The next theoretically important result, the Whitney embedding theorem, says that there is no other kind of manifold: every n-manifold can be embedded into some Euclidean space R N {\displaystyle \mathbb {R} ^{N}} (in fact, into R 2 n {\displaystyle \mathbb {R} ^{2n}} ). An immersion is a smooth map whose differential is injective. An embedding is an immersion that is a homeomorphism (thus a diffeomorphism) onto its image. The proof that a manifold can be embedded into R N {\displaystyle \mathbb {R} ^{N}} for some N is considerably easier and can be readily given here. It is known that a manifold has a finite atlas { φ i : U i → R n ∣ 1 ≤ i ≤ r } {\displaystyle \{\varphi _{i}:U_{i}\to \mathbb {R} ^{n}\mid 1\leq i\leq r\}} . Let λ i {\displaystyle \lambda _{i}} be smooth functions such that Supp ⁡ ( λ i ) ⊂ U i {\displaystyle \operatorname {Supp} (\lambda _{i})\subset U_{i}} and { λ i = 1 } {\displaystyle \{\lambda _{i}=1\}} cover M {\displaystyle M} (e.g., a partition of unity). Consider the map f = ( λ 1 φ 1 , … , λ r φ r , λ 1 , … , λ r ) : M → R ( n + 1 ) r {\displaystyle f=(\lambda _{1}\varphi _{1},\dots ,\lambda _{r}\varphi _{r},\lambda _{1},\dots ,\lambda _{r}):M\to \mathbb {R} ^{(n+1)r}} It is easy to see that f {\displaystyle f} is an injective immersion. It may not be an embedding. To fix that, we shall use: ( f , g ) : M → R ( n + 1 ) r + 1 {\displaystyle (f,g):M\to \mathbb {R} ^{(n+1)r+1}} where g {\displaystyle g} is a smooth proper map. The existence of a smooth proper map is a consequence of a partition of unity. See [1] for the rest of the proof in the case of an immersion. ◻ {\displaystyle \square } Nash's embedding theorem says that, if M {\displaystyle M} is equipped with a Riemannian metric, then the embedding can be taken to be isometric, at the expense of increasing the target dimension beyond 2 n {\displaystyle 2n} ; for this, see T. Tao's blog. === Tubular neighborhood and transversality === A technically important result is the tubular neighborhood theorem: an embedded submanifold N {\displaystyle N} of a manifold M {\displaystyle M} admits a neighborhood in M {\displaystyle M} diffeomorphic to its normal bundle ν N {\displaystyle \nu _{N}} . This can be proved by putting a Riemannian metric on the manifold M {\displaystyle M} . Indeed, the choice of metric makes the normal bundle ν N {\displaystyle \nu _{N}} a complementary bundle to T N {\displaystyle TN} ; i.e., T M | N {\displaystyle TM|_{N}} is the direct sum of T N {\displaystyle TN} and ν N {\displaystyle \nu _{N}} . 
Then, using the metric, we have the exponential map exp : U → V {\displaystyle \exp :U\to V} for some neighborhood U {\displaystyle U} of N {\displaystyle N} in the normal bundle ν N {\displaystyle \nu _{N}} to some neighborhood V {\displaystyle V} of N {\displaystyle N} in M {\displaystyle M} . The exponential map here may not be injective but it is possible to make it injective (thus diffeomorphic) by shrinking U {\displaystyle U} (for now, see [2]). === Integration on manifolds and distribution densities === The starting point for the topic of integration on manifolds is that there is no invariant way to integrate functions on manifolds. This may become obvious if we ask: what is the integral of a function on a finite-dimensional real vector space? (In contrast, there is an invariant way to do differentiation since, by definition, a manifold comes with a differentiable structure). There are several ways to introduce integration theory to manifolds: Integrate differential forms. Do integration against some measure. Equip a manifold with a Riemannian metric and do integration against such a metric. For example, if a manifold is embedded into a Euclidean space R n {\displaystyle \mathbb {R} ^{n}} , then it acquires the Lebesgue measure by restriction from the ambient Euclidean space, and then the second approach works. The first approach is fine in many situations but it requires the manifold to be oriented (and non-orientable manifolds exist and are not pathological). The third approach generalizes, and it gives rise to the notion of a density. == Generalizations == === Extensions to infinite-dimensional normed spaces === Notions like differentiability extend to normed spaces. == See also == Differential geometry of surfaces Integration along fibers Lusin's theorem Density on a manifold == Notes == == Citations == == References == do Carmo, Manfredo P. (1976), Differential Geometry of Curves and Surfaces, Prentice-Hall, ISBN 978-0-13-212589-5 Edwards, Charles Henry (1994) [1973], Advanced Calculus of Several Variables, Mineola, New York: Dover Publications, ISBN 0-486-68336-2 Folland, Gerald, Real Analysis: Modern Techniques and Their Applications (2nd ed.) Cartan, Henri (1971), Calcul Differentiel (in French), Hermann, ISBN 9780395120330 Hirsch, Morris (1994), Differential Topology (2nd ed.), Springer-Verlag Hörmander, Lars (2015), The Analysis of Linear Partial Differential Operators I: Distribution Theory and Fourier Analysis, Classics in Mathematics (2nd ed.), Springer, ISBN 9783642614972 Loomis, Lynn Harold; Sternberg, Shlomo (1968), Advanced Calculus, Addison-Wesley (revised 1990, Jones and Bartlett; reprinted 2014, World Scientific) [this text in particular discusses density] O'Neill, Barrett (2006), Elementary Differential Geometry (revised 2nd ed.), Amsterdam: Elsevier/Academic Press, ISBN 0-12-088735-5 Rudin, Walter (1976) [1953], Principles of Mathematical Analysis (3rd ed.), New York: McGraw Hill, pp. 204–299, ISBN 978-0-07-054235-8 Spivak, Michael (1965). Calculus on Manifolds: A Modern Approach to Classical Theorems of Advanced Calculus. San Francisco: Benjamin Cummings. ISBN 0-8053-9021-9.
Wikipedia/Calculus_on_Euclidean_space
Actuarial science is the discipline that applies mathematical and statistical methods to assess risk in insurance, pension, finance, investment and other industries and professions. Actuaries are professionals trained in this discipline. In many countries, actuaries must demonstrate their competence by passing a series of rigorous professional examinations focused on fields such as probability and predictive analysis. Actuarial science includes a number of interrelated subjects, including mathematics, probability theory, statistics, finance, economics, financial accounting and computer science. Historically, actuarial science used deterministic models in the construction of tables and premiums. The science has gone through revolutionary changes since the 1980s due to the proliferation of high speed computers and the union of stochastic actuarial models with modern financial theory. Many universities have undergraduate and graduate degree programs in actuarial science. In 2010, a study published by job search website CareerCast ranked actuary as the #1 job in the United States. The study used five key criteria to rank jobs: environment, income, employment outlook, physical demands, and stress. In 2024, U.S. News & World Report ranked actuary as the third-best job in the business sector and the eighth-best job in STEM. == Subfields == === Life insurance, pensions and healthcare === Actuarial science became a formal mathematical discipline in the late 17th century with the increased demand for long-term insurance coverage such as burial, life insurance, and annuities. These long-term coverages required that money be set aside to pay future benefits, such as annuity and death benefits many years into the future. This requires estimating future contingent events, such as the rates of mortality by age, as well as the development of mathematical techniques for discounting the value of funds set aside and invested. This led to the development of an important actuarial concept, referred to as the present value of a future sum. Certain aspects of the actuarial methods for discounting pension funds have come under criticism from modern financial economics. In traditional life insurance, actuarial science focuses on the analysis of mortality, the production of life tables, and the application of compound interest to produce life insurance, annuities and endowment policies. Contemporary life insurance programs have been extended to include credit and mortgage insurance, key person insurance for small businesses, long term care insurance and health savings accounts. In health insurance, including insurance provided directly by employers, and social insurance, actuarial science focuses on the analysis of rates of disability, morbidity, mortality, fertility and other contingencies. The effects of consumer choice and the geographical distribution of the utilization of medical services and procedures, and the utilization of drugs and therapies, are also of great importance. These factors underlay the development of the Resource-Based Relative Value Scale (RBRVS) at Harvard in a multidisciplinary study. Actuarial science also aids in the design of benefit structures and reimbursement standards, and in assessing the effects of proposed government standards on the cost of healthcare. In the pension industry, actuarial methods are used to measure the costs of alternative strategies with regard to the design, funding, accounting, administration, and maintenance or redesign of pension plans. 
The strategies are greatly influenced by short-term and long-term bond rates, the funded status of the pension and benefit arrangements, collective bargaining; the employer's old, new and foreign competitors; the changing demographics of the workforce; changes in the Internal Revenue Code; changes in the attitude of the Internal Revenue Service regarding the calculation of surpluses; and, equally importantly, both short-term and long-term financial and economic trends. It is common with mergers and acquisitions that several pension plans have to be combined or at least administered on an equitable basis. When benefit changes occur, old and new benefit plans have to be blended, satisfying new social demands and various government discrimination test calculations, and providing employees and retirees with understandable choices and transition paths. Benefit plan liabilities have to be properly valued, reflecting both earned benefits for past service and the benefits for future service. Finally, funding schemes have to be developed that are manageable and satisfy the standards board or regulators of the appropriate country, such as the Financial Accounting Standards Board in the United States. In social welfare programs, the Office of the Chief Actuary (OCACT) of the Social Security Administration plans and directs a program of actuarial estimates and analyses relating to SSA-administered retirement, survivors and disability insurance programs and to proposed changes in those programs. It evaluates operations of the Federal Old-Age and Survivors Insurance Trust Fund and the Federal Disability Insurance Trust Fund, conducts studies of program financing, performs actuarial and demographic research on social insurance and related program issues involving mortality, morbidity, utilization, retirement, disability, survivorship, marriage, unemployment, poverty, old age, families with children, etc., and projects future workloads. In addition, the Office is charged with conducting cost analyses relating to the Supplemental Security Income (SSI) program, a general-revenue financed, means-tested program for low-income aged, blind and disabled people. The office provides technical and consultative services to the Commissioner and to the board of trustees of the Social Security Trust Funds, and its staff appears before Congressional Committees to provide expert testimony on the actuarial aspects of Social Security issues. === Applications to other forms of insurance === Actuarial science is also applied to property, casualty, liability, and general insurance. In these forms of insurance, coverage is generally provided for a renewable period (such as a year). Coverage can be cancelled at the end of the period by either party. Property and casualty insurance companies tend to specialize because of the complexity and diversity of risks. One common division is between personal and commercial lines of insurance. Personal lines of insurance are for individuals and include fire, auto, homeowners, theft and umbrella coverages. Commercial lines address the insurance needs of businesses and include property, business continuation, product liability, fleet/commercial vehicle, workers compensation, fidelity and surety, and D&O insurance. The insurance industry also provides coverage for exposures such as catastrophe, weather-related risks, earthquakes, patent infringement and other forms of corporate espionage, terrorism, and "one-of-a-kind" (e.g., satellite launch). 
Actuarial science provides data collection, measurement, estimating, forecasting, and valuation tools to provide financial and underwriting data for management to assess marketing opportunities and the nature of the risks. Actuarial science often helps to assess the overall risk from catastrophic events in relation to an insurer's underwriting capacity or surplus. In the reinsurance fields, actuarial science can be used to design and price reinsurance and retrocession arrangements, and to establish reserve funds for known claims and future claims and catastrophes. === Actuaries in criminal justice === There is an increasing trend to recognize that actuarial skills can be applied to a range of applications outside the traditional fields of insurance, pensions, etc. One notable example is the use in some US states of actuarial models to set criminal sentencing guidelines. These models attempt to predict the chance of re-offending according to rating factors which include the type of crime, age, educational background and ethnicity of the offender. However, these models have been open to criticism as providing justification for discrimination against specific ethnic groups by law enforcement personnel. Whether this is statistically correct or a self-fulfilling correlation remains under debate. Another example is the use of actuarial models to assess the risk of sex offense recidivism. Actuarial models and associated tables, such as the MnSOST-R, Static-99, and SORAG, have been used since the late 1990s to determine the likelihood that a sex offender will re-offend and thus whether he or she should be institutionalized or set free. === Actuarial science related to modern financial economics === Traditional actuarial science and modern financial economics in the US have different practices, caused by different ways of calculating funding and investment strategies and by different regulations. The regulations stem from the Armstrong investigation of 1905, the Glass–Steagall Act of 1932, the adoption of the Mandatory Security Valuation Reserve by the National Association of Insurance Commissioners, which cushioned market fluctuations, and the Financial Accounting Standards Board (FASB) in the US and Canada, which regulates pension valuations and funding. == History == Historically, much of the foundation of actuarial theory predated modern financial theory. In the early twentieth century, actuaries were developing many techniques that can be found in modern financial theory, but for various historical reasons, these developments did not achieve much recognition. As a result, actuarial science developed along a different path, becoming more reliant on assumptions, as opposed to the arbitrage-free risk-neutral valuation concepts used in modern finance. The divergence is not related to the use of historical data and statistical projections of liability cash flows, but is instead caused by the manner in which traditional actuarial methods apply market data with those numbers. For example, one traditional actuarial method suggests that changing the asset allocation mix of investments can change the value of liabilities and assets (by changing the discount rate assumption). This concept is inconsistent with financial economics. The potential of modern financial economics theory to complement existing actuarial science was recognized by actuaries in the mid-twentieth century. 
In the late 1980s and early 1990s, there was a distinct effort among actuaries to combine financial theory and stochastic methods into their established models. Ideas from financial economics became increasingly influential in actuarial thinking, and actuarial science has started to embrace more sophisticated mathematical modelling of finance. Today, the profession, both in practice and in the educational syllabi of many actuarial organizations, is cognizant of the need to reflect the combined approach of tables, loss models, stochastic methods, and financial theory. However, assumption-dependent concepts are still widely used (such as the setting of the discount rate assumption as mentioned earlier), particularly in North America. Product design adds another dimension to the debate. Financial economists argue that pension benefits are bond-like and should not be funded with equity investments without reflecting the risks of not achieving expected returns. But some pension products do reflect the risks of unexpected returns. In some cases the pension beneficiary assumes the risk; in others, the employer does. The current debate now seems to be focusing on four principles: (1) financial models should be free of arbitrage; (2) assets and liabilities with identical cash flows should have the same price (this is at odds with FASB); (3) the value of an asset is independent of its financing; and (4) how pension assets should be invested. Essentially, financial economists state that pension assets should not be invested in equities for a variety of theoretical and practical reasons. === Pre-formalisation === Elementary mutual aid agreements and pensions arose in antiquity. Early in the Roman Empire, associations were formed to meet the expenses of burial, cremation, and monuments—precursors to burial insurance and friendly societies. A small sum was paid into a communal fund on a weekly basis, and upon the death of a member, the fund would cover the expenses of rites and burial. These societies sometimes sold shares in the building of columbāria, or burial vaults, owned by the fund—the precursor to mutual insurance companies. Other early examples of mutual surety and assurance pacts can be traced back to various forms of fellowship within the Saxon clans of England and their Germanic forebears, and to Celtic society. However, many of these earlier forms of surety and aid would often fail due to lack of understanding and knowledge. === Initial development === The 17th century was a period of advances in mathematics in Germany, France and England. At the same time there was a rapidly growing desire and need to place the valuation of personal risk on a more scientific basis. Independently of each other, compound interest was studied and probability theory emerged as a well-understood mathematical discipline. Another important advance came in 1662 from a London draper, the father of demography, John Graunt, who showed that there were predictable patterns of longevity and death in a group, or cohort, of people of the same age, despite the uncertainty of the date of death of any one individual. This study became the basis for the original life table. One could now set up an insurance scheme to provide life insurance or pensions for a group of people, and to calculate with some degree of accuracy how much each person in the group should contribute to a common fund assumed to earn a fixed rate of interest. The first person to demonstrate publicly how this could be done was Edmond Halley (of Halley's comet fame). 
Halley constructed his own life table, and showed how it could be used to calculate the premium amount someone of a given age should pay to purchase a life annuity. === Early actuaries === James Dodson's pioneering work on long-term insurance contracts under which the same premium is charged each year led to the formation of the Society for Equitable Assurances on Lives and Survivorship (now commonly known as Equitable Life) in London in 1762. William Morgan is often considered the father of modern actuarial science for his work in the field in the 1780s and 1790s. Many other life insurance companies and pension funds were created over the following 200 years. Equitable Life was the first to use the word "actuary" for its chief executive officer in 1762. Previously, "actuary" meant an official who recorded the decisions, or "acts", of ecclesiastical courts. Other companies that did not use such mathematical and scientific methods most often failed or were forced to adopt the methods pioneered by Equitable. === Technological advances === In the 18th and 19th centuries, calculations were performed without computers. The computations of life insurance premiums and reserving requirements are rather complex, and actuaries developed techniques to make the calculations as easy as possible, for example "commutation functions" (essentially precalculated columns of summations over time of discounted values of survival and death probabilities). Actuarial organizations were founded to support and further both actuaries and actuarial science, and to protect the public interest by promoting competency and ethical standards. However, calculations remained cumbersome, and actuarial shortcuts were commonplace. Non-life actuaries followed in the footsteps of their life insurance colleagues during the 20th century. The 1920 rate revision for the New York-based National Council on Workmen's Compensation Insurance took over two months of around-the-clock work by day and night teams of actuaries. In the 1930s and 1940s, the mathematical foundations for stochastic processes were developed. Actuaries could now begin to estimate losses using models of random events, instead of the deterministic methods they had used in the past. The introduction and development of the computer further revolutionized the actuarial profession. From pencil-and-paper to punchcards to current high-speed devices, the modeling and forecasting ability of the actuary has rapidly improved, while still being heavily dependent on the assumptions input into the models, and actuaries have needed to adjust to this new world. == See also == == References == === Works cited === === Bibliography === Charles L. Trowbridge (1989). "Fundamental Concepts of Actuarial Science" (PDF). Revised Edition. Actuarial Education and Research Fund. Archived from the original (PDF) on 2006-06-29. Retrieved 2006-06-28. == External links ==
Wikipedia/Actuarial_science
In science, work is the energy transferred to or from an object via the application of force along a displacement. In its simplest form, for a constant force aligned with the direction of motion, the work equals the product of the force strength and the distance traveled. A force is said to do positive work if it has a component in the direction of the displacement of the point of application. A force does negative work if it has a component opposite to the direction of the displacement at the point of application of the force. For example, when a ball is held above the ground and then dropped, the work done by the gravitational force on the ball as it falls is positive, and is equal to the weight of the ball (a force) multiplied by the distance to the ground (a displacement). If the ball is thrown upwards, the work done by the gravitational force is negative, and is equal to the weight multiplied by the displacement in the upwards direction. Both force and displacement are vectors. The work done is given by the dot product of the two vectors, where the result is a scalar. When the force F is constant and the angle θ between the force and the displacement s is also constant, then the work done is given by: W = F s cos ⁡ θ {\displaystyle W=Fs\cos {\theta }} If the force is variable, then work is given by the line integral: W = ∫ F ⋅ d s {\displaystyle W=\int \mathbf {F} \cdot d\mathbf {s} } where d s {\displaystyle d\mathbf {s} } is the infinitesimal displacement vector. Work is a scalar quantity, so it has only magnitude and no direction. Work transfers energy from one place to another, or one form to another. The SI unit of work is the joule (J), the same unit as for energy. == History == The ancient Greek understanding of physics was limited to the statics of simple machines (the balance of forces), and did not include dynamics or the concept of work. During the Renaissance the dynamics of the Mechanical Powers, as the simple machines were called, began to be studied from the standpoint of how far they could lift a load, in addition to the force they could apply, leading eventually to the new concept of mechanical work. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche (On Mechanics), in which he showed the underlying mathematical similarity of the machines as force amplifiers. He was the first to explain that simple machines do not create energy, only transform it. === Early concepts of work === Although the term work was not formally used until 1826, similar concepts existed before then. Early names for the same concept included moment of activity, quantity of action, latent live force, dynamic effect, efficiency, and even force. In 1637, the French philosopher René Descartes wrote: Lifting 100 lb one foot twice over is the same as lifting 200 lb one foot, or 100 lb two feet. In 1686, the German philosopher Gottfried Leibniz wrote: The same force ["work" in modern terms] is necessary to raise body A of 1 pound (libra) to a height of 4 yards (ulnae), as is necessary to raise body B of 4 pounds to a height of 1 yard. In 1759, John Smeaton described a quantity that he called "power" "to signify the exertion of strength, gravitation, impulse, or pressure, as to produce motion." Smeaton continues that this quantity can be calculated if "the weight raised is multiplied by the height to which it can be raised in a given time," making this definition remarkably similar to Coriolis's. 
=== Etymology and modern usage === The term work (or mechanical work), and the use of the work-energy principle in mechanics, was introduced in the late 1820s independently by French mathematician Gaspard-Gustave Coriolis and French Professor of Applied Mechanics Jean-Victor Poncelet. Both scientists were pursuing a view of mechanics suitable for studying the dynamics and power of machines, for example steam engines lifting buckets of water out of flooded ore mines. According to Rene Dugas, French engineer and historian, it is to Solomon of Caux "that we owe the term work in the sense that it is used in mechanics now". The concept of virtual work, and the use of variational methods in mechanics, preceded the introduction of "mechanical work" but was originally called "virtual moment". It was re-named once the terminology of Poncelet and Coriolis was adopted. == Units == The SI unit of work is the joule (J), named after English physicist James Prescott Joule (1818-1889). According to the International Bureau of Weights and Measures it is defined as "the work done when the point of application of 1 MKS unit of force [newton] moves a distance of 1 metre in the direction of the force." The dimensionally equivalent newton-metre (N⋅m) is sometimes used as the measuring unit for work, but this can be confused with the measurement unit of torque. Usage of N⋅m is discouraged by the SI authority, since it can lead to confusion as to whether the quantity expressed in newton-metres is a torque measurement, or a measurement of work. Another unit for work is the foot-pound, which comes from the English system of measurement. As the unit name suggests, it is the product of pounds for the unit of force and feet for the unit of displacement. One joule is approximately equal to 0.7376 ft-lbs. Non-SI units of work include the newton-metre, erg, the foot-pound, the foot-poundal, the kilowatt hour, the litre-atmosphere, and the horsepower-hour. Due to work having the same physical dimension as heat, occasionally measurement units typically reserved for heat or energy content, such as therm, BTU and calorie, are used as a measuring unit. == Work and energy == The work W done by a constant force of magnitude F on a point that moves a displacement s in a straight line in the direction of the force is the product W = F ⋅ s {\displaystyle W=\mathbf {F} \cdot \mathbf {s} } For example, if a force of 10 newtons (F = 10 N) acts along a point that travels 2 metres (s = 2 m), then W = Fs = (10 N) (2 m) = 20 J. This is approximately the work done lifting a 1 kg object from ground level to over a person's head against the force of gravity. The work is doubled either by lifting twice the weight the same distance or by lifting the same weight twice the distance. Work is closely related to energy. Energy shares the same unit of measurement with work (Joules) because the energy from the object doing work is transferred to the other objects it interacts with when work is being done. The work–energy principle states that an increase in the kinetic energy of a rigid body is caused by an equal amount of positive work done on the body by the resultant force acting on that body. Conversely, a decrease in kinetic energy is caused by an equal amount of negative work done by the resultant force. Thus, if the net work is positive, then the particle's kinetic energy increases by the amount of the work. If the net work done is negative, then the particle's kinetic energy decreases by the amount of work. 
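As a numerical illustration of this statement (a minimal sketch; the mass, force, and initial velocity below are arbitrary assumed values), one can integrate the motion of a particle under a constant net force and compare the work F ⋅ s with the change in kinetic energy:

```python
import numpy as np

# A small numerical check of the work-energy principle (a sketch; the
# mass, force, and initial velocity are arbitrary assumed values).

m = 2.0                              # kg, mass of the particle
F = np.array([3.0, 4.0])             # N, constant net force
v = np.array([1.0, -2.0])            # m/s, initial velocity
x = np.array([0.0, 0.0])             # m, initial position

x0 = x.copy()
E0 = 0.5 * m * (v @ v)               # initial kinetic energy
dt = 1e-4
for _ in range(10_000):              # integrate 1 s of motion
    v = v + (F / m) * dt             # semi-implicit Euler step
    x = x + v * dt

W = F @ (x - x0)                     # work of a constant force: W = F · s
dEk = 0.5 * m * (v @ v) - E0         # change in kinetic energy
print(W, dEk)                        # agree to ~1e-3 (integrator error)
```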
From Newton's second law, it can be shown that work on a free (no fields), rigid (no internal degrees of freedom) body is equal to the change in kinetic energy Ek corresponding to the linear velocity and angular velocity of that body, W = Δ E k . {\displaystyle W=\Delta E_{\text{k}}.} The work of forces generated by a potential function is known as potential energy and the forces are said to be conservative. Therefore, work on an object that is merely displaced in a conservative force field, without change in velocity or rotation, is equal to minus the change of potential energy Ep of the object, W = − Δ E p . {\displaystyle W=-\Delta E_{\text{p}}.} These formulas show that work is the energy associated with the action of a force, so work subsequently possesses the physical dimensions, and units, of energy. The work/energy principles discussed here are identical to electric work/energy principles. == Constraint forces == Constraint forces determine the object's displacement in the system, limiting it within a range. For example, in the case of a slope plus gravity, the object is stuck to the slope and, when attached to a taut string, it cannot move in an outwards direction to make the string any 'tauter'. The constraint eliminates all displacements in that direction; that is, the velocity in the direction of the constraint is limited to 0, so that the constraint forces do not perform work on the system. For a mechanical system, constraint forces eliminate movement in directions that characterize the constraint. Thus the virtual work done by the forces of constraint is zero, a result which is only true if friction forces are excluded. Fixed, frictionless constraint forces do not perform work on the system, as the angle between the motion and the constraint forces is always 90°. Examples of workless constraints are: rigid interconnections between particles, sliding motion on a frictionless surface, and rolling contact without slipping. For example, in a pulley system like the Atwood machine, the internal forces on the rope and at the supporting pulley do no work on the system. Therefore, work need only be computed for the gravitational forces acting on the bodies. Another example is the centripetal force exerted inwards by a string on a ball in uniform circular motion: it constrains the ball to its circular path, restricting its movement away from the centre of the circle. This force does zero work because it is perpendicular to the velocity of the ball. The magnetic force on a charged particle is F = qv × B, where q is the charge, v is the velocity of the particle, and B is the magnetic field. The result of a cross product is always perpendicular to both of the original vectors, so F ⊥ v. The dot product of two perpendicular vectors is always zero, so the instantaneous power F ⋅ v = 0, and the magnetic force does not do work. It can change the direction of motion but never change the speed. == Mathematical calculation == For moving objects, the quantity of work/time (power) is integrated along the trajectory of the point of application of the force. Thus, at any instant, the rate of the work done by a force (measured in joules/second, or watts) is the scalar product of the force (a vector), and the velocity vector of the point of application. This scalar product of force and velocity is known as instantaneous power. 
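Instantaneous power also makes the earlier claim about magnetic forces easy to test numerically; in the sketch below (arbitrary sampled values), F = qv × B is perpendicular to v, so the power F ⋅ v vanishes:

```python
import numpy as np

# Numerical check that the magnetic force delivers zero instantaneous
# power: F = q v × B is perpendicular to v, so F · v = 0. The charge,
# velocities, and fields below are arbitrary sampled values.

q = 1.6e-19                          # C, charge of the particle
rng = np.random.default_rng(0)
for _ in range(5):
    v = rng.normal(size=3)           # m/s, a random velocity
    B = rng.normal(size=3)           # T, a random magnetic field
    F = q * np.cross(v, B)           # magnetic part of the Lorentz force
    print(np.dot(F, v))              # ≈ 0 up to rounding: no work is done
```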
Just as velocities may be integrated over time to obtain a total distance, by the fundamental theorem of calculus, the total work along a path is similarly the time-integral of instantaneous power applied along the trajectory of the point of application. Work is the result of a force on a point that follows a curve X, with a velocity v, at each instant. The small amount of work δW that occurs over an instant of time dt is calculated as δ W = F ⋅ d s = F ⋅ v d t {\displaystyle \delta W=\mathbf {F} \cdot d\mathbf {s} =\mathbf {F} \cdot \mathbf {v} dt} where F ⋅ v is the power over the instant dt. The sum of these small amounts of work over the trajectory of the point yields the work, W = ∫ t 1 t 2 F ⋅ v d t = ∫ t 1 t 2 F ⋅ d s d t d t = ∫ C F ⋅ d s , {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot \mathbf {v} \,dt=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot {\tfrac {d\mathbf {s} }{dt}}\,dt=\int _{C}\mathbf {F} \cdot d\mathbf {s} ,} where C is the trajectory from x(t1) to x(t2). This integral is computed along the trajectory of the particle, and is therefore said to be path dependent. If the force is always directed along this line, and the magnitude of the force is F, then this integral simplifies to W = ∫ C F d s {\displaystyle W=\int _{C}F\,ds} where s is displacement along the line. If F is constant, in addition to being directed along the line, then the integral simplifies further to W = ∫ C F d s = F ∫ C d s = F s {\displaystyle W=\int _{C}F\,ds=F\int _{C}ds=Fs} where s is the displacement of the point along the line. This calculation can be generalized for a constant force that is not directed along the line followed by the particle. In this case the dot product F ⋅ ds = F cos θ ds, where θ is the angle between the force vector and the direction of movement, that is W = ∫ C F ⋅ d s = F s cos ⁡ θ . {\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {s} =Fs\cos \theta .} When a force component is perpendicular to the displacement of the object (such as when a body moves in a circular path under a central force), no work is done, since the cosine of 90° is zero. Thus, no work can be performed by gravity on a planet with a circular orbit (this is ideal, as all orbits are slightly elliptical). Also, no work is done on a body moving circularly at a constant speed while constrained by mechanical force, such as moving at constant speed in a frictionless ideal centrifuge. === Work done by a variable force === Calculating the work as "force times straight path segment" would only apply in the most simple of circumstances, as noted above. If force is changing, or if the body is moving along a curved path, possibly rotating and not necessarily rigid, then only the path of the application point of the force is relevant for the work done, and only the component of the force parallel to the application point velocity is doing work (positive work when in the same direction, and negative when in the opposite direction of the velocity). This component of force can be described by the scalar quantity called scalar tangential component (F cos(θ), where θ is the angle between the force and the velocity). The most general definition of work can then be formulated as follows: the work of a force is the line integral of its scalar tangential component along the path of its application point. Thus, the work done by a variable force can be expressed as a definite integral of force over displacement. If the displacement as a variable of time is given by ∆x(t), then work done by the variable force from t1 to t2 is: W = ∫ t 1 t 2 F ( t ) ⋅ v ( t ) d t = ∫ t 1 t 2 P ( t ) d t . 
{\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} (t)\cdot \mathbf {v} (t)dt=\int _{t_{1}}^{t_{2}}P(t)dt.} Thus, the work done for a variable force can be expressed as a definite integral of power over time. === Torque and rotation === A force couple results from equal and opposite forces, acting on two different points of a rigid body. The sum (resultant) of these forces may cancel, but their effect on the body is the couple or torque T. The work of the torque is calculated as δ W = T ⋅ ω d t , {\displaystyle \delta W=\mathbf {T} \cdot {\boldsymbol {\omega }}\,dt,} where the T ⋅ ω is the power over the instant dt. The sum of these small amounts of work over the trajectory of the rigid body yields the work, W = ∫ t 1 t 2 T ⋅ ω d t . {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {T} \cdot {\boldsymbol {\omega }}\,dt.} This integral is computed along the trajectory of the rigid body with an angular velocity ω that varies with time, and is therefore said to be path dependent. If the angular velocity vector maintains a constant direction, then it takes the form, ω = ϕ ˙ S , {\displaystyle {\boldsymbol {\omega }}={\dot {\phi }}\mathbf {S} ,} where ϕ {\displaystyle \phi } is the angle of rotation about the constant unit vector S. In this case, the work of the torque becomes, W = ∫ t 1 t 2 T ⋅ ω d t = ∫ t 1 t 2 T ⋅ S d ϕ d t d t = ∫ C T ⋅ S d ϕ , {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {T} \cdot {\boldsymbol {\omega }}\,dt=\int _{t_{1}}^{t_{2}}\mathbf {T} \cdot \mathbf {S} {\frac {d\phi }{dt}}dt=\int _{C}\mathbf {T} \cdot \mathbf {S} \,d\phi ,} where C is the trajectory from ϕ ( t 1 ) {\displaystyle \phi (t_{1})} to ϕ ( t 2 ) {\displaystyle \phi (t_{2})} . This integral depends on the rotational trajectory ϕ ( t ) {\displaystyle \phi (t)} , and is therefore path-dependent. If the torque τ {\displaystyle \tau } is aligned with the angular velocity vector so that, T = τ S , {\displaystyle \mathbf {T} =\tau \mathbf {S} ,} and both the torque and angular velocity are constant, then the work takes the form, W = ∫ t 1 t 2 τ ϕ ˙ d t = τ ( ϕ 2 − ϕ 1 ) . {\displaystyle W=\int _{t_{1}}^{t_{2}}\tau {\dot {\phi }}\,dt=\tau (\phi _{2}-\phi _{1}).} This result can be understood more simply by considering the torque as arising from a force of constant magnitude F, being applied perpendicularly to a lever arm at a distance r {\displaystyle r} , as shown in the figure. This force will act through the distance along the circular arc l = s = r ϕ {\displaystyle l=s=r\phi } , so the work done is W = F s = F r ϕ . {\displaystyle W=Fs=Fr\phi .} Introduce the torque τ = Fr, to obtain W = F r ϕ = τ ϕ , {\displaystyle W=Fr\phi =\tau \phi ,} as presented above. Notice that only the component of torque in the direction of the angular velocity vector contributes to the work. == Work and potential energy == The scalar product of a force F and the velocity v of its point of application defines the power input to a system at an instant of time. Integration of this power over the trajectory of the point of application, C = x(t), defines the work input to the system by the force. === Path dependence === Therefore, the work done by a force F on an object that travels along a curve C is given by the line integral: W = ∫ C F ⋅ d x = ∫ t 1 t 2 F ⋅ v d t , {\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {x} =\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot \mathbf {v} dt,} where dx(t) defines the trajectory C and v is the velocity along this trajectory. 
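This integral generally depends on the curve C itself, not just on its endpoints. The sketch below (an illustrative non-conservative force, not taken from the article) computes the work done by F(x, y) = (−y, x) along two different paths joining (1, 0) to (−1, 0) and obtains two different values:

```python
import numpy as np

# Work of the non-conservative force F(x, y) = (-y, x) along two paths
# from (1, 0) to (-1, 0): the upper and the lower unit semicircle.
# This is an illustrative example, not taken from the article.

def work(path, ts):
    xy = path(ts)                          # sampled positions, shape (N, 2)
    v = np.gradient(xy, ts, axis=0)        # finite-difference velocities
    F = np.stack([-xy[:, 1], xy[:, 0]], axis=1)
    return np.trapz(np.sum(F * v, axis=1), ts)   # W = ∫ F · v dt

ts = np.linspace(0.0, np.pi, 2001)
upper = lambda t: np.stack([np.cos(t), np.sin(t)], axis=1)
lower = lambda t: np.stack([np.cos(t), -np.sin(t)], axis=1)

print(work(upper, ts))   # ≈ +π : same endpoints...
print(work(lower, ts))   # ≈ -π : ...different work, so W is path dependent
```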
In general this integral requires the path along which the velocity is defined, so the evaluation of work is said to be path dependent. The time derivative of the integral for work yields the instantaneous power, d W d t = P ( t ) = F ⋅ v . {\displaystyle {\frac {dW}{dt}}=P(t)=\mathbf {F} \cdot \mathbf {v} .} === Path independence === If the work for an applied force is independent of the path, then the work done by the force, by the gradient theorem, defines a potential function which is evaluated at the start and end of the trajectory of the point of application. This means that there is a potential function U(x) that can be evaluated at the two points x(t1) and x(t2) to obtain the work over any trajectory between these two points. It is traditional to define this function with a negative sign so that positive work is a reduction in the potential, that is W = ∫ C F ⋅ d x = ∫ x ( t 1 ) x ( t 2 ) F ⋅ d x = U ( x ( t 1 ) ) − U ( x ( t 2 ) ) . {\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {x} =\int _{\mathbf {x} (t_{1})}^{\mathbf {x} (t_{2})}\mathbf {F} \cdot d\mathbf {x} =U(\mathbf {x} (t_{1}))-U(\mathbf {x} (t_{2})).} The function U(x) is called the potential energy associated with the applied force. The force derived from such a potential function is said to be conservative. Examples of forces that have potential energies are gravity and spring forces. In this case, the gradient of work yields ∇ W = − ∇ U = − ( ∂ U ∂ x , ∂ U ∂ y , ∂ U ∂ z ) = F , {\displaystyle \nabla W=-\nabla U=-\left({\frac {\partial U}{\partial x}},{\frac {\partial U}{\partial y}},{\frac {\partial U}{\partial z}}\right)=\mathbf {F} ,} and the force F is said to be "derivable from a potential." Because the potential U defines a force F at every point x in space, the set of forces is called a force field. The power applied to a body by a force field is obtained from the gradient of the work, or potential, in the direction of the velocity V of the body, that is P ( t ) = − ∇ U ⋅ v = F ⋅ v . {\displaystyle P(t)=-\nabla U\cdot \mathbf {v} =\mathbf {F} \cdot \mathbf {v} .} === Work by gravity === In the absence of other forces, gravity results in a constant downward acceleration of every freely moving object. Near Earth's surface the acceleration due to gravity is g = 9.8 m⋅s−2 and the gravitational force on an object of mass m is Fg = mg. It is convenient to imagine this gravitational force concentrated at the center of mass of the object. If an object with weight mg is displaced upwards or downwards a vertical distance y2 − y1, the work W done on the object is: W = F g ( y 2 − y 1 ) = F g Δ y = m g Δ y {\displaystyle W=F_{g}(y_{2}-y_{1})=F_{g}\Delta y=mg\Delta y} where Fg is weight (pounds in imperial units, and newtons in SI units), and Δy is the change in height y. Notice that the work done by gravity depends only on the vertical movement of the object. The presence of friction does not affect the work done on the object by its weight. ==== Gravity in 3D space ==== The force of gravity exerted by a mass M on another mass m is given by F = − G M m r 2 r ^ = − G M m r 3 r , {\displaystyle \mathbf {F} =-{\frac {GMm}{r^{2}}}{\hat {\mathbf {r} }}=-{\frac {GMm}{r^{3}}}\mathbf {r} ,} where r is the position vector from M to m and r̂ is the unit vector in the direction of r. Let the mass m move at the velocity v; then the work of gravity on this mass as it moves from position r(t1) to r(t2) is given by W = − ∫ r ( t 1 ) r ( t 2 ) G M m r 3 r ⋅ d r = − ∫ t 1 t 2 G M m r 3 r ⋅ v d t . 
{\displaystyle W=-\int _{\mathbf {r} (t_{1})}^{\mathbf {r} (t_{2})}{\frac {GMm}{r^{3}}}\mathbf {r} \cdot d\mathbf {r} =-\int _{t_{1}}^{t_{2}}{\frac {GMm}{r^{3}}}\mathbf {r} \cdot \mathbf {v} \,dt.} Notice that the position and velocity of the mass m are given by r = r e r , v = d r d t = r ˙ e r + r θ ˙ e t , {\displaystyle \mathbf {r} =r\mathbf {e} _{r},\qquad \mathbf {v} ={\frac {d\mathbf {r} }{dt}}={\dot {r}}\mathbf {e} _{r}+r{\dot {\theta }}\mathbf {e} _{t},} where er and et are the radial and tangential unit vectors directed relative to the vector from M to m, and we use the fact that d e r / d t = θ ˙ e t . {\displaystyle d\mathbf {e} _{r}/dt={\dot {\theta }}\mathbf {e} _{t}.} Use this to simplify the formula for work of gravity to, W = − ∫ t 1 t 2 G m M r 3 ( r e r ) ⋅ ( r ˙ e r + r θ ˙ e t ) d t = − ∫ t 1 t 2 G m M r 3 r r ˙ d t = G M m r ( t 2 ) − G M m r ( t 1 ) . {\displaystyle W=-\int _{t_{1}}^{t_{2}}{\frac {GmM}{r^{3}}}(r\mathbf {e} _{r})\cdot \left({\dot {r}}\mathbf {e} _{r}+r{\dot {\theta }}\mathbf {e} _{t}\right)dt=-\int _{t_{1}}^{t_{2}}{\frac {GmM}{r^{3}}}r{\dot {r}}dt={\frac {GMm}{r(t_{2})}}-{\frac {GMm}{r(t_{1})}}.} This calculation uses the fact that d d t r − 1 = − r − 2 r ˙ = − r ˙ r 2 . {\displaystyle {\frac {d}{dt}}r^{-1}=-r^{-2}{\dot {r}}=-{\frac {\dot {r}}{r^{2}}}.} The function U = − G M m r , {\displaystyle U=-{\frac {GMm}{r}},} is the gravitational potential function, also known as gravitational potential energy. The negative sign follows the convention that work is gained from a loss of potential energy. === Work by a spring === Consider a spring that exerts a horizontal force F = (−kx, 0, 0) that is proportional to its deflection in the x direction independent of how a body moves. The work of this spring on a body moving along the space with the curve X(t) = (x(t), y(t), z(t)) is calculated using its velocity, v = (vx, vy, vz), to obtain W = ∫ 0 t F ⋅ v d t = − ∫ 0 t k x v x d t = − 1 2 k x 2 . {\displaystyle W=\int _{0}^{t}\mathbf {F} \cdot \mathbf {v} dt=-\int _{0}^{t}kxv_{x}dt=-{\frac {1}{2}}kx^{2}.} For convenience, consider that contact with the spring occurs at t = 0; then the integral of the product of the distance x and the x-velocity, xvxdt, over time t is (1/2)x2. The work is the product of the distance times the spring force, which is also dependent on distance; hence the x2 result. === Work by a gas === The work W {\displaystyle W} done by a body of gas on its surroundings is: W = ∫ a b P d V {\displaystyle W=\int _{a}^{b}P\,dV} where P is pressure, V is volume, and a and b are initial and final volumes. == Work–energy principle == The principle of work and kinetic energy (also known as the work–energy principle) states that the work done by all forces acting on a particle (the work of the resultant force) equals the change in the kinetic energy of the particle. That is, the work W done by the resultant force on a particle equals the change in the particle's kinetic energy E k {\displaystyle E_{\text{k}}} , W = Δ E k = 1 2 m v 2 2 − 1 2 m v 1 2 {\displaystyle W=\Delta E_{\text{k}}={\frac {1}{2}}mv_{2}^{2}-{\frac {1}{2}}mv_{1}^{2}} where v 1 {\displaystyle v_{1}} and v 2 {\displaystyle v_{2}} are the speeds of the particle before and after the work is done, and m is its mass. The derivation of the work–energy principle begins with Newton's second law of motion and the resultant force on a particle. Computation of the scalar product of the force with the velocity of the particle evaluates the instantaneous power added to the system. 
(Constraints define the direction of movement of the particle by ensuring there is no component of velocity in the direction of the constraint force. This also means the constraint forces do not add to the instantaneous power.) The time integral of this scalar equation yields work from the instantaneous power, and kinetic energy from the scalar product of acceleration with velocity. The fact that the work–energy principle eliminates the constraint forces underlies Lagrangian mechanics. This section focuses on the work–energy principle as it applies to particle dynamics. In more general systems work can change the potential energy of a mechanical device, the thermal energy in a thermal system, or the electrical energy in an electrical device. Work transfers energy from one place to another or one form to another. === Derivation for a particle moving along a straight line === In the case the resultant force F is constant in both magnitude and direction, and parallel to the velocity of the particle, the particle is moving with constant acceleration a along a straight line. The relation between the net force and the acceleration is given by the equation F = ma (Newton's second law), and the particle displacement s can be expressed by the equation s = v 2 2 − v 1 2 2 a {\displaystyle s={\frac {v_{2}^{2}-v_{1}^{2}}{2a}}} which follows from v 2 2 = v 1 2 + 2 a s {\displaystyle v_{2}^{2}=v_{1}^{2}+2as} (see Equations of motion). The work of the net force is calculated as the product of its magnitude and the particle displacement. Substituting the above equations, one obtains: W = F s = m a s = m a v 2 2 − v 1 2 2 a = 1 2 m v 2 2 − 1 2 m v 1 2 = Δ E k {\displaystyle W=Fs=mas=ma{\frac {v_{2}^{2}-v_{1}^{2}}{2a}}={\frac {1}{2}}mv_{2}^{2}-{\frac {1}{2}}mv_{1}^{2}=\Delta E_{\text{k}}} Other derivation: W = F s = m a s = m v 2 2 − v 1 2 2 s s = 1 2 m v 2 2 − 1 2 m v 1 2 = Δ E k {\displaystyle W=Fs=mas=m{\frac {v_{2}^{2}-v_{1}^{2}}{2s}}s={\frac {1}{2}}mv_{2}^{2}-{\frac {1}{2}}mv_{1}^{2}=\Delta E_{\text{k}}} In the general case of rectilinear motion, when the net force F is not constant in magnitude, but is constant in direction, and parallel to the velocity of the particle, the work must be integrated along the path of the particle: W = ∫ t 1 t 2 F ⋅ v d t = ∫ t 1 t 2 F v d t = ∫ t 1 t 2 m a v d t = m ∫ t 1 t 2 v d v d t d t = m ∫ v 1 v 2 v d v = 1 2 m ( v 2 2 − v 1 2 ) . {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot \mathbf {v} dt=\int _{t_{1}}^{t_{2}}F\,v\,dt=\int _{t_{1}}^{t_{2}}ma\,v\,dt=m\int _{t_{1}}^{t_{2}}v\,{\frac {dv}{dt}}\,dt=m\int _{v_{1}}^{v_{2}}v\,dv={\tfrac {1}{2}}m\left(v_{2}^{2}-v_{1}^{2}\right).} === General derivation of the work–energy principle for a particle === For any net force acting on a particle moving along any curvilinear path, it can be demonstrated that its work equals the change in the kinetic energy of the particle by a simple derivation analogous to the equation above. 
It is known as the work–energy principle: W = ∫ t 1 t 2 F ⋅ v d t = m ∫ t 1 t 2 a ⋅ v d t = m 2 ∫ t 1 t 2 d v 2 d t d t = m 2 ∫ v 1 2 v 2 2 d v 2 = m v 2 2 2 − m v 1 2 2 = Δ E k {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot \mathbf {v} dt=m\int _{t_{1}}^{t_{2}}\mathbf {a} \cdot \mathbf {v} dt={\frac {m}{2}}\int _{t_{1}}^{t_{2}}{\frac {dv^{2}}{dt}}\,dt={\frac {m}{2}}\int _{v_{1}^{2}}^{v_{2}^{2}}dv^{2}={\frac {mv_{2}^{2}}{2}}-{\frac {mv_{1}^{2}}{2}}=\Delta E_{\text{k}}} The identity a ⋅ v = 1 2 d v 2 d t {\textstyle \mathbf {a} \cdot \mathbf {v} ={\frac {1}{2}}{\frac {dv^{2}}{dt}}} requires some algebra. From the identity v 2 = v ⋅ v {\textstyle v^{2}=\mathbf {v} \cdot \mathbf {v} } and definition a = d v d t {\textstyle \mathbf {a} ={\frac {d\mathbf {v} }{dt}}} it follows d v 2 d t = d ( v ⋅ v ) d t = d v d t ⋅ v + v ⋅ d v d t = 2 d v d t ⋅ v = 2 a ⋅ v . {\displaystyle {\frac {dv^{2}}{dt}}={\frac {d(\mathbf {v} \cdot \mathbf {v} )}{dt}}={\frac {d\mathbf {v} }{dt}}\cdot \mathbf {v} +\mathbf {v} \cdot {\frac {d\mathbf {v} }{dt}}=2{\frac {d\mathbf {v} }{dt}}\cdot \mathbf {v} =2\mathbf {a} \cdot \mathbf {v} .} The remaining part of the above derivation is just simple calculus, same as in the preceding rectilinear case. === Derivation for a particle in constrained movement === In particle dynamics, a formula equating work applied to a system to its change in kinetic energy is obtained as a first integral of Newton's second law of motion. It is useful to notice that the resultant force used in Newton's laws can be separated into forces that are applied to the particle and forces imposed by constraints on the movement of the particle. Remarkably, the work of a constraint force is zero, therefore only the work of the applied forces need be considered in the work–energy principle. To see this, consider a particle P that follows the trajectory X(t) with a force F acting on it. Isolate the particle from its environment to expose constraint forces R, then Newton's Law takes the form F + R = m X ¨ , {\displaystyle \mathbf {F} +\mathbf {R} =m{\ddot {\mathbf {X} }},} where m is the mass of the particle. ==== Vector formulation ==== Note that n dots above a vector indicates its nth time derivative. The scalar product of each side of Newton's law with the velocity vector yields F ⋅ X ˙ = m X ¨ ⋅ X ˙ , {\displaystyle \mathbf {F} \cdot {\dot {\mathbf {X} }}=m{\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }},} because the constraint forces are perpendicular to the particle velocity. Integrate this equation along its trajectory from the point X(t1) to the point X(t2) to obtain ∫ t 1 t 2 F ⋅ X ˙ d t = m ∫ t 1 t 2 X ¨ ⋅ X ˙ d t . {\displaystyle \int _{t_{1}}^{t_{2}}\mathbf {F} \cdot {\dot {\mathbf {X} }}dt=m\int _{t_{1}}^{t_{2}}{\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}dt.} The left side of this equation is the work of the applied force as it acts on the particle along the trajectory from time t1 to time t2. This can also be written as W = ∫ t 1 t 2 F ⋅ X ˙ d t = ∫ X ( t 1 ) X ( t 2 ) F ⋅ d X . {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot {\dot {\mathbf {X} }}dt=\int _{\mathbf {X} (t_{1})}^{\mathbf {X} (t_{2})}\mathbf {F} \cdot d\mathbf {X} .} This integral is computed along the trajectory X(t) of the particle and is therefore path dependent. 
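The step above relies on the constraint forces being perpendicular to the velocity, and this can be checked numerically. The sketch below (an assumed pendulum with arbitrary parameters, not an example from the article) verifies at every time step that the rod tension R satisfies R ⋅ Ẋ = 0:

```python
import numpy as np

# Numerical check that a constraint force does no work: for a pendulum
# (assumed, arbitrary parameters), the rod tension R points along the rod
# while the bob velocity is tangential, so R · Ẋ = 0 at every instant.

m, L, g = 1.0, 2.0, 9.8              # kg, m, m/s^2
theta, omega = 0.7, 0.0              # initial angle (rad) and angular rate
dt = 1e-4
for _ in range(10_000):              # semi-implicit Euler integration
    omega += -(g / L) * np.sin(theta) * dt
    theta += omega * dt
    # Bob position and velocity, with the pivot at the origin:
    X = L * np.array([np.sin(theta), -np.cos(theta)])
    Xdot = L * omega * np.array([np.cos(theta), np.sin(theta)])
    # Rod tension: balances the centripetal and radial weight terms,
    # directed from the bob toward the pivot (along -X):
    R = (m * L * omega**2 + m * g * np.cos(theta)) * (-X / L)
    assert abs(R @ Xdot) < 1e-9      # the constraint force does no work
print("R · Ẋ = 0 along the whole trajectory")
```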
The right side of the first integral of Newton's equations can be simplified using the following identity 1 2 d d t ( X ˙ ⋅ X ˙ ) = X ¨ ⋅ X ˙ , {\displaystyle {\frac {1}{2}}{\frac {d}{dt}}({\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }})={\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }},} (see product rule for derivation). Now it is integrated explicitly to obtain the change in kinetic energy, Δ K = m ∫ t 1 t 2 X ¨ ⋅ X ˙ d t = m 2 ∫ t 1 t 2 d d t ( X ˙ ⋅ X ˙ ) d t = m 2 X ˙ ⋅ X ˙ ( t 2 ) − m 2 X ˙ ⋅ X ˙ ( t 1 ) = 1 2 m Δ v 2 , {\displaystyle \Delta K=m\int _{t_{1}}^{t_{2}}{\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}dt={\frac {m}{2}}\int _{t_{1}}^{t_{2}}{\frac {d}{dt}}({\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }})dt={\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}(t_{2})-{\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}(t_{1})={\frac {1}{2}}m\Delta \mathbf {v} ^{2},} where the kinetic energy of the particle is defined by the scalar quantity, K = m 2 X ˙ ⋅ X ˙ = 1 2 m v 2 {\displaystyle K={\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}={\frac {1}{2}}m{\mathbf {v} ^{2}}} ==== Tangential and normal components ==== It is useful to resolve the velocity and acceleration vectors into tangential and normal components along the trajectory X(t), such that X ˙ = v T and X ¨ = v ˙ T + v 2 κ N , {\displaystyle {\dot {\mathbf {X} }}=v\mathbf {T} \quad {\text{and}}\quad {\ddot {\mathbf {X} }}={\dot {v}}\mathbf {T} +v^{2}\kappa \mathbf {N} ,} where v = | X ˙ | = X ˙ ⋅ X ˙ . {\displaystyle v=|{\dot {\mathbf {X} }}|={\sqrt {{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}}}.} Then, the scalar product of velocity with acceleration in Newton's second law takes the form Δ K = m ∫ t 1 t 2 v ˙ v d t = m 2 ∫ t 1 t 2 d d t v 2 d t = m 2 v 2 ( t 2 ) − m 2 v 2 ( t 1 ) , {\displaystyle \Delta K=m\int _{t_{1}}^{t_{2}}{\dot {v}}v\,dt={\frac {m}{2}}\int _{t_{1}}^{t_{2}}{\frac {d}{dt}}v^{2}\,dt={\frac {m}{2}}v^{2}(t_{2})-{\frac {m}{2}}v^{2}(t_{1}),} where the kinetic energy of the particle is defined by the scalar quantity, K = m 2 v 2 = m 2 X ˙ ⋅ X ˙ . {\displaystyle K={\frac {m}{2}}v^{2}={\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}.} The result is the work–energy principle for particle dynamics, W = Δ K . {\displaystyle W=\Delta K.} This derivation can be generalized to arbitrary rigid body systems. === Moving in a straight line (skid to a stop) === Consider the case of a vehicle moving along a straight horizontal trajectory under the action of a driving force and gravity that sum to F. The constraint forces between the vehicle and the road define R, and we have F + R = m X ¨ . {\displaystyle \mathbf {F} +\mathbf {R} =m{\ddot {\mathbf {X} }}.} For convenience let the trajectory be along the X-axis, so X = (d, 0) and the velocity is V = (v, 0), then R ⋅ V = 0, and F ⋅ V = Fxv, where Fx is the component of F along the X-axis, so F x v = m v ˙ v . {\displaystyle F_{x}v=m{\dot {v}}v.} Integration of both sides yields ∫ t 1 t 2 F x v d t = m 2 v 2 ( t 2 ) − m 2 v 2 ( t 1 ) . {\displaystyle \int _{t_{1}}^{t_{2}}F_{x}vdt={\frac {m}{2}}v^{2}(t_{2})-{\frac {m}{2}}v^{2}(t_{1}).} If Fx is constant along the trajectory, then the integral of velocity is distance, so F x ( d ( t 2 ) − d ( t 1 ) ) = m 2 v 2 ( t 2 ) − m 2 v 2 ( t 1 ) . {\displaystyle F_{x}(d(t_{2})-d(t_{1}))={\frac {m}{2}}v^{2}(t_{2})-{\frac {m}{2}}v^{2}(t_{1}).} As an example consider a car skidding to a stop, where k is the coefficient of friction and w is the weight of the car. 
Then the force along the trajectory is Fx = −kw. The velocity v of the car can be determined from the length s of the skid using the work–energy principle, k w s = w 2 g v 2 , or v = 2 k s g . {\displaystyle kws={\frac {w}{2g}}v^{2},\quad {\text{or}}\quad v={\sqrt {2ksg}}.} This formula uses the fact that the mass of the vehicle is m = w/g. === Coasting down an inclined surface (gravity racing) === Consider the case of a vehicle that starts at rest and coasts down an inclined surface (such as a mountain road); the work–energy principle helps compute the minimum distance that the vehicle travels to reach a velocity V of, say, 60 mph (88 fps). Rolling resistance and air drag will slow the vehicle down so the actual distance will be greater than if these forces are neglected. Let the trajectory of the vehicle following the road be X(t), which is a curve in three-dimensional space. The force acting on the vehicle that pushes it down the road is the constant force of gravity F = (0, 0, w), while the force of the road on the vehicle is the constraint force R. Newton's second law yields F + R = m X ¨ . {\displaystyle \mathbf {F} +\mathbf {R} =m{\ddot {\mathbf {X} }}.} The scalar product of this equation with the velocity, V = (vx, vy, vz), yields w v z = m V ˙ V , {\displaystyle wv_{z}=m{\dot {V}}V,} where V is the magnitude of V. The constraint forces between the vehicle and the road cancel from this equation because R ⋅ V = 0, which means they do no work. Integrate both sides to obtain ∫ t 1 t 2 w v z d t = m 2 V 2 ( t 2 ) − m 2 V 2 ( t 1 ) . {\displaystyle \int _{t_{1}}^{t_{2}}wv_{z}dt={\frac {m}{2}}V^{2}(t_{2})-{\frac {m}{2}}V^{2}(t_{1}).} The weight force w is constant along the trajectory and the integral of the vertical velocity is the vertical distance, therefore, w Δ z = m 2 V 2 . {\displaystyle w\Delta z={\frac {m}{2}}V^{2}.} Recall that V(t1) = 0. Notice that this result does not depend on the shape of the road followed by the vehicle. In order to determine the distance along the road, assume the downgrade is 6%, which is a steep grade. This means the altitude decreases 6 feet for every 100 feet traveled; for angles this small the sine and tangent functions are approximately equal. Therefore, the distance s in feet down a 6% grade to reach the velocity V is at least s = Δ z 0.06 = 8.3 V 2 g , or s = 8.3 88 2 32.2 ≈ 2000 f t . {\displaystyle s={\frac {\Delta z}{0.06}}=8.3{\frac {V^{2}}{g}},\quad {\text{or}}\quad s=8.3{\frac {88^{2}}{32.2}}\approx 2000\mathrm {ft} .} This formula uses the fact that the weight of the vehicle is w = mg. == Work of forces acting on a rigid body == The work of forces acting at various points on a single rigid body can be calculated from the work of a resultant force and torque. To see this, let the forces F1, F2, ..., Fn act on the points X1, X2, ..., Xn in a rigid body. The trajectories of Xi, i = 1, ..., n, are defined by the movement of the rigid body. This movement is given by the set of rotations [A(t)] and the trajectory d(t) of a reference point in the body. Let the coordinates xi, i = 1, ..., n, define these points in the moving rigid body's reference frame M, so that the trajectories traced in the fixed frame F are given by X i ( t ) = [ A ( t ) ] x i + d ( t ) i = 1 , … , n . 
{\displaystyle \mathbf {X} _{i}(t)=[A(t)]\mathbf {x} _{i}+\mathbf {d} (t)\quad i=1,\ldots ,n.} The velocity of the points Xi along their trajectories are V i = ω × ( X i − d ) + d ˙ , {\displaystyle \mathbf {V} _{i}={\boldsymbol {\omega }}\times (\mathbf {X} _{i}-\mathbf {d} )+{\dot {\mathbf {d} }},} where ω is the angular velocity vector obtained from the skew symmetric matrix [ Ω ] = A ˙ A T , {\displaystyle [\Omega ]={\dot {A}}A^{\mathsf {T}},} known as the angular velocity matrix. The small amount of work by the forces over the small displacements δri can be determined by approximating the displacement by δr = vδt so δ W = F 1 ⋅ V 1 δ t + F 2 ⋅ V 2 δ t + … + F n ⋅ V n δ t {\displaystyle \delta W=\mathbf {F} _{1}\cdot \mathbf {V} _{1}\delta t+\mathbf {F} _{2}\cdot \mathbf {V} _{2}\delta t+\ldots +\mathbf {F} _{n}\cdot \mathbf {V} _{n}\delta t} or δ W = ∑ i = 1 n F i ⋅ ( ω × ( X i − d ) + d ˙ ) δ t . {\displaystyle \delta W=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot ({\boldsymbol {\omega }}\times (\mathbf {X} _{i}-\mathbf {d} )+{\dot {\mathbf {d} }})\delta t.} This formula can be rewritten to obtain δ W = ( ∑ i = 1 n F i ) ⋅ d ˙ δ t + ( ∑ i = 1 n ( X i − d ) × F i ) ⋅ ω δ t = ( F ⋅ d ˙ + T ⋅ ω ) δ t , {\displaystyle \delta W=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\right)\cdot {\dot {\mathbf {d} }}\delta t+\left(\sum _{i=1}^{n}\left(\mathbf {X} _{i}-\mathbf {d} \right)\times \mathbf {F} _{i}\right)\cdot {\boldsymbol {\omega }}\delta t=\left(\mathbf {F} \cdot {\dot {\mathbf {d} }}+\mathbf {T} \cdot {\boldsymbol {\omega }}\right)\delta t,} where F and T are the resultant force and torque applied at the reference point d of the moving frame M in the rigid body. == References == == Bibliography == Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 0-534-40842-7. Tipler, Paul (1991). Physics for Scientists and Engineers: Mechanics (3rd ed., extended version ed.). W. H. Freeman. ISBN 0-87901-432-6. == External links == Work–energy principle
Wikipedia/Work_(physics)
In optics, the corpuscular theory of light states that light is made up of small discrete particles called "corpuscles" (little particles) which travel in a straight line with a finite velocity and possess impetus. This notion was based on an alternate description of atomism of the time period. Isaac Newton laid the foundations for this theory through his work in optics. This early conception of the particle theory of light was an early forerunner to the modern understanding of the photon. This theory came to dominate the conceptions of light in the eighteenth century, displacing the previously prominent vibration theories, where light was viewed as "pressure" of the medium between the source and the receiver, first championed by René Descartes, and later in a more refined form by Christiaan Huygens. Although partly correct, successfully explaining refraction, reflection, rectilinear propagation and, to a lesser extent, diffraction, the theory fell out of favor in the early nineteenth century as the wave theory of light amassed new experimental evidence. The modern understanding of light is the concept of wave-particle duality. == Mechanical philosophy == In the early 17th century, natural philosophers began to develop new ways to understand nature, gradually replacing Aristotelianism, which had been for centuries the dominant scientific theory, during the process known as the Scientific Revolution. Various European philosophers adopted what came to be known as mechanical philosophy sometime between around 1610 and 1650, which described the universe and its contents as a kind of large-scale mechanism, a philosophy that explained the universe in terms of matter and motion. This mechanical philosophy was based on Epicureanism, and the work of Leucippus and his pupil Democritus and their atomism, in which everything in the universe, including a person's body, mind, soul and even thoughts, was made of atoms: very small particles of moving matter. During the early part of the 17th century, the atomistic portion of mechanical philosophy was largely developed by Gassendi, René Descartes and other atomists. == Pierre Gassendi's atomist matter theory == The core of Pierre Gassendi's philosophy is his atomist matter theory. In his work Syntagma Philosophicum ("Philosophical Treatise"), published posthumously in 1658, Gassendi tried to explain aspects of matter and natural phenomena of the world in terms of atoms and the void. He took Epicurean atomism and modified it to be compatible with Christian theology, by suggesting that God created a finite number of indivisible and moving atoms, and has a continuing divine relationship to creation (of matter). Gassendi thought that atoms move in an empty space, classically known as the void, which contradicts the Aristotelian view that the universe is fully made of matter. Gassendi also suggested that information gathered by the human senses has a material form, especially in the case of vision. == Corpuscular theories == Corpuscular theories, or corpuscularianism, are similar to the theories of atomism, except that in atomism the atoms were supposed to be indivisible, whereas corpuscles could in principle be divided. Corpuscles are single, infinitesimally small particles that have shape, size, color, and other physical properties that alter their functions and effects in phenomena in the mechanical and biological sciences. This later led to the modern idea that compounds have secondary properties different from the elements of those compounds. 
Gassendi asserted that corpuscles are particles that carry other substances and are of different types. These corpuscles are also emissions from various sources such as solar entities, animals, or plants. Robert Boyle was a strong proponent of corpuscularianism and used the theory to exemplify the differences between a vacuum and a plenum, by which he aimed to further support his mechanical philosophy and overall atomist theory. About a half-century after Gassendi, Isaac Newton used existing corpuscular theories to develop his particle theory of the physics of light. == Isaac Newton == Isaac Newton worked on optics throughout his research career, conducting various experiments and developing hypotheses to explain his results. He dismissed Descartes' theory of light because he rejected Descartes' understanding of space, from which that theory derived. With the publication of Opticks in 1704, Newton for the first time took a clear position supporting a corpuscular interpretation, though it would fall on his followers to systemise the theory. In the 1718 edition of Opticks, Newton added several uncertain hypotheses about the nature of light, formulated as queries. In query (Qu.) 16, he wondered whether the way a quavering motion of a finger pressing against the bottom of the eye causes the sensation of circles of colour is similar to how light affects the retina, and whether the independent continuation of the induced sensation for about a second indicates a vibrating nature of the motions in the eye. In Qu. 17, Newton compared the vibrations to the waves propagating in concentric circles after a stone has been thrown in water, and to "the Vibrations or Tremors excited in the Air by percussion". He therefore proposed that light rays would similarly excite waves of vibrations in a reflecting or refracting medium, which in turn could overtake the rays of light and alternately accelerate and retard them. Newton then suggested in Qu. 18 and Qu. 19 that light propagates through vacuum via a very subtle "Aethereal Medium", just like heat was thought to spread. Although the previous hypotheses describe wave-like aspects of light, Newton still believed in particle-like properties. In Qu. 28, he asked: "Are not all Hypotheses erroneous in which Light is supposed to consist in Pression or Motion propagated through a fluid Medium." He did not believe the arguments explained the proposed new modifications of rays, and stressed how pression and motion would not propagate through a fluid in straight lines beyond obstacles as light rays do. In Qu. 29, he wondered: "Are not the Rays of Light very small Bodies emitted from shining Substances? For such Bodies will pass through uniform Mediums in right Lines without bending into the Shadow, which is the Nature of the Rays of Light. They will also be capable of several Properties, and be able to conserve their Properties unchanged in passing through several Mediums, which is another Condition of the Rays of Light." He connected these properties to several effects of the interaction of light rays with matter and vacuum. Newton's corpuscular theory was an elaboration of his view of reality as interactions of material points through forces. Note Albert Einstein's description of Newton's conception of physical reality: [Newton's] physical reality is characterised by concepts of space, time, the material point and force (interaction between material points). Physical events are to be thought of as movements according to the law of material points in space. 
The material point is the only representative of reality in so far as it is subject to change. The concept of the material point is obviously due to observable bodies; one conceived of the material point on the analogy of movable bodies by omitting characteristics of extension, form, spatial locality, and all their 'inner' qualities, retaining only inertia, translation, and the additional concept of force. == Polarization == The fact that light could be polarized was for the first time qualitatively explained by Newton using the particle theory. Étienne-Louis Malus in 1810 created a mathematical particle theory of polarization. Jean-Baptiste Biot in 1812 showed that this theory explained all known phenomena of light polarization. At that time polarization was considered proof of the particle theory. Nowadays, polarisation is considered a property of waves and can manifest only in transverse waves; longitudinal waves cannot be polarised. == End of corpuscular theory == The dominance of Newtonian natural philosophy in the eighteenth century was one of the decisive factors ensuring the prevalence of the corpuscular theory of light. Newtonians maintained that the corpuscles of light were projectiles that travelled from the source to the receiver with a finite speed. In this description, the propagation of light is transportation of matter. However, by the turn of the century, beginning with Thomas Young's double-slit experiment in 1801, more evidence in the form of novel experiments on diffraction, interference, and polarization showcased issues with the theory. The work of Young, Augustin-Jean Fresnel, and François Arago would materialise into a new wave theory of light. == Quantum mechanics == The notion of light as a particle resurfaced in the 20th century with the photoelectric effect. In 1905, Albert Einstein explained this effect by introducing the concept of light quanta or photons. Quantum particles are considered to have wave–particle duality. In quantum field theory, photons are explained as excitations of the electromagnetic field using second quantization. == See also == Corpuscularianism Speed of gravity Photon Philosophy of physics Opticks by Isaac Newton The Skeptical Chemist by Robert Boyle == References == == External links == Observing the quantum behavior of light in an undergraduate laboratory JJ Thorn et al.: Am. J. Phys. 72, 1210-1219 (2004) Opticks, or, a Treatise of the Reflections, Refractions, Inflections, and Colours of Light. Sir Isaac Newton. 1704. Project Gutenberg book released 23 August 2010. Pierre Gassendi. Fisher, Saul. 2009. Stanford Encyclopedia of Philosophy. Isaac Newton. Smith, George. 2007. Stanford Encyclopedia of Philosophy. Robert Boyle. MacIntosh, J.J. 2010. Stanford Encyclopedia of Philosophy. YouTube video. Physics - Newton's corpuscular theory of light - Science. elearnin. Uploaded 5 Jan 2013. Robert Hooke's Critique of Newton's Theory of Light and Colors (delivered 1672) Robert Hooke. Thomas Birch, The History of the Royal Society, vol. 3 (London: 1757), pp. 10–15. Newton Project, University of Sussex. Corpuscule or Wave. Arman Kashef. 2022. Xaporia: The Free and Independent Blog.
Wikipedia/Corpuscular_theory_of_light
In symbolic computation, the Risch algorithm is a method of indefinite integration used in some computer algebra systems to find antiderivatives. It is named after the American mathematician Robert Henry Risch, a specialist in computer algebra who developed it in 1968. The algorithm transforms the problem of integration into a problem in algebra. It is based on the form of the function being integrated and on methods for integrating rational functions, radicals, logarithms, and exponential functions. Risch called it a decision procedure, because it is a method for deciding whether a function has an elementary function as an indefinite integral, and if it does, for determining that indefinite integral. However, the algorithm does not always succeed in identifying whether or not the antiderivative of a given function in fact can be expressed in terms of elementary functions. The complete description of the Risch algorithm takes over 100 pages. The Risch–Norman algorithm is a simpler, faster, but less powerful variant that was developed in 1976 by Arthur Norman. Some significant progress has been made in computing the logarithmic part of a mixed transcendental-algebraic integral by Brian L. Miller. == Description == The Risch algorithm is used to integrate elementary functions. These are functions obtained by composing exponentials, logarithms, radicals, trigonometric functions, and the four arithmetic operations (+ − × ÷). Laplace solved this problem for the case of rational functions, as he showed that the indefinite integral of a rational function is a rational function plus a finite number of constant multiples of logarithms of rational functions. The algorithm suggested by Laplace is usually described in calculus textbooks; as a computer program, it was finally implemented in the 1960s. Liouville formulated the problem that is solved by the Risch algorithm. Liouville proved by analytical means that if there is an elementary solution g to the equation g′ = f then there exist constants αi and functions ui and v in the field generated by f such that the solution is of the form g = v + ∑ i < n α i ln ⁡ ( u i ) {\displaystyle g=v+\sum _{i<n}\alpha _{i}\ln(u_{i})} Risch developed a method that allows one to consider only a finite set of functions of Liouville's form. The intuition for the Risch algorithm comes from the behavior of the exponential and logarithm functions under differentiation. For the function f eg, where f and g are differentiable functions, we have ( f ⋅ e g ) ′ = ( f ′ + f ⋅ g ′ ) ⋅ e g , {\displaystyle \left(f\cdot e^{g}\right)^{\prime }=\left(f^{\prime }+f\cdot g^{\prime }\right)\cdot e^{g},\,} so if eg were in the result of an indefinite integration, it should be expected to be inside the integral. Also, as ( f ⋅ ( ln ⁡ g ) n ) ′ = f ′ ( ln ⁡ g ) n + n f g ′ g ( ln ⁡ g ) n − 1 {\displaystyle \left(f\cdot (\ln g)^{n}\right)^{\prime }=f^{\prime }\left(\ln g\right)^{n}+nf{\frac {g^{\prime }}{g}}\left(\ln g\right)^{n-1}} then if (ln g)n were in the result of an integration, then only a few powers of the logarithm should be expected. == Problem examples == Finding an elementary antiderivative is very sensitive to details. 
For instance, the following algebraic function (posted to sci.math.symbolic by Henri Cohen in 1993) has an elementary antiderivative, as Wolfram Mathematica since version 13 shows (however, Mathematica does not use the Risch algorithm to compute this integral): f ( x ) = x x 4 + 10 x 2 − 96 x − 71 , {\displaystyle f(x)={\frac {x}{\sqrt {x^{4}+10x^{2}-96x-71}}},} namely: F ( x ) = − 1 8 ln ( ( x 6 + 15 x 4 − 80 x 3 + 27 x 2 − 528 x + 781 ) x 4 + 10 x 2 − 96 x − 71 − ( x 8 + 20 x 6 − 128 x 5 + 54 x 4 − 1408 x 3 + 3124 x 2 + 10001 ) ) + C . {\displaystyle {\begin{aligned}F(x)=-{\frac {1}{8}}\ln &\,{\Big (}(x^{6}+15x^{4}-80x^{3}+27x^{2}-528x+781){\sqrt {x^{4}+10x^{2}-96x-71}}{\Big .}\\&{}-{\Big .}(x^{8}+20x^{6}-128x^{5}+54x^{4}-1408x^{3}+3124x^{2}+10001){\Big )}+C.\end{aligned}}} But if the constant term 71 is changed to 72, it is not possible to represent the antiderivative in terms of elementary functions, as FriCAS also shows. Some computer algebra systems may here return an antiderivative in terms of non-elementary functions (i.e. elliptic integrals), which are outside the scope of the Risch algorithm. For example, Mathematica returns a result with the functions EllipticPi and EllipticF. Integrals of the form ∫ x + A x 4 + a x 3 + b x 2 + c x + d d x {\displaystyle \int {\frac {x+A}{\sqrt {x^{4}+ax^{3}+bx^{2}+cx+d}}}\,dx} were studied by Chebyshev, who determined in which cases they are elementary, but the strict proof was ultimately given by Zolotarev. The following is a more complex example that involves both algebraic and transcendental functions: f ( x ) = x 2 + 2 x + 1 + ( 3 x + 1 ) x + ln ⁡ x x x + ln ⁡ x ( x + x + ln ⁡ x ) . {\displaystyle f(x)={\frac {x^{2}+2x+1+(3x+1){\sqrt {x+\ln x}}}{x\,{\sqrt {x+\ln x}}\left(x+{\sqrt {x+\ln x}}\right)}}.} In fact, the antiderivative of this function has a fairly short form that can be found using the substitution u = x + x + ln ⁡ x {\displaystyle u=x+{\sqrt {x+\ln x}}} (SymPy can solve it, while FriCAS fails with an "implementation incomplete (constant residues)" error in its Risch algorithm): F ( x ) = 2 ( x + ln ⁡ x + ln ⁡ ( x + x + ln ⁡ x ) ) + C . {\displaystyle F(x)=2\left({\sqrt {x+\ln x}}+\ln \left(x+{\sqrt {x+\ln x}}\right)\right)+C.} Some Davenport "theorems" are still being clarified. For example, in 2020 a counterexample to such a "theorem" was found, where it turns out that an elementary antiderivative exists after all. == Implementation == Transforming Risch's theoretical algorithm into an algorithm that can be effectively executed by a computer was a complex task which took a long time. The case of the purely transcendental functions (which do not involve roots of polynomials) is relatively easy and was implemented early in most computer algebra systems. The first implementation was done by Joel Moses in Macsyma soon after the publication of Risch's paper. The case of purely algebraic functions was partially solved and implemented in Reduce by James H. Davenport – for simplicity it could only deal with square roots and repeated square roots and not general radicals or other non-quadratic algebraic relations between variables. The general case was solved and almost fully implemented in Scratchpad, a precursor of Axiom, by Manuel Bronstein; development of the Risch and related algorithms continues in FriCAS, Axiom's fork, on GitHub. However, the implementation did not completely include some of the branches for special cases. As of 2025, there is no known full implementation of the Risch algorithm. 
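The partial implementations mentioned above can be exercised directly. Below is a minimal sketch assuming SymPy's partial Risch implementation for transcendental integrands (sympy.integrals.risch.risch_integrate); the chosen integrands are my own illustrations, not examples from the text.

```python
# Exercise SymPy's (partial) Risch implementation for purely
# transcendental integrands.
from sympy import symbols, exp
from sympy.integrals.risch import risch_integrate

x = symbols('x')

# An exponential integrand with an elementary antiderivative;
# mathematically the answer is log(exp(x) + 1) up to a constant.
print(risch_integrate(exp(x) / (1 + exp(x)), x))

# exp(x**2) has no elementary antiderivative. risch_integrate returns
# a NonElementaryIntegral here: for the cases it covers, this is a
# proof of non-elementarity rather than a failure to find an answer.
res = risch_integrate(exp(x**2), x)
print(res, type(res).__name__)
```

For the decision-procedure character described above, the second call is the important one: an unevaluated NonElementaryIntegral is a definitive answer, not a timeout.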
== Decidability == The Risch algorithm applied to general elementary functions is not an algorithm but a semi-algorithm because it needs to check, as a part of its operation, if certain expressions are equivalent to zero (constant problem), in particular in the constant field. For expressions that involve only functions commonly taken to be elementary it is not known whether an algorithm performing such a check exists (current computer algebra systems use heuristics); moreover, if one adds the absolute value function to the list of elementary functions, then it is known that no such algorithm exists; see Richardson's theorem. This issue also arises in the polynomial division algorithm; this algorithm will fail if it cannot correctly determine whether coefficients vanish identically. Virtually every non-trivial algorithm relating to polynomials uses the polynomial division algorithm, the Risch algorithm included. If the constant field is computable, i.e., for elements not dependent on x, then the problem of zero-equivalence is decidable, so the Risch algorithm is a complete algorithm. Examples of computable constant fields are ℚ and ℚ(y), i.e., rational numbers and rational functions in y with rational-number coefficients, respectively, where y is an indeterminate that does not depend on x. This is also an issue in the Gaussian elimination matrix algorithm (or any algorithm that can compute the nullspace of a matrix), which is also necessary for many parts of the Risch algorithm. Gaussian elimination will produce incorrect results if it cannot correctly determine whether a pivot is identically zero. == See also == Axiom (computer algebra system) Closed-form expression Incomplete gamma function Lists of integrals Liouville's theorem (differential algebra) Nonelementary integral Symbolic integration == Notes == == References == Bronstein, Manuel (1990). "Integration of elementary functions". Journal of Symbolic Computation. 9 (2): 117–173. doi:10.1016/s0747-7171(08)80027-2. Bronstein, Manuel (1998). "Symbolic Integration Tutorial" (PDF). ISSAC'98, Rostock (August 1998) and Differential Algebra Workshop, Rutgers. Bronstein, Manuel (2005). Symbolic Integration I. Springer. ISBN 3-540-21493-3. Davenport, James H. (1981). On the integration of algebraic functions. Lecture Notes in Computer Science. Vol. 102. Springer. ISBN 978-3-540-10290-8. Geddes, Keith O.; Czapor, Stephen R.; Labahn, George (1992). Algorithms for computer algebra. Boston, MA: Kluwer Academic Publishers. pp. xxii+585. Bibcode:1992afca.book.....G. doi:10.1007/b102438. ISBN 0-7923-9259-0. Moses, Joel (2012). "Macsyma: A personal history". Journal of Symbolic Computation. 47 (2): 123–130. doi:10.1016/j.jsc.2010.08.018. Risch, R. H. (1969). "The problem of integration in finite terms". Transactions of the American Mathematical Society. 139. American Mathematical Society: 167–189. doi:10.2307/1995313. JSTOR 1995313. Risch, R. H. (1970). "The solution of the problem of integration in finite terms". Bulletin of the American Mathematical Society. 76 (3): 605–608. doi:10.1090/S0002-9904-1970-12454-5. Rosenlicht, Maxwell (1972). "Integration in finite terms". American Mathematical Monthly. 79 (9). Mathematical Association of America: 963–972. doi:10.2307/2318066. JSTOR 2318066. == External links == Bhatt, Bhuvanesh. "Risch Algorithm". MathWorld.
Wikipedia/Risch_algorithm
In mathematics, an integral transform is a type of transform that maps a function from its original function space into another function space via integration, where some of the properties of the original function might be more easily characterized and manipulated than in the original function space. The transformed function can generally be mapped back to the original function space using the inverse transform. == General form == An integral transform is any transform T {\displaystyle T} of the following form: ( T f ) ( u ) = ∫ t 1 t 2 f ( t ) K ( t , u ) d t {\displaystyle (Tf)(u)=\int _{t_{1}}^{t_{2}}f(t)\,K(t,u)\,dt} The input of this transform is a function f {\displaystyle f} , and the output is another function T f {\displaystyle Tf} . An integral transform is a particular kind of mathematical operator. There are numerous useful integral transforms. Each is specified by a choice of the function K {\displaystyle K} of two variables, that is called the kernel or nucleus of the transform. Some kernels have an associated inverse kernel K − 1 ( u , t ) {\displaystyle K^{-1}(u,t)} which (roughly speaking) yields an inverse transform: f ( t ) = ∫ u 1 u 2 ( T f ) ( u ) K − 1 ( u , t ) d u {\displaystyle f(t)=\int _{u_{1}}^{u_{2}}(Tf)(u)\,K^{-1}(u,t)\,du} A symmetric kernel is one that is unchanged when the two variables are permuted; it is a kernel function K {\displaystyle K} such that K ( t , u ) = K ( u , t ) {\displaystyle K(t,u)=K(u,t)} . In the theory of integral equations, symmetric kernels correspond to self-adjoint operators. == Motivation == There are many classes of problems that are difficult to solve—or at least quite unwieldy algebraically—in their original representations. An integral transform "maps" an equation from its original "domain" into another domain, in which manipulating and solving the equation may be much easier than in the original domain. The solution can then be mapped back to the original domain with the inverse of the integral transform. There are many applications of probability that rely on integral transforms, such as "pricing kernel" or stochastic discount factor, or the smoothing of data recovered from robust statistics; see kernel (statistics). == History == The precursor of the transforms were the Fourier series to express functions in finite intervals. Later the Fourier transform was developed to remove the requirement of finite intervals. Using the Fourier series, just about any practical function of time (the voltage across the terminals of an electronic device for example) can be represented as a sum of sines and cosines, each suitably scaled (multiplied by a constant factor), shifted (advanced or retarded in time) and "squeezed" or "stretched" (increasing or decreasing the frequency). The sines and cosines in the Fourier series are an example of an orthonormal basis. == Usage example == As an example of an application of integral transforms, consider the Laplace transform. This is a technique that maps differential or integro-differential equations in the "time" domain into polynomial equations in what is termed the "complex frequency" domain. (Complex frequency is similar to actual, physical frequency but rather more general. Specifically, the imaginary component ω of the complex frequency s = −σ + iω corresponds to the usual concept of frequency, viz., the rate at which a sinusoid cycles, whereas the real component σ of the complex frequency corresponds to the degree of "damping", i.e. an exponential decrease of the amplitude.) 
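Numerically, the general form above is a single quadrature once f and the kernel K are fixed. The following sketch assumes SciPy's quad and uses the Laplace kernel K(t, s) = e^(−st) as the worked example; the function names and the test integrand are illustrative, not from the text.

```python
# Numerically apply an integral transform (Tf)(u) = integral of f(t) K(t, u) dt.
import numpy as np
from scipy.integrate import quad

def transform(f, kernel, u, t1=0.0, t2=np.inf):
    """One-point evaluation of the transform by adaptive quadrature."""
    value, _err = quad(lambda t: f(t) * kernel(t, u), t1, t2)
    return value

laplace_kernel = lambda t, s: np.exp(-s * t)   # K(t, s) = e^{-s t}

a = 2.0
f = lambda t: np.exp(-a * t)                    # f(t) = e^{-a t}

s = 1.5
numeric = transform(f, laplace_kernel, s)
exact = 1.0 / (s + a)                           # known closed form of L{e^{-at}}
print(numeric, exact)                           # agree to quadrature tolerance
```

The same transform function works for any kernel of this general form, provided the integral converges at the chosen parameter value.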
The equation cast in terms of complex frequency is readily solved in the complex frequency domain (roots of the polynomial equations in the complex frequency domain correspond to eigenvalues in the time domain), leading to a "solution" formulated in the frequency domain. Employing the inverse transform, i.e., the inverse procedure of the original Laplace transform, one obtains a time-domain solution. In this example, polynomials in the complex frequency domain (typically occurring in the denominator) correspond to power series in the time domain, while axial shifts in the complex frequency domain correspond to damping by decaying exponentials in the time domain. The Laplace transform finds wide application in physics and particularly in electrical engineering, where the characteristic equations that describe the behavior of an electric circuit in the complex frequency domain correspond to linear combinations of exponentially scaled and time-shifted damped sinusoids in the time domain. Other integral transforms find special applicability within other scientific and mathematical disciplines. Another usage example is the kernel in the path integral: ψ ( x , t ) = ∫ − ∞ ∞ ψ ( x ′ , t ′ ) K ( x , t ; x ′ , t ′ ) d x ′ . {\displaystyle \psi (x,t)=\int _{-\infty }^{\infty }\psi (x',t')K(x,t;x',t')dx'.} This states that the total amplitude ψ ( x , t ) {\displaystyle \psi (x,t)} to arrive at ( x , t ) {\displaystyle (x,t)} is the sum (the integral) over all possible values x ′ {\displaystyle x'} of the total amplitude ψ ( x ′ , t ′ ) {\displaystyle \psi (x',t')} to arrive at the point ( x ′ , t ′ ) {\displaystyle (x',t')} multiplied by the amplitude to go from x ′ {\displaystyle x'} to x {\displaystyle x} [i.e. K ( x , t ; x ′ , t ′ ) {\displaystyle K(x,t;x',t')} ]. It is often referred to as the propagator for a given system. This (physics) kernel is the kernel of the integral transform. However, for each quantum system, there is a different kernel. == Table of transforms == In the limits of integration for the inverse transform, c is a constant which depends on the nature of the transform function. For example, for the one and two-sided Laplace transform, c must be greater than the largest real part of the zeroes of the transform function. Note that there are alternative notations and conventions for the Fourier transform. == Different domains == Here integral transforms are defined for functions on the real numbers, but they can be defined more generally for functions on a group. If instead one uses functions on the circle (periodic functions), integration kernels are then biperiodic functions; convolution by functions on the circle yields circular convolution. If one uses functions on the cyclic group of order n (Cn or Z/nZ), one obtains n × n matrices as integration kernels; convolution corresponds to circulant matrices. == General theory == Although the properties of integral transforms vary widely, they have some properties in common. For example, every integral transform is a linear operator, since the integral is a linear operator, and in fact if the kernel is allowed to be a generalized function then all linear operators are integral transforms (a properly formulated version of this statement is the Schwartz kernel theorem). The general theory of such integral equations is known as Fredholm theory. In this theory, the kernel is understood to be a compact operator acting on a Banach space of functions. 
Depending on the situation, the kernel is then variously referred to as the Fredholm operator, the nuclear operator or the Fredholm kernel. == See also == == References == == Further reading == A. D. Polyanin and A. V. Manzhirov, Handbook of Integral Equations, CRC Press, Boca Raton, 1998. ISBN 0-8493-2876-4 R. K. M. Thambynayagam, The Diffusion Handbook: Applied Solutions for Engineers, McGraw-Hill, New York, 2011. ISBN 978-0-07-175184-1 "Integral transform", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Tables of Integral Transforms at EqWorld: The World of Mathematical Equations.
Wikipedia/Integral_transform
Goldbach's conjecture is one of the oldest and best-known unsolved problems in number theory and all of mathematics. It states that every even natural number greater than 2 is the sum of two prime numbers. The conjecture has been shown to hold for all integers less than 4×10^18 but remains unproven despite considerable effort. == History == === Origins === On 7 June 1742, the Prussian mathematician Christian Goldbach wrote a letter to Leonhard Euler (letter XLIII), in which he proposed the following conjecture: Every integer that can be written as the sum of two primes can also be written as the sum of as many primes as one wishes, until all terms are units. Goldbach was following the now-abandoned convention of considering 1 to be a prime number, so that a sum of units would be a sum of primes. He then proposed a second conjecture in the margin of his letter, which implies the first: Es scheinet wenigstens, dass eine jede Zahl, die grösser ist als 2, ein aggregatum trium numerorum primorum sey. It seems at least, that every integer greater than 2 can be written as the sum of three primes. Euler replied in a letter dated 30 June 1742 and reminded Goldbach of an earlier conversation they had had ("... so Ew vormals mit mir communicirt haben ..."), in which Goldbach had remarked that the first of those two conjectures would follow from the statement that every positive even integer can be written as the sum of two primes. This is in fact equivalent to his second, marginal conjecture. In the letter dated 30 June 1742, Euler stated: Dass ... ein jeder numerus par eine summa duorum primorum sey, halte ich für ein ganz gewisses theorema, ungeachtet ich dasselbe nicht demonstriren kann. That ... every even integer is a sum of two primes, I regard as a completely certain theorem, although I cannot prove it. === Similar conjecture by Descartes === René Descartes wrote that "Every even number can be expressed as the sum of at most three primes." The proposition is similar to, but weaker than, Goldbach's conjecture. Paul Erdős said that "Descartes actually discovered this before Goldbach... but it is better that the conjecture was named for Goldbach because, mathematically speaking, Descartes was infinitely rich and Goldbach was very poor." === Partial results === The strong Goldbach conjecture is much more difficult than the weak Goldbach conjecture, which says that every odd integer greater than 5 is the sum of three primes. Using Vinogradov's method, Nikolai Chudakov, Johannes van der Corput, and Theodor Estermann showed (1937–1938) that almost all even numbers can be written as the sum of two primes (in the sense that the fraction of even numbers up to some N which can be so written tends towards 1 as N increases). In 1930, Lev Schnirelmann proved that any natural number greater than 1 can be written as the sum of not more than C prime numbers, where C is an effectively computable constant; see Schnirelmann density. Schnirelmann's constant is the lowest number C with this property. Schnirelmann himself obtained C < 800000. This result was subsequently enhanced by many authors, such as Olivier Ramaré, who in 1995 showed that every even number n ≥ 4 is in fact the sum of at most 6 primes. The best known result currently stems from the proof of the weak Goldbach conjecture by Harald Helfgott, which directly implies that every even number n ≥ 4 is the sum of at most 4 primes. In 1924, Hardy and Littlewood showed under the assumption of the generalized Riemann hypothesis that the number of even numbers up to X violating the Goldbach conjecture is much less than X^(1/2 + c) for small c. 
In 1948, using sieve theory methods, Alfréd Rényi showed that every sufficiently large even number can be written as the sum of a prime and an almost prime with at most K factors. Chen Jingrun showed in 1973 using sieve theory that every sufficiently large even number can be written as the sum of either two primes, or a prime and a semiprime (the product of two primes). See Chen's theorem for further information. In 1975, Hugh Lowell Montgomery and Robert C. Vaughan showed that "most" even numbers are expressible as the sum of two primes. More precisely, they showed that there exist positive constants c and C such that for all sufficiently large numbers N, every even number less than N is the sum of two primes, with at most CN^(1 − c) exceptions. In particular, the set of even integers that are not the sum of two primes has density zero. In 1951, Yuri Linnik proved the existence of a constant K such that every sufficiently large even number is the sum of two primes and at most K powers of 2. János Pintz and Imre Ruzsa found in 2020 that K = 8 works. Assuming the generalized Riemann hypothesis, K = 7 also works, as shown by Roger Heath-Brown and Jan-Christoph Schlage-Puchta in 2002. A proof for the weak conjecture was submitted in 2013 by Harald Helfgott to the Annals of Mathematics Studies series. Although the article was accepted, Helfgott decided to undertake the major modifications suggested by the referee. Despite several revisions, Helfgott's proof has not yet appeared in a peer-reviewed publication. The weak conjecture is implied by the strong conjecture, as if n − 3 is a sum of two primes, then n is a sum of three primes. However, the converse implication, and thus the strong Goldbach conjecture, would remain unproven even if Helfgott's proof is correct. === Computational results === For small values of n, the strong Goldbach conjecture (and hence the weak Goldbach conjecture) can be verified directly. For instance, in 1938, Nils Pipping laboriously verified the conjecture up to n = 100000. With the advent of computers, many more values of n have been checked; T. Oliveira e Silva ran a distributed computer search that has verified the conjecture for n ≤ 4×10^18 (and double-checked up to 4×10^17) as of 2013. One record from this search is that 3325581707333960528 is the smallest number that cannot be written as a sum of two primes where one is smaller than 9781. === In popular culture === Goldbach's Conjecture (Chinese: 哥德巴赫猜想) is the title of the biography of Chinese mathematician and number theorist Chen Jingrun, written by Xu Chi. The conjecture is a central point in the plot of the 1992 novel Uncle Petros and Goldbach's Conjecture by Greek author Apostolos Doxiadis, in the short story "Sixty Million Trillion Combinations" by Isaac Asimov and also in the 2008 mystery novel No One You Know by Michelle Richmond. Goldbach's conjecture is part of the plot of the 2007 Spanish film Fermat's Room. Goldbach's conjecture is featured as the main topic of research of the titular character Marguerite in the 2023 French-Swiss film Marguerite's Theorem. == Formal statement == Each of the three conjectures has a natural analog in terms of the modern definition of a prime, under which 1 is excluded. A modern version of the first conjecture is: Every integer that can be written as the sum of two primes can also be written as the sum of as many primes as one wishes, until either all terms are two (if the integer is even) or one term is three and all other terms are two (if the integer is odd). A modern version of the marginal conjecture is: Every integer greater than 5 can be written as the sum of three primes. And a modern version of Goldbach's older conjecture of which Euler reminded him is: Every even integer greater than 2 can be written as the sum of two primes. These modern versions might not be entirely equivalent to the corresponding original statements. 
For example, if there were an even integer N = p + 1 larger than 4, for p a prime, that could not be expressed as the sum of two primes in the modern sense, then it would be a counterexample to the modern version of the third conjecture (without being a counterexample to the original version). The modern version is thus probably stronger (but in order to confirm that, one would have to prove that the first version, freely applied to any positive even integer n, could not possibly rule out the existence of such a specific counterexample N). In any case, the modern statements have the same relationships with each other as the older statements did. That is, the second and third modern statements are equivalent, and either implies the first modern statement. The third modern statement (equivalent to the second) is the form in which the conjecture is usually expressed today. It is also known as the "strong", "even", or "binary" Goldbach conjecture. A weaker form of the second modern statement, known as "Goldbach's weak conjecture", the "odd Goldbach conjecture", or the "ternary Goldbach conjecture", asserts that every odd integer greater than 5 can be written as the sum of three primes. == Heuristic justification == Statistical considerations that focus on the probabilistic distribution of prime numbers present informal evidence in favour of the conjecture (in both the weak and strong forms) for sufficiently large integers: the greater the integer, the more ways there are available for that number to be represented as the sum of two or three other numbers, and the more "likely" it becomes that at least one of these representations consists entirely of primes. A very crude version of the heuristic probabilistic argument (for the strong form of the Goldbach conjecture) is as follows. The prime number theorem asserts that an integer m selected at random has roughly a 1/ln m chance of being prime. Thus if n is a large even integer and m is a number between 3 and n/2, then one might expect the probability of m and n − m simultaneously being prime to be 1/(ln m ln(n − m)). If one pursues this heuristic, one might expect the total number of ways to write a large even integer n as the sum of two odd primes to be roughly ∑ m = 3 n 2 1 ln ⁡ m 1 ln ⁡ ( n − m ) ≈ n 2 ( ln ⁡ n ) 2 . {\displaystyle \sum _{m=3}^{\frac {n}{2}}{\frac {1}{\ln m}}{\frac {1}{\ln(n-m)}}\approx {\frac {n}{2(\ln n)^{2}}}.} Since ln n ≪ √n, this quantity goes to infinity as n increases, and one would expect that every large even integer has not just one representation as the sum of two primes, but in fact very many such representations. This heuristic argument is actually somewhat inaccurate because it assumes that the events of m and n − m being prime are statistically independent of each other. For instance, if m is odd, then n − m is also odd, and if m is even, then n − m is even, a non-trivial relation because, besides the number 2, only odd numbers can be prime. Similarly, if n is divisible by 3, and m is already a prime other than 3, then n − m would also be coprime to 3 and thus be slightly more likely to be prime than a general number. Pursuing this type of analysis more carefully, G. H. 
Hardy and John Edensor Littlewood in 1923 conjectured (as part of their Hardy–Littlewood prime tuple conjecture) that for any fixed c ≥ 2, the number of representations of a large integer n as the sum of c primes n = p1 + ⋯ + pc with p1 ≤ ⋯ ≤ pc should be asymptotically equal to ( ∏ p p γ c , p ( n ) ( p − 1 ) c ) ∫ 2 ≤ x 1 ≤ ⋯ ≤ x c : x 1 + ⋯ + x c = n d x 1 ⋯ d x c − 1 ln ⁡ x 1 ⋯ ln ⁡ x c , {\displaystyle \left(\prod _{p}{\frac {p\gamma _{c,p}(n)}{(p-1)^{c}}}\right)\int _{2\leq x_{1}\leq \cdots \leq x_{c}:x_{1}+\cdots +x_{c}=n}{\frac {dx_{1}\cdots dx_{c-1}}{\ln x_{1}\cdots \ln x_{c}}},} where the product is over all primes p, and γc,p(n) is the number of solutions to the equation n = q1 + ⋯ + qc mod p in modular arithmetic, subject to the constraints q1, …, qc ≠ 0 mod p. This formula has been rigorously proven to be asymptotically valid for c ≥ 3 from the work of Ivan Matveevich Vinogradov, but is still only a conjecture when c = 2. In the latter case, the above formula simplifies to 0 when n is odd, and to 2 Π 2 ( ∏ p ∣ n ; p ≥ 3 p − 1 p − 2 ) ∫ 2 n d x ( ln ⁡ x ) 2 ≈ 2 Π 2 ( ∏ p ∣ n ; p ≥ 3 p − 1 p − 2 ) n ( ln ⁡ n ) 2 {\displaystyle 2\Pi _{2}\left(\prod _{p\mid n;p\geq 3}{\frac {p-1}{p-2}}\right)\int _{2}^{n}{\frac {dx}{(\ln x)^{2}}}\approx 2\Pi _{2}\left(\prod _{p\mid n;p\geq 3}{\frac {p-1}{p-2}}\right){\frac {n}{(\ln n)^{2}}}} when n is even, where Π2 is Hardy–Littlewood's twin prime constant Π 2 := ∏ p p r i m e ≥ 3 ( 1 − 1 ( p − 1 ) 2 ) ≈ 0.66016 18158 46869 57392 78121 10014 … {\displaystyle \Pi _{2}:=\prod _{p\;{\rm {prime}}\geq 3}\left(1-{\frac {1}{(p-1)^{2}}}\right)\approx 0.66016\,18158\,46869\,57392\,78121\,10014\dots } This is sometimes known as the extended Goldbach conjecture. The strong Goldbach conjecture is in fact very similar to the twin prime conjecture, and the two conjectures are believed to be of roughly comparable difficulty. == Goldbach partition function == Two primes that sum to an even integer are called a Goldbach partition of that integer. The Goldbach partition function is the function that associates to each even integer the number of ways it can be decomposed into a sum of two primes. Its graph looks like a comet and is therefore called Goldbach's comet. Goldbach's comet suggests tight upper and lower bounds on the number of representations of an even number as the sum of two primes, and also that the number of these representations depends strongly on the value modulo 3 of the number. == Related problems == Although Goldbach's conjecture implies that every positive integer greater than one can be written as a sum of at most three primes, it is not always possible to find such a sum using a greedy algorithm that uses the largest possible prime at each step. The Pillai sequence tracks the numbers requiring the largest number of primes in their greedy representations. Similar problems to Goldbach's conjecture exist in which primes are replaced by other particular sets of numbers, such as the squares: it was proven by Lagrange that every positive integer is the sum of four squares. See Waring's problem and the related Waring–Goldbach problem on sums of powers of primes. Hardy and Littlewood listed as their Conjecture I: "Every large odd number (n > 5) is the sum of a prime and the double of a prime". This conjecture is known as Lemoine's conjecture and is also called Levy's conjecture. 
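The Goldbach partition function described above is straightforward to compute for small n, and comparing it with the crude heuristic n / (2 (ln n)^2) from the heuristic-justification section shows the expected order of magnitude. A small sketch, assuming SymPy's isprime for primality testing (the helper name is my own):

```python
# Count Goldbach partitions g(n): unordered prime pairs (p, q),
# p <= q, with p + q = n, and compare with the crude heuristic.
from math import log
from sympy import isprime

def goldbach_partitions(n):
    return sum(1 for p in range(2, n // 2 + 1)
               if isprime(p) and isprime(n - p))

for n in range(4, 101, 2):
    g = goldbach_partitions(n)
    heuristic = n / (2 * log(n) ** 2)
    print(n, g, round(heuristic, 1))
    assert g >= 1   # a zero anywhere would disprove the conjecture
```

Plotting g(n) for n up to a few hundred thousand reproduces the comet shape, with its visible bands reflecting the dependence on n modulo 3 noted above.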
The Goldbach conjecture for practical numbers, a prime-like sequence of integers, was stated by Margenstern in 1984, and proved by Melfi in 1996: every even number is a sum of two practical numbers. Harvey Dubner proposed a strengthening of the Goldbach conjecture that states that every even integer greater than 4208 is the sum of two twin primes (not necessarily belonging to the same pair). Only 34 even integers less than 4208 are not the sum of two twin primes; Dubner has verified computationally that this list is complete up to 2 ⋅ 10 10 . {\displaystyle 2\cdot 10^{10}.} A proof of this stronger conjecture would not only imply Goldbach's conjecture, but also the twin prime conjecture. According to Bertrand's postulate, for every integer n > 1 {\displaystyle n>1} , there is always at least one prime p {\displaystyle p} such that n < p < 2 n . {\displaystyle n<p<2n.} If the postulate were false, there would exist some integer n {\displaystyle n} for which no prime numbers lie between n {\displaystyle n} and 2 n {\displaystyle 2n} , making it impossible to express 2 n {\displaystyle 2n} as a sum of two primes. Goldbach's conjecture is used when studying computational complexity. The connection is made through the Busy Beaver function, where BB(n) is the maximum number of steps taken by any n state Turing machine that halts. There is a 27-state Turing machine that halts if and only if Goldbach's conjecture is false. Hence if BB(27) was known, and the Turing machine did not stop in that number of steps, it would be known to run forever and hence no counterexamples exist (which proves the conjecture true). This is a completely impractical way to settle the conjecture; instead it is used to suggest that BB(27) will be very hard to compute, at least as difficult as settling the Goldbach conjecture. == References == == Further reading == Deshouillers, J.-M.; Effinger, G.; te Riele, H.; Zinoviev, D. (1997). "A complete Vinogradov 3-primes theorem under the Riemann hypothesis" (PDF). Electronic Research Announcements of the American Mathematical Society. 3 (15): 99–104. doi:10.1090/S1079-6762-97-00031-0. Montgomery, H. L.; Vaughan, R. C. (1975). "The exceptional set in Goldbach's problem" (PDF). Acta Arithmetica. 27: 353–370. doi:10.4064/aa-27-1-353-370. Terence Tao proved that all odd numbers are at most the sum of five primes. Goldbach Conjecture at MathWorld. == External links == Media related to Goldbach's conjecture at Wikimedia Commons "Goldbach problem", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Goldbach's original letter to Euler — PDF format (in German and Latin) Goldbach's conjecture, part of Chris Caldwell's Prime Pages. Goldbach conjecture verification, Tomás Oliveira e Silva's distributed computer search.
Wikipedia/Goldbach_conjecture
In mathematics, a zeta function is (usually) a function analogous to the original example, the Riemann zeta function ζ ( s ) = ∑ n = 1 ∞ 1 n s . {\displaystyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}.} Zeta functions include: Airy zeta function, related to the zeros of the Airy function Arakawa–Kaneko zeta function Arithmetic zeta function Artin–Mazur zeta function of a dynamical system Barnes zeta function or double zeta function Beurling zeta function of Beurling generalized primes Dedekind zeta function of a number field Duursma zeta function of error-correcting codes Epstein zeta function of a quadratic form Goss zeta function of a function field Hasse–Weil zeta function of a variety Height zeta function of a variety Hurwitz zeta function, a generalization of the Riemann zeta function Igusa zeta function Ihara zeta function of a graph L-function, a "twisted" zeta function Lefschetz zeta function of a morphism Lerch zeta function, a generalization of the Riemann zeta function Local zeta function of a characteristic-p variety Matsumoto zeta function Minakshisundaram–Pleijel zeta function of a Laplacian Motivic zeta function of a motive Multiple zeta function, or Mordell–Tornheim zeta function of several variables p-adic zeta function of a p-adic number Prime zeta function, like the Riemann zeta function, but only summed over primes Riemann zeta function, the archetypal example Ruelle zeta function Selberg zeta function of a Riemann surface Shimizu L-function Shintani zeta function Subgroup zeta function Witten zeta function of a Lie group Zeta function of an incidence algebra, a function that maps every interval of a poset to the constant value 1. Despite not resembling a holomorphic function, the special case for the poset of integer divisibility is related as a formal Dirichlet series to the Riemann zeta function. Zeta function of an operator or spectral zeta function == See also == Other functions called zeta functions, but not analogous to the Riemann zeta function Jacobi zeta function Weierstrass zeta function Topics related to zeta functions Artin conjecture Birch and Swinnerton-Dyer conjecture Riemann hypothesis and the generalized Riemann hypothesis. Selberg class S Explicit formulae for L-functions Trace formula == External links == A directory of all known zeta functions
Wikipedia/Zeta_function
The French Academy of Sciences (French: Académie des sciences, [akademi de sjɑ̃s]) is a learned society, founded in 1666 by Louis XIV at the suggestion of Jean-Baptiste Colbert, to encourage and protect the spirit of French scientific research. It was at the forefront of scientific developments in Europe in the 17th and 18th centuries, and is one of the earliest Academies of Sciences. Currently headed by Patrick Flandrin (President of the academy), it is one of the five Academies of the Institut de France. == History == The Academy of Sciences traces its origin to Colbert's plan to create a general academy. He chose a small group of scholars who met on 22 December 1666 in the King's library, near the present-day Bibliothèque Nationale, and thereafter held twice-weekly working meetings there in the two rooms assigned to the group. The first 30 years of the academy's existence were relatively informal, since no statutes had as yet been laid down for the institution. In contrast to its British counterpart, the academy was founded as an organ of government. In Paris there were not many membership openings, and elections to fill positions were contentious. The election process had at least six stages, with rules and regulations that allowed chosen candidates to canvass other members and allowed current members to postpone certain stages of the process if the need arose. Elections in the early days of the academy were important activities, and as such made up a large part of the proceedings at the academy, with many meetings being held regarding the election to fill a single vacancy within the academy. That is not to say that discussion of candidates and the election process as a whole was confined to the meetings: members who belonged to the vacancy's respective field would continue discussing potential candidates for the vacancy in private. Being elected into the academy did not necessarily guarantee full membership; in some cases, one would enter the academy as an associate or correspondent before being appointed a full member. The election process originally served only to replace members from a specific section. For example, if someone whose study was mathematics was either removed or resigned from his position, the following election nominated only those whose focus was also mathematics in order to fill that discipline's vacancy. That led to periods in which no specialists for specific fields of study could be found, which left positions in those fields vacant since they could not be filled with people from other disciplines. The needed reform came late in the 20th century, in 1987, when the academy decided to abandon the practice and begin filling vacancies with people from new disciplines. This reform was aimed not only at further diversifying the disciplines under the academy, but also at helping combat the internal aging of the academy itself. The academy was expected to remain apolitical, and to avoid discussion of religious and social issues. On 20 January 1699, Louis XIV gave the Company its first rules. The academy received the name of Royal Academy of Sciences and was installed in the Louvre in Paris. Following this reform, the academy began publishing a volume each year with information on all the work done by its members and obituaries for members who had died. This reform also codified the method by which members of the academy could receive pensions for their work. 
The academy was originally organized by the royal reform hierarchically into the following groups: Pensionaires, Pupils, Honoraires, and Associés. The reform also added new groups not previously recognized, such as Vétéran. Some of these roles' membership limits were expanded, and some roles were even removed or combined, over the course of the academy's history. The Honoraires group, established by this reform in 1699 and whose members were directly appointed by the King, was recognized until its abolition in 1793. Membership in the academy exceeded 100 officially recognised full members only in 1976, 310 years after the academy's inception in 1666. The membership increase came with a large-scale reorganization in 1976. Under this reorganization, 130 resident members, 160 correspondents, and 80 foreign associates could be elected. A vacancy opens only upon the death of members, as they serve for life. During elections, half of the vacancies are reserved for people less than 55 years old. This was intended to encourage younger members to join the academy. The reorganization also divided the academy into two divisions: one, Division 1, covers the applications of mathematics and physical sciences; the other, Division 2, covers the applications of chemical, natural, biological, and medical sciences. On 8 August 1793, the National Convention abolished all the academies. On 22 August 1795, a National Institute of Sciences and Arts was put in place, bringing together the old academies of the sciences, literature and arts, among them the Académie française and the Académie des sciences. Also in 1795, the academy determined these ten titles (the first four in Division 1 and the others in Division 2) to be its newly accepted branches of scientific study: Mathematics Mechanics Astronomy Physics Chemistry Mineralogy Botany Agriculture Anatomy and Zoology Medicine and Surgery The last two sections were bundled since there were many good candidates fit to be elected for those practices, and the competition was stiff. Some individuals, like François Magendie, had made stellar advancements in their selected fields of study that warranted the possible addition of new fields. However, even someone like Magendie, who had made breakthroughs in physiology and impressed the academy with his hands-on vivisection experiments, could not get his discipline into its own category. Despite Magendie being one of the leading innovators of his time, it was still a battle for him to become an official member of the academy, a feat he would later accomplish in 1821. He further enhanced the academy's prestige when he and the anatomist Charles Bell produced the widely known "Bell–Magendie law". From 1795 until the First World War in 1914, the French Academy of Sciences was the preeminent organization of French science. Almost all the old members of the previously abolished Académie were formally re-elected and retook their ancient seats. Among the exceptions was Dominique, comte de Cassini, who refused to take his seat. Membership in the academy was not restricted to scientists: in 1798 Napoleon Bonaparte was elected a member of the academy and three years later a president in connection with his Egyptian expedition, which had a scientific component. In 1816, the again renamed "Royal Academy of Sciences" became autonomous, while forming part of the Institute of France; the head of State became its patron. In the Second Republic, the name returned to Académie des sciences. 
During this period, the academy was funded by and accountable to the Ministry of Public Instruction. The academy came to control French patent laws in the course of the eighteenth century, acting as the liaison of artisans' knowledge to the public domain. As a result, academicians dominated technological activities in France. The academy proceedings were published under the name Comptes rendus de l'Académie des Sciences (1835–1965). The Comptes rendus is now a journal series with seven titles. The publications can be found on the site of the French National Library. In 1818 the French Academy of Sciences launched a competition to explain the properties of light. The civil engineer Augustin-Jean Fresnel entered the competition by submitting a new wave theory of light. Siméon Denis Poisson, one of the members of the judging committee, studied Fresnel's theory in detail. Being a supporter of the particle theory of light, he looked for a way to disprove it. Poisson thought that he had found a flaw when he demonstrated that Fresnel's theory predicts that an on-axis bright spot would exist in the shadow of a circular obstacle, where there should be complete darkness according to the particle theory of light. The Poisson spot is not easily observed in everyday situations, so it was only natural for Poisson to interpret it as an absurd result that should disprove Fresnel's theory. However, the head of the committee, Dominique-François-Jean Arago, who incidentally later became Prime Minister of France, decided to perform the experiment in more detail. He attached a 2-mm metallic disk to a glass plate with wax. To everyone's surprise he succeeded in observing the predicted spot, which convinced most scientists of the wave nature of light. For three centuries women were not allowed as members of the academy. This meant that many women scientists were excluded, including two-time Nobel Prize winner Marie Curie, Nobel winner Irène Joliot-Curie, mathematician Sophie Germain, and many other deserving women scientists. The first woman admitted as a correspondent member was a student of Curie's, Marguerite Perey, in 1962. The first female full member was Yvonne Choquet-Bruhat in 1979. Membership in the academy is geared towards representing the demographics of the French populace. French population growth and change in the early 21st century led the academy to expand its reference population sizes by a reform in early 2002. The overwhelming majority of members remain in the academy until death, with a few exceptions of removals, transfers, and resignations. The last removal of a member from the academy occurred in 1944. Removal from the academy was often for not performing to standards, not performing at all, leaving the country, or political reasons. On some rare occasions, a member has been elected twice and subsequently removed twice. This is the case for Marie-Adolphe Carnot. == Government interference == The most direct involvement of the government in the affairs of the institute came in the initial nomination of members in 1795, but as the members so nominated constituted only one third of the membership, and most of these had previously been elected as members of the respective academies under the old regime, few objections were raised. Moreover, these nominated members were then completely free to nominate the remaining members of the institute.
Members expected to remain such for life, but interference occurred in a few cases where the government suddenly terminated membership for political reasons. The other main interference came when the government refused to accept the result of academy elections. The government's control of the academies was apparent in 1803, when Bonaparte decided on a general reorganization. His principal concern was not the First class but the Second, which included political scientists who were potential critics of his government. Bonaparte abolished the Second class completely and, after a few expulsions, redistributed its remaining members, together with those of the Third class, into a new Second class concerned with literature and a new Third class devoted to the fine arts. Still, this relationship between the academy and the government was not a one-way affair, as members expected to receive payment of an honorarium. == Decline == Although the academy still exists today, its reputation and status were widely questioned after World War I. One factor behind its decline was its development from a meritocracy into a gerontocracy: a shift from leadership by those with demonstrated scientific ability to favoring those with seniority. It became known as a sort of "hall of fame" that lost control, real and symbolic, of the professional scientific diversity in France at the time. Another factor was that in the span of five years, 1909 to 1914, funding to science faculties dropped considerably, eventually leading to a financial crisis in France. == Present use == Today the academy is one of five academies comprising the Institut de France. Its members are elected for life. Currently, there are 150 full members, 300 corresponding members, and 120 foreign associates. They are divided into two scientific groups: the Mathematical and Physical sciences and their applications, and the Chemical, Biological, Geological and Medical sciences and their applications. The academy currently pursues five missions: encouraging scientific life, promoting the teaching of science, transmitting knowledge between scientific communities, fostering international collaborations, and ensuring a dual role of expertise and advice. The French Academy of Sciences originally focused its development efforts on creating a true co-development Euro-African program, beginning in 1997. Since then, it has broadened its scope of action to other regions of the world. The standing committee COPED is in charge of the international development projects undertaken by the French Academy of Sciences and its associates. The current president of COPED is Pierre Auger, the vice president is Michel Delseny, and the honorary president is François Gros, all of whom are current members of the French Academy of Sciences. COPED has hosted several workshops and colloquia in Paris, involving representatives from African academies, universities or research centers, addressing a variety of themes and challenges dealing with African development and covering a broad spectrum of fields: specifically, higher education in the sciences, and research practices in basic and applied sciences that deal with various aspects relevant to development (renewable energy, infectious diseases, animal pathologies, food resources, access to safe water, agriculture, urban health, etc.).
== Current committees and working parties == The Academic Standing Committees and Working Parties prepare the advice notes, policy statements and the Academic Reports. Some have a statutory remit, such as the Select Committee, the Committee for International Affairs and the Committee for Scientists' Rights; some are created ad hoc by the academy and approved formally by vote in a members-only session. Today the academy's standing committees and working parties include:
The Academic Standing Committee in charge of the Biennial Report on Science and Technology
The Academic Standing Committee for Science, Ethics and Society
The Academic Standing Committee for the Environment
The Academic Standing Committee for Space Research
The Academic Standing Committee for Science and Metrology
The Academic Standing Committee for Science History and Epistemology
The Academic Standing Committee for Science and Safety Issues
The Academic Standing Committee for Science Education and Training
The Academic Standing La main à la pâte Committee
The Academic Standing Committee for the Defense of Scientists' Rights (CODHOS)
The Academic Standing Committee for International Affairs (CORI)
The French Committee for International Scientific Unions (COFUSI)
The Academic Standing Committee for Scientific and Technological International Relations (CARIST)
The Academic Standing Committee for Developing Countries (COPED)
The Inter-academic Group for Development (GID)
The Academic Standing Commission for Sealed Deposits
The Academic Standing Committee for Terminology and Neologisms
The Antoine Lavoisier Standing Committee
The Academic Standing Committee for Prospects in Energy Procurement
The Special Academic Working Party on Scientific Computing
The Special Academic Working Party on Material Sciences and Engineering
== Medals, awards and prizes == Each year, the Academy of Sciences distributes about 80 prizes. These include:
the Marie Skłodowska-Curie and Pierre Curie Polish-French Science Award, created in 2022
the Grande Médaille, awarded annually, in rotation, in the relevant disciplines of each division of the academy, to a French or foreign scholar who has contributed to the development of science in a decisive way
the Lalande Prize, awarded from 1802 through 1970, for outstanding achievement in astronomy
the Valz Prize, awarded from 1877 through 1970, to honor advances in astronomy
the Richard Lounsbery Award, jointly with the National Academy of Sciences
the Prix Jacques Herbrand, for mathematics and physics
the Prix Paul Pascal, for chemistry
the Louis Bachelier Prize, for major contributions to mathematical modeling in finance
the Prix Michel Montpetit, for computer science and applied mathematics, awarded since 1977
the Leconte Prize, awarded annually since 1886, to recognize important discoveries in mathematics, physics, chemistry, natural history or medicine
the Prix Tchihatcheff (Tchihatchef; Chikhachev)
== People == The following are incomplete lists of the officers of the academy. See also Category:Officers of the French Academy of Sciences.
For a list of the academy's members past and present, see Category:Members of the French Academy of Sciences === Presidents === Source: French Academy of Sciences === Treasurers === ?–1788 Georges-Louis Leclerc, Comte de Buffon 1788–1791 Mathieu Tillet === Permanent secretaries === ==== General ==== ==== Mathematical Sciences ==== ==== Physical Sciences ==== ==== Chemistry and Biology ==== == Publications == Publications of the French Academy of Sciences "Histoire de l'Académie royale des sciences" (1700–1790) == See also == French art salons and academies French Geodesic Mission History of the metre Seconds pendulum Royal Commission on Animal Magnetism == Notes == == References == == External links == Official website (in French) – English-language version Complete listing of current members Notes on the Académie des Sciences from the Scholarly Societies project (includes information on the society journals) Search the Proceedings of the Académie des sciences in the French National Library (search item: Comptes Rendus) Comptes rendus de l'Académie des sciences. Série 1, Mathématique in Gallica, the digital library of the BnF.
Wikipedia/Académie_des_sciences
In number theory, the divisor summatory function is a function that is a sum over the divisor function. It frequently occurs in the study of the asymptotic behaviour of the Riemann zeta function. The various studies of the behaviour of the divisor function are sometimes called divisor problems. == Definition == The divisor summatory function is defined as D ( x ) = ∑ n ≤ x d ( n ) = ∑ j , k j k ≤ x 1 {\displaystyle D(x)=\sum _{n\leq x}d(n)=\sum _{j,k \atop jk\leq x}1} where d ( n ) = σ 0 ( n ) = ∑ j , k j k = n 1 {\displaystyle d(n)=\sigma _{0}(n)=\sum _{j,k \atop jk=n}1} is the divisor function. The divisor function counts the number of ways that the integer n can be written as a product of two integers. More generally, one defines D k ( x ) = ∑ n ≤ x d k ( n ) = ∑ m ≤ x ∑ m n ≤ x d k − 1 ( n ) {\displaystyle D_{k}(x)=\sum _{n\leq x}d_{k}(n)=\sum _{m\leq x}\sum _{mn\leq x}d_{k-1}(n)} where dk(n) counts the number of ways that n can be written as a product of k numbers. This quantity can be visualized as the count of the number of lattice points fenced off by a hyperbolic surface in k dimensions. Thus, for k = 2, D(x) = D2(x) counts the number of points on a square lattice bounded on the left by the vertical axis, on the bottom by the horizontal axis, and to the upper-right by the hyperbola jk = x. Roughly, this shape may be envisioned as a hyperbolic simplex. This allows us to provide an alternative expression for D(x), and a simple way to compute it in O ( x ) {\displaystyle O({\sqrt {x}})} time: D ( x ) = ∑ k = 1 x ⌊ x k ⌋ = 2 ∑ k = 1 u ⌊ x k ⌋ − u 2 {\displaystyle D(x)=\sum _{k=1}^{x}\left\lfloor {\frac {x}{k}}\right\rfloor =2\sum _{k=1}^{u}\left\lfloor {\frac {x}{k}}\right\rfloor -u^{2}} , where u = ⌊ x ⌋ {\displaystyle u=\left\lfloor {\sqrt {x}}\right\rfloor } . If the hyperbola in this context is replaced by a circle then determining the value of the resulting function is known as the Gauss circle problem. Sequence of D(n) (sequence A006218 in the OEIS): 0, 1, 3, 5, 8, 10, 14, 16, 20, 23, 27, 29, 35, 37, 41, 45, 50, 52, 58, 60, 66, 70, 74, 76, 84, 87, 91, 95, 101, 103, 111, ... == Dirichlet's divisor problem == Finding a closed form for this summed expression seems to be beyond the techniques available, but it is possible to give approximations. The leading behavior of the series is given by D ( x ) = x log ⁡ x + x ( 2 γ − 1 ) + Δ ( x ) {\displaystyle D(x)=x\log x+x(2\gamma -1)+\Delta (x)\ } where γ {\displaystyle \gamma } is the Euler–Mascheroni constant, and the error term is Δ ( x ) = O ( x ) . {\displaystyle \Delta (x)=O\left({\sqrt {x}}\right).} Here, O {\displaystyle O} denotes Big-O notation. This estimate can be proven using the Dirichlet hyperbola method, and was first established by Dirichlet in 1849.: 37–38, 69  The Dirichlet divisor problem, precisely stated, is to improve this error bound by finding the smallest value of θ {\displaystyle \theta } for which Δ ( x ) = O ( x θ + ϵ ) {\displaystyle \Delta (x)=O\left(x^{\theta +\epsilon }\right)} holds true for all ϵ > 0 {\displaystyle \epsilon >0} . As of today, this problem remains unsolved. Progress has been slow. Many of the same methods work for this problem and for Gauss's circle problem, another lattice-point counting problem. Section F1 of Unsolved Problems in Number Theory surveys what is known and not known about these problems. In 1904, G. Voronoi proved that the error term can be improved to O ( x 1 / 3 log ⁡ x ) . {\displaystyle O(x^{1/3}\log x).} : 381  In 1916, G. H. 
Hardy showed that inf θ ≥ 1 / 4 {\displaystyle \inf \theta \geq 1/4} . In particular, he demonstrated that for some constant K {\displaystyle K} , there exist values of x for which Δ ( x ) > K x 1 / 4 {\displaystyle \Delta (x)>Kx^{1/4}} and values of x for which Δ ( x ) < − K x 1 / 4 {\displaystyle \Delta (x)<-Kx^{1/4}} .: 69  In 1922, J. van der Corput improved Dirichlet's bound to inf θ ≤ 33 / 100 = 0.33 {\displaystyle \inf \theta \leq 33/100=0.33} .: 381  In 1928, van der Corput proved that inf θ ≤ 27 / 82 = 0.3 29268 ¯ {\displaystyle \inf \theta \leq 27/82=0.3{\overline {29268}}} .: 381  In 1950, Chih Tsung-tao and, independently in 1953, H. E. Richert proved that inf θ ≤ 15 / 46 = 0.32608695652... {\displaystyle \inf \theta \leq 15/46=0.32608695652...} .: 381  In 1969, Grigori Kolesnik demonstrated that inf θ ≤ 12 / 37 = 0. 324 ¯ {\displaystyle \inf \theta \leq 12/37=0.{\overline {324}}} .: 381  In 1973, Kolesnik demonstrated that inf θ ≤ 346 / 1067 = 0.32427366448... {\displaystyle \inf \theta \leq 346/1067=0.32427366448...} .: 381  In 1982, Kolesnik demonstrated that inf θ ≤ 35 / 108 = 0.32 407 ¯ {\displaystyle \inf \theta \leq 35/108=0.32{\overline {407}}} .: 381  In 1988, H. Iwaniec and C. J. Mozzochi proved that inf θ ≤ 7 / 22 = 0.3 18 ¯ {\displaystyle \inf \theta \leq 7/22=0.3{\overline {18}}} . In 2003, M.N. Huxley improved this to show that inf θ ≤ 131 / 416 = 0.31490384615... {\displaystyle \inf \theta \leq 131/416=0.31490384615...} . So, inf θ {\displaystyle \inf \theta } lies somewhere between 1/4 and 131/416 (approx. 0.3149); it is widely conjectured to be 1/4. Theoretical evidence lends credence to this conjecture, since Δ ( x ) / x 1 / 4 {\displaystyle \Delta (x)/x^{1/4}} has a (non-Gaussian) limiting distribution. The value of 1/4 would also follow from a conjecture on exponent pairs. == Piltz divisor problem == In the generalized case, one has D k ( x ) = x P k ( log ⁡ x ) + Δ k ( x ) {\displaystyle D_{k}(x)=xP_{k}(\log x)+\Delta _{k}(x)\,} where P k {\displaystyle P_{k}} is a polynomial of degree k − 1 {\displaystyle k-1} . Using simple estimates, it is readily shown that Δ k ( x ) = O ( x 1 − 1 / k log k − 2 ⁡ x ) {\displaystyle \Delta _{k}(x)=O\left(x^{1-1/k}\log ^{k-2}x\right)} for integer k ≥ 2 {\displaystyle k\geq 2} . As in the k = 2 {\displaystyle k=2} case, the infimum of the bound is not known for any value of k {\displaystyle k} . Computing these infima is known as the Piltz divisor problem, after the German mathematician Adolf Piltz. 
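Both the O(√x) formula above and the size of the error term Δ(x) are easy to explore numerically. The following sketch (illustrative Python, not part of the article; the helper names are ours) computes D(x) by the hyperbola formula, checks it against the opening values of the sequence A006218 quoted above, and prints the ratio Δ(x)/x^{1/4} that the conjectured exponent concerns.

```python
import math

def divisor_summatory(x: int) -> int:
    """D(x) via the hyperbola formula: 2 * sum_{k <= u} floor(x/k) - u^2, u = floor(sqrt(x))."""
    u = math.isqrt(x)
    return 2 * sum(x // k for k in range(1, u + 1)) - u * u

# Matches the sequence quoted above (A006218, starting at n = 1).
assert [divisor_summatory(n) for n in range(1, 13)] == [1, 3, 5, 8, 10, 14, 16, 20, 23, 27, 29, 35]

GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def delta(x: int) -> float:
    """Error term Delta(x) = D(x) - x log x - (2*gamma - 1) x."""
    return divisor_summatory(x) - x * math.log(x) - (2 * GAMMA - 1) * x

for x in (10**3, 10**4, 10**5, 10**6):
    print(x, round(delta(x), 3), round(delta(x) / x**0.25, 3))
```

Because Δ(x) changes sign, a handful of sample points only hints at its typical size; the conjecture concerns the extreme growth of the error term.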
Defining the order α k {\displaystyle \alpha _{k}} as the smallest value for which Δ k ( x ) = O ( x α k + ε ) {\displaystyle \Delta _{k}(x)=O\left(x^{\alpha _{k}+\varepsilon }\right)} holds, for any ε > 0 {\displaystyle \varepsilon >0} , one has the following results (note that α 2 {\displaystyle \alpha _{2}} is the θ {\displaystyle \theta } of the previous section): α 2 ≤ 131 416 , {\displaystyle \alpha _{2}\leq {\frac {131}{416}}\ ,} α 3 ≤ 43 96 , {\displaystyle \alpha _{3}\leq {\frac {43}{96}}\ ,} and α k ≤ 3 k − 4 4 k ( 4 ≤ k ≤ 8 ) α 9 ≤ 35 54 , α 10 ≤ 41 60 , α 11 ≤ 7 10 α k ≤ k − 2 k + 2 ( 12 ≤ k ≤ 25 ) α k ≤ k − 1 k + 4 ( 26 ≤ k ≤ 50 ) α k ≤ 31 k − 98 32 k ( 51 ≤ k ≤ 57 ) α k ≤ 7 k − 34 7 k ( k ≥ 58 ) {\displaystyle {\begin{aligned}\alpha _{k}&\leq {\frac {3k-4}{4k}}\quad (4\leq k\leq 8)\\[6pt]\alpha _{9}&\leq {\frac {35}{54}}\ ,\quad \alpha _{10}\leq {\frac {41}{60}}\ ,\quad \alpha _{11}\leq {\frac {7}{10}}\\[6pt]\alpha _{k}&\leq {\frac {k-2}{k+2}}\quad (12\leq k\leq 25)\\[6pt]\alpha _{k}&\leq {\frac {k-1}{k+4}}\quad (26\leq k\leq 50)\\[6pt]\alpha _{k}&\leq {\frac {31k-98}{32k}}\quad (51\leq k\leq 57)\\[6pt]\alpha _{k}&\leq {\frac {7k-34}{7k}}\quad (k\geq 58)\end{aligned}}} E. C. Titchmarsh conjectures that α k = k − 1 2 k . {\displaystyle \alpha _{k}={\frac {k-1}{2k}}\ .} == Mellin transform == Both portions may be expressed as Mellin transforms: D ( x ) = 1 2 π i ∫ c − i ∞ c + i ∞ ζ 2 ( w ) x w w d w {\displaystyle D(x)={\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }\zeta ^{2}(w){\frac {x^{w}}{w}}\,dw} for c > 1 {\displaystyle c>1} . Here, ζ ( s ) {\displaystyle \zeta (s)} is the Riemann zeta function. Similarly, one has Δ ( x ) = 1 2 π i ∫ c ′ − i ∞ c ′ + i ∞ ζ 2 ( w ) x w w d w {\displaystyle \Delta (x)={\frac {1}{2\pi i}}\int _{c^{\prime }-i\infty }^{c^{\prime }+i\infty }\zeta ^{2}(w){\frac {x^{w}}{w}}\,dw} with 0 < c ′ < 1 {\displaystyle 0<c^{\prime }<1} . The leading term of D ( x ) {\displaystyle D(x)} is obtained by shifting the contour past the double pole at w = 1 {\displaystyle w=1} : the leading term is just the residue, by Cauchy's integral formula. In general, one has D k ( x ) = 1 2 π i ∫ c − i ∞ c + i ∞ ζ k ( w ) x w w d w {\displaystyle D_{k}(x)={\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }\zeta ^{k}(w){\frac {x^{w}}{w}}\,dw} and likewise for Δ k ( x ) {\displaystyle \Delta _{k}(x)} , for k ≥ 2 {\displaystyle k\geq 2} . == Notes == == References == H.M. Edwards, Riemann's Zeta Function, (1974) Dover Publications, ISBN 0-486-41740-9 E. C. Titchmarsh, The theory of the Riemann Zeta-Function, (1951) Oxford at the Clarendon Press, Oxford. (See chapter 12 for a discussion of the generalized divisor problem) Apostol, Tom M. (1976), Introduction to analytic number theory, Undergraduate Texts in Mathematics, New York-Heidelberg: Springer-Verlag, ISBN 978-0-387-90163-3, MR 0434929, Zbl 0335.10001 (Provides an introductory statement of the Dirichlet divisor problem.) H. E. Rose. A Course in Number Theory., Oxford, 1988. M.N. Huxley (2003) 'Exponential Sums and Lattice Points III', Proc. London Math. Soc. (3)87: 591–609
Wikipedia/Divisor_summatory_function
Selenographia, sive Lunae descriptio (Selenography, or A Description of The Moon) was printed in 1647 and is a milestone work by Johannes Hevelius. It includes the first detailed map of the Moon, created from Hevelius's personal observations. In his treatise, Hevelius reflected on the difference between his own work and that of Galileo Galilei, remarking that the quality of Galileo's representations of the Moon in Sidereus nuncius (1610) left something to be desired. Selenographia was dedicated to King Ladislaus IV of Poland and, along with Riccioli and Grimaldi's Almagestum Novum, became the standard work on the Moon for over a century. Many copies have survived, including those in the Bibliothèque nationale de France, in the library of the Polish Academy of Sciences, in the Stillman Drake Collection at the Thomas Fisher Rare Books Library at the University of Toronto, and in the Gunnerus Library at the Norwegian University of Science and Technology in Trondheim. == Notes == == External links == Selenographia, sive Lunae descriptio
Wikipedia/Selenographia
In arithmetic, Euclidean division – or division with remainder – is the process of dividing one integer (the dividend) by another (the divisor), in a way that produces an integer quotient and a natural number remainder strictly smaller than the absolute value of the divisor. A fundamental property is that the quotient and the remainder exist and are unique, under some conditions. Because of this uniqueness, Euclidean division is often considered without referring to any method of computation, and without explicitly computing the quotient and the remainder. The methods of computation are called integer division algorithms, the best known of which is long division. Euclidean division, and algorithms to compute it, are fundamental for many questions concerning integers, such as the Euclidean algorithm for finding the greatest common divisor of two integers, and modular arithmetic, for which only remainders are considered. The operation consisting of computing only the remainder is called the modulo operation, and is used often in both mathematics and computer science. == Division theorem == Euclidean division is based on the following result, which is sometimes called Euclid's division lemma. Given two integers a and b, with b ≠ 0, there exist unique integers q and r such that a = bq + r and 0 ≤ r < |b|, where |b| denotes the absolute value of b. In the above theorem, each of the four integers has a name of its own: a is called the dividend, b is called the divisor, q is called the quotient and r is called the remainder. The computation of the quotient and the remainder from the dividend and the divisor is called division, or in case of ambiguity, Euclidean division. The theorem is frequently referred to as the division algorithm (although it is a theorem and not an algorithm), because its proof as given below lends itself to a simple division algorithm for computing q and r (see the section Proof for more). Division is not defined in the case where b = 0; see division by zero. For the remainder and the modulo operation, there are conventions other than 0 ≤ r < |b|, see § Other intervals for the remainder. === Generalization === Although originally restricted to integers, Euclidean division and the division theorem can be generalized to univariate polynomials over a field and to Euclidean domains. In the case of univariate polynomials, the main difference is that the inequalities 0 ≤ r < | b | {\displaystyle 0\leq r<|b|} are replaced with r = 0 {\displaystyle r=0} or deg ⁡ r < deg ⁡ b , {\displaystyle \deg r<\deg b,} where deg {\displaystyle \deg } denotes the polynomial degree. In the generalization to Euclidean domains, the inequality becomes r = 0 {\displaystyle r=0} or f ( r ) < f ( b ) , {\displaystyle f(r)<f(b),} where f {\displaystyle f} denotes a specific function from the domain to the natural numbers called a "Euclidean function". The uniqueness of the quotient and the remainder remains true for polynomials, but it is false in general. == History == Although "Euclidean division" is named after Euclid, it seems that he did not know the existence and uniqueness theorem, and that the only computation method that he knew was the division by repeated subtraction. Before the invention of the Hindu–Arabic numeral system, which was introduced in Europe during the 13th century by Fibonacci, division was extremely difficult, and only the best mathematicians were able to do it. 
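As a concrete illustration of the division theorem, here is a minimal sketch in Python (our code, not from the article). Python's divmod floors toward negative infinity, which already gives 0 ≤ r < b for b > 0; for a negative divisor the remainder takes the sign of b and must be shifted back into [0, |b|). The assertions reproduce the sign cases worked out in the Examples section below.

```python
def euclidean_division(a: int, b: int) -> tuple[int, int]:
    """Return the unique (q, r) with a = b*q + r and 0 <= r < |b|."""
    if b == 0:
        raise ZeroDivisionError("division by zero is undefined")
    q, r = divmod(a, b)       # in Python, r has the sign of b
    if r < 0:                 # only possible when b < 0: shift r into [0, |b|)
        q, r = q + 1, r - b
    assert a == b * q + r and 0 <= r < abs(b)
    return q, r

# The four sign cases from the Examples section below:
assert euclidean_division(7, 3) == (2, 1)
assert euclidean_division(7, -3) == (-2, 1)
assert euclidean_division(-7, 3) == (-3, 2)
assert euclidean_division(-7, -3) == (3, 2)
```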
Presently, most division algorithms, including long division, are based on this numeral system or its variants, such as binary numerals. A notable exception is Newton–Raphson division, which is independent from any numeral system. The term "Euclidean division" was introduced during the 20th century as a shorthand for "division of Euclidean rings". It was rapidly adopted by mathematicians for distinguishing this division from the other kinds of division of numbers. == Intuitive example == Suppose that a pie has 9 slices and they are to be divided evenly among 4 people. Using Euclidean division, 9 divided by 4 is 2 with remainder 1. In other words, each person receives 2 slices of pie, and there is 1 slice left over. This can be confirmed using multiplication, the inverse of division: if each of the 4 people received 2 slices, then 4 × 2 = 8 slices were given out in total. Adding the 1 slice remaining, the result is 9 slices. In summary: 9 = 4 × 2 + 1. In general, if the number of slices is denoted a {\displaystyle a} and the number of people is denoted b {\displaystyle b} , then one can divide the pie evenly among the people such that each person receives q {\displaystyle q} slices (the quotient), with some number of slices r < b {\displaystyle r<b} being the leftover (the remainder). In that case, the equation a = b q + r {\displaystyle a=bq+r} holds. If 9 slices were divided among 3 people instead of 4, then each would receive 3 and no slice would be left over, which means that the remainder would be zero, leading to the conclusion that 3 evenly divides 9, or that 3 divides 9. Euclidean division can also be extended to a negative dividend (or negative divisor) using the same formula; for example −9 = 4 × (−3) + 3, which means that −9 divided by 4 is −3 with remainder 3. == Examples == If a = 7 and b = 3, then q = 2 and r = 1, since 7 = 3 × 2 + 1. If a = 7 and b = −3, then q = −2 and r = 1, since 7 = −3 × (−2) + 1. If a = −7 and b = 3, then q = −3 and r = 2, since −7 = 3 × (−3) + 2. If a = −7 and b = −3, then q = 3 and r = 2, since −7 = −3 × 3 + 2. == Proof == The following proof of the division theorem relies on the fact that a decreasing sequence of non-negative integers stops eventually. It is separated into two parts: one for existence and another for uniqueness of q {\displaystyle q} and r {\displaystyle r} . Other proofs use the well-ordering principle (i.e., the assertion that every non-empty set of non-negative integers has a smallest element) to make the reasoning simpler, but have the disadvantage of not directly providing an algorithm for solving the division (see § Effectiveness for more). === Existence === For proving the existence of Euclidean division, one can suppose b > 0 , {\displaystyle b>0,} since, if b < 0 , {\displaystyle b<0,} the equality a = b q + r {\displaystyle a=bq+r} can be rewritten a = ( − b ) ( − q ) + r . {\displaystyle a=(-b)(-q)+r.} So, if the latter equality is a Euclidean division with − b > 0 , {\displaystyle -b>0,} the former is also a Euclidean division. Given b > 0 {\displaystyle b>0} and a , {\displaystyle a,} there are integers q 1 {\displaystyle q_{1}} and r 1 ≥ 0 {\displaystyle r_{1}\geq 0} such that a = b q 1 + r 1 ; {\displaystyle a=bq_{1}+r_{1};} for example, q 1 = 0 {\displaystyle q_{1}=0} and r 1 = a {\displaystyle r_{1}=a} if a ≥ 0 , {\displaystyle a\geq 0,} and otherwise q 1 = a {\displaystyle q_{1}=a} and r 1 = a − a b . 
{\displaystyle r_{1}=a-ab.} Let q {\displaystyle q} and r {\displaystyle r} be such a pair of numbers for which r {\displaystyle r} is nonnegative and minimal. If r < b , {\displaystyle r<b,} we have a Euclidean division. Thus, we have to prove that, if r ≥ b , {\displaystyle r\geq b,} then r {\displaystyle r} is not minimal. Indeed, if r ≥ b , {\displaystyle r\geq b,} one has a = b ( q + 1 ) + ( r − b ) , {\displaystyle a=b(q+1)+(r-b),} with 0 ≤ r − b < r , {\displaystyle 0\leq r-b<r,} and r {\displaystyle r} is not minimal. This proves the existence in all cases. This provides also an algorithm for computing the quotient and the remainder, by starting from q = 0 {\displaystyle q=0} (if a ≥ 0 {\displaystyle a\geq 0} ) and adding 1 {\displaystyle 1} to it until a − b q < b . {\displaystyle a-bq<b.} However, this algorithm is not efficient, since its number of steps is of the order of a / b {\displaystyle a/b} . === Uniqueness === The pair of integers r and q such that a = bq + r is unique, in the sense that there can be no other pair of integers that satisfies the same condition in the Euclidean division theorem. In other words, if we have another division of a by b, say a = bq' + r' with 0 ≤ r' < |b|, then we must have that q' = q and r' = r. To prove this statement, we first start with the assumptions that 0 ≤ r < |b|, 0 ≤ r' < |b|, a = bq + r, and a = bq' + r'. Subtracting the two equations yields b(q – q′) = r′ – r. So b is a divisor of r′ – r. As |r′ – r| < |b| by the above inequalities, one gets r′ – r = 0, and b(q – q′) = 0. Since b ≠ 0, we get that r = r′ and q = q′, which proves the uniqueness part of the Euclidean division theorem. == Effectiveness == In general, an existence proof does not provide an algorithm for computing the existing quotient and remainder, but the above proof does immediately provide an algorithm (see Division algorithm#Division by repeated subtraction), even though it is not a very efficient one as it requires as many steps as the size of the quotient. This is related to the fact that it uses only additions, subtractions and comparisons of integers, without involving multiplication, nor any particular representation of the integers such as decimal notation. In terms of decimal notation, long division provides a much more efficient algorithm for solving Euclidean divisions. Its generalization to binary and hexadecimal notation provides further flexibility and possibility for computer implementation. However, for large inputs, algorithms that reduce division to multiplication, such as Newton–Raphson, are usually preferred, because they only need a time which is proportional to the time of the multiplication needed to verify the result—independently of the multiplication algorithm which is used (for more, see Division algorithm#Fast division methods). == Variants == The Euclidean division admits a number of variants, some of which are listed below. === Other intervals for the remainder === In Euclidean division with d as divisor, the remainder is supposed to belong to the interval [0, d) of length |d|. Any other interval of the same length may be used. More precisely, given integers m {\displaystyle m} , a {\displaystyle a} , d {\displaystyle d} with m > 0 {\displaystyle m>0} , there exist unique integers q {\displaystyle q} and r {\displaystyle r} with d ≤ r < m + d {\displaystyle d\leq r<m+d} such that a = m q + r {\displaystyle a=mq+r} . 
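A short sketch of this statement (hypothetical helper name, not from the article): shifting the dividend by d reduces division with remainder in [d, m + d) to the ordinary case.

```python
# Division with remainder in the window [d, m + d), with m > 0: a minimal sketch.
def division_with_offset(a: int, m: int, d: int) -> tuple[int, int]:
    q, r = divmod(a - d, m)   # 0 <= r < m, so r + d lies in [d, m + d)
    return q, r + d

# Example: remainders drawn from {1, 2, 3} instead of {0, 1, 2}.
q, r = division_with_offset(12, 3, 1)
assert (q, r) == (3, 3) and 12 == 3 * q + r
```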
In particular, if d = − ⌊ m 2 ⌋ {\displaystyle d=-\left\lfloor {\frac {m}{2}}\right\rfloor } then − ⌊ m 2 ⌋ ≤ r < m − ⌊ m 2 ⌋ {\displaystyle -\left\lfloor {\frac {m}{2}}\right\rfloor \leq r<m-\left\lfloor {\frac {m}{2}}\right\rfloor } . This division is called the centered division, and its remainder r {\displaystyle r} is called the centered remainder or the least absolute remainder. This is used for approximating real numbers: Euclidean division defines truncation, and centered division defines rounding. === Montgomery division === Given integers a {\displaystyle a} , m {\displaystyle m} and R , {\displaystyle R,} with m > 0 {\displaystyle m>0} and gcd ( R , m ) = 1 , {\displaystyle \gcd(R,m)=1,} let R − 1 {\displaystyle R^{-1}} be the modular multiplicative inverse of R {\displaystyle R} (i.e., 0 < R − 1 < m {\displaystyle 0<R^{-1}<m} with R − 1 R − 1 {\displaystyle R^{-1}R-1} being a multiple of m {\displaystyle m} ), then there exist unique integers q {\displaystyle q} and r {\displaystyle r} with 0 ≤ r < m {\displaystyle 0\leq r<m} such that a = m q + R − 1 ⋅ r {\displaystyle a=mq+R^{-1}\cdot r} . This result generalizes Hensel's odd division (1900). The value r {\displaystyle r} is the N-residue defined in Montgomery reduction. == In Euclidean domains == Euclidean domains (also known as Euclidean rings) are defined as integral domains which support the following generalization of Euclidean division: Given an element a and a non-zero element b in a Euclidean domain R equipped with a Euclidean function d (also known as a Euclidean valuation or degree function), there exist q and r in R such that a = bq + r and either r = 0 or d(r) < d(b). Uniqueness of q and r is not required. It occurs only in exceptional cases, typically for univariate polynomials, and for integers, if the further condition r ≥ 0 is added. Examples of Euclidean domains include fields, polynomial rings in one variable over a field, and the Gaussian integers. The Euclidean division of polynomials has been the object of specific developments. == See also == Euclid's lemma Euclidean algorithm == Notes == == References == Fraleigh, John B. (1993), A First Course in Abstract Algebra (5th ed.), Addison-Wesley, ISBN 978-0-201-53467-2 Rotman, Joseph J. (2006), A First Course in Abstract Algebra with Applications (3rd ed.), Prentice-Hall, ISBN 978-0-13-186267-8
Wikipedia/Euclid's_division_lemma
In the mathematical field of group theory, Lagrange's theorem states that if H is a subgroup of any finite group G, then | H | {\displaystyle |H|} is a divisor of | G | {\displaystyle |G|} , i.e. the order (number of elements) of every subgroup H divides the order of the group G. The theorem is named after Joseph-Louis Lagrange. The following variant states that for a subgroup H {\displaystyle H} of a finite group G {\displaystyle G} , not only is | G | / | H | {\displaystyle |G|/|H|} an integer, but its value is the index [ G : H ] {\displaystyle [G:H]} , defined as the number of left cosets of H {\displaystyle H} in G {\displaystyle G} . This variant holds even if G {\displaystyle G} is infinite, provided that | G | {\displaystyle |G|} , | H | {\displaystyle |H|} , and [ G : H ] {\displaystyle [G:H]} are interpreted as cardinal numbers. == Proof == The left cosets of H in G are the equivalence classes of a certain equivalence relation on G: specifically, call x and y in G equivalent if there exists h in H such that x = yh. Therefore, the set of left cosets forms a partition of G. Each left coset aH has the same cardinality as H because x ↦ a x {\displaystyle x\mapsto ax} defines a bijection H → a H {\displaystyle H\to aH} (the inverse is y ↦ a − 1 y {\displaystyle y\mapsto a^{-1}y} ). The number of left cosets is the index [G : H]. By the previous three sentences, | G | = [ G : H ] ⋅ | H | . {\displaystyle \left|G\right|=\left[G:H\right]\cdot \left|H\right|.} == Extension == Lagrange's theorem can be extended to the equation of indexes between three subgroups of G: if K is a subgroup of H and H is a subgroup of G, then [G : K] = [G : H] [H : K]. If we take K = {e} (e is the identity element of G), then [G : {e}] = |G| and [H : {e}] = |H|. Therefore, we can recover the original equation |G| = [G : H] |H|. == Applications == A consequence of the theorem is that the order of any element a of a finite group (i.e. the smallest positive integer k with ak = e, where e is the identity element of the group) divides the order of that group, since the order of a is equal to the order of the cyclic subgroup generated by a. If the group has n elements, it follows that a n = e . {\displaystyle \displaystyle a^{n}=e{\mbox{.}}} This can be used to prove Fermat's little theorem and its generalization, Euler's theorem. These special cases were known long before the general theorem was proved. The theorem also shows that any group of prime order is cyclic and simple, since the subgroup generated by any non-identity element must be the whole group itself. Lagrange's theorem can also be used to show that there are infinitely many primes: suppose there were a largest prime p {\displaystyle p} . Any prime divisor q {\displaystyle q} of the Mersenne number 2 p − 1 {\displaystyle 2^{p}-1} satisfies 2 p ≡ 1 ( mod q ) {\displaystyle 2^{p}\equiv 1{\pmod {q}}} (see modular arithmetic), meaning that the order of 2 {\displaystyle 2} in the multiplicative group ( Z / q Z ) ∗ {\displaystyle (\mathbb {Z} /q\mathbb {Z} )^{*}} is p {\displaystyle p} . By Lagrange's theorem, the order of 2 {\displaystyle 2} must divide the order of ( Z / q Z ) ∗ {\displaystyle (\mathbb {Z} /q\mathbb {Z} )^{*}} , which is q − 1 {\displaystyle q-1} . So p {\displaystyle p} divides q − 1 {\displaystyle q-1} , giving p < q {\displaystyle p<q} , contradicting the assumption that p {\displaystyle p} is the largest prime. == Existence of subgroups of given order == Lagrange's theorem raises the converse question as to whether every divisor of the order of a group is the order of some subgroup. 
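The question can be probed by brute force for small groups. The sketch below (illustrative Python, not from the article) enumerates subsets of the alternating group A4, which is examined in detail next, and reports which divisors of |A4| = 12 occur as orders of subgroups; a finite subset of a group that contains the identity and is closed under composition is automatically a subgroup.

```python
from itertools import combinations, permutations

def compose(p, q):
    """Composition (p o q)(i) = p(q(i)); permutations stored as tuples of images of 0..3."""
    return tuple(p[q[i]] for i in range(4))

def is_even(p):
    inversions = sum(1 for i in range(4) for j in range(i + 1, 4) if p[i] > p[j])
    return inversions % 2 == 0

A4 = [p for p in permutations(range(4)) if is_even(p)]
IDENTITY = (0, 1, 2, 3)

def is_subgroup(subset):
    s = set(subset)
    return IDENTITY in s and all(compose(a, b) in s for a in s for b in s)

subgroup_orders = set()
for size in (1, 2, 3, 4, 6, 12):          # the divisors of |A4| = 12
    if any(is_subgroup(c) for c in combinations(A4, size)):
        subgroup_orders.add(size)

print(sorted(subgroup_orders))            # [1, 2, 3, 4, 12]: order 6 is absent
```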
The converse does not hold in general: given a finite group G and a divisor d of |G|, there does not necessarily exist a subgroup of G with order d. The smallest example is A4 (the alternating group of degree 4), which has 12 elements but no subgroup of order 6. A "Converse of Lagrange's Theorem" (CLT) group is a finite group with the property that for every divisor of the order of the group, there is a subgroup of that order. It is known that a CLT group must be solvable and that every supersolvable group is a CLT group. However, there exist solvable groups that are not CLT (for example, A4) and CLT groups that are not supersolvable (for example, S4, the symmetric group of degree 4). There are partial converses to Lagrange's theorem. For general groups, Cauchy's theorem guarantees the existence of an element, and hence of a cyclic subgroup, of order any prime dividing the group order. Sylow's theorem extends this to the existence of a subgroup of order equal to the maximal power of any prime dividing the group order. For solvable groups, Hall's theorems assert the existence of a subgroup of order equal to any unitary divisor of the group order (that is, a divisor coprime to its cofactor). === Counterexample of the converse of Lagrange's theorem === The converse of Lagrange's theorem states that if d is a divisor of the order of a group G, then there exists a subgroup H where |H| = d. We will examine the alternating group A4, the set of even permutations, as a subgroup of the symmetric group S4. A4 = {e, (1 2)(3 4), (1 3)(2 4), (1 4)(2 3), (1 2 3), (1 3 2), (1 2 4), (1 4 2), (1 3 4), (1 4 3), (2 3 4), (2 4 3)}. |A4| = 12 so the divisors are 1, 2, 3, 4, 6, 12. Assume to the contrary that there exists a subgroup H in A4 with |H| = 6. Let V be the non-cyclic subgroup of A4 called the Klein four-group. V = {e, (1 2)(3 4), (1 3)(2 4), (1 4)(2 3)}. Let K = H ⋂ V. Since both H and V are subgroups of A4, K is also a subgroup of A4. From Lagrange's theorem, the order of K must divide both 6 and 4, the orders of H and V respectively. The only two positive integers that divide both 6 and 4 are 1 and 2. So |K| = 1 or 2. Assume |K| = 1; then K = {e}. If H shares no elements with V apart from e, then the 5 elements in H besides the identity element e must be of the form (a b c) where a, b, c are distinct elements in {1, 2, 3, 4}. Since any element of the form (a b c) squared is (a c b), and (a b c)(a c b) = e, any element of H of the form (a b c) must be paired with its inverse, which is distinct from it. The remaining 5 elements of H would therefore have to come from distinct inverse pairs of elements of A4 that are not in V. This is impossible, since elements coming in pairs must be even in number and cannot total 5. Thus, the assumption that |K| = 1 is wrong, so |K| = 2. Then K = {e, v} where v ∈ V; v must be of the form (a b)(c d) where a, b, c, d are distinct elements of {1, 2, 3, 4}. The other four elements in H are cycles of length 3. Note that the cosets generated by a subgroup of a group form a partition of the group. The cosets generated by a specific subgroup are either identical to each other or disjoint. The index of a subgroup in a group [A4 : H] = |A4|/|H| is the number of cosets generated by that subgroup. Since |A4| = 12 and |H| = 6, H will generate two left cosets, one that is equal to H and another, gH, that is of size 6 and includes all the elements in A4 not in H. Since there are only 2 distinct cosets generated by H, H must be normal. Because of that, H = gHg−1 (∀g ∈ A4). In particular, this is true for g = (a b c) ∈ A4. 
Since H = gHg−1, gvg−1 ∈ H. Without loss of generality, assume that a = 1, b = 2, c = 3, d = 4. Then g = (1 2 3), v = (1 2)(3 4), g−1 = (1 3 2), gv = (1 3 4), gvg−1 = (1 4)(2 3). Transforming back, we get gvg−1 = (a d)(b c). Because V contains all products of two disjoint transpositions in A4, gvg−1 ∈ V. Hence, gvg−1 ∈ H ⋂ V = K. Since gvg−1 ≠ v and gvg−1 ≠ e, we have demonstrated that K contains a third element. But earlier we assumed that |K| = 2, so we have a contradiction. Therefore, our original assumption that there is a subgroup of order 6 is false; consequently, there is no subgroup of order 6 in A4, and the converse of Lagrange's theorem is not necessarily true. Q.E.D. == History == Lagrange himself did not prove the theorem in its general form. He stated, in his article Réflexions sur la résolution algébrique des équations, that if a polynomial in n variables has its variables permuted in all n! ways, the number of different polynomials that are obtained is always a factor of n!. (For example, if the variables x, y, and z are permuted in all 6 possible ways in the polynomial x + y − z then we get a total of 3 different polynomials: x + y − z, x + z − y, and y + z − x. Note that 3 is a factor of 6.) The number of such polynomials is the index in the symmetric group Sn of the subgroup H of permutations that preserve the polynomial. (For the example of x + y − z, the subgroup H in S3 contains the identity and the transposition (x y).) So the size of H divides n!. With the later development of abstract groups, this result of Lagrange on polynomials was recognized to extend to the general theorem about finite groups which now bears his name. In his Disquisitiones Arithmeticae in 1801, Carl Friedrich Gauss proved Lagrange's theorem for the special case of ( Z / p Z ) ∗ {\displaystyle (\mathbb {Z} /p\mathbb {Z} )^{*}} , the multiplicative group of nonzero integers modulo p, where p is a prime. In 1844, Augustin-Louis Cauchy proved Lagrange's theorem for the symmetric group Sn. Camille Jordan finally proved Lagrange's theorem for the case of any permutation group in 1861. == Notes == == References == Bray, Henry G. (1968), "A note on CLT groups", Pacific J. Math., 27 (2): 229–231, doi:10.2140/pjm.1968.27.229 Gallian, Joseph (2006), Contemporary Abstract Algebra (6th ed.), Boston: Houghton Mifflin, ISBN 978-0-618-51471-7 Dummit, David S.; Foote, Richard M. (2004), Abstract algebra (3rd ed.), New York: John Wiley & Sons, ISBN 978-0-471-43334-7, MR 2286236 Roth, Richard R. (2001), "A History of Lagrange's Theorem on Groups", Mathematics Magazine, 74 (2): 99–108, doi:10.2307/2690624, JSTOR 2690624
Wikipedia/Lagrange's_theorem_(group_theory)
In number theory, the partition function p(n) represents the number of possible partitions of a non-negative integer n. For instance, p(4) = 5 because the integer 4 has the five partitions 1 + 1 + 1 + 1, 1 + 1 + 2, 1 + 3, 2 + 2, and 4. No closed-form expression for the partition function is known, but it has both asymptotic expansions that accurately approximate it and recurrence relations by which it can be calculated exactly. It grows as an exponential function of the square root of its argument. The multiplicative inverse of its generating function is the Euler function; by Euler's pentagonal number theorem this function is an alternating sum of pentagonal number powers of its argument. Srinivasa Ramanujan first discovered that the partition function has nontrivial patterns in modular arithmetic, now known as Ramanujan's congruences. For instance, whenever the decimal representation of n ends in the digit 4 or 9, the number of partitions of n will be divisible by 5. == Definition and examples == For a positive integer n, p(n) is the number of distinct ways of representing n as a sum of positive integers. For the purposes of this definition, the order of the terms in the sum is irrelevant: two sums with the same terms in a different order are not considered to be distinct. By convention p(0) = 1, as there is one way (the empty sum) of representing zero as a sum of positive integers. Furthermore, p(n) = 0 when n is negative. The first few values of the partition function, starting with p(0) = 1, are 1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, ... (sequence A000041 in the OEIS). Some exact values of p(n) for larger values of n include: p ( 100 ) = 190 , 569 , 292 p ( 1000 ) = 24 , 061 , 467 , 864 , 032 , 622 , 473 , 692 , 149 , 727 , 991 ≈ 2.40615 × 10 31 p ( 10000 ) = 36 , 167 , 251 , 325 , … , 906 , 916 , 435 , 144 ≈ 3.61673 × 10 106 {\displaystyle {\begin{aligned}p(100)&=190,\!569,\!292\\p(1000)&=24,\!061,\!467,\!864,\!032,\!622,\!473,\!692,\!149,\!727,\!991\approx 2.40615\times 10^{31}\\p(10000)&=36,\!167,\!251,\!325,\!\dots ,\!906,\!916,\!435,\!144\approx 3.61673\times 10^{106}\end{aligned}}} == Generating function == The generating function for p(n) is given by ∑ n = 0 ∞ p ( n ) x n = ∏ k = 1 ∞ ( 1 1 − x k ) = ( 1 + x + x 2 + ⋯ ) ( 1 + x 2 + x 4 + ⋯ ) ( 1 + x 3 + x 6 + ⋯ ) ⋯ = 1 1 − x − x 2 + x 5 + x 7 − x 12 − x 15 + x 22 + x 26 − ⋯ = 1 / ∑ k = − ∞ ∞ ( − 1 ) k x k ( 3 k − 1 ) / 2 . {\displaystyle {\begin{aligned}\sum _{n=0}^{\infty }p(n)x^{n}&=\prod _{k=1}^{\infty }\left({\frac {1}{1-x^{k}}}\right)\\&=\left(1+x+x^{2}+\cdots \right)\left(1+x^{2}+x^{4}+\cdots \right)\left(1+x^{3}+x^{6}+\cdots \right)\cdots \\&={\frac {1}{1-x-x^{2}+x^{5}+x^{7}-x^{12}-x^{15}+x^{22}+x^{26}-\cdots }}\\&=1{\Big /}\sum _{k=-\infty }^{\infty }(-1)^{k}x^{k(3k-1)/2}.\end{aligned}}} The equality between the products on the first and second lines of this formula is obtained by expanding each factor 1 / ( 1 − x k ) {\displaystyle 1/(1-x^{k})} into the geometric series ( 1 + x k + x 2 k + x 3 k + ⋯ ) . {\displaystyle (1+x^{k}+x^{2k}+x^{3k}+\cdots ).} To see that the expanded product equals the sum on the first line, apply the distributive law to the product. This expands the product into a sum of monomials of the form x a 1 x 2 a 2 x 3 a 3 ⋯ {\displaystyle x^{a_{1}}x^{2a_{2}}x^{3a_{3}}\cdots } for some sequence of coefficients a i {\displaystyle a_{i}} , only finitely many of which can be non-zero. 
The exponent of the term is n = ∑ i a i {\textstyle n=\sum ia_{i}} , and this sum can be interpreted as a representation of n {\displaystyle n} as a partition into a i {\displaystyle a_{i}} copies of each number i {\displaystyle i} . Therefore, the number of terms of the product that have exponent n {\displaystyle n} is exactly p ( n ) {\displaystyle p(n)} , the same as the coefficient of x n {\displaystyle x^{n}} in the sum on the left. Therefore, the sum equals the product. The function that appears in the denominator in the third and fourth lines of the formula is the Euler function. The equality between the product on the first line and the formulas in the third and fourth lines is Euler's pentagonal number theorem. The exponents of x {\displaystyle x} in these lines are the pentagonal numbers P k = k ( 3 k − 1 ) / 2 {\displaystyle P_{k}=k(3k-1)/2} for k ∈ { 0 , 1 , − 1 , 2 , − 2 , … } {\displaystyle k\in \{0,1,-1,2,-2,\dots \}} (generalized somewhat from the usual pentagonal numbers, which come from the same formula for the positive values of k {\displaystyle k} ). The pattern of positive and negative signs in the third line comes from the term ( − 1 ) k {\displaystyle (-1)^{k}} in the fourth line: even choices of k {\displaystyle k} produce positive terms, and odd choices produce negative terms. More generally, the generating function for the partitions of n {\displaystyle n} into numbers selected from a set A {\displaystyle A} of positive integers can be found by taking only those terms in the first product for which k ∈ A {\displaystyle k\in A} . This result is due to Leonhard Euler. The formulation of Euler's generating function is a special case of a q {\displaystyle q} -Pochhammer symbol and is similar to the product formulation of many modular forms, and specifically the Dedekind eta function. == Recurrence relations == The same sequence of pentagonal numbers appears in a recurrence relation for the partition function: p ( n ) = ∑ k ∈ Z ∖ { 0 } ( − 1 ) k + 1 p ( n − k ( 3 k − 1 ) / 2 ) = p ( n − 1 ) + p ( n − 2 ) − p ( n − 5 ) − p ( n − 7 ) + p ( n − 12 ) + p ( n − 15 ) − p ( n − 22 ) − ⋯ {\displaystyle {\begin{aligned}p(n)&=\sum _{k\in \mathbb {Z} \setminus \{0\}}(-1)^{k+1}p(n-k(3k-1)/2)\\&=p(n-1)+p(n-2)-p(n-5)-p(n-7)+p(n-12)+p(n-15)-p(n-22)-\cdots \end{aligned}}} As base cases, p ( 0 ) {\displaystyle p(0)} is taken to equal 1 {\displaystyle 1} , and p ( k ) {\displaystyle p(k)} is taken to be zero for negative k {\displaystyle k} . Although the sum on the right side appears infinite, it has only finitely many nonzero terms, coming from the nonzero values of k {\displaystyle k} in the range − 24 n + 1 − 1 6 ≤ k ≤ 24 n + 1 + 1 6 . {\displaystyle -{\frac {{\sqrt {24n+1}}-1}{6}}\leq k\leq {\frac {{\sqrt {24n+1}}+1}{6}}.} The recurrence relation can also be written in the equivalent form p ( n ) = ∑ k = 1 ∞ ( − 1 ) k + 1 ( p ( n − k ( 3 k − 1 ) / 2 ) + p ( n − k ( 3 k + 1 ) / 2 ) ) . {\displaystyle p(n)=\sum _{k=1}^{\infty }(-1)^{k+1}{\big (}p(n-k(3k-1)/2)+p(n-k(3k+1)/2){\big )}.} Another recurrence relation for p ( n ) {\displaystyle p(n)} can be given in terms of the sum of divisors function σ: p ( n ) = 1 n ∑ k = 0 n − 1 σ ( n − k ) p ( k ) . 
{\displaystyle p(n)={\frac {1}{n}}\sum _{k=0}^{n-1}\sigma (n-k)p(k).} If q ( n ) {\displaystyle q(n)} denotes the number of partitions of n {\displaystyle n} with no repeated parts then it follows by splitting each partition into its even parts and odd parts, and dividing the even parts by two, that p ( n ) = ∑ k = 0 ⌊ n / 2 ⌋ q ( n − 2 k ) p ( k ) . {\displaystyle p(n)=\sum _{k=0}^{\left\lfloor n/2\right\rfloor }q(n-2k)p(k).} == Congruences == Srinivasa Ramanujan is credited with discovering that the partition function has nontrivial patterns in modular arithmetic. For instance the number of partitions is divisible by five whenever the decimal representation of n {\displaystyle n} ends in the digit 4 or 9, as expressed by the congruence p ( 5 k + 4 ) ≡ 0 ( mod 5 ) {\displaystyle p(5k+4)\equiv 0{\pmod {5}}} For instance, the number of partitions for the integer 4 is 5. For the integer 9, the number of partitions is 30; for 14 there are 135 partitions. This congruence is implied by the more general identity ∑ k = 0 ∞ p ( 5 k + 4 ) x k = 5 ( x 5 ) ∞ 5 ( x ) ∞ 6 , {\displaystyle \sum _{k=0}^{\infty }p(5k+4)x^{k}=5~{\frac {(x^{5})_{\infty }^{5}}{(x)_{\infty }^{6}}},} also by Ramanujan, where the notation ( x ) ∞ {\displaystyle (x)_{\infty }} denotes the product defined by ( x ) ∞ = ∏ m = 1 ∞ ( 1 − x m ) . {\displaystyle (x)_{\infty }=\prod _{m=1}^{\infty }(1-x^{m}).} A short proof of this result can be obtained from the partition function generating function. Ramanujan also discovered congruences modulo 7 and 11: p ( 7 k + 5 ) ≡ 0 ( mod 7 ) , p ( 11 k + 6 ) ≡ 0 ( mod 11 ) . {\displaystyle {\begin{aligned}p(7k+5)&\equiv 0{\pmod {7}},\\p(11k+6)&\equiv 0{\pmod {11}}.\end{aligned}}} The first one comes from Ramanujan's identity ∑ k = 0 ∞ p ( 7 k + 5 ) x k = 7 ( x 7 ) ∞ 3 ( x ) ∞ 4 + 49 x ( x 7 ) ∞ 7 ( x ) ∞ 8 . {\displaystyle \sum _{k=0}^{\infty }p(7k+5)x^{k}=7~{\frac {(x^{7})_{\infty }^{3}}{(x)_{\infty }^{4}}}+49x~{\frac {(x^{7})_{\infty }^{7}}{(x)_{\infty }^{8}}}.} Since 5, 7, and 11 are consecutive primes, one might think that there would be an analogous congruence for the next prime 13, p ( 13 k + a ) ≡ 0 ( mod 13 ) {\displaystyle p(13k+a)\equiv 0{\pmod {13}}} for some a. However, there is no congruence of the form p ( b k + a ) ≡ 0 ( mod b ) {\displaystyle p(bk+a)\equiv 0{\pmod {b}}} for any prime b other than 5, 7, or 11. Instead, to obtain a congruence, the argument of p {\displaystyle p} should take the form c b k + a {\displaystyle cbk+a} for some c > 1 {\displaystyle c>1} . In the 1960s, A. O. L. Atkin of the University of Illinois at Chicago discovered additional congruences of this form for small prime moduli. For example: p ( 11 3 ⋅ 13 ⋅ k + 237 ) ≡ 0 ( mod 13 ) . {\displaystyle p(11^{3}\cdot 13\cdot k+237)\equiv 0{\pmod {13}}.} Ken Ono (2000) proved that there are such congruences for every prime modulus greater than 3. Later, Ahlgren & Ono (2001) showed there are partition congruences modulo every integer coprime to 6. == Approximation formulas == Approximation formulas exist that are faster to calculate than the exact formula given above. An asymptotic expression for p(n) is given by p ( n ) ∼ 1 4 n 3 exp ⁡ ( π 2 n 3 ) {\displaystyle p(n)\sim {\frac {1}{4n{\sqrt {3}}}}\exp \left({\pi {\sqrt {\frac {2n}{3}}}}\right)} as n → ∞ {\displaystyle n\to \infty } . This asymptotic formula was first obtained by G. H. Hardy and Ramanujan in 1918 and independently by J. V. Uspensky in 1920. 
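The pentagonal-number recurrence stated earlier turns directly into an exact algorithm, which can be checked against the values and the modulo-5 congruence quoted above and compared with the first term of the asymptotic formula. A minimal sketch (our Python, not from the article):

```python
import math

def partitions(N: int) -> list[int]:
    """Return [p(0), ..., p(N)] via Euler's pentagonal-number recurrence."""
    p = [1] + [0] * N
    for n in range(1, N + 1):
        total, k = 0, 1
        while k * (3 * k - 1) // 2 <= n:          # generalized pentagonal numbers
            sign = 1 if k % 2 else -1             # (-1)^(k+1)
            total += sign * p[n - k * (3 * k - 1) // 2]
            if k * (3 * k + 1) // 2 <= n:
                total += sign * p[n - k * (3 * k + 1) // 2]
            k += 1
        p[n] = total
    return p

p = partitions(104)
assert p[4] == 5 and p[9] == 30 and p[14] == 135 and p[100] == 190569292
assert all(p[5 * k + 4] % 5 == 0 for k in range(21))   # Ramanujan's congruence mod 5

# First term of the Hardy–Ramanujan asymptotic, for comparison at n = 100:
approx = math.exp(math.pi * math.sqrt(2 * 100 / 3)) / (4 * 100 * math.sqrt(3))
print(approx / p[100])   # roughly 1.05: a few percent above the true value
```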
Considering p ( 1000 ) {\displaystyle p(1000)} , the asymptotic formula gives about 2.4402 × 10 31 {\displaystyle 2.4402\times 10^{31}} , reasonably close to the exact answer given above (1.415% larger than the true value). Hardy and Ramanujan obtained an asymptotic expansion with this approximation as the first term: p ( n ) ∼ 1 2 π 2 ∑ k = 1 v A k ( n ) k ⋅ d d n ( 1 n − 1 24 exp ⁡ [ π k 2 3 ( n − 1 24 ) ] ) , {\displaystyle p(n)\sim {\frac {1}{2\pi {\sqrt {2}}}}\sum _{k=1}^{v}A_{k}(n){\sqrt {k}}\cdot {\frac {d}{dn}}\left({{\frac {1}{\sqrt {n-{\frac {1}{24}}}}}\exp \left[{{\frac {\pi }{k}}{\sqrt {{\frac {2}{3}}\left(n-{\frac {1}{24}}\right)}}}\,\,\,\right]}\right),} where A k ( n ) = ∑ 0 ≤ m < k , ( m , k ) = 1 e π i ( s ( m , k ) − 2 n m / k ) . {\displaystyle A_{k}(n)=\sum _{0\leq m<k,\;(m,k)=1}e^{\pi i\left(s(m,k)-2nm/k\right)}.} Here, the notation ( m , k ) = 1 {\displaystyle (m,k)=1} means that the sum is taken only over the values of m {\displaystyle m} that are relatively prime to k {\displaystyle k} . The function s ( m , k ) {\displaystyle s(m,k)} is a Dedekind sum. The error after v {\displaystyle v} terms is of the order of the next term, and v {\displaystyle v} may be taken to be of the order of n {\displaystyle {\sqrt {n}}} . As an example, Hardy and Ramanujan showed that p ( 200 ) {\displaystyle p(200)} is the nearest integer to the sum of the first v = 5 {\displaystyle v=5} terms of the series. In 1937, Hans Rademacher was able to improve on Hardy and Ramanujan's results by providing a convergent series expression for p ( n ) {\displaystyle p(n)} . It is p ( n ) = 1 π 2 ∑ k = 1 ∞ A k ( n ) k ⋅ d d n ( 1 n − 1 24 sinh ⁡ [ π k 2 3 ( n − 1 24 ) ] ) . {\displaystyle p(n)={\frac {1}{\pi {\sqrt {2}}}}\sum _{k=1}^{\infty }A_{k}(n){\sqrt {k}}\cdot {\frac {d}{dn}}\left({{\frac {1}{\sqrt {n-{\frac {1}{24}}}}}\sinh \left[{{\frac {\pi }{k}}{\sqrt {{\frac {2}{3}}\left(n-{\frac {1}{24}}\right)}}}\,\,\,\right]}\right).} The proof of Rademacher's formula involves Ford circles, Farey sequences, modular symmetry and the Dedekind eta function. It may be shown that the k {\displaystyle k} th term of Rademacher's series is of the order exp ⁡ ( π k 2 n 3 ) , {\displaystyle \exp \left({\frac {\pi }{k}}{\sqrt {\frac {2n}{3}}}\right),} so that the first term gives the Hardy–Ramanujan asymptotic approximation. Paul Erdős (1942) published an elementary proof of the asymptotic formula for p ( n ) {\displaystyle p(n)} . Techniques for implementing the Hardy–Ramanujan–Rademacher formula efficiently on a computer are discussed by Johansson (2012), who shows that p ( n ) {\displaystyle p(n)} can be computed in time O ( n 1 / 2 + ε ) {\displaystyle O(n^{1/2+\varepsilon })} for any ε > 0 {\displaystyle \varepsilon >0} . This is near-optimal in that it matches the number of digits of the result. The largest value of the partition function computed exactly is p ( 10 20 ) {\displaystyle p(10^{20})} , which has slightly more than 11 billion digits. == Strict partition function == === Definition and properties === A partition in which no part occurs more than once is called strict, or is said to be a partition into distinct parts. The function q(n) gives the number of these strict partitions of the given sum n. For example, q(3) = 2 because the partitions 3 and 1 + 2 are strict, while the third partition 1 + 1 + 1 of 3 has repeated parts. The number q(n) is also equal to the number of partitions of n in which only odd summands are permitted. 
=== Generating function === The generating function for the numbers q(n) is given by a simple infinite product: ∑ n = 0 ∞ q ( n ) x n = ∏ k = 1 ∞ ( 1 + x k ) = ( x ; x 2 ) ∞ − 1 , {\displaystyle \sum _{n=0}^{\infty }q(n)x^{n}=\prod _{k=1}^{\infty }(1+x^{k})=(x;x^{2})_{\infty }^{-1},} where the notation ( a ; b ) ∞ {\displaystyle (a;b)_{\infty }} represents the Pochhammer symbol ( a ; b ) ∞ = ∏ k = 0 ∞ ( 1 − a b k ) . {\displaystyle (a;b)_{\infty }=\prod _{k=0}^{\infty }(1-ab^{k}).} From this formula, one may easily obtain the first few terms (sequence A000009 in the OEIS): ∑ n = 0 ∞ q ( n ) x n = 1 + 1 x + 1 x 2 + 2 x 3 + 2 x 4 + 3 x 5 + 4 x 6 + 5 x 7 + 6 x 8 + 8 x 9 + 10 x 10 + … . {\displaystyle \sum _{n=0}^{\infty }q(n)x^{n}=1+1x+1x^{2}+2x^{3}+2x^{4}+3x^{5}+4x^{6}+5x^{7}+6x^{8}+8x^{9}+10x^{10}+\ldots .} This series may also be written in terms of theta functions as ∑ n = 0 ∞ q ( n ) x n = ϑ 00 ( x ) 1 / 6 ϑ 01 ( x ) − 1 / 3 { 1 16 x [ ϑ 00 ( x ) 4 − ϑ 01 ( x ) 4 ] } 1 / 24 , {\displaystyle \sum _{n=0}^{\infty }q(n)x^{n}=\vartheta _{00}(x)^{1/6}\vartheta _{01}(x)^{-1/3}{\biggl \{}{\frac {1}{16\,x}}{\bigl [}\vartheta _{00}(x)^{4}-\vartheta _{01}(x)^{4}{\bigr ]}{\biggr \}}^{1/24},} where ϑ 00 ( x ) = 1 + 2 ∑ n = 1 ∞ x n 2 {\displaystyle \vartheta _{00}(x)=1+2\sum _{n=1}^{\infty }x^{n^{2}}} and ϑ 01 ( x ) = 1 + 2 ∑ n = 1 ∞ ( − 1 ) n x n 2 . {\displaystyle \vartheta _{01}(x)=1+2\sum _{n=1}^{\infty }(-1)^{n}x^{n^{2}}.} In comparison, the generating function of the regular partition numbers p(n) has this identity with respect to the theta function: ∑ n = 0 ∞ p ( n ) x n = ( x ; x ) ∞ − 1 = ϑ 00 ( x ) − 1 / 6 ϑ 01 ( x ) − 2 / 3 { 1 16 x [ ϑ 00 ( x ) 4 − ϑ 01 ( x ) 4 ] } − 1 / 24 . {\displaystyle \sum _{n=0}^{\infty }p(n)x^{n}=(x;x)_{\infty }^{-1}=\vartheta _{00}(x)^{-1/6}\vartheta _{01}(x)^{-2/3}{\biggl \{}{\frac {1}{16\,x}}{\bigl [}\vartheta _{00}(x)^{4}-\vartheta _{01}(x)^{4}{\bigr ]}{\biggr \}}^{-1/24}.} === Identities about strict partition numbers === The following identity holds for the Pochhammer products: ( x ; x ) ∞ − 1 = ( x 2 ; x 2 ) ∞ − 1 ( x ; x 2 ) ∞ − 1 {\displaystyle (x;x)_{\infty }^{-1}=(x^{2};x^{2})_{\infty }^{-1}(x;x^{2})_{\infty }^{-1}} From this identity follows the formula: [ ∑ n = 0 ∞ p ( n ) x n ] = [ ∑ n = 0 ∞ p ( n ) x 2 n ] [ ∑ n = 0 ∞ q ( n ) x n ] {\displaystyle {\biggl [}\sum _{n=0}^{\infty }p(n)x^{n}{\biggr ]}={\biggl [}\sum _{n=0}^{\infty }p(n)x^{2n}{\biggr ]}{\biggl [}\sum _{n=0}^{\infty }q(n)x^{n}{\biggr ]}} Therefore the following two formulas can be used to build up the sequence p(n): p ( 2 n ) = ∑ k = 0 n p ( n − k ) q ( 2 k ) {\displaystyle p(2n)=\sum _{k=0}^{n}p(n-k)q(2k)} p ( 2 n + 1 ) = ∑ k = 0 n p ( n − k ) q ( 2 k + 1 ) {\displaystyle p(2n+1)=\sum _{k=0}^{n}p(n-k)q(2k+1)} Two examples are worked out in full below: p ( 8 ) = ∑ k = 0 4 p ( 4 − k ) q ( 2 k ) = {\displaystyle p(8)=\sum _{k=0}^{4}p(4-k)q(2k)=} = p ( 4 ) q ( 0 ) + p ( 3 ) q ( 2 ) + p ( 2 ) q ( 4 ) + p ( 1 ) q ( 6 ) + p ( 0 ) q ( 8 ) = {\displaystyle =p(4)q(0)+p(3)q(2)+p(2)q(4)+p(1)q(6)+p(0)q(8)=} = 5 × 1 + 3 × 1 + 2 × 2 + 1 × 4 + 1 × 6 = 22 {\displaystyle =5\times 1+3\times 1+2\times 2+1\times 4+1\times 6=22} p ( 9 ) = ∑ k = 0 4 p ( 4 − k ) q ( 2 k + 1 ) = {\displaystyle p(9)=\sum _{k=0}^{4}p(4-k)q(2k+1)=} = p ( 4 ) q ( 1 ) + p ( 3 ) q ( 3 ) + p ( 2 ) q ( 5 ) + p ( 1 ) q ( 7 ) + p ( 0 ) q ( 9 ) = {\displaystyle =p(4)q(1)+p(3)q(3)+p(2)q(5)+p(1)q(7)+p(0)q(9)=} = 5 × 1 + 3 × 2 + 2 × 3 + 1 × 5 + 1 × 8 = 30 {\displaystyle =5\times 1+3\times 2+2\times 3+1\times 5+1\times 8=30}
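Both synthesis formulas can be confirmed numerically from the two generating products. A small sketch that multiplies the products out to a finite order (illustrative code, not from any source):

```python
def prod_coeffs(factors, N):
    # multiply a list of polynomials (coefficient lists) modulo x^(N+1)
    c = [1] + [0] * N
    for f in factors:
        out = [0] * (N + 1)
        for i, a in enumerate(c):
            if a:
                for j, b in enumerate(f):
                    if b and i + j <= N:
                        out[i + j] += a * b
        c = out
    return c

N = 60
# p(n): coefficients of prod_k 1/(1 - x^k), each factor expanded as a geometric series
p = prod_coeffs([[1 if i % k == 0 else 0 for i in range(N + 1)]
                 for k in range(1, N + 1)], N)
# q(n): coefficients of prod_k (1 + x^k)
q = prod_coeffs([[1 if i in (0, k) else 0 for i in range(N + 1)]
                 for k in range(1, N + 1)], N)

assert p[8] == 22 and p[9] == 30   # the two worked examples above
for n in range(25):
    assert p[2*n] == sum(p[n-k] * q[2*k] for k in range(n + 1))
    assert p[2*n + 1] == sum(p[n-k] * q[2*k + 1] for k in range(n + 1))
```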
== Restricted partition function == More generally, it is possible to consider partitions restricted to only elements of a subset A of the natural numbers (for example a restriction on the maximum value of the parts), or with a restriction on the number of parts or the maximum difference between parts. Each particular restriction gives rise to an associated partition function with specific properties. Some common examples are given below. === Euler and Glaisher's theorem === Two important examples are the partitions restricted to only odd integer parts or only even integer parts, with the corresponding partition functions often denoted p o ( n ) {\displaystyle p_{o}(n)} and p e ( n ) {\displaystyle p_{e}(n)} . A theorem of Euler shows that the number of strict partitions is equal to the number of partitions with only odd parts: for all n, q ( n ) = p o ( n ) {\displaystyle q(n)=p_{o}(n)} . This is generalized as Glaisher's theorem, which states that the number of partitions with no more than d − 1 repetitions of any part is equal to the number of partitions with no part divisible by d. === Gaussian binomial coefficient === If we denote by p ( N , M , n ) {\displaystyle p(N,M,n)} the number of partitions of n into at most M parts, with each part less than or equal to N, then the generating function of p ( N , M , n ) {\displaystyle p(N,M,n)} is the following Gaussian binomial coefficient: ∑ n = 0 ∞ p ( N , M , n ) q n = ( N + M M ) q = ( 1 − q N + M ) ( 1 − q N + M − 1 ) ⋯ ( 1 − q N + 1 ) ( 1 − q ) ( 1 − q 2 ) ⋯ ( 1 − q M ) {\displaystyle \sum _{n=0}^{\infty }p(N,M,n)q^{n}={N+M \choose M}_{q}={\frac {(1-q^{N+M})(1-q^{N+M-1})\cdots (1-q^{N+1})}{(1-q)(1-q^{2})\cdots (1-q^{M})}}} === Asymptotics === Some general results on the asymptotic properties of restricted partition functions are known. If pA(n) is the partition function of partitions restricted to only elements of a subset A of the natural numbers, then: If A possesses positive natural density α then log ⁡ p A ( n ) ∼ C α n {\displaystyle \log p_{A}(n)\sim C{\sqrt {\alpha n}}} , with C = π 2 3 {\displaystyle C=\pi {\sqrt {\frac {2}{3}}}} and conversely if this asymptotic property holds for pA(n) then A has natural density α. This result was stated, with a sketch of proof, by Erdős in 1942. If A is a finite set, this analysis does not apply (the density of a finite set is zero). If A has k elements whose greatest common divisor is 1, then p A ( n ) = ( ∏ a ∈ A a − 1 ) ⋅ n k − 1 ( k − 1 ) ! + O ( n k − 2 ) . {\displaystyle p_{A}(n)=\left(\prod _{a\in A}a^{-1}\right)\cdot {\frac {n^{k-1}}{(k-1)!}}+O(n^{k-2}).} == References == == External links == First 4096 values of the partition function
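To illustrate the Gaussian binomial coefficient from the restricted-partitions section above: it can be computed with exact polynomial arithmetic by multiplying out the numerator factors and then dividing off the denominator factors one at a time (each intermediate quotient is again a polynomial). A sketch, with illustrative helper names:

```python
def poly_mul(a, b):
    # product of two polynomials given as coefficient lists
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def poly_div(a, k):
    # divide the polynomial a(q) by (1 - q^k), assuming exact divisibility
    b = [0] * (len(a) - k)
    for i in range(len(b)):
        b[i] = a[i] + (b[i - k] if i >= k else 0)
    return b

def gaussian_binomial(N, M):
    # (N+M choose M)_q; its coefficient of q^n is p(N, M, n)
    poly = [1]
    for k in range(N + 1, N + M + 1):   # numerator (1-q^(N+1)) ... (1-q^(N+M))
        poly = poly_mul(poly, [1] + [0] * (k - 1) + [-1])
    for k in range(1, M + 1):           # denominator (1-q) ... (1-q^M)
        poly = poly_div(poly, k)
    return poly

# partitions with at most 2 parts, each part at most 3
assert gaussian_binomial(3, 2) == [1, 1, 2, 2, 2, 1, 1]
```

For instance the coefficient of q³ is 2, matching the two admissible partitions 3 and 2 + 1.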
Wikipedia/Partition_function_(number_theory)
In algebra and number theory, a distribution is a function on a system of finite sets into an abelian group which is analogous to an integral: it is thus the algebraic analogue of a distribution in the sense of generalised function. The original examples of distributions occur, unnamed, as functions φ on Q/Z satisfying ∑ r = 0 N − 1 ϕ ( x + r N ) = ϕ ( N x ) . {\displaystyle \sum _{r=0}^{N-1}\phi \left(x+{\frac {r}{N}}\right)=\phi (Nx)\ .} Such distributions are called ordinary distributions. They also occur in p-adic integration theory in Iwasawa theory. Let ... → Xn+1 → Xn → ... be a projective system of finite sets with surjections, indexed by the natural numbers, and let X be their projective limit. We give each Xn the discrete topology, so that X is compact. Let φ = (φn) be a family of functions on Xn taking values in an abelian group V and compatible with the projective system: w ( m , n ) ∑ y ↦ x ϕ m ( y ) = ϕ n ( x ) {\displaystyle w(m,n)\sum _{y\mapsto x}\phi _{m}(y)=\phi _{n}(x)} for some weight function w. The family φ is then a distribution on the projective system X. A function f on X is "locally constant", or a "step function", if it factors through some Xn. We can define an integral of a step function against φ as ∫ f d ϕ = ∑ x ∈ X n f ( x ) ϕ n ( x ) . {\displaystyle \int f\,d\phi =\sum _{x\in X_{n}}f(x)\phi _{n}(x)\ .} The definition extends to more general projective systems, such as those indexed by the positive integers ordered by divisibility. As an important special case consider the projective system Z/nZ indexed by positive integers ordered by divisibility. We identify this with the system (1/n)Z/Z with limit Q/Z. For x in R we let ⟨x⟩ denote the fractional part of x normalised to 0 ≤ ⟨x⟩ < 1, and let {x} denote the fractional part normalised to 0 < {x} ≤ 1. == Examples == === Hurwitz zeta function === The multiplication theorem for the Hurwitz zeta function ζ ( s , a ) = ∑ n = 0 ∞ ( n + a ) − s {\displaystyle \zeta (s,a)=\sum _{n=0}^{\infty }(n+a)^{-s}} gives a distribution relation ∑ p = 0 q − 1 ζ ( s , a + p / q ) = q s ζ ( s , q a ) . {\displaystyle \sum _{p=0}^{q-1}\zeta (s,a+p/q)=q^{s}\,\zeta (s,qa)\ .} Hence for given s, the map t ↦ ζ ( s , { t } ) {\displaystyle t\mapsto \zeta (s,\{t\})} is a distribution on Q/Z. === Bernoulli distribution === Recall that the Bernoulli polynomials Bn are defined by B n ( x ) = ∑ k = 0 n ( n n − k ) b k x n − k , {\displaystyle B_{n}(x)=\sum _{k=0}^{n}{n \choose n-k}b_{k}x^{n-k}\ ,} for n ≥ 0, where bk are the Bernoulli numbers, with generating function t e x t e t − 1 = ∑ n = 0 ∞ B n ( x ) t n n ! . {\displaystyle {\frac {te^{xt}}{e^{t}-1}}=\sum _{n=0}^{\infty }B_{n}(x){\frac {t^{n}}{n!}}\ .} They satisfy the distribution relation B k ( x ) = n k − 1 ∑ a = 0 n − 1 B k ( x + a n ) . {\displaystyle B_{k}(x)=n^{k-1}\sum _{a=0}^{n-1}B_{k}\left({\frac {x+a}{n}}\right)\ .} Thus the map ϕ n : 1 n Z / Z → Q {\displaystyle \phi _{n}:{\frac {1}{n}}\mathbb {Z} /\mathbb {Z} \rightarrow \mathbb {Q} } defined by ϕ n : x ↦ n k − 1 B k ( ⟨ x ⟩ ) {\displaystyle \phi _{n}:x\mapsto n^{k-1}B_{k}(\langle x\rangle )} is a distribution. === Cyclotomic units === The cyclotomic units satisfy distribution relations. Let a be an element of Q/Z prime to p and let ga denote exp(2πia)−1. Then for a ≠ 0 we have ∏ p b = a g b = g a . {\displaystyle \prod _{pb=a}g_{b}=g_{a}\ .} == Universal distribution == One considers the distributions on Z with values in some abelian group V and seeks the "universal" or most general distribution possible.
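The Bernoulli distribution relation above can be verified symbolically. A small sketch using SymPy's Bernoulli polynomials (the package choice is purely illustrative):

```python
from sympy import symbols, bernoulli, expand, Rational

x = symbols('x')
# distribution relation: B_k(x) = n^(k-1) * sum_{a=0}^{n-1} B_k((x + a)/n)
for k in range(6):
    for n in range(1, 5):
        lhs = bernoulli(k, x)
        rhs = Rational(n) ** (k - 1) * sum(bernoulli(k, Rational(1, n) * (x + a))
                                           for a in range(n))
        assert expand(lhs - rhs) == 0  # holds identically as a polynomial in x
```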
== Stickelberger distributions == Let h be an ordinary distribution on Q/Z taking values in a field F. Let G(N) denote the multiplicative group of Z/NZ, and for any function f on G(N) we extend f to a function on Z/NZ by taking f to be zero off G(N). Define an element of the group algebra F[G(N)] by g N ( r ) = 1 | G ( N ) | ∑ a ∈ G ( N ) h ( ⟨ r a N ⟩ ) σ a − 1 . {\displaystyle g_{N}(r)={\frac {1}{|G(N)|}}\sum _{a\in G(N)}h\left({\left\langle {\frac {ra}{N}}\right\rangle }\right)\sigma _{a}^{-1}\ .} The group algebras form a projective system with limit X. Then the functions gN form a distribution on Q/Z with values in X, the Stickelberger distribution associated with h. == p-adic measures == Consider the special case when the value group V of a distribution φ on X takes values in a local field K, finite over Qp, or more generally, in a finite-dimensional p-adic Banach space W over K, with valuation |·|. We call φ a measure if |φ| is bounded on compact open subsets of X. Let D be the ring of integers of K and L a lattice in W, that is, a free D-submodule of W with K⊗L = W. Up to scaling a measure may be taken to have values in L. === Hecke operators and measures === Let D be a fixed integer prime to p and consider ZD, the limit of the system Z/pnD. Consider any eigenfunction of the Hecke operator Tp with eigenvalue λp prime to p. We describe a procedure for deriving a measure on ZD. Fix an integer N prime to p and to D. Let F be the D-module of all functions on rational numbers with denominator coprime to N. For any prime l not dividing N we define the Hecke operator Tl by ( T l f ) ( a b ) = f ( l a b ) + ∑ k = 0 l − 1 f ( a + k b l b ) − ∑ k = 0 l − 1 f ( k l ) . {\displaystyle (T_{l}f)\left({\frac {a}{b}}\right)=f\left({\frac {la}{b}}\right)+\sum _{k=0}^{l-1}f\left({\frac {a+kb}{lb}}\right)-\sum _{k=0}^{l-1}f\left({\frac {k}{l}}\right)\ .} Let f be an eigenfunction for Tp with eigenvalue λp in D. The quadratic equation X2 − λpX + p = 0 has roots π1, π2 with π1 a unit and π2 divisible by p. Define a sequence a0 = 2, a1 = π1+π2 = λp and a k + 2 = λ p a k + 1 − p a k , {\displaystyle a_{k+2}=\lambda _{p}a_{k+1}-pa_{k}\ ,} so that a k = π 1 k + π 2 k . {\displaystyle a_{k}=\pi _{1}^{k}+\pi _{2}^{k}\ .} == References == Kubert, Daniel S.; Lang, Serge (1981). Modular Units. Grundlehren der Mathematischen Wissenschaften. Vol. 244. Springer-Verlag. ISBN 0-387-90517-0. Zbl 0492.12002. Lang, Serge (1990). Cyclotomic Fields I and II. Graduate Texts in Mathematics. Vol. 121 (second combined ed.). Springer Verlag. ISBN 3-540-96671-4. Zbl 0704.11038. Mazur, B.; Swinnerton-Dyer, P. (1974). "Arithmetic of Weil curves". Inventiones Mathematicae. 25: 1–61. doi:10.1007/BF01389997. Zbl 0281.14016.
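As a small numerical illustration of the last identity in the measures section above: a_k = π1^k + π2^k is the standard closed form of the linear recurrence a_{k+2} = λp·a_{k+1} − p·a_k. In this sketch the values of p and λp are arbitrary illustrative integers, not taken from any particular eigenform:

```python
import cmath

p, lam = 11, 4                        # illustrative values, lam prime to p
disc = cmath.sqrt(lam * lam - 4 * p)  # discriminant of X^2 - lam*X + p = 0
pi1, pi2 = (lam + disc) / 2, (lam - disc) / 2

a = [2, lam]                          # a_0 = 2, a_1 = pi1 + pi2 = lam
for _ in range(10):
    a.append(lam * a[-1] - p * a[-2])

for k, ak in enumerate(a):
    # recurrence values agree with the closed form up to floating-point error
    assert abs(ak - (pi1 ** k + pi2 ** k)) < 1e-6
```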
Wikipedia/Distribution_(number_theory)
In mathematics, a proof by infinite descent, also known as Fermat's method of descent, is a particular kind of proof by contradiction used to show that a statement cannot possibly hold for any number, by showing that if the statement were to hold for a number, then the same would be true for a smaller number, leading to an infinite descent and ultimately a contradiction. It is a method which relies on the well-ordering principle, and is often used to show that a given equation, such as a Diophantine equation, has no solutions. Typically, one shows that if a solution to a problem existed, which in some sense was related to one or more natural numbers, it would necessarily imply that a second solution existed, which was related to one or more 'smaller' natural numbers. This in turn would imply a third solution related to smaller natural numbers, implying a fourth solution, therefore a fifth solution, and so on. However, there cannot be an infinity of ever-smaller natural numbers, and therefore by mathematical induction, the original premise—that any solution exists—is incorrect: its correctness produces a contradiction. An alternative way to express this is to assume one or more solutions or examples exists, from which a smallest solution or example—a minimal counterexample—can then be inferred. Once there, one would try to prove that if a smallest solution exists, then it must imply the existence of a smaller solution (in some sense), which again proves that the existence of any solution would lead to a contradiction. The earliest uses of the method of infinite descent appear in Euclid's Elements. A typical example is Proposition 31 of Book 7, in which Euclid proves that every composite integer is divided (in Euclid's terminology "measured") by some prime number. The method was much later developed by Fermat, who coined the term and often used it for Diophantine equations. Two typical examples are showing the non-solvability of the Diophantine equation r 2 + s 4 = t 4 {\displaystyle r^{2}+s^{4}=t^{4}} and proving Fermat's theorem on sums of two squares, which states that an odd prime p can be expressed as a sum of two squares when p ≡ 1 ( mod 4 ) {\displaystyle p\equiv 1{\pmod {4}}} (see Modular arithmetic and proof by infinite descent). In this way Fermat was able to show the non-existence of solutions in many cases of Diophantine equations of classical interest (for example, the problem of four perfect squares in arithmetic progression). In some cases, to the modern eye, his "method of infinite descent" is an exploitation of the inversion of the doubling function for rational points on an elliptic curve E. The context is of a hypothetical non-trivial rational point on E. Doubling a point on E roughly doubles the length of the numbers required to write it (as number of digits), so that "halving" a point gives a rational with smaller terms. Since the terms are positive, they cannot decrease forever. == Number theory == In the number theory of the twentieth century, the infinite descent method was taken up again, and pushed to a point where it connected with the main thrust of algebraic number theory and the study of L-functions. The structural result of Mordell, that the rational points on an elliptic curve E form a finitely-generated abelian group, used an infinite descent argument based on E/2E in Fermat's style. 
To extend this to the case of an abelian variety A, André Weil had to make more explicit the way of quantifying the size of a solution, by means of a height function – a concept that became foundational. To show that A(Q)/2A(Q) is finite, which is certainly a necessary condition for the finite generation of the group A(Q) of rational points of A, one must do calculations in what later was recognised as Galois cohomology. In this way, abstractly-defined cohomology groups in the theory become identified with descents in the tradition of Fermat. The Mordell–Weil theorem was at the start of what later became a very extensive theory. == Application examples == === Irrationality of √2 === The proof that the square root of 2 (√2) is irrational (i.e. cannot be expressed as a fraction of two whole numbers) was discovered by the ancient Greeks, and is perhaps the earliest known example of a proof by infinite descent. Pythagoreans discovered that the diagonal of a square is incommensurable with its side, or in modern language, that the square root of two is irrational. Little is known with certainty about the time or circumstances of this discovery, but the name of Hippasus of Metapontum is often mentioned. For a while, the Pythagoreans treated as an official secret the discovery that the square root of two is irrational, and, according to legend, Hippasus was murdered for divulging it. The square root of two is occasionally called "Pythagoras' number" or "Pythagoras' Constant", for example Conway & Guy (1996). The ancient Greeks, not having algebra, worked out a geometric proof by infinite descent (John Horton Conway presented another geometric proof by infinite descent that may be more accessible). The following is an algebraic proof along similar lines: Suppose that √2 were rational. Then it could be written as 2 = p q {\displaystyle {\sqrt {2}}={\frac {p}{q}}} for two natural numbers, p and q. Then squaring would give 2 = p 2 q 2 , {\displaystyle 2={\frac {p^{2}}{q^{2}}},} 2 q 2 = p 2 , {\displaystyle 2q^{2}=p^{2},} so 2 must divide p2. Because 2 is a prime number, it must also divide p, by Euclid's lemma. So p = 2r, for some integer r. But then, 2 q 2 = ( 2 r ) 2 = 4 r 2 , {\displaystyle 2q^{2}=(2r)^{2}=4r^{2},} q 2 = 2 r 2 , {\displaystyle q^{2}=2r^{2},} which shows that 2 must divide q as well. So q = 2s for some integer s. This gives p q = 2 r 2 s = r s {\displaystyle {\frac {p}{q}}={\frac {2r}{2s}}={\frac {r}{s}}} . Therefore, if √2 could be written as a rational number, then it could always be written as a rational number with smaller parts, which itself could be written with yet-smaller parts, ad infinitum. But this is impossible in the set of natural numbers. Since √2 is a real number, which can be either rational or irrational, the only option left is for √2 to be irrational. (Alternatively, this proves that if √2 were rational, no "smallest" representation as a fraction could exist, as any attempt to find a "smallest" representation p/q would imply that a smaller one existed, which is a similar contradiction.) === Irrationality of √k if it is not an integer === For positive integer k, suppose that √k is not an integer, but is rational and can be expressed as ⁠m/n⁠ for natural numbers m and n, and let q be the largest integer less than √k (that is, q is the floor of √k). 
Then k = m n = m ( k − q ) n ( k − q ) = m k − m q n k − n q = ( n k ) k − m q n ( m n ) − n q = n k − m q m − n q {\displaystyle {\begin{aligned}{\sqrt {k}}&={\frac {m}{n}}\\[6pt]&={\frac {m\left({\sqrt {k}}-q\right)}{n\left({\sqrt {k}}-q\right)}}\\[6pt]&={\frac {m{\sqrt {k}}-mq}{n{\sqrt {k}}-nq}}\\[6pt]&={\frac {\left(n{\sqrt {k}}\right){\sqrt {k}}-mq}{n\left({\frac {m}{n}}\right)-nq}}\\[6pt]&={\frac {nk-mq}{m-nq}}\end{aligned}}} The numerator and denominator were each multiplied by the expression (√k − q)—which is positive but less than 1—and then simplified independently. So, the resulting products, say m′ and n′, are themselves integers, and are less than m and n respectively. Therefore, no matter what natural numbers m and n are used to express √k, there exist smaller natural numbers m′ < m and n′ < n that have the same ratio. But infinite descent on the natural numbers is impossible, so this disproves the original assumption that √k could be expressed as a ratio of natural numbers. === Non-solvability of r2 + s4 = t4 and its permutations === The non-solvability of r 2 + s 4 = t 4 {\displaystyle r^{2}+s^{4}=t^{4}} in integers is sufficient to show the non-solvability of q 4 + s 4 = t 4 {\displaystyle q^{4}+s^{4}=t^{4}} in integers, which is a special case of Fermat's Last Theorem, and the historical proofs of the latter proceeded by more broadly proving the former using infinite descent. The following more recent proof demonstrates both of these impossibilities by proving still more broadly that a Pythagorean triangle cannot have any two of its sides each either a square or twice a square, since there is no smallest such triangle: Suppose there exists such a Pythagorean triangle. Then it can be scaled down to give a primitive (i.e., with no common factors other than 1) Pythagorean triangle with the same property. Primitive Pythagorean triangles' sides can be written as x = 2 a b , {\displaystyle x=2ab,} y = a 2 − b 2 , {\displaystyle y=a^{2}-b^{2},} z = a 2 + b 2 {\displaystyle z=a^{2}+b^{2}} , with a and b relatively prime and with a+b odd and hence y and z both odd. The property that y and z are each odd means that neither y nor z can be twice a square. Furthermore, if x is a square or twice a square, then each of a and b is a square or twice a square. There are three cases, depending on which two sides are postulated to each be a square or twice a square: y and z: In this case, y and z are both squares. But then the right triangle with legs y z {\displaystyle {\sqrt {yz}}} and b 2 {\displaystyle b^{2}} and hypotenuse a 2 {\displaystyle a^{2}} also would have integer sides including a square leg ( b 2 {\displaystyle b^{2}} ) and a square hypotenuse ( a 2 {\displaystyle a^{2}} ), and would have a smaller hypotenuse ( a 2 {\displaystyle a^{2}} compared to z = a 2 + b 2 {\displaystyle z=a^{2}+b^{2}} ). z and x: z is a square. The integer right triangle with legs a {\displaystyle a} and b {\displaystyle b} and hypotenuse z {\displaystyle {\sqrt {z}}} also would have two sides ( a {\displaystyle a} and b {\displaystyle b} ) each of which is a square or twice a square, and a smaller hypotenuse ( z {\displaystyle {\sqrt {z}}} compared to z {\displaystyle z} ). y and x: y is a square. 
The integer right triangle with legs b {\displaystyle b} and y {\displaystyle {\sqrt {y}}} and hypotenuse a {\displaystyle a} would have two sides (b and a) each of which is a square or twice a square, with a smaller hypotenuse than the original triangle ( a {\displaystyle a} compared to z = a 2 + b 2 {\displaystyle z=a^{2}+b^{2}} ). In any of these cases, one Pythagorean triangle with two sides each of which is a square or twice a square has led to a smaller one, which in turn would lead to a smaller one, etc.; since such a sequence cannot go on infinitely, the original premise that such a triangle exists must be wrong. This implies that the equations r 2 + s 4 = t 4 , {\displaystyle r^{2}+s^{4}=t^{4},} r 4 + s 2 = t 4 , {\displaystyle r^{4}+s^{2}=t^{4},} and r 4 + s 4 = t 2 {\displaystyle r^{4}+s^{4}=t^{2}} cannot have non-trivial solutions, since non-trivial solutions would give Pythagorean triangles with two sides being squares. For other similar proofs by infinite descent for the n = 4 case of Fermat's Theorem, see the articles by Grant and Perella and Barbara. == See also == Vieta jumping == References == == Further reading == Infinite descent at PlanetMath. Example of Fermat's last theorem at PlanetMath.
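The descent step in the proof of the irrationality of √k above is completely explicit: from a supposed equality √k = m/n, with q = ⌊√k⌋, it manufactures the smaller representation m′/n′ = (nk − mq)/(m − nq). The following sketch iterates that map; if √k really were equal to m/n, every iterate would be an equal fraction with a strictly smaller positive denominator, and the run shows the denominators cannot keep decreasing forever:

```python
import math

def descent_step(k, m, n):
    # m'/n' = (n*k - m*q) / (m - n*q), with q = floor(sqrt(k))
    q = math.isqrt(k)
    return n * k - m * q, m - n * q

# start from the good rational approximation 17/12 to sqrt(2)
k, m, n = 2, 17, 12
while n > 0:
    print(f"{m}/{n}")            # denominators decrease: 12, 5, 2, 1
    m, n = descent_step(k, m, n)
```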
Wikipedia/Method_of_infinite_descent
In algebra and number theory, Euclid's lemma is a lemma that captures a fundamental property of prime numbers: if a prime number p divides the product ab of two integers a and b, then p divides at least one of a and b. For example, if p = 19, a = 133, b = 143, then ab = 133 × 143 = 19019, and since this is divisible by 19, the lemma implies that one or both of 133 or 143 must be as well. In fact, 133 = 19 × 7. The lemma first appeared in Euclid's Elements, and is a fundamental result in elementary number theory. If the premise of the lemma does not hold, that is, if p is a composite number, its consequent may be either true or false. For example, in the case of p = 10, a = 4, b = 15, composite number 10 divides ab = 4 × 15 = 60, but 10 divides neither 4 nor 15. This property is the key in the proof of the fundamental theorem of arithmetic. It is used to define prime elements, a generalization of prime numbers to arbitrary commutative rings. Euclid's lemma shows that in the integers irreducible elements are also prime elements. The proof uses induction, so it does not apply to all integral domains. == Formulations == Euclid's lemma is commonly used in the following equivalent form: if p is a prime number that divides a product ab and does not divide a, then it divides b. Euclid's lemma can be generalized as follows from prime numbers to any integers: if an integer n divides a product ab and n is coprime to a, then n divides b. This is a generalization because a prime number p is coprime with an integer a if and only if p does not divide a. == History == The lemma first appears as proposition 30 in Book VII of Euclid's Elements. It is included in practically every book that covers elementary number theory. The generalization of the lemma to integers appeared in Jean Prestet's textbook Nouveaux Elémens de Mathématiques in 1681. In Carl Friedrich Gauss's treatise Disquisitiones Arithmeticae, the statement of the lemma is Euclid's Proposition 14 (Section 2), which he uses to prove the uniqueness of the decomposition product of prime factors of an integer (Theorem 16), admitting the existence as "obvious". From this existence and uniqueness he then deduces the generalization of prime numbers to integers. For this reason, the generalization of Euclid's lemma is sometimes referred to as Gauss's lemma, but some believe this usage is incorrect due to confusion with Gauss's lemma on quadratic residues. == Proofs == The first two subsections are proofs of the generalized version of Euclid's lemma, namely that: if n divides ab and is coprime with a then it divides b. The original Euclid's lemma follows immediately: if n is prime, then it either divides a, or it does not divide a, in which case it is coprime with a, so by the generalized version it divides b. === Using Bézout's identity === In modern mathematics, a common proof involves Bézout's identity, which was unknown at Euclid's time. Bézout's identity states that if x and y are coprime integers (i.e. they share no common divisors other than 1 and −1) there exist integers r and s such that r x + s y = 1. {\displaystyle rx+sy=1.} Let a and n be coprime, and assume that n|ab. By Bézout's identity, there are r and s such that r n + s a = 1. {\displaystyle rn+sa=1.} Multiply both sides by b: r n b + s a b = b . {\displaystyle rnb+sab=b.} The first term on the left is divisible by n, and the second term is divisible by ab, which by hypothesis is divisible by n. Therefore their sum, b, is also divisible by n. === By induction === The following proof is inspired by Euclid's version of Euclidean algorithm, which proceeds by using only subtractions. Suppose that n ∣ a b {\displaystyle n\mid ab} and that n and a are coprime (that is, their greatest common divisor is 1). One has to prove that n divides b.
Since n ∣ a b , {\displaystyle n\mid ab,} there exists an integer q such that n q = a b . {\displaystyle nq=ab.} Without loss of generality, one can suppose that n, q, a, and b are positive, since the divisibility relation is independent of the signs of the involved integers. To prove the theorem by strong induction, we suppose that it has been proved for all smaller values of ab. There are three cases: If n = a, coprimality implies n = 1, and n divides b trivially. If n < a, then subtracting n b {\displaystyle nb} from both sides gives n ( q − b ) = ( a − n ) b . {\displaystyle n(q-b)=(a-n)b.} Thus, n divides (a − n) b. Since we assumed that n and a are coprime, it follows that a − n and n must be coprime. (If not, their greatest common divisor d would divide their sum a as well as n, contradicting our assumption.) The conclusion therefore follows by the induction hypothesis, since 0 < (a − n) b < ab. If n > a, then subtracting a q {\displaystyle aq} from both sides gives ( n − a ) q = a ( b − q ) . {\displaystyle (n-a)q=a(b-q).} Thus, n − a divides a (b − q). Since (as in the previous case) n − a and a are coprime, and since 0 < b − q < b, the induction hypothesis implies that n − a divides b − q; that is, b − q = r ( n − a ) {\displaystyle b-q=r(n-a)} for some integer r. So, ( n − a ) q = a r ( n − a ) , {\displaystyle (n-a)q=ar(n-a),} and, by dividing by n − a, one has q = a r . {\displaystyle q=ar.} Therefore, a b = n q = a n r , {\displaystyle ab=nq=anr,} and by dividing by a, one gets b = n r , {\displaystyle b=nr,} the desired conclusion. === Proof of Elements === Euclid's lemma is proved as Proposition 30 in Book VII of Euclid's Elements. The original proof is difficult to understand as is, so we quote the commentary from Euclid (1956, pp. 319–332). Proposition 19 If four numbers be proportional, the number produced from the first and fourth is equal to the number produced from the second and third; and, if the number produced from the first and fourth be equal to that produced from the second and third, the four numbers are proportional. Proposition 20 The least numbers of those that have the same ratio with them measures those that have the same ratio the same number of times—the greater the greater and the less the less. Proposition 21 Numbers prime to one another are the least of those that have the same ratio with them. Proposition 29 Any prime number is prime to any number it does not measure. Proposition 30 If two numbers, by multiplying one another, make the same number, and any prime number measures the product, it also measures one of the original numbers. Proof of 30 If c, a prime number, measure ab, c measures either a or b. Suppose c does not measure a. Therefore c, a are prime to one another. [VII. 29] Suppose ab = mc. Therefore c : a = b : m. [VII. 19] Hence [VII. 20, 21] b = nc, where n is some integer. Therefore c measures b. Similarly, if c does not measure b, c measures a. Therefore c measures one or other of the two numbers a, b. Q.E.D. == See also == == Footnotes == === Notes === === Citations === == References == Bajnok, Béla (2013), An Invitation to Abstract Mathematics, Undergraduate Texts in Mathematics, Springer, ISBN 978-1-4614-6636-9. Euclid (1956), The Thirteen Books of the Elements, vol. 2 (Books III-IX), translated by Heath, Thomas Little, Dover Publications, ISBN 978-0-486-60089-5. Euclid (1994), Les Éléments, traduction, commentaires et notes (in French), vol. 2, translated by Vitrac, Bernard, pp.
338–339, ISBN 2-13-045568-9 Gauss, Carl Friedrich (2001), Disquisitiones Arithmeticae, translated by Clarke, Arthur A. (Second, corrected ed.), New Haven, CT: Yale University Press, ISBN 978-0-300-09473-2 Gauss, Carl Friedrich (1981), Untersuchungen uber hohere Arithmetik [Investigations on higher arithmetic], translated by Maser, H. (Second ed.), New York: Chelsea, ISBN 978-0-8284-0191-3 Hardy, G. H.; Wright, E. M.; Wiles, A. J. (2008-09-15), An Introduction to the Theory of Numbers (6th ed.), Oxford: Oxford University Press, ISBN 978-0-19-921986-5 Ireland, Kenneth; Rosen, Michael (2010), A Classical Introduction to Modern Number Theory (Second ed.), New York: Springer, ISBN 978-1-4419-3094-1 Joyner, David; Kreminski, Richard; Turisco, Joann (2004), Applied Abstract Algebra, JHU Press, ISBN 978-0-8018-7822-0. Landau, Edmund (1999), Elementary Number Theory, translated by Goodman, J. E. (2nd ed.), Providence, Rhode Island: American Mathematical Society, ISBN 978-0-821-82004-9 Martin, G. E. (2012), The Foundations of Geometry and the Non-Euclidean Plane, Undergraduate Texts in Mathematics, Springer, ISBN 978-1-4612-5725-7. Riesel, Hans (1994), Prime Numbers and Computer Methods for Factorization (2nd ed.), Boston: Birkhäuser, ISBN 978-0-8176-3743-9. == External links == Weisstein, Eric W. "Euclid's Lemma". MathWorld.
Wikipedia/Euclid's_lemma
In mathematics, and specifically in number theory, a divisor function is an arithmetic function related to the divisors of an integer. When referred to as the divisor function, it counts the number of divisors of an integer (including 1 and the number itself). It appears in a number of remarkable identities, including relationships on the Riemann zeta function and the Eisenstein series of modular forms. Divisor functions were studied by Ramanujan, who gave a number of important congruences and identities; these are treated separately in the article Ramanujan's sum. A related function is the divisor summatory function, which, as the name implies, is a sum over the divisor function. == Definition == The sum of positive divisors function σz(n), for a real or complex number z, is defined as the sum of the zth powers of the positive divisors of n. It can be expressed in sigma notation as σ z ( n ) = ∑ d ∣ n d z , {\displaystyle \sigma _{z}(n)=\sum _{d\mid n}d^{z}\,\!,} where d ∣ n {\displaystyle {d\mid n}} is shorthand for "d divides n". The notations d(n), ν(n) and τ(n) (for the German Teiler = divisors) are also used to denote σ0(n), or the number-of-divisors function (OEIS: A000005). When z is 1, the function is called the sigma function or sum-of-divisors function, and the subscript is often omitted, so σ(n) is the same as σ1(n) (OEIS: A000203). The aliquot sum s(n) of n is the sum of the proper divisors (that is, the divisors excluding n itself, OEIS: A001065), and equals σ1(n) − n; the aliquot sequence of n is formed by repeatedly applying the aliquot sum function. == Example == For example, σ0(12) is the number of the divisors of 12: σ 0 ( 12 ) = 1 0 + 2 0 + 3 0 + 4 0 + 6 0 + 12 0 = 1 + 1 + 1 + 1 + 1 + 1 = 6 , {\displaystyle {\begin{aligned}\sigma _{0}(12)&=1^{0}+2^{0}+3^{0}+4^{0}+6^{0}+12^{0}\\&=1+1+1+1+1+1=6,\end{aligned}}} while σ1(12) is the sum of all the divisors: σ 1 ( 12 ) = 1 1 + 2 1 + 3 1 + 4 1 + 6 1 + 12 1 = 1 + 2 + 3 + 4 + 6 + 12 = 28 , {\displaystyle {\begin{aligned}\sigma _{1}(12)&=1^{1}+2^{1}+3^{1}+4^{1}+6^{1}+12^{1}\\&=1+2+3+4+6+12=28,\end{aligned}}} and the aliquot sum s(12) of proper divisors is: s ( 12 ) = 1 1 + 2 1 + 3 1 + 4 1 + 6 1 = 1 + 2 + 3 + 4 + 6 = 16. {\displaystyle {\begin{aligned}s(12)&=1^{1}+2^{1}+3^{1}+4^{1}+6^{1}\\&=1+2+3+4+6=16.\end{aligned}}} σ−1(n) is sometimes called the abundancy index of n, and we have: σ − 1 ( 12 ) = 1 − 1 + 2 − 1 + 3 − 1 + 4 − 1 + 6 − 1 + 12 − 1 = 1 1 + 1 2 + 1 3 + 1 4 + 1 6 + 1 12 = 12 12 + 6 12 + 4 12 + 3 12 + 2 12 + 1 12 = 12 + 6 + 4 + 3 + 2 + 1 12 = 28 12 = 7 3 = σ 1 ( 12 ) 12 {\displaystyle {\begin{aligned}\sigma _{-1}(12)&=1^{-1}+2^{-1}+3^{-1}+4^{-1}+6^{-1}+12^{-1}\\[6pt]&={\tfrac {1}{1}}+{\tfrac {1}{2}}+{\tfrac {1}{3}}+{\tfrac {1}{4}}+{\tfrac {1}{6}}+{\tfrac {1}{12}}\\[6pt]&={\tfrac {12}{12}}+{\tfrac {6}{12}}+{\tfrac {4}{12}}+{\tfrac {3}{12}}+{\tfrac {2}{12}}+{\tfrac {1}{12}}\\[6pt]&={\tfrac {12+6+4+3+2+1}{12}}={\tfrac {28}{12}}={\tfrac {7}{3}}={\tfrac {\sigma _{1}(12)}{12}}\end{aligned}}} == Table of values == The cases x = 2 to 5 are listed in OEIS: A001157 through OEIS: A001160, x = 6 to 24 are listed in OEIS: A013954 through OEIS: A013972. == Properties == === Formulas at prime powers === For a prime number p, σ 0 ( p ) = 2 σ 0 ( p n ) = n + 1 σ 1 ( p ) = p + 1 {\displaystyle {\begin{aligned}\sigma _{0}(p)&=2\\\sigma _{0}(p^{n})&=n+1\\\sigma _{1}(p)&=p+1\end{aligned}}} because by definition, the factors of a prime number are 1 and itself. 
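The elementary values above, both the n = 12 example and the prime-power formulas, can be checked directly from the definition. A minimal sketch (the function name is illustrative):

```python
from fractions import Fraction

def sigma(z, n):
    # sigma_z(n): sum of the z-th powers of the positive divisors of n
    return sum(Fraction(d) ** z for d in range(1, n + 1) if n % d == 0)

assert sigma(0, 12) == 6 and sigma(1, 12) == 28        # the n = 12 example
assert sigma(1, 12) - 12 == 16                         # aliquot sum s(12)
assert sigma(-1, 12) == Fraction(7, 3)                 # abundancy index
assert sigma(0, 7) == 2 and sigma(1, 7) == 7 + 1       # p prime: 2 divisors, p + 1
assert sigma(0, 7 ** 3) == 3 + 1                       # sigma_0(p^n) = n + 1
```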
Also, where pn# denotes the primorial, σ 0 ( p n # ) = 2 n {\displaystyle \sigma _{0}(p_{n}\#)=2^{n}} since n prime factors allow a sequence of binary selection ( p i {\displaystyle p_{i}} or 1) from n terms for each divisor formed. However, these are not in general the smallest numbers whose number of divisors is a power of two; instead, the smallest such number may be obtained by multiplying together the first n Fermi–Dirac primes, prime powers whose exponent is a power of two. Clearly, 1 < σ 0 ( n ) < n {\displaystyle 1<\sigma _{0}(n)<n} for all n > 2 {\displaystyle n>2} , and σ x ( n ) > n {\displaystyle \sigma _{x}(n)>n} for all n > 1 {\displaystyle n>1} , x > 0 {\displaystyle x>0} . The divisor function is multiplicative (since each divisor c of the product mn with gcd ( m , n ) = 1 {\displaystyle \gcd(m,n)=1} corresponds uniquely to a divisor a of m and a divisor b of n), but not completely multiplicative: gcd ( a , b ) = 1 ⟹ σ x ( a b ) = σ x ( a ) σ x ( b ) . {\displaystyle \gcd(a,b)=1\Longrightarrow \sigma _{x}(ab)=\sigma _{x}(a)\sigma _{x}(b).} The consequence of this is that, if we write n = ∏ i = 1 r p i a i {\displaystyle n=\prod _{i=1}^{r}p_{i}^{a_{i}}} where r = ω(n) is the number of distinct prime factors of n, pi is the ith prime factor, and ai is the maximum power of pi by which n is divisible, then we have: σ x ( n ) = ∏ i = 1 r ∑ j = 0 a i p i j x = ∏ i = 1 r ( 1 + p i x + p i 2 x + ⋯ + p i a i x ) . {\displaystyle \sigma _{x}(n)=\prod _{i=1}^{r}\sum _{j=0}^{a_{i}}p_{i}^{jx}=\prod _{i=1}^{r}\left(1+p_{i}^{x}+p_{i}^{2x}+\cdots +p_{i}^{a_{i}x}\right).} which, when x ≠ 0, is equivalent to the useful formula: σ x ( n ) = ∏ i = 1 r p i ( a i + 1 ) x − 1 p i x − 1 . {\displaystyle \sigma _{x}(n)=\prod _{i=1}^{r}{\frac {p_{i}^{(a_{i}+1)x}-1}{p_{i}^{x}-1}}.} When x = 0, σ 0 ( n ) {\displaystyle \sigma _{0}(n)} is: σ 0 ( n ) = ∏ i = 1 r ( a i + 1 ) . {\displaystyle \sigma _{0}(n)=\prod _{i=1}^{r}(a_{i}+1).} This result can be directly deduced from the fact that all divisors of n {\displaystyle n} are uniquely determined by the distinct tuples ( x 1 , x 2 , . . . , x i , . . . , x r ) {\displaystyle (x_{1},x_{2},...,x_{i},...,x_{r})} of integers with 0 ≤ x i ≤ a i {\displaystyle 0\leq x_{i}\leq a_{i}} (i.e. a i + 1 {\displaystyle a_{i}+1} independent choices for each x i {\displaystyle x_{i}} ). For example, if n is 24, there are two prime factors (p1 is 2; p2 is 3); noting that 24 is the product of 23×31, a1 is 3 and a2 is 1. Thus we can calculate σ 0 ( 24 ) {\displaystyle \sigma _{0}(24)} as follows: σ 0 ( 24 ) = ∏ i = 1 2 ( a i + 1 ) = ( 3 + 1 ) ( 1 + 1 ) = 4 ⋅ 2 = 8. {\displaystyle \sigma _{0}(24)=\prod _{i=1}^{2}(a_{i}+1)=(3+1)(1+1)=4\cdot 2=8.} The eight divisors counted by this formula are 1, 2, 4, 8, 3, 6, 12, and 24.
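The product formula gives a much faster evaluation than summing over all divisors. A sketch using trial-division factorisation, valid for non-negative integer x (helper names are illustrative):

```python
def factorize(n):
    # prime factorisation of n as (prime, exponent) pairs, by trial division
    factors, p = [], 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            factors.append((p, e))
        p += 1
    if n > 1:
        factors.append((n, 1))
    return factors

def sigma_x(n, x):
    # sigma_x(n) = prod (p^((a+1)x) - 1) / (p^x - 1) over p^a || n,
    # and prod (a + 1) when x = 0
    result = 1
    for p, a in factorize(n):
        result *= (a + 1) if x == 0 else (p ** ((a + 1) * x) - 1) // (p ** x - 1)
    return result

assert sigma_x(24, 0) == 8    # the worked example: 24 = 2^3 * 3^1
assert sigma_x(24, 1) == 60   # 1 + 2 + 3 + 4 + 6 + 8 + 12 + 24
```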
=== Other properties and identities === Euler proved the remarkable recurrence: σ 1 ( n ) = σ 1 ( n − 1 ) + σ 1 ( n − 2 ) − σ 1 ( n − 5 ) − σ 1 ( n − 7 ) + σ 1 ( n − 12 ) + σ 1 ( n − 15 ) + ⋯ = ∑ i ∈ N ( − 1 ) i + 1 ( σ 1 ( n − 1 2 ( 3 i 2 − i ) ) + σ 1 ( n − 1 2 ( 3 i 2 + i ) ) ) , {\displaystyle {\begin{aligned}\sigma _{1}(n)&=\sigma _{1}(n-1)+\sigma _{1}(n-2)-\sigma _{1}(n-5)-\sigma _{1}(n-7)+\sigma _{1}(n-12)+\sigma _{1}(n-15)+\cdots \\[12mu]&=\sum _{i\in \mathbb {N} }(-1)^{i+1}\left(\sigma _{1}\left(n-{\frac {1}{2}}\left(3i^{2}-i\right)\right)+\sigma _{1}\left(n-{\frac {1}{2}}\left(3i^{2}+i\right)\right)\right),\end{aligned}}} where σ 1 ( 0 ) = n {\displaystyle \sigma _{1}(0)=n} if it occurs and σ 1 ( x ) = 0 {\displaystyle \sigma _{1}(x)=0} for x < 0 {\displaystyle x<0} , and 1 2 ( 3 i 2 ∓ i ) {\displaystyle {\tfrac {1}{2}}\left(3i^{2}\mp i\right)} are consecutive pairs of generalized pentagonal numbers (OEIS: A001318, starting at offset 1). Indeed, Euler proved this by logarithmic differentiation of the identity in his pentagonal number theorem. For a non-square integer, n, every divisor, d, of n is paired with divisor n/d of n and σ 0 ( n ) {\displaystyle \sigma _{0}(n)} is even; for a square integer, one divisor (namely n {\displaystyle {\sqrt {n}}} ) is not paired with a distinct divisor and σ 0 ( n ) {\displaystyle \sigma _{0}(n)} is odd. Similarly, the number σ 1 ( n ) {\displaystyle \sigma _{1}(n)} is odd if and only if n is a square or twice a square. We also note s(n) = σ(n) − n. Here s(n) denotes the sum of the proper divisors of n, that is, the divisors of n excluding n itself. This function is used to recognize perfect numbers, which are the n such that s(n) = n. If s(n) > n, then n is an abundant number, and if s(n) < n, then n is a deficient number. If n is a power of 2, n = 2 k {\displaystyle n=2^{k}} , then σ ( n ) = 2 ⋅ 2 k − 1 = 2 n − 1 {\displaystyle \sigma (n)=2\cdot 2^{k}-1=2n-1} and s ( n ) = n − 1 {\displaystyle s(n)=n-1} , which makes n almost-perfect. As an example, for two primes p , q : p < q {\displaystyle p,q:p<q} , let n = p q {\displaystyle n=p\,q} . Then σ ( n ) = ( p + 1 ) ( q + 1 ) = n + 1 + ( p + q ) , {\displaystyle \sigma (n)=(p+1)(q+1)=n+1+(p+q),} φ ( n ) = ( p − 1 ) ( q − 1 ) = n + 1 − ( p + q ) , {\displaystyle \varphi (n)=(p-1)(q-1)=n+1-(p+q),} and n + 1 = ( σ ( n ) + φ ( n ) ) / 2 , {\displaystyle n+1=(\sigma (n)+\varphi (n))/2,} p + q = ( σ ( n ) − φ ( n ) ) / 2 , {\displaystyle p+q=(\sigma (n)-\varphi (n))/2,} where φ ( n ) {\displaystyle \varphi (n)} is Euler's totient function. Then, the roots of ( x − p ) ( x − q ) = x 2 − ( p + q ) x + n = x 2 − [ ( σ ( n ) − φ ( n ) ) / 2 ] x + [ ( σ ( n ) + φ ( n ) ) / 2 − 1 ] = 0 {\displaystyle (x-p)(x-q)=x^{2}-(p+q)x+n=x^{2}-[(\sigma (n)-\varphi (n))/2]x+[(\sigma (n)+\varphi (n))/2-1]=0} express p and q in terms of σ(n) and φ(n) only, requiring no knowledge of n or p + q {\displaystyle p+q} , as p = ( σ ( n ) − φ ( n ) ) / 4 − [ ( σ ( n ) − φ ( n ) ) / 4 ] 2 − [ ( σ ( n ) + φ ( n ) ) / 2 − 1 ] , {\displaystyle p=(\sigma (n)-\varphi (n))/4-{\sqrt {[(\sigma (n)-\varphi (n))/4]^{2}-[(\sigma (n)+\varphi (n))/2-1]}},} q = ( σ ( n ) − φ ( n ) ) / 4 + [ ( σ ( n ) − φ ( n ) ) / 4 ] 2 − [ ( σ ( n ) + φ ( n ) ) / 2 − 1 ] . 
{\displaystyle q=(\sigma (n)-\varphi (n))/4+{\sqrt {[(\sigma (n)-\varphi (n))/4]^{2}-[(\sigma (n)+\varphi (n))/2-1]}}.} Also, knowing n and either σ ( n ) {\displaystyle \sigma (n)} or φ ( n ) {\displaystyle \varphi (n)} , or, alternatively, p + q {\displaystyle p+q} and either σ ( n ) {\displaystyle \sigma (n)} or φ ( n ) {\displaystyle \varphi (n)} allows an easy recovery of p and q. In 1984, Roger Heath-Brown proved that the equality σ 0 ( n ) = σ 0 ( n + 1 ) {\displaystyle \sigma _{0}(n)=\sigma _{0}(n+1)} is true for infinitely many values of n, see OEIS: A005237. === Dirichlet convolutions === By definition: σ = Id ∗ 1 {\displaystyle \sigma =\operatorname {Id} *\mathbf {1} } By Möbius inversion: Id = σ ∗ μ {\displaystyle \operatorname {Id} =\sigma *\mu } == Series relations == Two Dirichlet series involving the divisor function are: ∑ n = 1 ∞ σ a ( n ) n s = ζ ( s ) ζ ( s − a ) for s > 1 , s > a + 1 , {\displaystyle \sum _{n=1}^{\infty }{\frac {\sigma _{a}(n)}{n^{s}}}=\zeta (s)\zeta (s-a)\quad {\text{for}}\quad s>1,s>a+1,} where ζ {\displaystyle \zeta } is the Riemann zeta function. The series for d(n) = σ0(n) gives: ∑ n = 1 ∞ d ( n ) n s = ζ 2 ( s ) for s > 1 , {\displaystyle \sum _{n=1}^{\infty }{\frac {d(n)}{n^{s}}}=\zeta ^{2}(s)\quad {\text{for}}\quad s>1,} and a Ramanujan identity ∑ n = 1 ∞ σ a ( n ) σ b ( n ) n s = ζ ( s ) ζ ( s − a ) ζ ( s − b ) ζ ( s − a − b ) ζ ( 2 s − a − b ) , {\displaystyle \sum _{n=1}^{\infty }{\frac {\sigma _{a}(n)\sigma _{b}(n)}{n^{s}}}={\frac {\zeta (s)\zeta (s-a)\zeta (s-b)\zeta (s-a-b)}{\zeta (2s-a-b)}},} which is a special case of the Rankin–Selberg convolution. A Lambert series involving the divisor function is: ∑ n = 1 ∞ q n σ a ( n ) = ∑ n = 1 ∞ ∑ j = 1 ∞ n a q j n = ∑ n = 1 ∞ n a q n 1 − q n = ∑ n = 1 ∞ Li − a ⁡ ( q n ) {\displaystyle \sum _{n=1}^{\infty }q^{n}\sigma _{a}(n)=\sum _{n=1}^{\infty }\sum _{j=1}^{\infty }n^{a}q^{j\,n}=\sum _{n=1}^{\infty }{\frac {n^{a}q^{n}}{1-q^{n}}}=\sum _{n=1}^{\infty }\operatorname {Li} _{-a}(q^{n})} for arbitrary complex |q| ≤ 1 and a ( Li {\displaystyle \operatorname {Li} } is the polylogarithm). This summation also appears as the Fourier series of the Eisenstein series and the invariants of the Weierstrass elliptic functions. For k > 0 {\displaystyle k>0} , there is an explicit series representation with Ramanujan sums c m ( n ) {\displaystyle c_{m}(n)} as : σ k ( n ) = ζ ( k + 1 ) n k ∑ m = 1 ∞ c m ( n ) m k + 1 . {\displaystyle \sigma _{k}(n)=\zeta (k+1)n^{k}\sum _{m=1}^{\infty }{\frac {c_{m}(n)}{m^{k+1}}}.} The computation of the first terms of c m ( n ) {\displaystyle c_{m}(n)} shows its oscillations around the "average value" ζ ( k + 1 ) n k {\displaystyle \zeta (k+1)n^{k}} : σ k ( n ) = ζ ( k + 1 ) n k [ 1 + ( − 1 ) n 2 k + 1 + 2 cos ⁡ 2 π n 3 3 k + 1 + 2 cos ⁡ π n 2 4 k + 1 + ⋯ ] {\displaystyle \sigma _{k}(n)=\zeta (k+1)n^{k}\left[1+{\frac {(-1)^{n}}{2^{k+1}}}+{\frac {2\cos {\frac {2\pi n}{3}}}{3^{k+1}}}+{\frac {2\cos {\frac {\pi n}{2}}}{4^{k+1}}}+\cdots \right]} == Growth rate == In little-o notation, the divisor function satisfies the inequality: for all ε > 0 , d ( n ) = o ( n ε ) . {\displaystyle {\mbox{for all }}\varepsilon >0,\quad d(n)=o(n^{\varepsilon }).} More precisely, Severin Wigert showed that: lim sup n → ∞ log ⁡ d ( n ) log ⁡ n / log ⁡ log ⁡ n = log ⁡ 2. {\displaystyle \limsup _{n\to \infty }{\frac {\log d(n)}{\log n/\log \log n}}=\log 2.} On the other hand, since there are infinitely many prime numbers, lim inf n → ∞ d ( n ) = 2. 
{\displaystyle \liminf _{n\to \infty }d(n)=2.} In Big-O notation, Peter Gustav Lejeune Dirichlet showed that the average order of the divisor function satisfies the following inequality: for all x ≥ 1 , ∑ n ≤ x d ( n ) = x log ⁡ x + ( 2 γ − 1 ) x + O ( x ) , {\displaystyle {\mbox{for all }}x\geq 1,\sum _{n\leq x}d(n)=x\log x+(2\gamma -1)x+O({\sqrt {x}}),} where γ {\displaystyle \gamma } is the Euler–Mascheroni constant. Improving the bound O ( x ) {\displaystyle O({\sqrt {x}})} in this formula is known as Dirichlet's divisor problem. The behaviour of the sigma function is irregular. The asymptotic growth rate of the sigma function can be expressed by: lim sup n → ∞ σ ( n ) n log ⁡ log ⁡ n = e γ , {\displaystyle \limsup _{n\rightarrow \infty }{\frac {\sigma (n)}{n\,\log \log n}}=e^{\gamma },} where lim sup is the limit superior. This result is Grönwall's theorem, published in 1913 (Grönwall 1913). His proof uses Mertens' third theorem, which says that: lim n → ∞ 1 log ⁡ n ∏ p ≤ n p p − 1 = e γ , {\displaystyle \lim _{n\to \infty }{\frac {1}{\log n}}\prod _{p\leq n}{\frac {p}{p-1}}=e^{\gamma },} where p denotes a prime. In 1915, Ramanujan proved that under the assumption of the Riemann hypothesis, Robin's inequality σ ( n ) < e γ n log ⁡ log ⁡ n {\displaystyle \ \sigma (n)<e^{\gamma }n\log \log n} (where γ is the Euler–Mascheroni constant) holds for all sufficiently large n (Ramanujan 1997). The largest known value that violates the inequality is n=5040. In 1984, Guy Robin proved that the inequality is true for all n > 5040 if and only if the Riemann hypothesis is true (Robin 1984). This is Robin's theorem, and the inequality is now named after him. Robin furthermore showed that if the Riemann hypothesis is false then there are an infinite number of values of n that violate the inequality, and it is known that the smallest such n > 5040 must be superabundant (Akbary & Friggstad 2009). It has been shown that the inequality holds for large odd and square-free integers, and that the Riemann hypothesis is equivalent to the inequality just for n divisible by the fifth power of a prime (Choie et al. 2007). Robin also proved, unconditionally, that the inequality: σ ( n ) < e γ n log ⁡ log ⁡ n + 0.6483 n log ⁡ log ⁡ n {\displaystyle \ \sigma (n)<e^{\gamma }n\log \log n+{\frac {0.6483\ n}{\log \log n}}} holds for all n ≥ 3. A related bound was given by Jeffrey Lagarias in 2002, who proved that the Riemann hypothesis is equivalent to the statement that: σ ( n ) < H n + e H n log ⁡ ( H n ) {\displaystyle \sigma (n)<H_{n}+e^{H_{n}}\log(H_{n})} for every natural number n > 1, where H n {\displaystyle H_{n}} is the nth harmonic number (Lagarias 2002). == See also == Divisor sum convolutions, lists a few identities involving the divisor functions Euler's totient function, Euler's phi function Refactorable number Table of divisors Unitary divisor == Notes == == References == Akbary, Amir; Friggstad, Zachary (2009), "Superabundant numbers and the Riemann hypothesis" (PDF), American Mathematical Monthly, 116 (3): 273–275, doi:10.4169/193009709X470128, archived from the original (PDF) on 2014-04-11. Apostol, Tom M. (1976), Introduction to analytic number theory, Undergraduate Texts in Mathematics, New York-Heidelberg: Springer-Verlag, ISBN 978-0-387-90163-3, MR 0434929, Zbl 0335.10001 Bach, Eric; Shallit, Jeffrey, Algorithmic Number Theory, volume 1, 1996, MIT Press. ISBN 0-262-02405-5, see page 234 in section 8.8.
Caveney, Geoffrey; Nicolas, Jean-Louis; Sondow, Jonathan (2011), "Robin's theorem, primes, and a new elementary reformulation of the Riemann Hypothesis" (PDF), INTEGERS: The Electronic Journal of Combinatorial Number Theory, 11: A33, arXiv:1110.5078, Bibcode:2011arXiv1110.5078C Choie, YoungJu; Lichiardopol, Nicolas; Moree, Pieter; Solé, Patrick (2007), "On Robin's criterion for the Riemann hypothesis", Journal de théorie des nombres de Bordeaux, 19 (2): 357–372, arXiv:math.NT/0604314, doi:10.5802/jtnb.591, ISSN 1246-7405, MR 2394891, S2CID 3207238, Zbl 1163.11059 Gioia, A. A.; Vaidya, A. M. (1967), "Amicable numbers with opposite parity", The American Mathematical Monthly, 74 (8): 969–973, doi:10.2307/2315280, JSTOR 2315280, MR 0220659 Grönwall, Thomas Hakon (1913), "Some asymptotic expressions in the theory of numbers", Transactions of the American Mathematical Society, 14 (1): 113–122, doi:10.1090/S0002-9947-1913-1500940-6 Hardy, G. H.; Wright, E. M. (2008) [1938], An Introduction to the Theory of Numbers, Revised by D. R. Heath-Brown and J. H. Silverman. Foreword by Andrew Wiles. (6th ed.), Oxford: Oxford University Press, ISBN 978-0-19-921986-5, MR 2445243, Zbl 1159.11001 Ivić, Aleksandar (1985), The Riemann zeta-function. The theory of the Riemann zeta-function with applications, A Wiley-Interscience Publication, New York etc.: John Wiley & Sons, pp. 385–440, ISBN 0-471-80634-X, Zbl 0556.10026 Lagarias, Jeffrey C. (2002), "An elementary problem equivalent to the Riemann hypothesis", The American Mathematical Monthly, 109 (6): 534–543, arXiv:math/0008177, doi:10.2307/2695443, ISSN 0002-9890, JSTOR 2695443, MR 1908008, S2CID 15884740 Long, Calvin T. (1972), Elementary Introduction to Number Theory (2nd ed.), Lexington: D. C. Heath and Company, LCCN 77171950 Pettofrezzo, Anthony J.; Byrkit, Donald R. (1970), Elements of Number Theory, Englewood Cliffs: Prentice Hall, LCCN 77081766 Ramanujan, Srinivasa (1997), "Highly composite numbers, annotated by Jean-Louis Nicolas and Guy Robin", The Ramanujan Journal, 1 (2): 119–153, doi:10.1023/A:1009764017495, ISSN 1382-4090, MR 1606180, S2CID 115619659 Robin, Guy (1984), "Grandes valeurs de la fonction somme des diviseurs et hypothèse de Riemann", Journal de Mathématiques Pures et Appliquées, Neuvième Série, 63 (2): 187–213, ISSN 0021-7824, MR 0774171 Williams, Kenneth S. (2011), Number theory in the spirit of Liouville, London Mathematical Society Student Texts, vol. 76, Cambridge: Cambridge University Press, ISBN 978-0-521-17562-3, Zbl 1227.11002 == External links == Weisstein, Eric W. "Divisor Function". MathWorld. Weisstein, Eric W. "Robin's Theorem". MathWorld. Elementary Evaluation of Certain Convolution Sums Involving Divisor Functions PDF of a paper by Huard, Ou, Spearman, and Williams. Contains elementary (i.e. not relying on the theory of modular forms) proofs of divisor sum convolutions, formulas for the number of ways of representing a number as a sum of triangular numbers, and related results.
Wikipedia/Divisor_function
In mathematics, the Hardy–Ramanujan–Littlewood circle method is a technique of analytic number theory. It is named for G. H. Hardy, S. Ramanujan, and J. E. Littlewood, who developed it in a series of papers on Waring's problem. == History == The initial idea is usually attributed to the work of Hardy with Srinivasa Ramanujan a few years earlier, in 1916 and 1917, on the asymptotics of the partition function. It was taken up by many other researchers, including Harold Davenport and I. M. Vinogradov, who modified the formulation slightly (moving from complex analysis to exponential sums), without changing the broad lines. Hundreds of papers followed, and as of 2022 the method still yields results. The method is the subject of a monograph Vaughan (1997) by R. C. Vaughan. == Outline == The goal is to prove asymptotic behavior of a series: to show that an ~ F(n) for some function. This is done by taking the generating function of the series, then computing the residues about zero (essentially the Fourier coefficients). Technically, the generating function is scaled to have radius of convergence 1, so it has singularities on the unit circle – thus one cannot take the contour integral over the unit circle. The circle method is specifically how to compute these residues, by partitioning the circle into minor arcs (the bulk of the circle) and major arcs (small arcs containing the most significant singularities), and then bounding the behavior on the minor arcs. The key insight is that, in many cases of interest (such as theta functions), the singularities occur at the roots of unity, and the significance of the singularities is in the order of the Farey sequence. Thus one can investigate the most significant singularities, and, if fortunate, compute the integrals. === Setup === The circle in question was initially the unit circle in the complex plane. Assuming the problem had first been formulated in the terms that for a sequence of complex numbers an for n = 0, 1, 2, 3, ..., we want some asymptotic information of the type an ~ F(n), where we have some heuristic reason to guess the form taken by F (an ansatz), we write f ( z ) = ∑ a n z n {\displaystyle f(z)=\sum a_{n}z^{n}} a power series generating function. The interesting cases are where f is then of radius of convergence equal to 1, and we suppose that the problem as posed has been modified to present this situation. === Residues === From that formulation, it follows directly from the residue theorem that I n = ∮ C f ( z ) z − ( n + 1 ) d z = 2 π i a n {\displaystyle I_{n}=\oint _{C}f(z)z^{-(n+1)}\,dz=2\pi ia_{n}} for integers n ≥ 0, where C is a circle of radius r and centred at 0, for any r with 0 < r < 1; in other words, I n {\displaystyle I_{n}} is a contour integral, integrated over the circle described traversed once anticlockwise. We would like to take r = 1 directly, that is, to use the unit circle contour. In the complex analysis formulation this is problematic, since the values of f may not be defined there. === Singularities on unit circle === The problem addressed by the circle method is to force the issue of taking r = 1, by a good understanding of the nature of the singularities f exhibits on the unit circle. The fundamental insight is the role played by the Farey sequence of rational numbers, or equivalently by the roots of unity: ζ = exp ⁡ ( 2 π i r s ) . 
{\displaystyle \zeta \ =\exp \left({\frac {2\pi ir}{s}}\right).} Here the denominator s, assuming that ⁠r/s⁠ is in lowest terms, turns out to determine the relative importance of the singular behaviour of typical f near ζ. === Method === The Hardy–Littlewood circle method, for the complex-analytic formulation, can then be thus expressed. The contributions to the evaluation of In, as r → 1, should be treated in two ways, traditionally called major arcs and minor arcs. We divide the roots of unity ζ into two classes, according to whether s ≤ N or s > N, where N is a function of n that is ours to choose conveniently. The integral In is divided up into integrals each on some arc of the circle that is adjacent to ζ, of length a function of s (again, at our discretion). The arcs make up the whole circle; the sum of the integrals over the major arcs is to make up 2πiF(n) (realistically, this will happen up to a manageable remainder term). The sum of the integrals over the minor arcs is to be replaced by an upper bound, smaller in order than F(n). == Discussion == Stated boldly like this, it is not at all clear that this can be made to work. The insights involved are quite deep. One clear source is the theory of theta functions. === Waring's problem === In the context of Waring's problem, powers of theta functions are the generating functions for the sum of squares function. Their analytic behaviour is known in much more accurate detail than for the cubes, for example. It is the case, as the false-colour diagram indicates, that for a theta function the 'most important' point on the boundary circle is at z = 1; followed by z = −1, and then the two complex cube roots of unity at 7 o'clock and 11 o'clock. After that it is the fourth roots of unity i and −i that matter most. While nothing in this guarantees that the analytical method will work, it does explain the rationale of using a Farey series-type criterion on roots of unity. In the case of Waring's problem, one takes a sufficiently high power of the generating function to force the situation in which the singularities, organised into the so-called singular series, predominate. The less wasteful the estimates used on the rest, the finer the results. As Bryan Birch has put it, the method is inherently wasteful. That does not apply to the case of the partition function, which signalled the possibility that in a favourable situation the losses from estimates could be controlled. === Vinogradov trigonometric sums === Later, I. M. Vinogradov extended the technique, replacing the exponential sum formulation f(z) with a finite Fourier series, so that the relevant integral In is a Fourier coefficient. Vinogradov applied finite sums to Waring's problem in 1926, and the general trigonometric sum method became known as "the circle method of Hardy, Littlewood and Ramanujan, in the form of Vinogradov's trigonometric sums". Essentially all this does is to discard the whole 'tail' of the generating function, allowing the business of r in the limiting operation to be set directly to the value 1. == Applications == Refinements of the method have allowed results to be proved about the solutions of homogeneous Diophantine equations, as long as the number of variables k is large relative to the degree d (see Birch's theorem for example). This turns out to be a contribution to the Hasse principle, capable of yielding quantitative information. If d is fixed and k is small, other methods are required, and indeed the Hasse principle tends to fail. 
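Before any arc decomposition is attempted, the basic residue identity from the outline above, I_n = ∮ f(z) z^(−(n+1)) dz = 2πi a_n, can be seen numerically: integrating the partition generating function over a circle of radius r < 1 recovers p(n). A sketch using plain trapezoidal quadrature (this is only the starting point, not the circle method proper):

```python
import cmath

def f(z, terms=200):
    # truncated partition generating function: prod_k 1/(1 - z^k)
    val = 1.0
    for k in range(1, terms + 1):
        val /= (1 - z ** k)
    return val

def coefficient(n, r=0.5, samples=4096):
    # a_n = (1/2*pi*i) * contour integral of f(z)/z^(n+1) over |z| = r,
    # by the trapezoid rule; dz = i*z*dtheta cancels one power of z
    total = 0.0
    for j in range(samples):
        z = r * cmath.exp(2j * cmath.pi * j / samples)
        total += f(z) / z ** n
    return (total / samples).real

print(round(coefficient(10)))   # 42  = p(10)
print(round(coefficient(20)))   # 627 = p(20)
```

Because the integrand is analytic on the circle of integration, the trapezoid rule converges very quickly here; the difficulty the circle method addresses only appears in the limit r → 1.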
== Rademacher's contour == In the special case when the circle method is applied to find the coefficients of a modular form of negative weight, Hans Rademacher found a modification of the contour that makes the series arising from the circle method converge to the exact result. To describe his contour, it is convenient to replace the unit circle by the upper half plane, by making the substitution z = exp(2πiτ), so that the contour integral becomes an integral from τ = i to τ = 1 + i. (The number i could be replaced by any number on the upper half-plane, but i is the most convenient choice.) Rademacher's contour is (more or less) given by the boundaries of all the Ford circles from 0 to 1. The replacement of the line from i to 1 + i by the boundaries of these circles is a non-trivial limiting process, which can be justified for modular forms that have negative weight, and with more care can also be justified for non-constant terms for the case of weight 0 (in other words modular functions). == References == Apostol, Tom M. (1990), Modular functions and Dirichlet series in number theory (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-97127-8. Mardzhanishvili, K. K. (1985), "Ivan Matveevich Vinogradov: a brief outline of his life and works", in I. M. Vinogradov, Selected Works, Berlin. Rademacher, Hans (1943), "On the expansion of the partition function in a series", Annals of Mathematics, Second Series, 44 (3): 416–422, doi:10.2307/1968973, JSTOR 1968973, MR 0008618. Vaughan, R. C. (1997), The Hardy–Littlewood Method, Cambridge Tracts in Mathematics, vol. 125 (2nd ed.), Cambridge University Press, ISBN 978-0-521-57347-4. == Further reading == Wang, Yuan (1991). Diophantine equations and inequalities in algebraic number fields. Berlin: Springer-Verlag. doi:10.1007/978-3-642-58171-7. ISBN 9783642634895. OCLC 851809136. == External links == Terence Tao, Heuristic limitations of the circle method, a blog post in 2012
In mathematics, an L-function is a meromorphic function on the complex plane, associated to one out of several categories of mathematical objects. An L-series is a Dirichlet series, usually convergent on a half-plane, that may give rise to an L-function via analytic continuation. The Riemann zeta function is an example of an L-function, and some important conjectures involving L-functions are the Riemann hypothesis and its generalizations. The theory of L-functions has become a very substantial, and still largely conjectural, part of contemporary analytic number theory. In it, broad generalisations of the Riemann zeta function and the L-series for a Dirichlet character are constructed, and their general properties, in most cases still out of reach of proof, are set out in a systematic way. Because of the Euler product formula there is a deep connection between L-functions and the theory of prime numbers. The mathematical field that studies L-functions is sometimes called analytic theory of L-functions. == Construction == We distinguish at the outset between the L-series, an infinite series representation (for example the Dirichlet series for the Riemann zeta function), and the L-function, the function in the complex plane that is its analytic continuation. The general constructions start with an L-series, defined first as a Dirichlet series, and then by an expansion as an Euler product indexed by prime numbers. Estimates are required to prove that this converges in some right half-plane of the complex numbers. Then one asks whether the function so defined can be analytically continued to the rest of the complex plane (perhaps with some poles). It is this (conjectural) meromorphic continuation to the complex plane which is called an L-function. In the classical cases, already, one knows that useful information is contained in the values and behaviour of the L-function at points where the series representation does not converge. The general term L-function here includes many known types of zeta functions. The Selberg class is an attempt to capture the core properties of L-functions in a set of axioms, thus encouraging the study of the properties of the class rather than of individual functions. == Conjectural information == One can list characteristics of known examples of L-functions that one would wish to see generalized: location of zeros and poles; functional equation, with respect to some vertical line Re(s) = constant; interesting values at integers related to quantities from algebraic K-theory. Detailed work has produced a large body of plausible conjectures, for example about the exact type of functional equation that should apply. Since the Riemann zeta function connects through its values at positive even integers (and negative odd integers) to the Bernoulli numbers, one looks for an appropriate generalisation of that phenomenon. In that case results have been obtained for p-adic L-functions, which describe certain Galois modules. The statistics of the zero distributions are of interest because of their connection to problems like the generalized Riemann hypothesis, distribution of prime numbers, etc. The connections with random matrix theory and quantum chaos are also of interest. The fractal structure of the distributions has been studied using rescaled range analysis. The self-similarity of the zero distribution is quite remarkable, and is characterized by a large fractal dimension of 1.9. 
This rather large fractal dimension is found over zeros covering at least fifteen orders of magnitude for the Riemann zeta function, and also for the zeros of other L-functions of different orders and conductors. == Birch and Swinnerton-Dyer conjecture == One of the influential examples, both for the history of the more general L-functions and as a still-open research problem, is the conjecture developed by Bryan Birch and Peter Swinnerton-Dyer in the early part of the 1960s. It applies to an elliptic curve E, and the problem it attempts to solve is the prediction of the rank of the elliptic curve over the rational numbers (or another global field): i.e. the number of free generators of its group of rational points. Much previous work in the area began to be unified around a better knowledge of L-functions. This was something like a paradigm example of the nascent theory of L-functions. == Rise of the general theory == This development preceded the Langlands program by a few years, and can be regarded as complementary to it: Langlands' work relates largely to Artin L-functions, which, like Hecke L-functions, were defined several decades earlier, and to L-functions attached to general automorphic representations. Gradually it became clearer in what sense the construction of Hasse–Weil zeta functions might be made to work to provide valid L-functions, in the analytic sense: there should be some input from analysis, which meant automorphic analysis. The general case now unifies at a conceptual level a number of different research programs. == External links == "LMFDB, the database of L-functions, modular forms, and related objects". Lavrik, A.F. (2001) [1994]. "L-function". Encyclopedia of Mathematics. EMS Press. Articles about a breakthrough third degree transcendental L-function: "Glimpses of a new (mathematical) world". Mathematics. Physorg.com. American Institute of Mathematics. March 13, 2008. Rehmeyer, Julie (April 2, 2008). "Creeping Up on Riemann". Science News. Archived from the original on February 16, 2012. Retrieved August 5, 2008. "Hunting the elusive L-function". Mathematics. Physorg.com. University of Bristol. August 6, 2008.
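As a concrete illustration of the relation between an L-series and its Euler product described in the Construction section above, the following minimal Python sketch (standard library only; the truncation limits are arbitrary choices of this sketch) compares the Dirichlet series and the Euler product of the simplest L-function, the Riemann zeta function, at s = 2, inside the half-plane of convergence:

    # Compare the Dirichlet series and the Euler product for zeta(s) at s = 2.
    # Both truncations (100000 terms / primes below 100000) are arbitrary.
    import math

    def zeta_series(s, terms=100000):
        # partial sum of the Dirichlet series  sum_n n^(-s)
        return sum(n**-s for n in range(1, terms + 1))

    def zeta_euler(s, limit=100000):
        # partial Euler product over primes p <= limit, found by a sieve
        sieve = [True] * (limit + 1)
        product = 1.0
        for p in range(2, limit + 1):
            if sieve[p]:
                product *= 1.0 / (1.0 - p**-s)
                for multiple in range(p * p, limit + 1, p):
                    sieve[multiple] = False
        return product

    s = 2.0
    print(zeta_series(s), zeta_euler(s), math.pi**2 / 6)
    # all three values agree to four or more decimal places

Below Re(s) = 1 the Dirichlet series no longer converges, which is exactly why the (in general still conjectural) analytic continuation is needed to define an L-function on the whole complex plane.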
In mathematics, a Diophantine equation is an equation, typically a polynomial equation in two or more unknowns with integer coefficients, for which only integer solutions are of interest. A linear Diophantine equation equates the sum of two or more unknowns, each multiplied by an integer coefficient, to a constant. An exponential Diophantine equation is one in which unknowns can appear in exponents. Diophantine problems have fewer equations than unknowns and involve finding integers that solve all equations simultaneously. Because such systems of equations define algebraic curves, algebraic surfaces, or, more generally, algebraic sets, their study is a part of algebraic geometry that is called Diophantine geometry. The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis. While individual equations present a kind of puzzle and have been considered throughout history, the formulation of general theories of Diophantine equations, beyond the case of linear and quadratic equations, was an achievement of the twentieth century. == Examples == In the following Diophantine equations, w, x, y, and z are the unknowns and the other letters are given constants: the linear equation ax + by = c; the Pell equation x² − ny² = ±1; the equation w³ + x³ = y³ + z³, famously satisfied by 1729 = 1³ + 12³ = 9³ + 10³; and the Fermat equation xⁿ + yⁿ = zⁿ, which for n > 2 has no solutions in positive integers. == Linear Diophantine equations == === One equation === The simplest linear Diophantine equation takes the form {\displaystyle ax+by=c,} where a, b and c are given integers. The solutions are described by the following theorem: This Diophantine equation has a solution (where x and y are integers) if and only if c is a multiple of the greatest common divisor of a and b. Moreover, if (x, y) is a solution, then the other solutions have the form (x + kv, y − ku), where k is an arbitrary integer, and u and v are the quotients of a and b (respectively) by the greatest common divisor of a and b. Proof: If d is this greatest common divisor, Bézout's identity asserts the existence of integers e and f such that ae + bf = d. If c is a multiple of d, then c = dh for some integer h, and (eh, fh) is a solution. On the other hand, for every pair of integers x and y, the greatest common divisor d of a and b divides ax + by. Thus, if the equation has a solution, then c must be a multiple of d. If a = ud and b = vd, then for every solution (x, y), we have {\displaystyle {\begin{aligned}a(x+kv)+b(y-ku)&=ax+by+k(av-bu)\\&=ax+by+k(udv-vdu)\\&=ax+by,\end{aligned}}} showing that (x + kv, y − ku) is another solution. Finally, given two solutions such that {\displaystyle ax_{1}+by_{1}=ax_{2}+by_{2}=c,} one deduces that {\displaystyle u(x_{2}-x_{1})+v(y_{2}-y_{1})=0.} As u and v are coprime, Euclid's lemma shows that v divides x₂ − x₁, and thus that there exists an integer k such that {\displaystyle x_{2}-x_{1}=kv,\quad y_{2}-y_{1}=-ku.} Therefore, {\displaystyle x_{2}=x_{1}+kv,\quad y_{2}=y_{1}-ku,} which completes the proof.
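The theorem and its proof translate directly into an algorithm. The following short Python sketch (standard library only; the sample coefficients 19, 8, 1 are an arbitrary choice, echoing the equation 19B − 8A = 1 that appears in a puzzle later in this article) computes a particular solution with the extended Euclidean algorithm and walks along the solution family (x + kv, y − ku):

    from math import gcd  # used only for a cross-check

    def extended_gcd(a, b):
        # returns (d, e, f) with a*e + b*f = d = gcd(a, b), for a, b >= 0
        if b == 0:
            return a, 1, 0
        d, e, f = extended_gcd(b, a % b)
        return d, f, e - (a // b) * f

    a, b, c = 19, 8, 1            # solve 19x + 8y = 1
    d, e, f = extended_gcd(a, b)
    assert d == gcd(a, b) and a * e + b * f == d
    assert c % d == 0             # solvable exactly when gcd(a, b) divides c
    h = c // d
    x0, y0 = e * h, f * h         # particular solution from Bezout's identity
    u, v = a // d, b // d
    for k in range(-2, 3):        # a slice of the infinite solution family
        x, y = x0 + k * v, y0 - k * u
        assert a * x + b * y == c
    print("particular solution:", (x0, y0))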
=== Chinese remainder theorem === The Chinese remainder theorem describes an important class of linear Diophantine systems of equations: let n 1 , … , n k {\displaystyle n_{1},\dots ,n_{k}} be k pairwise coprime integers greater than one, a 1 , … , a k {\displaystyle a_{1},\dots ,a_{k}} be k arbitrary integers, and N be the product n 1 ⋯ n k . {\displaystyle n_{1}\cdots n_{k}.} The Chinese remainder theorem asserts that the following linear Diophantine system has exactly one solution ( x , x 1 , … , x k ) {\displaystyle (x,x_{1},\dots ,x_{k})} such that 0 ≤ x < N, and that the other solutions are obtained by adding to x a multiple of N: x = a 1 + n 1 x 1 ⋮ x = a k + n k x k {\displaystyle {\begin{aligned}x&=a_{1}+n_{1}\,x_{1}\\&\;\;\vdots \\x&=a_{k}+n_{k}\,x_{k}\end{aligned}}} === System of linear Diophantine equations === More generally, every system of linear Diophantine equations may be solved by computing the Smith normal form of its matrix, in a way that is similar to the use of the reduced row echelon form to solve a system of linear equations over a field. Using matrix notation every system of linear Diophantine equations may be written A X = C , {\displaystyle AX=C,} where A is an m × n matrix of integers, X is an n × 1 column matrix of unknowns and C is an m × 1 column matrix of integers. The computation of the Smith normal form of A provides two unimodular matrices (that is matrices that are invertible over the integers and have ±1 as determinant) U and V of respective dimensions m × m and n × n, such that the matrix B = [ b i , j ] = U A V {\displaystyle B=[b_{i,j}]=UAV} is such that bi,i is not zero for i not greater than some integer k, and all the other entries are zero. The system to be solved may thus be rewritten as B ( V − 1 X ) = U C . {\displaystyle B(V^{-1}X)=UC.} Calling yi the entries of V−1X and di those of D = UC, this leads to the system b i , i y i = d i , 1 ≤ i ≤ k 0 y i = d i , k < i ≤ n . {\displaystyle {\begin{aligned}&b_{i,i}y_{i}=d_{i},\quad 1\leq i\leq k\\&0y_{i}=d_{i},\quad k<i\leq n.\end{aligned}}} This system is equivalent to the given one in the following sense: A column matrix of integers x is a solution of the given system if and only if x = Vy for some column matrix of integers y such that By = D. It follows that the system has a solution if and only if bi,i divides di for i ≤ k and di = 0 for i > k. If this condition is fulfilled, the solutions of the given system are V [ d 1 b 1 , 1 ⋮ d k b k , k h k + 1 ⋮ h n ] , {\displaystyle V\,{\begin{bmatrix}{\frac {d_{1}}{b_{1,1}}}\\\vdots \\{\frac {d_{k}}{b_{k,k}}}\\h_{k+1}\\\vdots \\h_{n}\end{bmatrix}}\,,} where hk+1, …, hn are arbitrary integers. Hermite normal form may also be used for solving systems of linear Diophantine equations. However, Hermite normal form does not directly provide the solutions; to get the solutions from the Hermite normal form, one has to successively solve several linear equations. Nevertheless, Richard Zippel wrote that the Smith normal form "is somewhat more than is actually needed to solve linear diophantine equations. Instead of reducing the equation to diagonal form, we only need to make it triangular, which is called the Hermite normal form. The Hermite normal form is substantially easier to compute than the Smith normal form." Integer linear programming amounts to finding some integer solutions (optimal in some sense) of linear systems that include also inequations. 
Thus systems of linear Diophantine equations are basic in this context, and textbooks on integer programming usually have a treatment of systems of linear Diophantine equations. == Homogeneous equations == A homogeneous Diophantine equation is a Diophantine equation that is defined by a homogeneous polynomial. A typical such equation is the equation of Fermat's Last Theorem {\displaystyle x^{d}+y^{d}-z^{d}=0.} As a homogeneous polynomial in n indeterminates defines a hypersurface in the projective space of dimension n − 1, solving a homogeneous Diophantine equation is the same as finding the rational points of a projective hypersurface. Solving a homogeneous Diophantine equation is generally a very difficult problem, even in the simplest non-trivial case of three indeterminates (in the case of two indeterminates the problem is equivalent to testing whether a rational number is the dth power of another rational number). A testament to the difficulty of the problem is Fermat's Last Theorem (for d > 2, there is no integer solution of the above equation), which needed more than three centuries of mathematicians' efforts before being solved. For degrees higher than three, most known results are theorems asserting that there are no solutions (for example Fermat's Last Theorem) or that the number of solutions is finite (for example Faltings's theorem). For degree three, there are general solving methods, which work on almost all equations that are encountered in practice, but no algorithm is known that works for every cubic equation. === Degree two === Homogeneous Diophantine equations of degree two are easier to solve. The standard solving method proceeds in two steps. One has first to find one solution, or to prove that there is no solution. When a solution has been found, all solutions are then deduced. For proving that there is no solution, one may reduce the equation modulo a conveniently chosen integer. For example, the Diophantine equation {\displaystyle x^{2}+y^{2}=3z^{2}} has no solution other than the trivial solution (0, 0, 0). In fact, by dividing x, y, and z by their greatest common divisor, one may suppose that they are coprime. The squares modulo 4 are congruent to 0 and 1. Thus the left-hand side of the equation is congruent to 0, 1, or 2, and the right-hand side is congruent to 0 or 3. The equality can therefore hold only if x, y, and z are all even, in which case they are not coprime. Hence the only solution is the trivial solution (0, 0, 0). This shows that there is no rational point on a circle of radius {\displaystyle {\sqrt {3}}} centered at the origin. More generally, the Hasse principle allows deciding whether a homogeneous Diophantine equation of degree two has an integer solution, and computing a solution if one exists. If a non-trivial integer solution is known, one may produce all other solutions in the following way. ==== Geometric interpretation ==== Let {\displaystyle Q(x_{1},\ldots ,x_{n})=0} be a homogeneous Diophantine equation, where {\displaystyle Q(x_{1},\ldots ,x_{n})} is a quadratic form (that is, a homogeneous polynomial of degree 2), with integer coefficients. The trivial solution is the solution where all {\displaystyle x_{i}} are zero.
If ( a 1 , … , a n ) {\displaystyle (a_{1},\ldots ,a_{n})} is a non-trivial integer solution of this equation, then ( a 1 , … , a n ) {\displaystyle \left(a_{1},\ldots ,a_{n}\right)} are the homogeneous coordinates of a rational point of the hypersurface defined by Q. Conversely, if ( p 1 q , … , p n q ) {\textstyle \left({\frac {p_{1}}{q}},\ldots ,{\frac {p_{n}}{q}}\right)} are homogeneous coordinates of a rational point of this hypersurface, where q , p 1 , … , p n {\displaystyle q,p_{1},\ldots ,p_{n}} are integers, then ( p 1 , … , p n ) {\displaystyle \left(p_{1},\ldots ,p_{n}\right)} is an integer solution of the Diophantine equation. Moreover, the integer solutions that define a given rational point are all sequences of the form ( k p 1 d , … , k p n d ) , {\displaystyle \left(k{\frac {p_{1}}{d}},\ldots ,k{\frac {p_{n}}{d}}\right),} where k is any integer, and d is the greatest common divisor of the p i . {\displaystyle p_{i}.} It follows that solving the Diophantine equation Q ( x 1 , … , x n ) = 0 {\displaystyle Q(x_{1},\ldots ,x_{n})=0} is completely reduced to finding the rational points of the corresponding projective hypersurface. ==== Parameterization ==== Let now A = ( a 1 , … , a n ) {\displaystyle A=\left(a_{1},\ldots ,a_{n}\right)} be an integer solution of the equation Q ( x 1 , … , x n ) = 0. {\displaystyle Q(x_{1},\ldots ,x_{n})=0.} As Q is a polynomial of degree two, a line passing through A crosses the hypersurface at a single other point, which is rational if and only if the line is rational (that is, if the line is defined by rational parameters). This allows parameterizing the hypersurface by the lines passing through A, and the rational points are those that are obtained from rational lines, that is, those that correspond to rational values of the parameters. More precisely, one may proceed as follows. By permuting the indices, one may suppose, without loss of generality that a n ≠ 0. {\displaystyle a_{n}\neq 0.} Then one may pass to the affine case by considering the affine hypersurface defined by q ( x 1 , … , x n − 1 ) = Q ( x 1 , … , x n − 1 , 1 ) , {\displaystyle q(x_{1},\ldots ,x_{n-1})=Q(x_{1},\ldots ,x_{n-1},1),} which has the rational point R = ( r 1 , … , r n − 1 ) = ( a 1 a n , … , a n − 1 a n ) . {\displaystyle R=(r_{1},\ldots ,r_{n-1})=\left({\frac {a_{1}}{a_{n}}},\ldots ,{\frac {a_{n-1}}{a_{n}}}\right).} If this rational point is a singular point, that is if all partial derivatives are zero at R, all lines passing through R are contained in the hypersurface, and one has a cone. The change of variables y i = x i − r i {\displaystyle y_{i}=x_{i}-r_{i}} does not change the rational points, and transforms q into a homogeneous polynomial in n − 1 variables. In this case, the problem may thus be solved by applying the method to an equation with fewer variables. If the polynomial q is a product of linear polynomials (possibly with non-rational coefficients), then it defines two hyperplanes. The intersection of these hyperplanes is a rational flat, and contains rational singular points. This case is thus a special instance of the preceding case. In the general case, consider the parametric equation of a line passing through R: x 2 = r 2 + t 2 ( x 1 − r 1 ) ⋮ x n − 1 = r n − 1 + t n − 1 ( x 1 − r 1 ) . {\displaystyle {\begin{aligned}x_{2}&=r_{2}+t_{2}(x_{1}-r_{1})\\&\;\;\vdots \\x_{n-1}&=r_{n-1}+t_{n-1}(x_{1}-r_{1}).\end{aligned}}} Substituting this in q, one gets a polynomial of degree two in x1, that is zero for x1 = r1. It is thus divisible by x1 − r1. 
The quotient is linear in x1, and may be solved for expressing x1 as a quotient of two polynomials of degree at most two in t 2 , … , t n − 1 , {\displaystyle t_{2},\ldots ,t_{n-1},} with integer coefficients: x 1 = f 1 ( t 2 , … , t n − 1 ) f n ( t 2 , … , t n − 1 ) . {\displaystyle x_{1}={\frac {f_{1}(t_{2},\ldots ,t_{n-1})}{f_{n}(t_{2},\ldots ,t_{n-1})}}.} Substituting this in the expressions for x 2 , … , x n − 1 , {\displaystyle x_{2},\ldots ,x_{n-1},} one gets, for i = 1, …, n − 1, x i = f i ( t 2 , … , t n − 1 ) f n ( t 2 , … , t n − 1 ) , {\displaystyle x_{i}={\frac {f_{i}(t_{2},\ldots ,t_{n-1})}{f_{n}(t_{2},\ldots ,t_{n-1})}},} where f 1 , … , f n {\displaystyle f_{1},\ldots ,f_{n}} are polynomials of degree at most two with integer coefficients. Then, one can return to the homogeneous case. Let, for i = 1, …, n, F i ( t 1 , … , t n − 1 ) = t 1 2 f i ( t 2 t 1 , … , t n − 1 t 1 ) , {\displaystyle F_{i}(t_{1},\ldots ,t_{n-1})=t_{1}^{2}f_{i}\left({\frac {t_{2}}{t_{1}}},\ldots ,{\frac {t_{n-1}}{t_{1}}}\right),} be the homogenization of f i . {\displaystyle f_{i}.} These quadratic polynomials with integer coefficients form a parameterization of the projective hypersurface defined by Q: x 1 = F 1 ( t 1 , … , t n − 1 ) ⋮ x n = F n ( t 1 , … , t n − 1 ) . {\displaystyle {\begin{aligned}x_{1}&=F_{1}(t_{1},\ldots ,t_{n-1})\\&\;\;\vdots \\x_{n}&=F_{n}(t_{1},\ldots ,t_{n-1}).\end{aligned}}} A point of the projective hypersurface defined by Q is rational if and only if it may be obtained from rational values of t 1 , … , t n − 1 . {\displaystyle t_{1},\ldots ,t_{n-1}.} As F 1 , … , F n {\displaystyle F_{1},\ldots ,F_{n}} are homogeneous polynomials, the point is not changed if all ti are multiplied by the same rational number. Thus, one may suppose that t 1 , … , t n − 1 {\displaystyle t_{1},\ldots ,t_{n-1}} are coprime integers. It follows that the integer solutions of the Diophantine equation are exactly the sequences ( x 1 , … , x n ) {\displaystyle (x_{1},\ldots ,x_{n})} where, for i = 1, ..., n, x i = k F i ( t 1 , … , t n − 1 ) d , {\displaystyle x_{i}=k\,{\frac {F_{i}(t_{1},\ldots ,t_{n-1})}{d}},} where k is an integer, t 1 , … , t n − 1 {\displaystyle t_{1},\ldots ,t_{n-1}} are coprime integers, and d is the greatest common divisor of the n integers F i ( t 1 , … , t n − 1 ) . {\displaystyle F_{i}(t_{1},\ldots ,t_{n-1}).} One could hope that the coprimality of the ti, could imply that d = 1. Unfortunately this is not the case, as shown in the next section. ==== Example of Pythagorean triples ==== The equation x 2 + y 2 − z 2 = 0 {\displaystyle x^{2}+y^{2}-z^{2}=0} is probably the first homogeneous Diophantine equation of degree two that has been studied. Its solutions are the Pythagorean triples. This is also the homogeneous equation of the unit circle. In this section, we show how the above method allows retrieving Euclid's formula for generating Pythagorean triples. For retrieving exactly Euclid's formula, we start from the solution (−1, 0, 1), corresponding to the point (−1, 0) of the unit circle. A line passing through this point may be parameterized by its slope: y = t ( x + 1 ) . {\displaystyle y=t(x+1).} Putting this in the circle equation x 2 + y 2 − 1 = 0 , {\displaystyle x^{2}+y^{2}-1=0,} one gets x 2 − 1 + t 2 ( x + 1 ) 2 = 0. {\displaystyle x^{2}-1+t^{2}(x+1)^{2}=0.} Dividing by x + 1, results in x − 1 + t 2 ( x + 1 ) = 0 , {\displaystyle x-1+t^{2}(x+1)=0,} which is easy to solve in x: x = 1 − t 2 1 + t 2 . 
{\displaystyle x={\frac {1-t^{2}}{1+t^{2}}}.} It follows {\displaystyle y=t(x+1)={\frac {2t}{1+t^{2}}}.} Homogenizing as described above one gets all solutions as {\displaystyle {\begin{aligned}x&=k\,{\frac {s^{2}-t^{2}}{d}}\\y&=k\,{\frac {2st}{d}}\\z&=k\,{\frac {s^{2}+t^{2}}{d}},\end{aligned}}} where k is any integer, s and t are coprime integers, and d is the greatest common divisor of the three numerators. In fact, d = 2 if s and t are both odd, and d = 1 if one is odd and the other is even. The primitive triples are the solutions where k = 1 and s > t > 0. This description of the solutions differs slightly from Euclid's formula because Euclid's formula considers only the solutions such that x, y, and z are all positive, and does not distinguish between two triples that differ by the exchange of x and y. (A short computational check of this parameterization is sketched below, after the historical subsections.) == Diophantine analysis == === Typical questions === The questions asked in Diophantine analysis include: Are there any solutions? Are there any solutions beyond some that are easily found by inspection? Are there finitely or infinitely many solutions? Can all solutions be found in theory? Can one in practice compute a full list of solutions? These traditional problems often lay unsolved for centuries, and mathematicians gradually came to understand their depth (in some cases), rather than treat them as puzzles. === Typical problem === The given information is that a father's age is 1 less than twice that of his son, and that the digits AB making up the father's age are reversed in the son's age (i.e. BA). This leads to the equation 10A + B = 2(10B + A) − 1, thus 19B − 8A = 1. Inspection gives the result A = 7, B = 3, and thus AB equals 73 years and BA equals 37 years. One may easily show that there is no other solution with A and B positive integers less than 10. Many well-known puzzles in the field of recreational mathematics lead to Diophantine equations. Examples include the cannonball problem, Archimedes's cattle problem and the monkey and the coconuts. === 17th and 18th centuries === In 1637, Pierre de Fermat scribbled on the margin of his copy of Arithmetica: "It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second into two like powers." Stated in more modern language, "The equation aⁿ + bⁿ = cⁿ has no solutions for any n higher than 2." Following this, he wrote: "I have discovered a truly marvelous proof of this proposition, which this margin is too narrow to contain." Such a proof eluded mathematicians for centuries, however, and as such his statement became famous as Fermat's Last Theorem. It was not until 1995 that it was proven by the British mathematician Andrew Wiles. In 1657, Fermat attempted to solve the Diophantine equation 61x² + 1 = y² (solved by Brahmagupta over 1000 years earlier). The equation was eventually solved by Euler in the early 18th century, who also solved a number of other Diophantine equations. The smallest solution of this equation in positive integers is x = 226153980, y = 1766319049 (see Chakravala method). === Hilbert's tenth problem === In 1900, David Hilbert proposed the solvability of all Diophantine equations as the tenth of his fundamental problems. In 1970, Yuri Matiyasevich solved it negatively, building on work of Julia Robinson, Martin Davis, and Hilary Putnam to prove that a general algorithm for solving all Diophantine equations cannot exist.
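As promised above, here is a short Python check of the Pythagorean parameterization (standard library only; the bound on s is an arbitrary choice): it enumerates coprime pairs s > t > 0, applies x = (s² − t²)/d, y = 2st/d, z = (s² + t²)/d with d as described, and verifies that each resulting triple is Pythagorean and primitive:

    from math import gcd

    triples = []
    for s in range(2, 8):
        for t in range(1, s):
            if gcd(s, t) != 1:
                continue   # the parameterization assumes s and t coprime
            d = 2 if (s % 2 == 1 and t % 2 == 1) else 1
            x, y, z = (s*s - t*t) // d, (2*s*t) // d, (s*s + t*t) // d
            assert x*x + y*y == z*z and gcd(gcd(x, y), z) == 1
            triples.append((x, y, z))
    print(triples)  # starts (3, 4, 5), (4, 3, 5), (5, 12, 13), (15, 8, 17), ...

Note that, exactly as remarked above, the output contains pairs such as (3, 4, 5) and (4, 3, 5) that differ only by the exchange of x and y, a distinction Euclid's formula does not make.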
=== Diophantine geometry === Diophantine geometry is the application of techniques from algebraic geometry to equations that also have a geometric meaning. The central idea of Diophantine geometry is that of a rational point, namely a solution to a polynomial equation or a system of polynomial equations, which is a vector in a prescribed field K, when K is not algebraically closed. === Modern research === The oldest general method for solving a Diophantine equation (or for proving that there is no solution) is the method of infinite descent, which was introduced by Pierre de Fermat. Another general method is the Hasse principle, which uses modular arithmetic modulo all prime numbers to look for the solutions. Despite many improvements these methods cannot solve most Diophantine equations. The difficulty of solving Diophantine equations is illustrated by Hilbert's tenth problem, which was set in 1900 by David Hilbert; it was to find an algorithm to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. Matiyasevich's theorem implies that such an algorithm cannot exist. During the 20th century, a new approach was deeply explored, consisting of using algebraic geometry. In fact, a Diophantine equation can be viewed as the equation of a hypersurface, and the solutions of the equation are the points of the hypersurface that have integer coordinates. This approach led eventually to the proof by Andrew Wiles in 1995 of Fermat's Last Theorem, stated without proof around 1637. This is another illustration of the difficulty of solving Diophantine equations. === Infinite Diophantine equations === An example of an infinite Diophantine equation is {\displaystyle n=a^{2}+2b^{2}+3c^{2}+4d^{2}+5e^{2}+\cdots ,} which can be expressed as "How many ways can a given integer n be written as the sum of a square plus twice a square plus thrice a square and so on?" The number of ways this can be done for each n forms an integer sequence. Infinite Diophantine equations are related to theta functions and infinite dimensional lattices. This equation always has a solution for any positive n. Compare this to {\displaystyle n=a^{2}+4b^{2}+9c^{2}+16d^{2}+25e^{2}+\cdots ,} which does not always have a solution for positive n. == Exponential Diophantine equations == If a Diophantine equation has an additional variable or variables occurring as exponents, it is an exponential Diophantine equation. Examples include: the Ramanujan–Nagell equation, 2ⁿ − 7 = x²; the equation of the Fermat–Catalan conjecture and Beal's conjecture, aᵐ + bⁿ = cᵏ, with inequality restrictions on the exponents; and the Erdős–Moser equation, 1ᵏ + 2ᵏ + ⋯ + (m − 1)ᵏ = mᵏ. A general theory for such equations is not available; particular cases such as Catalan's conjecture and Fermat's Last Theorem have been tackled. However, the majority are solved via ad hoc methods such as Størmer's theorem or even trial and error; a short computer search for the Ramanujan–Nagell equation is sketched at the end of this article. == See also == Kuṭṭaka, Aryabhata's algorithm for solving linear Diophantine equations in two unknowns == Further reading == Bachmakova, Isabelle (1966). "Diophante et Fermat". Revue d'Histoire des Sciences et de Leurs Applications. 19 (4): 289–306. doi:10.3406/rhs.1966.2507. JSTOR 23905707. Bashmakova, Izabella G. Diophantus and Diophantine Equations. Moscow: Nauka 1972 [in Russian]. German translation: Diophant und diophantische Gleichungen. Birkhauser, Basel/Stuttgart, 1974. English translation: Diophantus and Diophantine Equations. Translated by Abe Shenitzer with the editorial assistance of Hardy Grant and updated by Joseph Silverman. The Dolciani Mathematical Expositions, 20. Mathematical Association of America, Washington, DC. 1997. Bashmakova, Izabella G. "Arithmetic of Algebraic Curves from Diophantus to Poincaré", Historia Mathematica 8 (1981), 393–416. Bashmakova, Izabella G.; Slavutin, E. I. History of Diophantine Analysis from Diophantus to Fermat. Moscow: Nauka 1984 [in Russian]. Bashmakova, Izabella G. "Diophantine Equations and the Evolution of Algebra", American Mathematical Society Translations 147 (2), 1990, pp. 85–100. Translated by A. Shenitzer and H. Grant. Dickson, Leonard Eugene (2005) [1920]. History of the Theory of Numbers. Volume II: Diophantine analysis. Mineola, NY: Dover Publications. ISBN 978-0-486-44233-4. MR 0245500. Zbl 1214.11002. Grechuk, Bogdan (2024). Polynomial Diophantine Equations: A Systematic Approach. Springer. Rashed, Roshdi; Houzel, Christian (2013). Les "Arithmétiques" de Diophante. doi:10.1515/9783110336481. ISBN 978-3-11-033593-4. Rashed, Roshdi. Histoire de l'analyse diophantienne classique: D'Abū Kāmil à Fermat. Berlin, New York: Walter de Gruyter. == External links == Diophantine Equation. From MathWorld at Wolfram Research. "Diophantine equations", Encyclopedia of Mathematics, EMS Press, 2001 [1994]. Dario Alpern's Online Calculator. Retrieved 18 March 2009.
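As an illustration of the trial-and-error approach mentioned in the section on exponential Diophantine equations above, the following Python sketch (standard library only; the search bound 200 is arbitrary, and a finite search of course proves nothing about larger n) looks for solutions of the Ramanujan–Nagell equation 2ⁿ − 7 = x²:

    # brute-force search for 2**n - 7 = x**2 over a finite range of n
    from math import isqrt

    solutions = []
    for n in range(3, 200):        # need 2**n >= 7 for the square root
        m = 2**n - 7
        x = isqrt(m)
        if x * x == m:
            solutions.append((n, x))
    print(solutions)  # [(3, 1), (4, 3), (5, 5), (7, 11), (15, 181)]

The search finds n = 3, 4, 5, 7, 15 (with x = 1, 3, 5, 11, 181); a theorem of Nagell shows that these are in fact the only solutions.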
A fast Fourier transform (FFT) is an algorithm that computes the discrete Fourier transform (DFT) of a sequence, or its inverse (IDFT). A Fourier transform converts a signal from its original domain (often time or space) to a representation in the frequency domain and vice versa. The DFT is obtained by decomposing a sequence of values into components of different frequencies. This operation is useful in many fields, but computing it directly from the definition is often too slow to be practical. An FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse (mostly zero) factors. As a result, it manages to reduce the complexity of computing the DFT from {\textstyle O(n^{2})}, which arises if one simply applies the definition of DFT, to {\textstyle O(n\log n)}, where n is the data size. The difference in speed can be enormous, especially for long data sets where n may be in the thousands or millions. Because the FFT is merely an algebraic refactoring of terms within the DFT, the DFT and the FFT both perform mathematically equivalent and interchangeable operations, assuming that all terms are computed with infinite precision. However, in the presence of round-off error, many FFT algorithms are much more accurate than evaluating the DFT definition directly or indirectly. Fast Fourier transforms are widely used for applications in engineering, music, science, and mathematics. The basic ideas were popularized in 1965, but some algorithms had been derived as early as 1805. In 1994, Gilbert Strang described the FFT as "the most important numerical algorithm of our lifetime", and it was included in Top 10 Algorithms of 20th Century by the IEEE magazine Computing in Science & Engineering. There are many different FFT algorithms based on a wide range of published theories, from simple complex-number arithmetic to group theory and number theory. The best-known FFT algorithms depend upon the factorization of n, but there are FFTs with {\displaystyle O(n\log n)} complexity for all, even prime, n. Many FFT algorithms depend only on the fact that {\textstyle e^{-2\pi i/n}} is a primitive n'th root of unity, and thus can be applied to analogous transforms over any finite field, such as number-theoretic transforms. Since the inverse DFT is the same as the DFT, but with the opposite sign in the exponent and a 1/n factor, any FFT algorithm can easily be adapted for it. == History == The development of fast algorithms for DFT was prefigured in Carl Friedrich Gauss's unpublished 1805 work on the orbits of asteroids Pallas and Juno. Gauss wanted to interpolate the orbits from sample observations; his method was very similar to the one that would be published in 1965 by James Cooley and John Tukey, who are generally credited for the invention of the modern generic FFT algorithm. While Gauss's work predated even Joseph Fourier's 1822 results, he did not analyze the method's complexity, and eventually used other methods to achieve the same end. Between 1805 and 1965, some versions of FFT were published by other authors. Frank Yates in 1932 published his version called interaction algorithm, which provided efficient computation of Hadamard and Walsh transforms. Yates' algorithm is still used in the field of statistical design and analysis of experiments. In 1942, G. C.
Danielson and Cornelius Lanczos published their version to compute DFT for x-ray crystallography, a field where calculation of Fourier transforms presented a formidable bottleneck. While many methods in the past had focused on reducing the constant factor for O ( n 2 ) {\textstyle O(n^{2})} computation by taking advantage of symmetries, Danielson and Lanczos realized that one could use the periodicity and apply a doubling trick to "double [n] with only slightly more than double the labor", though like Gauss they did not do the analysis to discover that this led to O ( n log ⁡ n ) {\textstyle O(n\log n)} scaling. In 1958, I. J. Good published a paper establishing the prime-factor FFT algorithm that applies to discrete Fourier transforms of size n = n 1 n 2 {\textstyle n=n_{1}n_{2}} , where n 1 {\displaystyle n_{1}} and n 2 {\displaystyle n_{2}} are coprime. James Cooley and John Tukey independently rediscovered these earlier algorithms and published a more general FFT in 1965 that is applicable when n is composite and not necessarily a power of 2, as well as analyzing the O ( n log ⁡ n ) {\textstyle O(n\log n)} scaling. Tukey came up with the idea during a meeting of President Kennedy's Science Advisory Committee where a discussion topic involved detecting nuclear tests by the Soviet Union by setting up sensors to surround the country from outside. To analyze the output of these sensors, an FFT algorithm would be needed. In discussion with Tukey, Richard Garwin recognized the general applicability of the algorithm not just to national security problems, but also to a wide range of problems including one of immediate interest to him, determining the periodicities of the spin orientations in a 3-D crystal of Helium-3. Garwin gave Tukey's idea to Cooley (both worked at IBM's Watson labs) for implementation. Cooley and Tukey published the paper in a relatively short time of six months. As Tukey did not work at IBM, the patentability of the idea was doubted and the algorithm went into the public domain, which, through the computing revolution of the next decade, made FFT one of the indispensable algorithms in digital signal processing. == Definition == Let x 0 , … , x n − 1 {\displaystyle x_{0},\ldots ,x_{n-1}} be complex numbers. The DFT is defined by the formula X k = ∑ m = 0 n − 1 x m e − i 2 π k m / n k = 0 , … , n − 1 , {\displaystyle X_{k}=\sum _{m=0}^{n-1}x_{m}e^{-i2\pi km/n}\qquad k=0,\ldots ,n-1,} where e i 2 π / n {\displaystyle e^{i2\pi /n}} is a primitive n'th root of 1. Evaluating this definition directly requires O ( n 2 ) {\textstyle O(n^{2})} operations: there are n outputs Xk , and each output requires a sum of n terms. An FFT is any method to compute the same results in O ( n log ⁡ n ) {\textstyle O(n\log n)} operations. All known FFT algorithms require O ( n log ⁡ n ) {\textstyle O(n\log n)} operations, although there is no known proof that lower complexity is impossible. To illustrate the savings of an FFT, consider the count of complex multiplications and additions for n = 4096 {\textstyle n=4096} data points. Evaluating the DFT's sums directly involves n 2 {\textstyle n^{2}} complex multiplications and n ( n − 1 ) {\textstyle n(n-1)} complex additions, of which O ( n ) {\textstyle O(n)} operations can be saved by eliminating trivial operations such as multiplications by 1, leaving about 30 million operations. 
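To make the operation counts concrete, here is a minimal Python sketch (standard library only, deliberately unoptimized; the test length n = 8 is an arbitrary choice) of both the direct O(n²) evaluation of the definition and the radix-2 recursion discussed in the next paragraph:

    # Direct O(n^2) evaluation of the DFT definition versus the recursive
    # radix-2 Cooley-Tukey FFT (n must be a power of two for fft_radix2).
    import cmath

    def dft_direct(x):
        n = len(x)
        return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                    for m in range(n))
                for k in range(n)]

    def fft_radix2(x):
        n = len(x)
        if n == 1:
            return list(x)
        even = fft_radix2(x[0::2])   # size-n/2 DFT of even-indexed samples
        odd = fft_radix2(x[1::2])    # size-n/2 DFT of odd-indexed samples
        out = [0j] * n
        for k in range(n // 2):
            t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
            out[k] = even[k] + t
            out[k + n // 2] = even[k] - t
        return out

    x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
    assert all(abs(a - b) < 1e-9
               for a, b in zip(dft_direct(x), fft_radix2(x)))

Timing the two functions for growing power-of-two n reproduces the O(n²) versus O(n log n) gap described in the text.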
In contrast, the radix-2 Cooley–Tukey algorithm, for n a power of 2, can compute the same result with only {\textstyle (n/2)\log _{2}(n)} complex multiplications (again, ignoring simplifications of multiplications by 1 and similar) and {\textstyle n\log _{2}(n)} complex additions, in total about 30,000 operations, a thousand times less than with direct evaluation. In practice, actual performance on modern computers is usually dominated by factors other than the speed of arithmetic operations and the analysis is a complicated subject (for example, see Frigo & Johnson, 2005), but the overall improvement from {\textstyle O(n^{2})} to {\textstyle O(n\log n)} remains. == Algorithms == === Cooley–Tukey algorithm === By far the most commonly used FFT is the Cooley–Tukey algorithm. This is a divide-and-conquer algorithm that recursively breaks down a DFT of any composite size {\textstyle n=n_{1}n_{2}} into {\textstyle n_{1}} smaller DFTs of size {\textstyle n_{2}}, along with {\displaystyle O(n)} multiplications by complex roots of unity traditionally called twiddle factors (after Gentleman and Sande, 1966). This method (and the general idea of an FFT) was popularized by a publication of Cooley and Tukey in 1965, but it was later discovered that those two authors had independently re-invented an algorithm known to Carl Friedrich Gauss around 1805 (and subsequently rediscovered several times in limited forms). The best known use of the Cooley–Tukey algorithm is to divide the transform into two pieces of size n/2 at each step, and is therefore limited to power-of-two sizes, but any factorization can be used in general (as was known to both Gauss and Cooley/Tukey). These are called the radix-2 and mixed-radix cases, respectively (and other variants such as the split-radix FFT have their own names as well). Although the basic idea is recursive, most traditional implementations rearrange the algorithm to avoid explicit recursion. Also, because the Cooley–Tukey algorithm breaks the DFT into smaller DFTs, it can be combined arbitrarily with any other algorithm for the DFT, such as those described below. === Other FFT algorithms === There are FFT algorithms other than Cooley–Tukey. For {\textstyle n=n_{1}n_{2}} with coprime {\textstyle n_{1}} and {\textstyle n_{2}}, one can use the prime-factor (Good–Thomas) algorithm (PFA), based on the Chinese remainder theorem, to factorize the DFT similarly to Cooley–Tukey but without the twiddle factors. The Rader–Brenner algorithm (1976) is a Cooley–Tukey-like factorization but with purely imaginary twiddle factors, reducing multiplications at the cost of increased additions and reduced numerical stability; it was later superseded by the split-radix variant of Cooley–Tukey (which achieves the same multiplication count but with fewer additions and without sacrificing accuracy). Algorithms that recursively factorize the DFT into smaller operations other than DFTs include the Bruun and QFT algorithms. (The Rader–Brenner and QFT algorithms were proposed for power-of-two sizes, but it is possible that they could be adapted to general composite n. Bruun's algorithm applies to arbitrary even composite sizes.)
Bruun's algorithm, in particular, is based on interpreting the FFT as a recursive factorization of the polynomial z n − 1 {\displaystyle z^{n}-1} , here into real-coefficient polynomials of the form z m − 1 {\displaystyle z^{m}-1} and z 2 m + a z m + 1 {\displaystyle z^{2m}+az^{m}+1} . Another polynomial viewpoint is exploited by the Winograd FFT algorithm, which factorizes z n − 1 {\displaystyle z^{n}-1} into cyclotomic polynomials—these often have coefficients of 1, 0, or −1, and therefore require few (if any) multiplications, so Winograd can be used to obtain minimal-multiplication FFTs and is often used to find efficient algorithms for small factors. Indeed, Winograd showed that the DFT can be computed with only O ( n ) {\displaystyle O(n)} irrational multiplications, leading to a proven achievable lower bound on the number of multiplications for power-of-two sizes; this comes at the cost of many more additions, a tradeoff no longer favorable on modern processors with hardware multipliers. In particular, Winograd also makes use of the PFA as well as an algorithm by Rader for FFTs of prime sizes. Rader's algorithm, exploiting the existence of a generator for the multiplicative group modulo prime n, expresses a DFT of prime size n as a cyclic convolution of (composite) size n – 1, which can then be computed by a pair of ordinary FFTs via the convolution theorem (although Winograd uses other convolution methods). Another prime-size FFT is due to L. I. Bluestein, and is sometimes called the chirp-z algorithm; it also re-expresses a DFT as a convolution, but this time of the same size (which can be zero-padded to a power of two and evaluated by radix-2 Cooley–Tukey FFTs, for example), via the identity n k = − ( k − n ) 2 2 + n 2 2 + k 2 2 . {\displaystyle nk=-{\frac {(k-n)^{2}}{2}}+{\frac {n^{2}}{2}}+{\frac {k^{2}}{2}}.} Hexagonal fast Fourier transform (HFFT) aims at computing an efficient FFT for the hexagonally-sampled data by using a new addressing scheme for hexagonal grids, called Array Set Addressing (ASA). == FFT algorithms specialized for real or symmetric data == In many applications, the input data for the DFT are purely real, in which case the outputs satisfy the symmetry X n − k = X k ∗ {\displaystyle X_{n-k}=X_{k}^{*}} and efficient FFT algorithms have been designed for this situation (see e.g. Sorensen, 1987). One approach consists of taking an ordinary algorithm (e.g. Cooley–Tukey) and removing the redundant parts of the computation, saving roughly a factor of two in time and memory. Alternatively, it is possible to express an even-length real-input DFT as a complex DFT of half the length (whose real and imaginary parts are the even/odd elements of the original real data), followed by O ( n ) {\displaystyle O(n)} post-processing operations. It was once believed that real-input DFTs could be more efficiently computed by means of the discrete Hartley transform (DHT), but it was subsequently argued that a specialized real-input DFT algorithm (FFT) can typically be found that requires fewer operations than the corresponding DHT algorithm (FHT) for the same number of inputs. Bruun's algorithm (above) is another method that was initially proposed to take advantage of real inputs, but it has not proved popular. There are further FFT specializations for the cases of real data that have even/odd symmetry, in which case one can gain another factor of roughly two in time and memory and the DFT becomes the discrete cosine/sine transform(s) (DCT/DST). 
Instead of directly modifying an FFT algorithm for these cases, DCTs/DSTs can also be computed via FFTs of real data combined with O ( n ) {\displaystyle O(n)} pre- and post-processing. == Computational issues == === Bounds on complexity and operation counts === A fundamental question of longstanding theoretical interest is to prove lower bounds on the complexity and exact operation counts of fast Fourier transforms, and many open problems remain. It is not rigorously proved whether DFTs truly require Ω ( n log ⁡ n ) {\textstyle \Omega (n\log n)} (i.e., order n log ⁡ n {\displaystyle n\log n} or greater) operations, even for the simple case of power of two sizes, although no algorithms with lower complexity are known. In particular, the count of arithmetic operations is usually the focus of such questions, although actual performance on modern-day computers is determined by many other factors such as cache or CPU pipeline optimization. Following work by Shmuel Winograd (1978), a tight Θ ( n ) {\displaystyle \Theta (n)} lower bound is known for the number of real multiplications required by an FFT. It can be shown that only 4 n − 2 log 2 2 ⁡ ( n ) − 2 log 2 ⁡ ( n ) − 4 {\textstyle 4n-2\log _{2}^{2}(n)-2\log _{2}(n)-4} irrational real multiplications are required to compute a DFT of power-of-two length n = 2 m {\displaystyle n=2^{m}} . Moreover, explicit algorithms that achieve this count are known (Heideman & Burrus, 1986; Duhamel, 1990). However, these algorithms require too many additions to be practical, at least on modern computers with hardware multipliers (Duhamel, 1990; Frigo & Johnson, 2005). A tight lower bound is not known on the number of required additions, although lower bounds have been proved under some restrictive assumptions on the algorithms. In 1973, Morgenstern proved an Ω ( n log ⁡ n ) {\displaystyle \Omega (n\log n)} lower bound on the addition count for algorithms where the multiplicative constants have bounded magnitudes (which is true for most but not all FFT algorithms). Pan (1986) proved an Ω ( n log ⁡ n ) {\displaystyle \Omega (n\log n)} lower bound assuming a bound on a measure of the FFT algorithm's asynchronicity, but the generality of this assumption is unclear. For the case of power-of-two n, Papadimitriou (1979) argued that the number n log 2 ⁡ n {\textstyle n\log _{2}n} of complex-number additions achieved by Cooley–Tukey algorithms is optimal under certain assumptions on the graph of the algorithm (his assumptions imply, among other things, that no additive identities in the roots of unity are exploited). (This argument would imply that at least 2 N log 2 ⁡ N {\textstyle 2N\log _{2}N} real additions are required, although this is not a tight bound because extra additions are required as part of complex-number multiplications.) Thus far, no published FFT algorithm has achieved fewer than n log 2 ⁡ n {\textstyle n\log _{2}n} complex-number additions (or their equivalent) for power-of-two n. A third problem is to minimize the total number of real multiplications and additions, sometimes called the arithmetic complexity (although in this context it is the exact count and not the asymptotic complexity that is being considered). Again, no tight lower bound has been proven. Since 1968, however, the lowest published count for power-of-two n was long achieved by the split-radix FFT algorithm, which requires 4 n log 2 ⁡ ( n ) − 6 n + 8 {\textstyle 4n\log _{2}(n)-6n+8} real multiplications and additions for n > 1. 
This was recently reduced to {\textstyle \sim {\frac {34}{9}}n\log _{2}n} (Johnson and Frigo, 2007; Lundy and Van Buskirk, 2007). A slightly larger count (but still better than split radix for n ≥ 256) was shown to be provably optimal for n ≤ 512 under additional restrictions on the possible algorithms (split-radix-like flowgraphs with unit-modulus multiplicative factors), by reduction to a satisfiability modulo theories problem solvable by brute force (Haynal & Haynal, 2011). Most of the attempts to lower or prove the complexity of FFT algorithms have focused on the ordinary complex-data case, because it is the simplest. However, complex-data FFTs are so closely related to algorithms for related problems such as real-data FFTs, discrete cosine transforms, discrete Hartley transforms, and so on, that any improvement in one of these would immediately lead to improvements in the others (Duhamel & Vetterli, 1990). === Approximations === All of the FFT algorithms discussed above compute the DFT exactly (i.e. neglecting floating-point errors). A few FFT algorithms have been proposed, however, that compute the DFT approximately, with an error that can be made arbitrarily small at the expense of increased computations. Such algorithms trade the approximation error for increased speed or other properties. For example, an approximate FFT algorithm by Edelman et al. (1999) achieves lower communication requirements for parallel computing with the help of a fast multipole method. A wavelet-based approximate FFT by Guo and Burrus (1996) takes sparse inputs/outputs (time/frequency localization) into account more efficiently than is possible with an exact FFT. Another algorithm for approximate computation of a subset of the DFT outputs is due to Shentov et al. (1995). The Edelman algorithm works equally well for sparse and non-sparse data, since it is based on the compressibility (rank deficiency) of the Fourier matrix itself rather than the compressibility (sparsity) of the data. Conversely, if the data are sparse, that is, if only k out of n Fourier coefficients are nonzero, then the complexity can be reduced to {\displaystyle O(k\log n\log(n/k))}, and this has been demonstrated to lead to practical speedups compared to an ordinary FFT for n/k > 32 in a large-n example (n = 2²²) using a probabilistic approximate algorithm (which estimates the largest k coefficients to several decimal places). === Accuracy === FFT algorithms have errors when finite-precision floating-point arithmetic is used, but these errors are typically quite small; most FFT algorithms, e.g. Cooley–Tukey, have excellent numerical properties as a consequence of the pairwise summation structure of the algorithms. The upper bound on the relative error for the Cooley–Tukey algorithm is {\textstyle O(\varepsilon \log n)}, compared to {\textstyle O(\varepsilon n^{3/2})} for the naïve DFT formula, where ε is the machine floating-point relative precision. In fact, the root mean square (rms) errors are much better than these upper bounds, being only {\textstyle O(\varepsilon {\sqrt {\log n}})} for Cooley–Tukey and {\textstyle O(\varepsilon {\sqrt {n}})} for the naïve DFT (Schatzman, 1996). These results, however, are very sensitive to the accuracy of the twiddle factors used in the FFT (i.e. the trigonometric function values), and it is not unusual for incautious FFT implementations to have much worse accuracy, e.g.
if they use inaccurate trigonometric recurrence formulas. Some FFTs other than Cooley–Tukey, such as the Rader–Brenner algorithm, are intrinsically less stable. In fixed-point arithmetic, the finite-precision errors accumulated by FFT algorithms are worse, with rms errors growing as O ( n ) {\textstyle O({\sqrt {n}})} for the Cooley–Tukey algorithm (Welch, 1969). Achieving this accuracy requires careful attention to scaling to minimize loss of precision, and fixed-point FFT algorithms involve rescaling at each intermediate stage of decompositions like Cooley–Tukey. To verify the correctness of an FFT implementation, rigorous guarantees can be obtained in O ( n log ⁡ n ) {\textstyle O(n\log n)} time by a simple procedure checking the linearity, impulse-response, and time-shift properties of the transform on random inputs (Ergün, 1995). The values for intermediate frequencies may be obtained by various averaging methods. == Multidimensional FFTs == As defined in the multidimensional DFT article, the multidimensional DFT X k = ∑ n = 0 N − 1 e − 2 π i k ⋅ ( n / N ) x n {\displaystyle X_{\mathbf {k} }=\sum _{\mathbf {n} =0}^{\mathbf {N} -1}e^{-2\pi i\mathbf {k} \cdot (\mathbf {n} /\mathbf {N} )}x_{\mathbf {n} }} transforms an array xn with a d-dimensional vector of indices n = ( n 1 , … , n d ) {\textstyle \mathbf {n} =\left(n_{1},\ldots ,n_{d}\right)} by a set of d nested summations (over n j = 0 … N j − 1 {\textstyle n_{j}=0\ldots N_{j}-1} for each j), where the division n / N = ( n 1 / N 1 , … , n d / N d ) {\textstyle \mathbf {n} /\mathbf {N} =\left(n_{1}/N_{1},\ldots ,n_{d}/N_{d}\right)} is performed element-wise. Equivalently, it is the composition of a sequence of d sets of one-dimensional DFTs, performed along one dimension at a time (in any order). This compositional viewpoint immediately provides the simplest and most common multidimensional DFT algorithm, known as the row-column algorithm (after the two-dimensional case, below). That is, one simply performs a sequence of d one-dimensional FFTs (by any of the above algorithms): first you transform along the n1 dimension, then along the n2 dimension, and so on (actually, any ordering works). This method is easily shown to have the usual O ( n log ⁡ n ) {\textstyle O(n\log n)} complexity, where n = n 1 ⋅ n 2 ⋯ n d {\textstyle n=n_{1}\cdot n_{2}\cdots n_{d}} is the total number of data points transformed. In particular, there are n/n1 transforms of size n1, etc., so the complexity of the sequence of FFTs is: n n 1 O ( n 1 log ⁡ n 1 ) + ⋯ + n n d O ( n d log ⁡ n d ) = O ( n [ log ⁡ n 1 + ⋯ + log ⁡ n d ] ) = O ( n log ⁡ n ) . {\displaystyle {\begin{aligned}&{\frac {n}{n_{1}}}O(n_{1}\log n_{1})+\cdots +{\frac {n}{n_{d}}}O(n_{d}\log n_{d})\\[6pt]={}&O\left(n\left[\log n_{1}+\cdots +\log n_{d}\right]\right)=O(n\log n).\end{aligned}}} In two dimensions, the xk can be viewed as an n 1 × n 2 {\displaystyle n_{1}\times n_{2}} matrix, and this algorithm corresponds to first performing the FFT of all the rows (resp. columns), grouping the resulting transformed rows (resp. columns) together as another n 1 × n 2 {\displaystyle n_{1}\times n_{2}} matrix, and then performing the FFT on each of the columns (resp. rows) of this second matrix, and similarly grouping the results into the final result matrix. In more than two dimensions, it is often advantageous for cache locality to group the dimensions recursively. 
For example, a three-dimensional FFT might first perform two-dimensional FFTs of each planar slice for each fixed n1, and then perform the one-dimensional FFTs along the n1 direction. More generally, an asymptotically optimal cache-oblivious algorithm consists of recursively dividing the dimensions into two groups ( n 1 , … , n d / 2 ) {\textstyle (n_{1},\ldots ,n_{d/2})} and ( n d / 2 + 1 , … , n d ) {\textstyle (n_{d/2+1},\ldots ,n_{d})} that are transformed recursively (rounding if d is not even) (see Frigo and Johnson, 2005). Still, this remains a straightforward variation of the row-column algorithm that ultimately requires only a one-dimensional FFT algorithm as the base case, and still has O ( n log ⁡ n ) {\displaystyle O(n\log n)} complexity. Yet another variation is to perform matrix transpositions in between transforming subsequent dimensions, so that the transforms operate on contiguous data; this is especially important for out-of-core and distributed memory situations where accessing non-contiguous data is extremely time-consuming. There are other multidimensional FFT algorithms that are distinct from the row-column algorithm, although all of them have O ( n log ⁡ n ) {\textstyle O(n\log n)} complexity. Perhaps the simplest non-row-column FFT is the vector-radix FFT algorithm, which is a generalization of the ordinary Cooley–Tukey algorithm where one divides the transform dimensions by a vector r = ( r 1 , r 2 , … , r d ) {\textstyle \mathbf {r} =\left(r_{1},r_{2},\ldots ,r_{d}\right)} of radices at each step. (This may also have cache benefits.) The simplest case of vector-radix is where all of the radices are equal (e.g. vector-radix-2 divides all of the dimensions by two), but this is not necessary. Vector radix with only a single non-unit radix at a time, i.e. r = ( 1 , … , 1 , r , 1 , … , 1 ) {\textstyle \mathbf {r} =\left(1,\ldots ,1,r,1,\ldots ,1\right)} , is essentially a row-column algorithm. Other, more complicated, methods include polynomial transform algorithms due to Nussbaumer (1977), which view the transform in terms of convolutions and polynomial products. See Duhamel and Vetterli (1990) for more information and references. == Other generalizations == An O ( n 5 / 2 log ⁡ n ) {\textstyle O(n^{5/2}\log n)} generalization to spherical harmonics on the sphere S2 with n2 nodes was described by Mohlenkamp, along with an algorithm conjectured (but not proven) to have O ( n 2 log 2 ⁡ ( n ) ) {\textstyle O(n^{2}\log ^{2}(n))} complexity; Mohlenkamp also provides an implementation in the libftsh library. A spherical-harmonic algorithm with O ( n 2 log ⁡ n ) {\textstyle O(n^{2}\log n)} complexity is described by Rokhlin and Tygert. The fast folding algorithm is analogous to the FFT, except that it operates on a series of binned waveforms rather than a series of real or complex scalar values. Rotation (which in the FFT is multiplication by a complex phasor) is a circular shift of the component waveform. Various groups have also published FFT algorithms for non-equispaced data, as reviewed in Potts et al. (2001). Such algorithms do not strictly compute the DFT (which is only defined for equispaced data), but rather some approximation thereof (a non-uniform discrete Fourier transform, or NDFT, which itself is often computed only approximately). More generally there are various other methods of spectral estimation. == Applications == The FFT is used in digital recording, sampling, additive synthesis and pitch correction software. 
The FFT's importance derives from the fact that it has made working in the frequency domain as computationally feasible as working in the temporal or spatial domain. Some of the important applications of the FFT include: fast large-integer multiplication algorithms and polynomial multiplication (a minimal sketch appears just before the See also section below), efficient matrix–vector multiplication for Toeplitz, circulant and other structured matrices, filtering algorithms (see overlap–add and overlap–save methods), fast algorithms for discrete cosine or sine transforms (e.g. fast DCT used for JPEG and MPEG/MP3 encoding and decoding), fast Chebyshev approximation, solving difference equations, computation of isotopic distributions, and modulation and demodulation of complex data symbols using orthogonal frequency-division multiplexing (OFDM) for 5G, LTE, Wi-Fi, DSL, and other modern communication systems. An original application of the FFT in finance, particularly in the valuation of options, was developed by Marcello Minenna. == Alternatives == The FFT can be a poor choice for analyzing signals with non-stationary frequency content—where the frequency characteristics change over time. DFTs provide a global frequency estimate, assuming that all frequency components are present throughout the entire signal, which makes it challenging to detect short-lived or transient features within signals. For cases where frequency information appears briefly in the signal or generally varies over time, alternatives like the short-time Fourier transform, discrete wavelet transforms, or discrete Hilbert transform can be more suitable. These transforms allow for localized frequency analysis by capturing both frequency and time-based information. == Research areas == Big FFTs With the explosion of big data in fields such as astronomy, the need for 512K FFTs has arisen for certain interferometry calculations. The data collected by projects such as WMAP and LIGO require FFTs of tens of billions of points. As this size does not fit into main memory, so-called out-of-core FFTs are an active area of research. Approximate FFTs For applications such as MRI, it is necessary to compute DFTs for nonuniformly spaced grid points and/or frequencies. Multipole-based approaches can compute approximate quantities at the cost of a modest factor of increase in runtime. Group FFTs The FFT may also be explained and interpreted using group representation theory, allowing for further generalization. A function on any compact group, including non-cyclic groups, has an expansion in terms of a basis of irreducible matrix elements. It remains an active area of research to find an efficient algorithm for performing this change of basis. Applications include efficient spherical harmonic expansion, the analysis of certain Markov processes, robotics, etc. Quantum FFTs Shor's fast algorithm for integer factorization on a quantum computer has a subroutine to compute the DFT of a binary vector. This is implemented as a sequence of 1- or 2-bit quantum gates now known as the quantum FFT, which is effectively the Cooley–Tukey FFT realized as a particular factorization of the Fourier matrix. Extensions of these ideas are currently being explored.
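The polynomial-multiplication application mentioned above follows directly from the convolution theorem: transforming both coefficient sequences, multiplying pointwise, and transforming back computes their convolution in O(n log n) operations. The following is a minimal sketch, not any of the specific algorithms or libraries cited in this article; it assumes NumPy, and the function name poly_multiply is illustrative.

```python
import numpy as np

def poly_multiply(a, b):
    """Multiply two polynomials given as coefficient lists, via the FFT."""
    n = len(a) + len(b) - 1           # number of coefficients in the product
    size = 1 << (n - 1).bit_length()  # next power of two, for a radix-2 FFT
    fa = np.fft.fft(a, size)          # zero-padded forward transforms
    fb = np.fft.fft(b, size)
    c = np.fft.ifft(fa * fb)[:n]      # pointwise product, then inverse FFT
    return np.round(c.real).astype(int)  # integer coefficients in this example

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
print(poly_multiply([1, 2], [3, 4]))  # [ 3 10  8]
```

The same idea, applied to digit sequences with carrying, underlies the Schönhage–Strassen large-integer multiplication algorithm listed under See also.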
== See also == FFT-related algorithms: Bit-reversal permutation Goertzel algorithm – computes individual terms of discrete Fourier transform FFT implementations: ALGLIB – a dual/GPL-licensed C++ and C# library (also supporting other languages), with real/complex FFT implementation FFTPACK – another Fortran FFT library (public domain) Architecture-specific: Arm Performance Libraries Intel Integrated Performance Primitives Intel Math Kernel Library Many more implementations are available, for CPUs and GPUs, such as PocketFFT for C++ Other links: Odlyzko–Schönhage algorithm applies the FFT to finite Dirichlet series Schönhage–Strassen algorithm – asymptotically fast multiplication algorithm for large integers Butterfly diagram – a diagram used to describe FFTs Spectral music (involves application of DFT analysis to musical composition) Spectrum analyzer – any of several devices that perform spectrum analysis, often via a DFT Time series Fast Walsh–Hadamard transform Generalized distributive law Least-squares spectral analysis Multidimensional transform Multidimensional discrete convolution Fast Fourier Transform Telescope == References == == Further reading == Brigham, Elbert Oran (1974). The fast Fourier transform (Nachdr. ed.). Englewood Cliffs, N.J: Prentice-Hall. ISBN 978-0-13-307496-3. Briggs, William L.; Henson, Van Emden (1995). The DFT: An Owner's Manual for the Discrete Fourier Transform. Philadelphia: Society for Industrial and Applied Mathematics. ISBN 978-0-89871-342-8. Chu, Eleanor; George, Alan (2000). Inside the FFT Black Box: Serial and Parallel Fast Fourier Transform Algorithms. Computational mathematics series. Boca Raton, Fla. London: CRC Press. ISBN 978-0-8493-0270-1. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "Chapter 30: Polynomials and the FFT". Introduction to Algorithms (2nd ed.). Cambridge (Mass.): MIT Press. ISBN 978-0-262-03293-3. Elliott, Douglas F.; Rao, K. Ramamohan (1982). Fast transforms: algorithms, analyses, applications. New York: Academic Press. ISBN 978-0-12-237080-9. Guo, H.; Sitton, G.A.; Burrus, C.S. (1994). "The quick discrete Fourier transform". Proceedings of ICASSP '94. IEEE International Conference on Acoustics, Speech and Signal Processing. Vol. iii. IEEE. pp. III/445–III/448. doi:10.1109/ICASSP.1994.389994. ISBN 978-0-7803-1775-8. S2CID 42639206. Johnson, Steven G.; Frigo, Matteo (January 2007). "A Modified Split-Radix FFT With Fewer Arithmetic Operations" (PDF). IEEE Transactions on Signal Processing. 55 (1): 111–119. Bibcode:2007ITSP...55..111J. CiteSeerX 10.1.1.582.5497. doi:10.1109/TSP.2006.882087. ISSN 1053-587X. S2CID 14772428. Archived (PDF) from the original on 2005-05-26. Nussbaumer, Henri J. (1990). Fast Fourier Transform and Convolution Algorithms. Springer series in information sciences (2., corr. and updated ed.). Berlin Heidelberg: Springer. ISBN 978-3-540-11825-1. Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (2007). "Chapter 12. Fast Fourier Transform". Numerical recipes: the art of scientific computing (PDF). Numerical Recipes (3. ed.). Cambridge: Cambridge University Press. pp. 600–639. ISBN 978-0-521-88068-8. Singleton, R. (June 1969). "A short bibliography on the fast Fourier transform". IEEE Transactions on Audio and Electroacoustics. 17 (2): 166–169. doi:10.1109/TAU.1969.1162040. ISSN 0018-9278. (NB. Contains extensive bibliography.) Prestini, Elena (2004).
The evolution of applied harmonic analysis: models of the real world. Applied and numerical harmonic analysis. Boston; Berlin: Springer Media. Section 3.10: Gauss and the asteroids: history of the FFT. ISBN 978-0-8176-4125-2. Van Loan, Charles F. (1992). Computational Frameworks for the Fast Fourier Transform. Frontiers in applied mathematics. Philadelphia: Society for Industrial and Applied Mathematics. ISBN 978-0-89871-285-8. Terras, Audrey (1999). Fourier Analysis on Finite Groups and Applications. London Mathematical Society student texts. Cambridge (GB): Cambridge University Press. ISBN 978-0-521-45718-7. (Chap.9 and other chapters) == External links == Fast Fourier Transform for Polynomial Multiplication – fast Fourier algorithm Fast Fourier transform — FFT – FFT programming in C++ – the Cooley–Tukey algorithm Online documentation, links, book, and code Sri Welaratna, "Thirty years of FFT analyzers", Sound and Vibration (January 1997, 30th anniversary issue) – a historical review of hardware FFT devices ALGLIB FFT Code – a dual/GPL-licensed multilanguage (VBA, C++, Pascal, etc.) numerical analysis and data processing library SFFT: Sparse Fast Fourier Transform – MIT's sparse (sub-linear time) FFT algorithm, sFFT, and implementation VB6 FFT – a VB6 optimized library implementation with source code Interactive FFT Tutorial – a visual interactive intro to Fourier transforms and FFT methods Introduction to Fourier analysis of time series – a tutorial on using the Fourier transform in time series analysis
Wikipedia/Fast_Fourier_transform
Independence is a fundamental notion in probability theory, as in statistics and the theory of stochastic processes. Two events are independent, statistically independent, or stochastically independent if, informally speaking, the occurrence of one does not affect the probability of occurrence of the other or, equivalently, does not affect the odds. Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other. When dealing with collections of more than two events, two notions of independence need to be distinguished. The events are called pairwise independent if any two events in the collection are independent of each other, while mutual independence (or collective independence) of events means, informally speaking, that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables. Mutual independence implies pairwise independence, but not the other way around. In the standard literature of probability theory, statistics, and stochastic processes, independence without further qualification usually refers to mutual independence. == Definition == === For events === ==== Two events ==== Two events A {\displaystyle A} and B {\displaystyle B} are independent (often written as A ⊥ B {\displaystyle A\perp B} or A ⊥ ⊥ B {\displaystyle A\perp \!\!\!\perp B} , where the latter symbol often is also used for conditional independence) if and only if their joint probability equals the product of their probabilities: p. 29 : p. 10  P ( A ∩ B ) = P ( A ) P ( B ) {\displaystyle \mathrm {P} (A\cap B)=\mathrm {P} (A)\mathrm {P} (B)} (Eq.1) A ∩ B ≠ ∅ {\displaystyle A\cap B\neq \emptyset } indicates that two independent events A {\displaystyle A} and B {\displaystyle B} have common elements in their sample space so that they are not mutually exclusive (mutually exclusive iff A ∩ B = ∅ {\displaystyle A\cap B=\emptyset } ). Why this defines independence is made clear by rewriting with conditional probabilities P ( A ∣ B ) = P ( A ∩ B ) P ( B ) {\displaystyle P(A\mid B)={\frac {P(A\cap B)}{P(B)}}} as the probability at which the event A {\displaystyle A} occurs provided that the event B {\displaystyle B} has or is assumed to have occurred: P ( A ∩ B ) = P ( A ) P ( B ) ⟺ P ( A ∣ B ) = P ( A ∩ B ) P ( B ) = P ( A ) . {\displaystyle \mathrm {P} (A\cap B)=\mathrm {P} (A)\mathrm {P} (B)\iff \mathrm {P} (A\mid B)={\frac {\mathrm {P} (A\cap B)}{\mathrm {P} (B)}}=\mathrm {P} (A).} and similarly P ( A ∩ B ) = P ( A ) P ( B ) ⟺ P ( B ∣ A ) = P ( A ∩ B ) P ( A ) = P ( B ) . {\displaystyle \mathrm {P} (A\cap B)=\mathrm {P} (A)\mathrm {P} (B)\iff \mathrm {P} (B\mid A)={\frac {\mathrm {P} (A\cap B)}{\mathrm {P} (A)}}=\mathrm {P} (B).} Thus, the occurrence of B {\displaystyle B} does not affect the probability of A {\displaystyle A} , and vice versa. In other words, A {\displaystyle A} and B {\displaystyle B} are independent of each other. Although the derived expressions may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined if P ( A ) {\displaystyle \mathrm {P} (A)} or P ( B ) {\displaystyle \mathrm {P} (B)} are 0. Furthermore, the preferred definition makes clear by symmetry that when A {\displaystyle A} is independent of B {\displaystyle B} , B {\displaystyle B} is also independent of A {\displaystyle A} . ==== Odds ==== Stated in terms of odds, two events are independent if and only if the odds ratio of ⁠ A {\displaystyle A} ⁠ and ⁠ B {\displaystyle B} ⁠ is unity (1).
Analogously with probability, this is equivalent to the conditional odds being equal to the unconditional odds: O ( A ∣ B ) = O ( A ) and O ( B ∣ A ) = O ( B ) , {\displaystyle O(A\mid B)=O(A){\text{ and }}O(B\mid A)=O(B),} or to the odds of one event, given the other event, being the same as the odds of the event, given the other event not occurring: O ( A ∣ B ) = O ( A ∣ ¬ B ) and O ( B ∣ A ) = O ( B ∣ ¬ A ) . {\displaystyle O(A\mid B)=O(A\mid \neg B){\text{ and }}O(B\mid A)=O(B\mid \neg A).} The odds ratio can be defined as O ( A ∣ B ) : O ( A ∣ ¬ B ) , {\displaystyle O(A\mid B):O(A\mid \neg B),} or symmetrically for odds of ⁠ B {\displaystyle B} ⁠ given ⁠ A {\displaystyle A} ⁠, and thus is 1 if and only if the events are independent. ==== More than two events ==== A finite set of events { A i } i = 1 n {\displaystyle \{A_{i}\}_{i=1}^{n}} is pairwise independent if every pair of events is independent—that is, if and only if for all distinct pairs of indices m , k {\displaystyle m,k} , P ( A m ∩ A k ) = P ( A m ) P ( A k ) {\displaystyle \mathrm {P} (A_{m}\cap A_{k})=\mathrm {P} (A_{m})\mathrm {P} (A_{k})} (Eq.2) A finite set of events is mutually independent if every event is independent of any intersection of the other events: p. 11 —that is, if and only if for every k ≤ n {\displaystyle k\leq n} and for every k indices 1 ≤ i 1 < ⋯ < i k ≤ n {\displaystyle 1\leq i_{1}<\dots <i_{k}\leq n} , P ( A i 1 ∩ ⋯ ∩ A i k ) = P ( A i 1 ) ⋯ P ( A i k ) {\displaystyle \mathrm {P} (A_{i_{1}}\cap \cdots \cap A_{i_{k}})=\mathrm {P} (A_{i_{1}})\cdots \mathrm {P} (A_{i_{k}})} (Eq.3) This is called the multiplication rule for independent events. It is not a single condition involving only the product of all the probabilities of all single events; it must hold true for all subsets of events. For more than two events, a mutually independent set of events is (by definition) pairwise independent; but the converse is not necessarily true.: p. 30  ==== Log probability and information content ==== Stated in terms of log probability, two events are independent if and only if the log probability of the joint event is the sum of the log probability of the individual events: log ⁡ P ( A ∩ B ) = log ⁡ P ( A ) + log ⁡ P ( B ) {\displaystyle \log \mathrm {P} (A\cap B)=\log \mathrm {P} (A)+\log \mathrm {P} (B)} In information theory, negative log probability is interpreted as information content, and thus two events are independent if and only if the information content of the combined event equals the sum of information content of the individual events: I ( A ∩ B ) = I ( A ) + I ( B ) {\displaystyle \mathrm {I} (A\cap B)=\mathrm {I} (A)+\mathrm {I} (B)} See Information content § Additivity of independent events for details. === For real valued random variables === ==== Two random variables ==== Two random variables X {\displaystyle X} and Y {\displaystyle Y} are independent if and only if (iff) the elements of the π-system generated by them are independent; that is to say, for every x {\displaystyle x} and y {\displaystyle y} , the events { X ≤ x } {\displaystyle \{X\leq x\}} and { Y ≤ y } {\displaystyle \{Y\leq y\}} are independent events (as defined above in Eq.1). That is, X {\displaystyle X} and Y {\displaystyle Y} with cumulative distribution functions F X ( x ) {\displaystyle F_{X}(x)} and F Y ( y ) {\displaystyle F_{Y}(y)} , are independent iff the combined random variable ( X , Y ) {\displaystyle (X,Y)} has a joint cumulative distribution function: p. 15  F X , Y ( x , y ) = F X ( x ) F Y ( y ) for all x , y , {\displaystyle F_{X,Y}(x,y)=F_{X}(x)F_{Y}(y)\quad {\text{for all }}x,y,} or equivalently, if the probability densities f X ( x ) {\displaystyle f_{X}(x)} and f Y ( y ) {\displaystyle f_{Y}(y)} and the joint probability density f X , Y ( x , y ) {\displaystyle f_{X,Y}(x,y)} exist, f X , Y ( x , y ) = f X ( x ) f Y ( y ) for all x , y . 
{\displaystyle f_{X,Y}(x,y)=f_{X}(x)f_{Y}(y)\quad {\text{for all }}x,y.} ==== More than two random variables ==== A finite set of n {\displaystyle n} random variables { X 1 , … , X n } {\displaystyle \{X_{1},\ldots ,X_{n}\}} is pairwise independent if and only if every pair of random variables is independent. Even if the set of random variables is pairwise independent, it is not necessarily mutually independent as defined next. A finite set of n {\displaystyle n} random variables { X 1 , … , X n } {\displaystyle \{X_{1},\ldots ,X_{n}\}} is mutually independent if and only if for any sequence of numbers { x 1 , … , x n } {\displaystyle \{x_{1},\ldots ,x_{n}\}} , the events { X 1 ≤ x 1 } , … , { X n ≤ x n } {\displaystyle \{X_{1}\leq x_{1}\},\ldots ,\{X_{n}\leq x_{n}\}} are mutually independent events (as defined above in Eq.3). This is equivalent to the following condition on the joint cumulative distribution function F X 1 , … , X n ( x 1 , … , x n ) {\displaystyle F_{X_{1},\ldots ,X_{n}}(x_{1},\ldots ,x_{n})} . A finite set of n {\displaystyle n} random variables { X 1 , … , X n } {\displaystyle \{X_{1},\ldots ,X_{n}\}} is mutually independent if and only if: p. 16  F X 1 , … , X n ( x 1 , … , x n ) = F X 1 ( x 1 ) ⋯ F X n ( x n ) for all x 1 , … , x n . {\displaystyle F_{X_{1},\ldots ,X_{n}}(x_{1},\ldots ,x_{n})=F_{X_{1}}(x_{1})\cdots F_{X_{n}}(x_{n})\quad {\text{for all }}x_{1},\ldots ,x_{n}.} It is not necessary here to require that the probability distribution factorizes for all possible k {\displaystyle k} -element subsets as in the case for n {\displaystyle n} events. This is not required because e.g. F X 1 , X 2 , X 3 ( x 1 , x 2 , x 3 ) = F X 1 ( x 1 ) ⋅ F X 2 ( x 2 ) ⋅ F X 3 ( x 3 ) {\displaystyle F_{X_{1},X_{2},X_{3}}(x_{1},x_{2},x_{3})=F_{X_{1}}(x_{1})\cdot F_{X_{2}}(x_{2})\cdot F_{X_{3}}(x_{3})} implies F X 1 , X 3 ( x 1 , x 3 ) = F X 1 ( x 1 ) ⋅ F X 3 ( x 3 ) {\displaystyle F_{X_{1},X_{3}}(x_{1},x_{3})=F_{X_{1}}(x_{1})\cdot F_{X_{3}}(x_{3})} . The measure-theoretically inclined reader may prefer to substitute events { X ∈ A } {\displaystyle \{X\in A\}} for events { X ≤ x } {\displaystyle \{X\leq x\}} in the above definition, where A {\displaystyle A} is any Borel set. That definition is exactly equivalent to the one above when the values of the random variables are real numbers. It has the advantage of working also for complex-valued random variables or for random variables taking values in any measurable space (which includes topological spaces endowed by appropriate σ-algebras). === For real valued random vectors === Two random vectors X = ( X 1 , … , X m ) T {\displaystyle \mathbf {X} =(X_{1},\ldots ,X_{m})^{\mathrm {T} }} and Y = ( Y 1 , … , Y n ) T {\displaystyle \mathbf {Y} =(Y_{1},\ldots ,Y_{n})^{\mathrm {T} }} are called independent if: p. 187  F X , Y ( x , y ) = F X ( x ) ⋅ F Y ( y ) for all x , y , {\displaystyle F_{\mathbf {X,Y} }(\mathbf {x,y} )=F_{\mathbf {X} }(\mathbf {x} )\cdot F_{\mathbf {Y} }(\mathbf {y} )\quad {\text{for all }}\mathbf {x} ,\mathbf {y} ,} where F X ( x ) {\displaystyle F_{\mathbf {X} }(\mathbf {x} )} and F Y ( y ) {\displaystyle F_{\mathbf {Y} }(\mathbf {y} )} denote the cumulative distribution functions of X {\displaystyle \mathbf {X} } and Y {\displaystyle \mathbf {Y} } and F X , Y ( x , y ) {\displaystyle F_{\mathbf {X,Y} }(\mathbf {x,y} )} denotes their joint cumulative distribution function. Independence of X {\displaystyle \mathbf {X} } and Y {\displaystyle \mathbf {Y} } is often denoted by X ⊥ ⊥ Y {\displaystyle \mathbf {X} \perp \!\!\!\perp \mathbf {Y} } . Written component-wise, X {\displaystyle \mathbf {X} } and Y {\displaystyle \mathbf {Y} } are called independent if F X 1 , … , X m , Y 1 , … , Y n ( x 1 , … , x m , y 1 , … , y n ) = F X 1 , … , X m ( x 1 , … , x m ) ⋅ F Y 1 , … , Y n ( y 1 , … , y n ) for all x 1 , … , x m , y 1 , … , y n . 
{\displaystyle F_{X_{1},\ldots ,X_{m},Y_{1},\ldots ,Y_{n}}(x_{1},\ldots ,x_{m},y_{1},\ldots ,y_{n})=F_{X_{1},\ldots ,X_{m}}(x_{1},\ldots ,x_{m})\cdot F_{Y_{1},\ldots ,Y_{n}}(y_{1},\ldots ,y_{n})\quad {\text{for all }}x_{1},\ldots ,x_{m},y_{1},\ldots ,y_{n}.} === For stochastic processes === ==== For one stochastic process ==== The definition of independence may be extended from random vectors to a stochastic process. Therefore, it is required for an independent stochastic process that the random variables obtained by sampling the process at any n {\displaystyle n} times t 1 , … , t n {\displaystyle t_{1},\ldots ,t_{n}} are independent random variables for any n {\displaystyle n} .: p. 163  Formally, a stochastic process { X t } t ∈ T {\displaystyle \left\{X_{t}\right\}_{t\in {\mathcal {T}}}} is called independent, if and only if for all n ∈ N {\displaystyle n\in \mathbb {N} } and for all t 1 , … , t n ∈ T {\displaystyle t_{1},\ldots ,t_{n}\in {\mathcal {T}}} F X t 1 , … , X t n ( x 1 , … , x n ) = F X t 1 ( x 1 ) ⋯ F X t n ( x n ) for all x 1 , … , x n , {\displaystyle F_{X_{t_{1}},\ldots ,X_{t_{n}}}(x_{1},\ldots ,x_{n})=F_{X_{t_{1}}}(x_{1})\cdots F_{X_{t_{n}}}(x_{n})\quad {\text{for all }}x_{1},\ldots ,x_{n},} where F X t 1 , … , X t n ( x 1 , … , x n ) = P ( X ( t 1 ) ≤ x 1 , … , X ( t n ) ≤ x n ) {\displaystyle F_{X_{t_{1}},\ldots ,X_{t_{n}}}(x_{1},\ldots ,x_{n})=\mathrm {P} (X(t_{1})\leq x_{1},\ldots ,X(t_{n})\leq x_{n})} . Independence of a stochastic process is a property within a stochastic process, not between two stochastic processes. ==== For two stochastic processes ==== Independence of two stochastic processes is a property between two stochastic processes { X t } t ∈ T {\displaystyle \left\{X_{t}\right\}_{t\in {\mathcal {T}}}} and { Y t } t ∈ T {\displaystyle \left\{Y_{t}\right\}_{t\in {\mathcal {T}}}} that are defined on the same probability space ( Ω , F , P ) {\displaystyle (\Omega ,{\mathcal {F}},P)} . Formally, two stochastic processes { X t } t ∈ T {\displaystyle \left\{X_{t}\right\}_{t\in {\mathcal {T}}}} and { Y t } t ∈ T {\displaystyle \left\{Y_{t}\right\}_{t\in {\mathcal {T}}}} are said to be independent if for all n ∈ N {\displaystyle n\in \mathbb {N} } and for all t 1 , … , t n ∈ T {\displaystyle t_{1},\ldots ,t_{n}\in {\mathcal {T}}} , the random vectors ( X ( t 1 ) , … , X ( t n ) ) {\displaystyle (X(t_{1}),\ldots ,X(t_{n}))} and ( Y ( t 1 ) , … , Y ( t n ) ) {\displaystyle (Y(t_{1}),\ldots ,Y(t_{n}))} are independent,: p. 515  i.e. if F X t 1 , … , X t n , Y t 1 , … , Y t n ( x 1 , … , x n , y 1 , … , y n ) = F X t 1 , … , X t n ( x 1 , … , x n ) ⋅ F Y t 1 , … , Y t n ( y 1 , … , y n ) for all x 1 , … , x n , y 1 , … , y n . {\displaystyle F_{X_{t_{1}},\ldots ,X_{t_{n}},Y_{t_{1}},\ldots ,Y_{t_{n}}}(x_{1},\ldots ,x_{n},y_{1},\ldots ,y_{n})=F_{X_{t_{1}},\ldots ,X_{t_{n}}}(x_{1},\ldots ,x_{n})\cdot F_{Y_{t_{1}},\ldots ,Y_{t_{n}}}(y_{1},\ldots ,y_{n})\quad {\text{for all }}x_{1},\ldots ,x_{n},y_{1},\ldots ,y_{n}.} === Independent σ-algebras === The definitions above (Eq.1 and Eq.2) are both generalized by the following definition of independence for σ-algebras. Let ( Ω , Σ , P ) {\displaystyle (\Omega ,\Sigma ,\mathrm {P} )} be a probability space and let A {\displaystyle {\mathcal {A}}} and B {\displaystyle {\mathcal {B}}} be two sub-σ-algebras of Σ {\displaystyle \Sigma } . A {\displaystyle {\mathcal {A}}} and B {\displaystyle {\mathcal {B}}} are said to be independent if, whenever A ∈ A {\displaystyle A\in {\mathcal {A}}} and B ∈ B {\displaystyle B\in {\mathcal {B}}} , P ( A ∩ B ) = P ( A ) P ( B ) . {\displaystyle \mathrm {P} (A\cap B)=\mathrm {P} (A)\mathrm {P} (B).} Likewise, a finite family of σ-algebras ( τ i ) i ∈ I {\displaystyle (\tau _{i})_{i\in I}} , where I {\displaystyle I} is an index set, is said to be independent if and only if ∀ ( A i ) i ∈ I ∈ ∏ i ∈ I τ i : P ( ⋂ i ∈ I A i ) = ∏ i ∈ I P ( A i ) {\displaystyle \forall \left(A_{i}\right)_{i\in I}\in \prod \nolimits _{i\in I}\tau _{i}\ :\ \mathrm {P} \left(\bigcap \nolimits _{i\in I}A_{i}\right)=\prod \nolimits _{i\in I}\mathrm {P} \left(A_{i}\right)} and an infinite family of σ-algebras is said to be independent if all its finite subfamilies are independent. 
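For a finite sample space, the σ-algebra formulation can be checked exhaustively. The following is a minimal illustration, assuming only Python's standard library (the helper names are illustrative, not from any reference cited here): it builds the σ-algebras generated by two events on a uniform four-point space and verifies the product rule for every pair of members.

```python
from itertools import product

# Sample space: two fair coin flips, uniform probability measure.
omega = set(product("HT", repeat=2))      # {('H','H'), ('H','T'), ...}
prob = lambda S: len(S) / len(omega)

A = {w for w in omega if w[0] == "H"}     # "first flip is heads"
B = {w for w in omega if w[1] == "H"}     # "second flip is heads"

def generated_sigma_algebra(E):
    """sigma({E}) = {empty set, E, complement of E, omega}."""
    return [set(), set(E), omega - E, set(omega)]

# Check P(S ∩ T) = P(S) P(T) for every S in sigma({A}), T in sigma({B}).
ok = all(
    abs(prob(S & T) - prob(S) * prob(T)) < 1e-12
    for S in generated_sigma_algebra(A)
    for T in generated_sigma_algebra(B)
)
print(ok)  # True: the two generated sigma-algebras are independent
```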
The new definition relates to the previous ones very directly: Two events are independent (in the old sense) if and only if the σ-algebras that they generate are independent (in the new sense). The σ-algebra generated by an event E ∈ Σ {\displaystyle E\in \Sigma } is, by definition, σ ( { E } ) = { ∅ , E , Ω ∖ E , Ω } . {\displaystyle \sigma (\{E\})=\{\emptyset ,E,\Omega \setminus E,\Omega \}.} Two random variables X {\displaystyle X} and Y {\displaystyle Y} defined over Ω {\displaystyle \Omega } are independent (in the old sense) if and only if the σ-algebras that they generate are independent (in the new sense). The σ-algebra generated by a random variable X {\displaystyle X} taking values in some measurable space S {\displaystyle S} consists, by definition, of all subsets of Ω {\displaystyle \Omega } of the form X − 1 ( U ) {\displaystyle X^{-1}(U)} , where U {\displaystyle U} is any measurable subset of S {\displaystyle S} . Using this definition, it is easy to show that if X {\displaystyle X} and Y {\displaystyle Y} are random variables and Y {\displaystyle Y} is constant, then X {\displaystyle X} and Y {\displaystyle Y} are independent, since the σ-algebra generated by a constant random variable is the trivial σ-algebra { ∅ , Ω } {\displaystyle \{\varnothing ,\Omega \}} . Probability zero events cannot affect independence, so independence also holds if Y {\displaystyle Y} is only Pr-almost surely constant. == Properties == === Self-independence === Note that an event is independent of itself if and only if P ( A ) = P ( A ∩ A ) = P ( A ) ⋅ P ( A ) ⟺ P ( A ) = 0 or P ( A ) = 1. {\displaystyle \mathrm {P} (A)=\mathrm {P} (A\cap A)=\mathrm {P} (A)\cdot \mathrm {P} (A)\iff \mathrm {P} (A)=0{\text{ or }}\mathrm {P} (A)=1.} Thus an event is independent of itself if and only if it almost surely occurs or its complement almost surely occurs; this fact is useful when proving zero–one laws. === Expectation and covariance === If X {\displaystyle X} and Y {\displaystyle Y} are statistically independent random variables, then the expectation operator E {\displaystyle \operatorname {E} } has the property E ⁡ [ X n Y m ] = E ⁡ [ X n ] E ⁡ [ Y m ] , {\displaystyle \operatorname {E} [X^{n}Y^{m}]=\operatorname {E} [X^{n}]\operatorname {E} [Y^{m}],} : p. 10  and the covariance cov ⁡ [ X , Y ] {\displaystyle \operatorname {cov} [X,Y]} is zero, as follows from cov ⁡ [ X , Y ] = E ⁡ [ X Y ] − E ⁡ [ X ] E ⁡ [ Y ] . {\displaystyle \operatorname {cov} [X,Y]=\operatorname {E} [XY]-\operatorname {E} [X]\operatorname {E} [Y].} The converse does not hold: if two random variables have a covariance of 0, they may still fail to be independent. Similarly for two stochastic processes { X t } t ∈ T {\displaystyle \left\{X_{t}\right\}_{t\in {\mathcal {T}}}} and { Y t } t ∈ T {\displaystyle \left\{Y_{t}\right\}_{t\in {\mathcal {T}}}} : If they are independent, then they are uncorrelated.: p. 151  === Characteristic function === Two random variables X {\displaystyle X} and Y {\displaystyle Y} are independent if and only if the characteristic function of the random vector ( X , Y ) {\displaystyle (X,Y)} satisfies φ ( X , Y ) ( t , s ) = φ X ( t ) ⋅ φ Y ( s ) . {\displaystyle \varphi _{(X,Y)}(t,s)=\varphi _{X}(t)\cdot \varphi _{Y}(s).} In particular the characteristic function of their sum is the product of their marginal characteristic functions: φ X + Y ( t ) = φ X ( t ) ⋅ φ Y ( t ) , {\displaystyle \varphi _{X+Y}(t)=\varphi _{X}(t)\cdot \varphi _{Y}(t),} though the reverse implication is not true. 
Random variables that satisfy the latter condition are called subindependent. == Examples == === Rolling dice === The event of getting a 6 the first time a die is rolled and the event of getting a 6 the second time are independent. By contrast, the event of getting a 6 the first time a die is rolled and the event that the sum of the numbers seen on the first and second trial is 8 are not independent. === Drawing cards === If two cards are drawn with replacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial are independent. By contrast, if two cards are drawn without replacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial are not independent, because a deck that has had a red card removed has proportionately fewer red cards. === Pairwise and mutual independence === Consider two probability spaces on the same three events A {\displaystyle A} , B {\displaystyle B} , and C {\displaystyle C} . In both cases, P ( A ) = P ( B ) = 1 / 2 {\displaystyle \mathrm {P} (A)=\mathrm {P} (B)=1/2} and P ( C ) = 1 / 4 {\displaystyle \mathrm {P} (C)=1/4} . The events in the first space are pairwise independent because P ( A | B ) = P ( A | C ) = 1 / 2 = P ( A ) {\displaystyle \mathrm {P} (A|B)=\mathrm {P} (A|C)=1/2=\mathrm {P} (A)} , P ( B | A ) = P ( B | C ) = 1 / 2 = P ( B ) {\displaystyle \mathrm {P} (B|A)=\mathrm {P} (B|C)=1/2=\mathrm {P} (B)} , and P ( C | A ) = P ( C | B ) = 1 / 4 = P ( C ) {\displaystyle \mathrm {P} (C|A)=\mathrm {P} (C|B)=1/4=\mathrm {P} (C)} ; but the three events are not mutually independent. The events in the second space are both pairwise independent and mutually independent. To illustrate the difference, consider conditioning on two events. In the pairwise independent case, although any one event is independent of each of the other two individually, it is not independent of the intersection of the other two: P ( A | B C ) = 4 40 4 40 + 1 40 = 4 5 ≠ P ( A ) {\displaystyle \mathrm {P} (A|BC)={\frac {\frac {4}{40}}{{\frac {4}{40}}+{\frac {1}{40}}}}={\tfrac {4}{5}}\neq \mathrm {P} (A)} P ( B | A C ) = 4 40 4 40 + 1 40 = 4 5 ≠ P ( B ) {\displaystyle \mathrm {P} (B|AC)={\frac {\frac {4}{40}}{{\frac {4}{40}}+{\frac {1}{40}}}}={\tfrac {4}{5}}\neq \mathrm {P} (B)} P ( C | A B ) = 4 40 4 40 + 6 40 = 2 5 ≠ P ( C ) {\displaystyle \mathrm {P} (C|AB)={\frac {\frac {4}{40}}{{\frac {4}{40}}+{\frac {6}{40}}}}={\tfrac {2}{5}}\neq \mathrm {P} (C)} In the mutually independent case, however, P ( A | B C ) = 1 16 1 16 + 1 16 = 1 2 = P ( A ) {\displaystyle \mathrm {P} (A|BC)={\frac {\frac {1}{16}}{{\frac {1}{16}}+{\frac {1}{16}}}}={\tfrac {1}{2}}=\mathrm {P} (A)} P ( B | A C ) = 1 16 1 16 + 1 16 = 1 2 = P ( B ) {\displaystyle \mathrm {P} (B|AC)={\frac {\frac {1}{16}}{{\frac {1}{16}}+{\frac {1}{16}}}}={\tfrac {1}{2}}=\mathrm {P} (B)} P ( C | A B ) = 1 16 1 16 + 3 16 = 1 4 = P ( C ) {\displaystyle \mathrm {P} (C|AB)={\frac {\frac {1}{16}}{{\frac {1}{16}}+{\frac {3}{16}}}}={\tfrac {1}{4}}=\mathrm {P} (C)} === Triple-independence but no pairwise-independence === It is possible to create a three-event example in which P ( A ∩ B ∩ C ) = P ( A ) P ( B ) P ( C ) , {\displaystyle \mathrm {P} (A\cap B\cap C)=\mathrm {P} (A)\mathrm {P} (B)\mathrm {P} (C),} and yet no two of the three events are pairwise independent (and hence the set of events is not mutually independent). 
This example shows that mutual independence involves requirements on the products of probabilities of all combinations of events, not just the single events as in this example. == Conditional independence == === For events === The events A {\displaystyle A} and B {\displaystyle B} are conditionally independent given an event C {\displaystyle C} when P ( A ∩ B ∣ C ) = P ( A ∣ C ) ⋅ P ( B ∣ C ) {\displaystyle \mathrm {P} (A\cap B\mid C)=\mathrm {P} (A\mid C)\cdot \mathrm {P} (B\mid C)} . === For random variables === Intuitively, two random variables X {\displaystyle X} and Y {\displaystyle Y} are conditionally independent given Z {\displaystyle Z} if, once Z {\displaystyle Z} is known, the value of Y {\displaystyle Y} does not add any additional information about X {\displaystyle X} . For instance, two measurements X {\displaystyle X} and Y {\displaystyle Y} of the same underlying quantity Z {\displaystyle Z} are not independent, but they are conditionally independent given Z {\displaystyle Z} (unless the errors in the two measurements are somehow connected). The formal definition of conditional independence is based on the idea of conditional distributions. If X {\displaystyle X} , Y {\displaystyle Y} , and Z {\displaystyle Z} are discrete random variables, then we define X {\displaystyle X} and Y {\displaystyle Y} to be conditionally independent given Z {\displaystyle Z} if P ( X ≤ x , Y ≤ y | Z = z ) = P ( X ≤ x | Z = z ) ⋅ P ( Y ≤ y | Z = z ) {\displaystyle \mathrm {P} (X\leq x,Y\leq y\;|\;Z=z)=\mathrm {P} (X\leq x\;|\;Z=z)\cdot \mathrm {P} (Y\leq y\;|\;Z=z)} for all x {\displaystyle x} , y {\displaystyle y} and z {\displaystyle z} such that P ( Z = z ) > 0 {\displaystyle \mathrm {P} (Z=z)>0} . On the other hand, if the random variables are continuous and have a joint probability density function f X Y Z ( x , y , z ) {\displaystyle f_{XYZ}(x,y,z)} , then X {\displaystyle X} and Y {\displaystyle Y} are conditionally independent given Z {\displaystyle Z} if f X Y | Z ( x , y | z ) = f X | Z ( x | z ) ⋅ f Y | Z ( y | z ) {\displaystyle f_{XY|Z}(x,y|z)=f_{X|Z}(x|z)\cdot f_{Y|Z}(y|z)} for all real numbers x {\displaystyle x} , y {\displaystyle y} and z {\displaystyle z} such that f Z ( z ) > 0 {\displaystyle f_{Z}(z)>0} . If discrete X {\displaystyle X} and Y {\displaystyle Y} are conditionally independent given Z {\displaystyle Z} , then P ( X = x | Y = y , Z = z ) = P ( X = x | Z = z ) {\displaystyle \mathrm {P} (X=x|Y=y,Z=z)=\mathrm {P} (X=x|Z=z)} for any x {\displaystyle x} , y {\displaystyle y} and z {\displaystyle z} with P ( Z = z ) > 0 {\displaystyle \mathrm {P} (Z=z)>0} . That is, the conditional distribution for X {\displaystyle X} given Y {\displaystyle Y} and Z {\displaystyle Z} is the same as that given Z {\displaystyle Z} alone. A similar equation holds for the conditional probability density functions in the continuous case. Independence can be seen as a special kind of conditional independence, since probability can be seen as a kind of conditional probability given no events. == History == Before 1933, independence, in probability theory, was defined in a verbal manner. For example, de Moivre gave the following definition: “Two events are independent, when they have no connexion one with the other, and that the happening of one neither forwards nor obstructs the happening of the other”. If there are n independent events, the probability of the event that all of them happen was computed as the product of the probabilities of these n events. 
Apparently, there was a conviction that this formula was a consequence of the above definition (sometimes called the Multiplication Theorem). Of course, a proof of this assertion cannot work without further, more formal, tacit assumptions. The definition of independence, given in this article, became the standard definition (now used in all books) after it appeared in 1933 as part of Kolmogorov's axiomatization of probability. Kolmogorov credited it to S.N. Bernstein, and quoted a publication which had appeared in Russian in 1927. Unfortunately, neither Bernstein nor Kolmogorov had been aware of the work of Georg Bohlmann. Bohlmann had given the same definition for two events in 1901 and for n events in 1908. In the latter paper, he studied his notion in detail. For example, he gave the first example showing that pairwise independence does not imply mutual independence. Even today, Bohlmann is rarely quoted. More about his work can be found in On the contributions of Georg Bohlmann to probability theory by Ulrich Krengel. == See also == Copula (statistics) Independent and identically distributed random variables Mean dependence Normally distributed and uncorrelated does not imply independent == References == == External links == Media related to Independence (probability theory) at Wikimedia Commons
Wikipedia/Independence_(probability_theory)
A number is a mathematical object used to count, measure, and label. The most basic examples are the natural numbers 1, 2, 3, 4, and so forth. Numbers can be represented in language with number words. More universally, individual numbers can be represented by symbols, called numerals; for example, "5" is a numeral that represents the number five. As only a relatively small number of symbols can be memorized, basic numerals are commonly organized in a numeral system, which is an organized way to represent any number. The most common numeral system is the Hindu–Arabic numeral system, which allows for the representation of any non-negative integer using a combination of ten fundamental numeric symbols, called digits. In addition to their use in counting and measuring, numerals are often used for labels (as with telephone numbers), for ordering (as with serial numbers), and for codes (as with ISBNs). In common usage, a numeral is not clearly distinguished from the number that it represents. In mathematics, the notion of number has been extended over the centuries to include zero (0), negative numbers, rational numbers such as one half ( 1 2 ) {\displaystyle \left({\tfrac {1}{2}}\right)} , real numbers such as the square root of 2 ( 2 ) {\displaystyle \left({\sqrt {2}}\right)} and π, and complex numbers which extend the real numbers with a square root of −1 (and its combinations with real numbers by adding or subtracting its multiples). Calculations with numbers are done with arithmetical operations, the most familiar being addition, subtraction, multiplication, division, and exponentiation. Their study or usage is called arithmetic, a term which may also refer to number theory, the study of the properties of numbers. Besides their practical uses, numbers have cultural significance throughout the world. For example, in Western society, the number 13 is often regarded as unlucky, and "a million" may signify "a lot" rather than an exact quantity. Though it is now regarded as pseudoscience, belief in a mystical significance of numbers, known as numerology, permeated ancient and medieval thought. Numerology heavily influenced the development of Greek mathematics, stimulating the investigation of many problems in number theory which are still of interest today. During the 19th century, mathematicians began to develop many different abstractions which share certain properties of numbers, and may be seen as extending the concept. Among the first were the hypercomplex numbers, which consist of various extensions or modifications of the complex number system. In modern mathematics, number systems are considered important special examples of more general algebraic structures such as rings and fields, and the application of the term "number" is a matter of convention, without fundamental significance. == History == === First use of numbers === Bones and other artifacts have been discovered with marks cut into them that many believe are tally marks. These tally marks may have been used for counting elapsed time, such as numbers of days, lunar cycles or keeping records of quantities, such as of animals. A tallying system has no concept of place value (as in modern decimal notation), which limits its representation of large numbers. Nonetheless, tallying systems are considered the first kind of abstract numeral system. The earliest unambiguous numbers in the archaeological record are the Mesopotamian base 60 system (c. 3400 BC); place value emerged in it in the 3rd millennium BCE. 
The earliest known base 10 system dates to 3100 BC in Egypt. === Numerals === Numbers should be distinguished from numerals, the symbols used to represent numbers. The Egyptians invented the first ciphered numeral system, and the Greeks followed by mapping their counting numbers onto Ionian and Doric alphabets. Roman numerals, a system that used combinations of letters from the Roman alphabet, remained dominant in Europe until the spread of the superior Hindu–Arabic numeral system around the late 14th century, and the Hindu–Arabic numeral system remains the most common system for representing numbers in the world today. The key to the effectiveness of the system was the symbol for zero, which was developed by ancient Indian mathematicians around 500 AD. === Zero === The first known recorded use of zero dates to AD 628, and appeared in the Brāhmasphuṭasiddhānta, the main work of the Indian mathematician Brahmagupta. He treated 0 as a number and discussed operations involving it, including division by zero. By this time (the 7th century), the concept had clearly reached Cambodia in the form of Khmer numerals, and documentation shows the idea later spreading to China and the Islamic world. Brahmagupta's Brāhmasphuṭasiddhānta is the first book that mentions zero as a number, hence Brahmagupta is usually considered the first to formulate the concept of zero. He gave rules of using zero with negative and positive numbers, such as "zero plus a positive number is a positive number, and a negative number plus zero is the negative number". The Brāhmasphuṭasiddhānta is the earliest known text to treat zero as a number in its own right, rather than as simply a placeholder digit in representing another number as was done by the Babylonians or as a symbol for a lack of quantity as was done by Ptolemy and the Romans. The use of 0 as a number should be distinguished from its use as a placeholder numeral in place-value systems. Many ancient texts used 0. Babylonian and Egyptian texts used it. Egyptians used the word nfr to denote zero balance in double entry accounting. Indian texts used a Sanskrit word Shunye or shunya to refer to the concept of void. In mathematics texts this word often refers to the number zero. In a similar vein, Pāṇini (5th century BC) used the null (zero) operator in the Ashtadhyayi, an early example of an algebraic grammar for the Sanskrit language (also see Pingala). There are other uses of zero before Brahmagupta, though the documentation is not as complete as it is in the Brāhmasphuṭasiddhānta. Records show that the Ancient Greeks seemed unsure about the status of 0 as a number: they asked themselves "How can 'nothing' be something?" leading to interesting philosophical and, by the Medieval period, religious arguments about the nature and existence of 0 and the vacuum. The paradoxes of Zeno of Elea depend in part on the uncertain interpretation of 0. (The ancient Greeks even questioned whether 1 was a number.) The late Olmec people of south-central Mexico began to use a symbol for zero, a shell glyph, in the New World, possibly by the 4th century BC but certainly by 40 BC, which became an integral part of Maya numerals and the Maya calendar. Maya arithmetic used base 4 and base 5 written as base 20. George I. Sánchez in 1961 reported a base 4, base 5 "finger" abacus. By 130 AD, Ptolemy, influenced by Hipparchus and the Babylonians, was using a symbol for 0 (a small circle with a long overbar) within a sexagesimal numeral system otherwise using alphabetic Greek numerals. 
Because it was used alone, not as just a placeholder, this Hellenistic zero was the first documented use of a true zero in the Old World. In later Byzantine manuscripts of his Syntaxis Mathematica (Almagest), the Hellenistic zero had morphed into the Greek letter Omicron (otherwise meaning 70). Another true zero was used in tables alongside Roman numerals by 525 (first known use by Dionysius Exiguus), but as a word, nulla meaning nothing, not as a symbol. When division produced 0 as a remainder, nihil, also meaning nothing, was used. These medieval zeros were used by all future medieval computists (calculators of Easter). An isolated use of their initial, N, was used in a table of Roman numerals by Bede or a colleague about 725, a true zero symbol. === Negative numbers === The abstract concept of negative numbers was recognized as early as 100–50 BC in China. The Nine Chapters on the Mathematical Art contains methods for finding the areas of figures; red rods were used to denote positive coefficients, black for negative. The first reference in a Western work was in the 3rd century AD in Greece. Diophantus referred to the equation equivalent to 4x + 20 = 0 (the solution is negative) in Arithmetica, saying that the equation gave an absurd result. During the 600s, negative numbers were in use in India to represent debts. Diophantus' previous reference was discussed more explicitly by Indian mathematician Brahmagupta, in Brāhmasphuṭasiddhānta in 628, who used negative numbers to produce the general form quadratic formula that remains in use today. However, in the 12th century in India, Bhaskara gives negative roots for quadratic equations but says the negative value "is in this case not to be taken, for it is inadequate; people do not approve of negative roots". European mathematicians, for the most part, resisted the concept of negative numbers until the 17th century, although Fibonacci allowed negative solutions in financial problems where they could be interpreted as debts (chapter 13 of Liber Abaci, 1202) and later as losses (in Flos). René Descartes called them false roots as they cropped up in algebraic polynomials yet he found a way to swap true roots and false roots as well. At the same time, the Chinese were indicating negative numbers by drawing a diagonal stroke through the right-most non-zero digit of the corresponding positive number's numeral. The first use of negative numbers in a European work was by Nicolas Chuquet during the 15th century. He used them as exponents, but referred to them as "absurd numbers". As recently as the 18th century, it was common practice to ignore any negative results returned by equations on the assumption that they were meaningless. === Rational numbers === It is likely that the concept of fractional numbers dates to prehistoric times. The Ancient Egyptians used their Egyptian fraction notation for rational numbers in mathematical texts such as the Rhind Mathematical Papyrus and the Kahun Papyrus. Classical Greek and Indian mathematicians made studies of the theory of rational numbers, as part of the general study of number theory. The best known of these is Euclid's Elements, dating to roughly 300 BC. Of the Indian texts, the most relevant is the Sthananga Sutra, which also covers number theory as part of a general study of mathematics. The concept of decimal fractions is closely linked with decimal place-value notation; the two seem to have developed in tandem. 
For example, it is common for the Jain math sutra to include calculations of decimal-fraction approximations to pi or the square root of 2. Similarly, Babylonian math texts used sexagesimal (base 60) fractions with great frequency. === Irrational numbers === The earliest known use of irrational numbers was in the Indian Sulba Sutras composed between 800 and 500 BC. The first existence proof of irrational numbers is usually attributed to Pythagoras, more specifically to the Pythagorean Hippasus of Metapontum, who produced a (most likely geometrical) proof of the irrationality of the square root of 2. The story goes that Hippasus discovered irrational numbers when trying to represent the square root of 2 as a fraction. However, Pythagoras believed in the absoluteness of numbers, and could not accept the existence of irrational numbers. He could not disprove their existence through logic, but he could not accept irrational numbers, and so, as is frequently reported, he allegedly sentenced Hippasus to death by drowning, to impede the spreading of this disconcerting news. The 16th century brought final European acceptance of negative integral and fractional numbers. By the 17th century, mathematicians generally used decimal fractions with modern notation. It was not, however, until the 19th century that mathematicians separated irrationals into algebraic and transcendental parts, and once more undertook the scientific study of irrationals. It had remained almost dormant since Euclid. In 1872 came the publication of the theories of Karl Weierstrass (by his pupil E. Kossak), Eduard Heine, Georg Cantor, and Richard Dedekind. In 1869, Charles Méray had taken the same point of departure as Heine, but the theory is generally referred to the year 1872. Weierstrass's method was completely set forth by Salvatore Pincherle (1880), and Dedekind's has received additional prominence through the author's later work (1888) and endorsement by Paul Tannery (1894). Weierstrass, Cantor, and Heine base their theories on infinite series, while Dedekind founds his on the idea of a cut (Schnitt) in the system of real numbers, separating all rational numbers into two groups having certain characteristic properties. The subject has received later contributions at the hands of Weierstrass, Kronecker, and Méray. The search for roots of quintic and higher degree equations was an important development: the Abel–Ruffini theorem (Ruffini 1799, Abel 1824) showed that they could not be solved by radicals (formulas involving only arithmetical operations and roots). Hence it was necessary to consider the wider set of algebraic numbers (all solutions to polynomial equations). Galois (1832) linked polynomial equations to group theory, giving rise to the field of Galois theory. Simple continued fractions, closely related to irrational numbers (and due to Cataldi, 1613), received attention at the hands of Euler, and at the opening of the 19th century were brought into prominence through the writings of Joseph Louis Lagrange. Other noteworthy contributions have been made by Druckenmüller (1837), Kunze (1857), Lemke (1870), and Günther (1872). Ramus first connected the subject with determinants, resulting, with the subsequent contributions of Heine, Möbius, and Günther, in the theory of Kettenbruchdeterminanten. === Transcendental numbers and reals === The existence of transcendental numbers was first established by Liouville (1844, 1851). 
Hermite proved in 1873 that e is transcendental and Lindemann proved in 1882 that π is transcendental. Finally, Cantor showed that the set of all real numbers is uncountably infinite but the set of all algebraic numbers is countably infinite, so there is an uncountably infinite number of transcendental numbers. === Infinity and infinitesimals === The earliest known conception of mathematical infinity appears in the Yajur Veda, an ancient Indian script, which at one point states, "If you remove a part from infinity or add a part to infinity, still what remains is infinity." Infinity was a popular topic of philosophical study among the Jain mathematicians c. 400 BC. They distinguished between five types of infinity: infinite in one and two directions, infinite in area, infinite everywhere, and infinite perpetually. The symbol ∞ {\displaystyle {\text{∞}}} is often used to represent an infinite quantity. Aristotle defined the traditional Western notion of mathematical infinity. He distinguished between actual infinity and potential infinity—the general consensus being that only the latter had true value. Galileo Galilei's Two New Sciences discussed the idea of one-to-one correspondences between infinite sets. But the next major advance in the theory was made by Georg Cantor; in 1895 he published a book about his new set theory, introducing, among other things, transfinite numbers and formulating the continuum hypothesis. In the 1960s, Abraham Robinson showed how infinitely large and infinitesimal numbers can be rigorously defined and used to develop the field of nonstandard analysis. The system of hyperreal numbers represents a rigorous method of treating the ideas about infinite and infinitesimal numbers that had been used casually by mathematicians, scientists, and engineers ever since the invention of infinitesimal calculus by Newton and Leibniz. A modern geometrical version of infinity is given by projective geometry, which introduces "ideal points at infinity", one for each spatial direction. Each family of parallel lines in a given direction is postulated to converge to the corresponding ideal point. This is closely related to the idea of vanishing points in perspective drawing. === Complex numbers === The earliest fleeting reference to square roots of negative numbers occurred in the work of the mathematician and inventor Heron of Alexandria in the 1st century AD, when he considered the volume of an impossible frustum of a pyramid. They became more prominent when in the 16th century closed formulas for the roots of third and fourth degree polynomials were discovered by Italian mathematicians such as Niccolò Fontana Tartaglia and Gerolamo Cardano. It was soon realized that these formulas, even if one was only interested in real solutions, sometimes required the manipulation of square roots of negative numbers. This was doubly unsettling since they did not even consider negative numbers to be on firm ground at the time. When René Descartes coined the term "imaginary" for these quantities in 1637, he intended it as derogatory. (See imaginary number for a discussion of the "reality" of complex numbers.) 
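Modern complex arithmetic makes these once-suspect quantities routine to compute with, and it also reproduces the sign pitfall described in the next paragraph. A minimal check, assuming only Python's standard cmath module:

```python
import cmath

i = cmath.sqrt(-1)          # the principal square root of -1, i.e. 1j
print(i * i)                # (-1+0j): (sqrt(-1))^2 = -1, as defined

# The identity sqrt(a)*sqrt(b) = sqrt(a*b), valid for positive reals,
# fails when a and b are both negative:
print(cmath.sqrt(-1) * cmath.sqrt(-1))  # (-1+0j)
print(cmath.sqrt((-1) * (-1)))          # (1+0j) -- not the same
```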
A further source of confusion was that the equation ( − 1 ) 2 = − 1 − 1 = − 1 {\displaystyle \left({\sqrt {-1}}\right)^{2}={\sqrt {-1}}{\sqrt {-1}}=-1} seemed capriciously inconsistent with the algebraic identity a b = a b , {\displaystyle {\sqrt {a}}{\sqrt {b}}={\sqrt {ab}},} which is valid for positive real numbers a and b, and was also used in complex number calculations with one of a, b positive and the other negative. The incorrect use of this identity, and the related identity 1 a = 1 a {\displaystyle {\frac {1}{\sqrt {a}}}={\sqrt {\frac {1}{a}}}} in the case when both a and b are negative even bedeviled Euler. This difficulty eventually led him to the convention of using the special symbol i in place of − 1 {\displaystyle {\sqrt {-1}}} to guard against this mistake. The 18th century saw the work of Abraham de Moivre and Leonhard Euler. De Moivre's formula (1730) states: ( cos ⁡ θ + i sin ⁡ θ ) n = cos ⁡ n θ + i sin ⁡ n θ {\displaystyle (\cos \theta +i\sin \theta )^{n}=\cos n\theta +i\sin n\theta } while Euler's formula of complex analysis (1748) gave us: cos ⁡ θ + i sin ⁡ θ = e i θ . {\displaystyle \cos \theta +i\sin \theta =e^{i\theta }.} The existence of complex numbers was not completely accepted until Caspar Wessel described the geometrical interpretation in 1799. Carl Friedrich Gauss rediscovered and popularized it several years later, and as a result the theory of complex numbers received a notable expansion. The idea of the graphic representation of complex numbers had appeared, however, as early as 1685, in Wallis's De algebra tractatus. Also in 1799, Gauss provided the first generally accepted proof of the fundamental theorem of algebra, showing that every polynomial over the complex numbers has a full set of solutions in that realm. Gauss studied complex numbers of the form a + bi, where a and b are integers (now called Gaussian integers) or rational numbers. His student, Gotthold Eisenstein, studied the type a + bω, where ω is a complex root of x3 − 1 = 0 (now called Eisenstein integers). Other such classes (called cyclotomic fields) of complex numbers derive from the roots of unity xk − 1 = 0 for higher values of k. This generalization is largely due to Ernst Kummer, who also invented ideal numbers, which were expressed as geometrical entities by Felix Klein in 1893. In 1850 Victor Alexandre Puiseux took the key step of distinguishing between poles and branch points, and introduced the concept of essential singular points. This eventually led to the concept of the extended complex plane. === Prime numbers === Prime numbers have been studied throughout recorded history. They are positive integers that are divisible only by 1 and themselves. Euclid devoted one book of the Elements to the theory of primes; in it he proved the infinitude of the primes and the fundamental theorem of arithmetic, and presented the Euclidean algorithm for finding the greatest common divisor of two numbers. In 240 BC, Eratosthenes used the Sieve of Eratosthenes to quickly isolate prime numbers. But most further development of the theory of primes in Europe dates to the Renaissance and later eras. In 1796, Adrien-Marie Legendre conjectured the prime number theorem, describing the asymptotic distribution of primes. Other results concerning the distribution of the primes include Euler's proof that the sum of the reciprocals of the primes diverges, and the Goldbach conjecture, which claims that any sufficiently large even number is the sum of two primes. 
Yet another conjecture related to the distribution of prime numbers is the Riemann hypothesis, formulated by Bernhard Riemann in 1859. The prime number theorem was finally proved by Jacques Hadamard and Charles de la Vallée-Poussin in 1896. Goldbach and Riemann's conjectures remain unproven and unrefuted. == Main classification == Numbers can be classified into sets, called number sets or number systems, such as the natural numbers and the real numbers. The main number systems are the natural numbers, the integers, the rational numbers, the real numbers, and the complex numbers. Each of these number systems is a subset of the next one. So, for example, a rational number is also a real number, and every real number is also a complex number. This can be expressed symbolically as $\mathbb{N} \subset \mathbb{Z} \subset \mathbb{Q} \subset \mathbb{R} \subset \mathbb{C}$. === Natural numbers === The most familiar numbers are the natural numbers (sometimes called whole numbers or counting numbers): 1, 2, 3, and so on. Traditionally, the sequence of natural numbers started with 1 (0 was not even considered a number by the Ancient Greeks). However, in the 19th century, set theorists and other mathematicians started including 0 (the cardinality of the empty set, i.e. 0 elements, where 0 is thus the smallest cardinal number) in the set of natural numbers. Today, different mathematicians use the term to describe both sets, including 0 or not. The mathematical symbol for the set of all natural numbers is N, also written $\mathbb{N}$, and sometimes $\mathbb{N}_0$ or $\mathbb{N}_1$ when it is necessary to indicate whether the set should start with 0 or 1, respectively. In the base 10 numeral system, in almost universal use today for mathematical operations, the symbols for natural numbers are written using ten digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. The radix or base is the number of unique numerical digits, including zero, that a numeral system uses to represent numbers (for the decimal system, the radix is 10). In this base 10 system, the rightmost digit of a natural number has a place value of 1, and every other digit has a place value ten times that of the place value of the digit to its right. In set theory, which is capable of acting as an axiomatic foundation for modern mathematics, natural numbers can be represented by classes of equivalent sets. For instance, the number 3 can be represented as the class of all sets that have exactly three elements. Alternatively, in Peano arithmetic, the number 3 is represented as sss0, where s is the "successor" function (i.e., 3 is the third successor of 0). Many different representations are possible; all that is needed to formally represent 3 is to inscribe a certain symbol or pattern of symbols three times. === Integers === The negative of a positive integer is defined as a number that produces 0 when it is added to the corresponding positive integer. Negative numbers are usually written with a negative sign (a minus sign). As an example, the negative of 7 is written −7, and 7 + (−7) = 0. When the set of negative numbers is combined with the set of natural numbers (including 0), the result is defined as the set of integers, Z, also written $\mathbb{Z}$. Here the letter Z comes from German Zahl 'number'. The set of integers forms a ring with the operations addition and multiplication. The natural numbers form a subset of the integers.
As there is no common standard for the inclusion or not of zero in the natural numbers, the natural numbers without zero are commonly referred to as positive integers, and the natural numbers with zero are referred to as non-negative integers. === Rational numbers === A rational number is a number that can be expressed as a fraction with an integer numerator and a positive integer denominator. Negative denominators are allowed, but are commonly avoided, as every rational number is equal to a fraction with positive denominator. Fractions are written as two integers, the numerator and the denominator, with a dividing bar between them. The fraction m/n represents m parts of a whole divided into n equal parts. Two different fractions may correspond to the same rational number; for example 1/2 and 2/4 are equal, that is: $\frac{1}{2} = \frac{2}{4}$. In general, $\frac{a}{b} = \frac{c}{d}$ if and only if $a \times d = c \times b$. If the absolute value of m is greater than n (assumed to be positive), then the absolute value of the fraction is greater than 1. Fractions can be greater than, less than, or equal to 1 and can also be positive, negative, or 0. The set of all rational numbers includes the integers, since every integer can be written as a fraction with denominator 1. For example, −7 can be written −7/1. The symbol for the rational numbers is Q (for quotient), also written $\mathbb{Q}$. === Real numbers === The symbol for the real numbers is R, also written as $\mathbb{R}$. They include all the measuring numbers. Every real number corresponds to a point on the number line. The following paragraph will focus primarily on positive real numbers. The treatment of negative real numbers is according to the general rules of arithmetic and their denotation is simply prefixing the corresponding positive numeral by a minus sign, e.g. −123.456. Most real numbers can only be approximated by decimal numerals, in which a decimal point is placed to the right of the digit with place value 1. Each digit to the right of the decimal point has a place value one-tenth of the place value of the digit to its left. For example, 123.456 represents 123456/1000, or, in words, one hundred, two tens, three ones, four tenths, five hundredths, and six thousandths. A real number can be expressed by a finite number of decimal digits only if it is rational and its fractional part has a denominator whose prime factors are 2 or 5 or both, because these are the prime factors of 10, the base of the decimal system. Thus, for example, one half is 0.5, one fifth is 0.2, one tenth is 0.1, and one fiftieth is 0.02. Representing other real numbers as decimals would require an infinite sequence of digits to the right of the decimal point. If this infinite sequence of digits follows a pattern, it can be written with an ellipsis or another notation that indicates the repeating pattern. Such a decimal is called a repeating decimal. Thus 1/3 can be written as 0.333..., with an ellipsis to indicate that the pattern continues. Forever repeating 3s are also written as $0.\overline{3}$, with an overline marking the repeating digit. It turns out that these repeating decimals (including the repetition of zeroes) denote exactly the rational numbers, i.e., all rational numbers are also real numbers, but it is not the case that every real number is rational. A real number that is not rational is called irrational.
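The criterion just stated is easy to check mechanically. The sketch below (an illustration, not part of the source material) strips the factors 2 and 5 from a fraction's denominator; the decimal expansion terminates exactly when nothing else remains.

```python
from fractions import Fraction

def terminates_in_base_10(q: Fraction) -> bool:
    """True iff q has a finite decimal expansion (denominator has only 2s and 5s)."""
    d = q.denominator  # Fraction always stores the reduced form
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

for q in (Fraction(1, 2), Fraction(1, 5), Fraction(1, 50), Fraction(1, 3), Fraction(22, 7)):
    print(q, terminates_in_base_10(q))  # True, True, True, False, False
```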
A famous irrational real number is π, the ratio of the circumference of any circle to its diameter. When pi is written as $\pi = 3.14159265358979\dots$, as it sometimes is, the ellipsis does not mean that the decimals repeat (they do not), but rather that there is no end to them. It has been proved that π is irrational. Another well-known number, proven to be an irrational real number, is $\sqrt{2} = 1.41421356237\dots$, the square root of 2, that is, the unique positive real number whose square is 2. Both these numbers have been approximated (by computer) to trillions (1 trillion = $10^{12}$ = 1,000,000,000,000) of digits. Not only these prominent examples but almost all real numbers are irrational and therefore have no repeating patterns and hence no corresponding decimal numeral. They can only be approximated by decimal numerals, denoting rounded or truncated real numbers. Any rounded or truncated number is necessarily a rational number, of which there are only countably many. All measurements are, by their nature, approximations, and always have a margin of error. Thus 123.456 is considered an approximation of any real number greater than or equal to 1234555/10000 and strictly less than 1234565/10000 (rounding to 3 decimals), or of any real number greater than or equal to 123456/1000 and strictly less than 123457/1000 (truncation after the third decimal). Digits that suggest a greater accuracy than the measurement itself does should be removed. The remaining digits are then called significant digits. For example, measurements with a ruler can seldom be made without a margin of error of at least 0.001 m. If the sides of a rectangle are measured as 1.23 m and 4.56 m, then multiplication gives an area for the rectangle between 5.603011 m² and 5.614591 m². Since not even the second digit after the decimal place is preserved, the following digits are not significant. Therefore, the result is usually rounded to 5.61. Just as the same fraction can be written in more than one way, the same real number may have more than one decimal representation. For example, 0.999..., 1.0, 1.00, 1.000, ..., all represent the natural number 1. A given real number has only the following decimal representations: an approximation to some finite number of decimal places, an approximation in which a pattern is established that continues for an unlimited number of decimal places, or an exact value with only finitely many decimal places. In this last case, the last non-zero digit may be replaced by the digit one smaller followed by an unlimited number of 9s, or the last non-zero digit may be followed by an unlimited number of zeros. Thus the exact real number 3.74 can also be written 3.7399999999... and 3.74000000000.... Similarly, a decimal numeral with an unlimited number of 0s can be rewritten by dropping the 0s to the right of the rightmost nonzero digit, and a decimal numeral with an unlimited number of 9s can be rewritten by increasing by one the rightmost digit less than 9, and changing all the 9s to the right of that digit to 0s. Finally, an unlimited sequence of 0s to the right of a decimal place can be dropped. For example, 6.849999999999... = 6.85 and 6.850000000000... = 6.85. Finally, if all of the digits in a numeral are 0, the number is 0, and if all of the digits in a numeral are an unending string of 9s, the 9s to the right of the decimal place can be dropped, adding one to the string of 9s to the left of the decimal place.
For example, 99.999... = 100. The real numbers also have an important but highly technical property called the least upper bound property. It can be shown that any complete ordered field is isomorphic to the real numbers. The real numbers are not, however, an algebraically closed field, because they do not include a solution (often called a square root of minus one) to the algebraic equation $x^2 + 1 = 0$. === Complex numbers === Moving to a greater level of abstraction, the real numbers can be extended to the complex numbers. This set of numbers arose historically from trying to find closed formulas for the roots of cubic and quadratic polynomials. This led to expressions involving the square roots of negative numbers, and eventually to the definition of a new number: a square root of −1, denoted by i, a symbol assigned by Leonhard Euler, and called the imaginary unit. The complex numbers consist of all numbers of the form $a + bi$ where a and b are real numbers. Because of this, complex numbers correspond to points on the complex plane, a vector space of two real dimensions. In the expression a + bi, the real number a is called the real part and b is called the imaginary part. If the real part of a complex number is 0, then the number is called an imaginary number or is referred to as purely imaginary; if the imaginary part is 0, then the number is a real number. Thus the real numbers are a subset of the complex numbers. If the real and imaginary parts of a complex number are both integers, then the number is called a Gaussian integer. The symbol for the complex numbers is C or $\mathbb{C}$. The fundamental theorem of algebra asserts that the complex numbers form an algebraically closed field, meaning that every polynomial with complex coefficients has a root in the complex numbers. Like the reals, the complex numbers form a field, which is complete, but unlike the real numbers, it is not ordered. That is, there is no consistent meaning assignable to saying that i is greater than 1, nor is there any meaning in saying that i is less than 1. In technical terms, the complex numbers lack a total order that is compatible with the field operations. == Subclasses of the integers == === Even and odd numbers === An even number is an integer that is "evenly divisible" by two, that is, divisible by two without remainder; an odd number is an integer that is not even. (The old-fashioned term "evenly divisible" is now almost always shortened to "divisible".) Any odd number n may be constructed by the formula n = 2k + 1, for a suitable integer k. Starting with k = 0, the first non-negative odd numbers are {1, 3, 5, 7, ...}. Any even number m has the form m = 2k where k is again an integer. Similarly, the first non-negative even numbers are {0, 2, 4, 6, ...}. === Prime numbers === A prime number, often shortened to just prime, is an integer greater than 1 that is not the product of two smaller positive integers. The first few prime numbers are 2, 3, 5, 7, and 11. There is no such simple formula as for odd and even numbers to generate the prime numbers; the Sieve of Eratosthenes sketched below, however, enumerates them efficiently. The primes have been widely studied for more than 2000 years and have led to many questions, only some of which have been answered. The study of these questions belongs to number theory. Goldbach's conjecture is an example of a still unanswered question: "Is every even number the sum of two primes?"
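For concreteness, here is a minimal sketch of the Sieve of Eratosthenes referred to above; it enumerates all primes up to a bound even though no simple generating formula exists.

```python
from math import isqrt

def primes_up_to(n: int) -> list[int]:
    """Sieve of Eratosthenes: return all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]          # 0 and 1 are not prime
    for p in range(2, isqrt(n) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False  # cross out multiples of p
    return [i for i, is_p in enumerate(sieve) if is_p]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```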
One answered question, as to whether every integer greater than one is a product of primes in only one way, except for a rearrangement of the primes, was confirmed; this proven claim is called the fundamental theorem of arithmetic. A proof appears in Euclid's Elements. === Other classes of integers === Many subsets of the natural numbers have been the subject of specific studies and have been named, often after the first mathematician who studied them. Examples of such sets of integers are Fibonacci numbers and perfect numbers. For more examples, see Integer sequence. == Subclasses of the complex numbers == === Algebraic, irrational and transcendental numbers === Algebraic numbers are those that are a solution to a polynomial equation with integer coefficients. Real numbers that are not rational numbers are called irrational numbers. Complex numbers which are not algebraic are called transcendental numbers. The algebraic numbers that are solutions of a monic polynomial equation with integer coefficients are called algebraic integers. === Periods and exponential periods === A period is a complex number that can be expressed as an integral of an algebraic function over an algebraic domain. The periods are a class of numbers which includes, alongside the algebraic numbers, many well-known mathematical constants such as the number π. The set of periods forms a countable ring and bridges the gap between algebraic and transcendental numbers. The periods can be extended by permitting the integrand to be the product of an algebraic function and the exponential of an algebraic function. This gives another countable ring: the exponential periods. The number e as well as Euler's constant are exponential periods. === Constructible numbers === Motivated by the classical problems of constructions with straightedge and compass, the constructible numbers are those complex numbers whose real and imaginary parts can be constructed using straightedge and compass, starting from a given segment of unit length, in a finite number of steps. === Computable numbers === A computable number, also known as a recursive number, is a real number such that there exists an algorithm which, given a positive integer n as input, produces the first n digits of the computable number's decimal representation. Equivalent definitions can be given using μ-recursive functions, Turing machines or λ-calculus. The computable numbers are closed under all usual arithmetic operations, including the computation of the roots of a polynomial, and thus form a real closed field that contains the real algebraic numbers. The computable numbers may be viewed as the real numbers that may be exactly represented in a computer: a computable number is exactly represented by its first digits and a program for computing further digits. However, the computable numbers are rarely used in practice. One reason is that there is no algorithm for testing the equality of two computable numbers. More precisely, there cannot exist any algorithm which takes any computable number as an input, and decides in every case if this number is equal to zero or not. The set of computable numbers has the same cardinality as the natural numbers. Therefore, almost all real numbers are non-computable. However, it is very difficult to produce explicitly a real number that is not computable.
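As a concrete instance of the definition, $\sqrt{2}$ is computable: the sketch below produces its first n decimal digits using exact integer arithmetic (a toy illustration; practical computable-number libraries work differently).

```python
from math import isqrt

def sqrt2_digits(n: int) -> str:
    """First n decimal digits of sqrt(2), via floor(sqrt(2) * 10^n)."""
    s = isqrt(2 * 10 ** (2 * n))  # integer square root of 2 * 10^(2n)
    t = str(s)
    return t[0] + "." + t[1:]

print(sqrt2_digits(30))  # 1.414213562373095048801688724209
```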
== Extensions of the concept == === p-adic numbers === The p-adic numbers may have infinitely long expansions to the left of the decimal point, in the same way that real numbers may have infinitely long expansions to the right. The number system that results depends on what base is used for the digits: any base is possible, but a prime number base provides the best mathematical properties. The set of the p-adic numbers contains the rational numbers, but is not contained in the complex numbers. The elements of an algebraic function field over a finite field and algebraic numbers have many similar properties (see Function field analogy). Therefore, they are often regarded as numbers by number theorists. The p-adic numbers play an important role in this analogy. === Hypercomplex numbers === Some number systems that are not included in the complex numbers may be constructed from the real numbers $\mathbb{R}$ in a way that generalizes the construction of the complex numbers. They are sometimes called hypercomplex numbers. They include the quaternions $\mathbb{H}$, introduced by Sir William Rowan Hamilton, in which multiplication is not commutative, the octonions $\mathbb{O}$, in which multiplication is not associative in addition to not being commutative, and the sedenions $\mathbb{S}$, in which multiplication is not alternative, in addition to being neither associative nor commutative. The hypercomplex numbers include one real unit together with $2^n - 1$ imaginary units, where n is a non-negative integer. For example, quaternions can generally be represented using the form $a + b\,\mathbf{i} + c\,\mathbf{j} + d\,\mathbf{k}$, where the coefficients a, b, c, d are real numbers, and i, j, k are 3 different imaginary units. Each hypercomplex number system is a subset of the next hypercomplex number system of double dimension obtained via the Cayley–Dickson construction. For example, the 4-dimensional quaternions $\mathbb{H}$ are a subset of the 8-dimensional octonions $\mathbb{O}$, which are in turn a subset of the 16-dimensional sedenions $\mathbb{S}$, in turn a subset of the 32-dimensional trigintaduonions $\mathbb{T}$, and so on ad infinitum, with $2^n$ dimensions for any non-negative integer n. Including the complex and real numbers and their subsets, this can be expressed symbolically as: $\mathbb{N} \subset \mathbb{Z} \subset \mathbb{Q} \subset \mathbb{R} \subset \mathbb{C} \subset \mathbb{H} \subset \mathbb{O} \subset \mathbb{S} \subset \mathbb{T} \subset \cdots$ Alternatively, starting from the real numbers $\mathbb{R}$, which have zero complex units, this can be expressed as ${\mathcal{C}}_0 \subset {\mathcal{C}}_1 \subset {\mathcal{C}}_2 \subset {\mathcal{C}}_3 \subset {\mathcal{C}}_4 \subset {\mathcal{C}}_5 \subset \cdots \subset {\mathcal{C}}_n$ with ${\mathcal{C}}_n$ containing $2^n$ dimensions. === Transfinite numbers === For dealing with infinite sets, the natural numbers have been generalized to the ordinal numbers and to the cardinal numbers. The former gives the ordering of the set, while the latter gives its size. For finite sets, both ordinal and cardinal numbers are identified with the natural numbers.
In the infinite case, many ordinal numbers correspond to the same cardinal number. === Nonstandard numbers === Hyperreal numbers are used in non-standard analysis. The hyperreals, or nonstandard reals (usually denoted as *R), denote an ordered field that is a proper extension of the ordered field of real numbers R and satisfies the transfer principle. This principle allows true first-order statements about R to be reinterpreted as true first-order statements about *R. Superreal and surreal numbers extend the real numbers by adding infinitesimally small numbers and infinitely large numbers, but still form fields.
Wikipedia/Number_systems
Public-key cryptography, or asymmetric cryptography, is the field of cryptographic systems that use pairs of related keys. Each key pair consists of a public key and a corresponding private key. Key pairs are generated with cryptographic algorithms based on mathematical problems termed one-way functions. Security of public-key cryptography depends on keeping the private key secret; the public key can be openly distributed without compromising security. There are many kinds of public-key cryptosystems, with different security goals, including digital signature, Diffie–Hellman key exchange, public-key key encapsulation, and public-key encryption. Public key algorithms are fundamental security primitives in modern cryptosystems, including applications and protocols that offer assurance of the confidentiality and authenticity of electronic communications and data storage. They underpin numerous Internet standards, such as Transport Layer Security (TLS), SSH, S/MIME, and PGP. Compared to symmetric cryptography, public-key cryptography can be too slow for many purposes, so these protocols often combine symmetric cryptography with public-key cryptography in hybrid cryptosystems. == Description == Before the mid-1970s, all cipher systems used symmetric key algorithms, in which the same cryptographic key is used with the underlying algorithm by both the sender and the recipient, who must both keep it secret. Of necessity, the key in every such system had to be exchanged between the communicating parties in some secure way prior to any use of the system – for instance, via a secure channel. This requirement is never trivial and very rapidly becomes unmanageable as the number of participants increases, or when secure channels are not available, or when, as is sensible cryptographic practice, keys are frequently changed. In particular, if messages are meant to be secure from other users, a separate key is required for each possible pair of users. By contrast, in a public-key cryptosystem, the public keys can be disseminated widely and openly, and only the corresponding private keys need be kept secret. The two best-known types of public key cryptography are digital signature and public-key encryption: In a digital signature system, a sender can use a private key together with a message to create a signature. Anyone with the corresponding public key can verify whether the signature matches the message, but a forger who does not know the private key cannot find any message/signature pair that will pass verification with the public key. For example, a software publisher can create a signature key pair and include the public key in software installed on computers. Later, the publisher can distribute an update to the software signed using the private key, and any computer receiving an update can confirm it is genuine by verifying the signature using the public key. As long as the software publisher keeps the private key secret, even if a forger can distribute malicious updates to computers, they cannot convince the computers that any malicious updates are genuine.
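The software-update scenario above can be made concrete with a short sketch. It uses the third-party Python package cryptography (an assumption: the package must be installed separately), and the Ed25519 scheme stands in for whatever algorithm a real publisher would choose.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

private_key = ed25519.Ed25519PrivateKey.generate()  # kept secret by the publisher
public_key = private_key.public_key()               # shipped with the installed software

update = b"contents of a hypothetical update file"
signature = private_key.sign(update)                # done once, by the publisher

try:
    public_key.verify(signature, update)            # raises InvalidSignature if forged
    print("update accepted as genuine")
except InvalidSignature:
    print("update rejected")
```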
In a public-key encryption system, anyone with a public key can encrypt a message, yielding a ciphertext, but only those who know the corresponding private key can decrypt the ciphertext to obtain the original message. For example, a journalist can publish the public key of an encryption key pair on a web site so that sources can send secret messages to the news organization in ciphertext. Only the journalist who knows the corresponding private key can decrypt the ciphertexts to obtain the sources' messages—an eavesdropper reading email on its way to the journalist cannot decrypt the ciphertexts. However, public-key encryption does not conceal metadata like what computer a source used to send a message, when they sent it, or how long it is. Public-key encryption on its own also does not tell the recipient anything about who sent a message—it just conceals the content of the message. One important issue is confidence/proof that a particular public key is authentic, i.e. that it is correct and belongs to the person or entity claimed, and has not been tampered with or replaced by some (perhaps malicious) third party. There are several possible approaches, including: A public key infrastructure (PKI), in which one or more third parties – known as certificate authorities – certify ownership of key pairs. TLS relies upon this. This implies that the PKI system (software, hardware, and management) must be trusted by all involved. A "web of trust" decentralizes authentication by using individual endorsements of links between a user and the public key belonging to that user. PGP uses this approach, in addition to lookup in the domain name system (DNS). The DKIM system for digitally signing emails also uses this approach. == Applications == The most obvious application of a public key encryption system is for encrypting communication to provide confidentiality – a message that a sender encrypts using the recipient's public key, which can be decrypted only by the recipient's paired private key. Another application in public key cryptography is the digital signature. Digital signature schemes can be used for sender authentication. Non-repudiation systems use digital signatures to ensure that one party cannot successfully dispute its authorship of a document or communication. Further applications built on this foundation include: digital cash, password-authenticated key agreement, time-stamping services and non-repudiation protocols. == Hybrid cryptosystems == Because asymmetric key algorithms are nearly always much more computationally intensive than symmetric ones, it is common to use a public/private asymmetric key-exchange algorithm to encrypt and exchange a symmetric key, which is then used by symmetric-key cryptography to transmit data using the now-shared symmetric key for a symmetric key encryption algorithm. PGP, SSH, and the SSL/TLS family of schemes use this procedure; they are thus called hybrid cryptosystems. The initial asymmetric cryptography-based key exchange to share a server-generated symmetric key from the server to client has the advantage of not requiring that a symmetric key be pre-shared manually, such as on printed paper or discs transported by a courier, while providing the higher data throughput of symmetric key cryptography over asymmetric key cryptography for the remainder of the shared connection. == Weaknesses == As with all security-related systems, there are various potential weaknesses in public-key cryptography.
Aside from poor choice of an asymmetric key algorithm (there are few that are widely regarded as satisfactory) or too short a key length, the chief security risk is that the private key of a pair becomes known. All security of messages, authentication, etc., will then be lost. Additionally, with the advent of quantum computing, many asymmetric key algorithms are considered vulnerable to attacks, and new quantum-resistant schemes are being developed to overcome the problem. === Algorithms === All public key schemes are in theory susceptible to a "brute-force key search attack". However, such an attack is impractical if the amount of computation needed to succeed – termed the "work factor" by Claude Shannon – is out of reach of all potential attackers. In many cases, the work factor can be increased by simply choosing a longer key. But other algorithms may inherently have much lower work factors, making resistance to a brute-force attack (e.g., from longer keys) irrelevant. Some special and specific algorithms have been developed to aid in attacking some public key encryption algorithms; both RSA and ElGamal encryption have known attacks that are much faster than the brute-force approach. None of these are sufficiently improved to be actually practical, however. Major weaknesses have been found for several formerly promising asymmetric key algorithms. The "knapsack packing" algorithm was found to be insecure after the development of a new attack. As with all cryptographic functions, public-key implementations may be vulnerable to side-channel attacks that exploit information leakage to simplify the search for a secret key. These are often independent of the algorithm being used. Research is underway to both discover, and to protect against, new attacks. === Alteration of public keys === Another potential security vulnerability in using asymmetric keys is the possibility of a "man-in-the-middle" attack, in which the communication of public keys is intercepted by a third party (the "man in the middle") and then modified to provide different public keys instead. Encrypted messages and responses must, in all instances, be intercepted, decrypted, and re-encrypted by the attacker using the correct public keys for the different communication segments so as to avoid suspicion. A communication is said to be insecure where data is transmitted in a manner that allows for interception (also called "sniffing"). These terms refer to reading the sender's private data in its entirety. A communication is particularly unsafe when interceptions cannot be prevented or monitored by the sender. A man-in-the-middle attack can be difficult to implement due to the complexities of modern security protocols. However, the task becomes simpler when a sender is using insecure media such as public networks, the Internet, or wireless communication. In these cases an attacker can compromise the communications infrastructure rather than the data itself. A hypothetical malicious staff member at an Internet service provider (ISP) might find a man-in-the-middle attack relatively straightforward. Capturing the public key would only require searching for the key as it gets sent through the ISP's communications hardware; in properly implemented asymmetric key schemes, this is not a significant risk. In some advanced man-in-the-middle attacks, one side of the communication will see the original data while the other will receive a malicious variant.
Asymmetric man-in-the-middle attacks can prevent users from realizing their connection is compromised. This remains so even when one user's data is known to be compromised, because the data appears fine to the other user. This can lead to confusing disagreements between users such as "it must be on your end!" when neither user is at fault. Hence, man-in-the-middle attacks are only fully preventable when the communications infrastructure is physically controlled by one or both parties, such as via a wired route inside the sender's own building. In summation, public keys are easier to alter when the communications hardware used by a sender is controlled by an attacker. === Public key infrastructure === One approach to prevent such attacks involves the use of a public key infrastructure (PKI): a set of roles, policies, and procedures needed to create, manage, distribute, use, store and revoke digital certificates and manage public-key encryption. However, this has potential weaknesses. For example, the certificate authority issuing the certificate must be trusted by all participating parties to have properly checked the identity of the key-holder, to have ensured the correctness of the public key when it issues a certificate, to be secure from computer piracy, and to have made arrangements with all participants to check all their certificates before protected communications can begin. Web browsers, for instance, are supplied with a long list of "self-signed identity certificates" from PKI providers – these are used to check the bona fides of the certificate authority and then, in a second step, the certificates of potential communicators. An attacker who could subvert one of those certificate authorities into issuing a certificate for a bogus public key could then mount a "man-in-the-middle" attack as easily as if the certificate scheme were not used at all. An attacker who penetrates an authority's servers and obtains its store of certificates and keys (public and private) would be able to spoof, masquerade, decrypt, and forge transactions without limit, assuming that they were able to place themselves in the communication stream. Despite its theoretical and potential problems, public key infrastructure is widely used. Examples include TLS and its predecessor SSL, which are commonly used to provide security for web browser transactions (for example, most websites utilize TLS for HTTPS). Aside from the resistance to attack of a particular key pair, the security of the certification hierarchy must be considered when deploying public key systems. A certificate authority – usually a purpose-built program running on a server computer – vouches for the identities assigned to specific private keys by producing a digital certificate. Public key digital certificates are typically valid for several years at a time, so the associated private keys must be held securely over that time. When a private key used for certificate creation higher in the PKI server hierarchy is compromised, or accidentally disclosed, then a "man-in-the-middle attack" is possible, making any subordinate certificate wholly insecure. === Unencrypted metadata === Most of the available public-key encryption software does not conceal metadata in the message header, which might include the identities of the sender and recipient, the sending date, the subject field, and the software they use, etc. Rather, only the body of the message is concealed and can only be decrypted with the private key of the intended recipient.
This means that a third party could construct quite a detailed model of participants in a communication network, along with the subjects being discussed, even if the message body itself is hidden. However, there has been a recent demonstration of messaging with encrypted headers, which obscures the identities of the sender and recipient, and significantly reduces the available metadata to a third party. The concept is based around an open repository containing separately encrypted metadata blocks and encrypted messages. Only the intended recipient is able to decrypt the metadata block, and having done so they can identify and download their messages and decrypt them. Such a messaging system is at present in an experimental phase and not yet deployed. Scaling this method would reveal to the third party only the inbox server being used by the recipient and the timestamp of sending and receiving. The server could be shared by thousands of users, making social network modelling much more challenging. == History == During the early history of cryptography, two parties would rely upon a key that they would exchange by means of a secure, but non-cryptographic, method such as a face-to-face meeting, or a trusted courier. This key, which both parties must then keep absolutely secret, could then be used to exchange encrypted messages. A number of significant practical difficulties arise with this approach to distributing keys. === Anticipation === In his 1874 book The Principles of Science, William Stanley Jevons wrote: Can the reader say what two numbers multiplied together will produce the number 8616460799? I think it unlikely that anyone but myself will ever know. Here he described the relationship of one-way functions to cryptography, and went on to discuss specifically the factorization problem used to create a trapdoor function. In July 1996, mathematician Solomon W. Golomb said: "Jevons anticipated a key feature of the RSA Algorithm for public key cryptography, although he certainly did not invent the concept of public key cryptography." === Classified discovery === In 1970, James H. Ellis, a British cryptographer at the UK Government Communications Headquarters (GCHQ), conceived of the possibility of "non-secret encryption" (now called public key cryptography), but could see no way to implement it. In 1973, his colleague Clifford Cocks implemented what has become known as the RSA encryption algorithm, giving a practical method of "non-secret encryption", and in 1974 another GCHQ mathematician and cryptographer, Malcolm J. Williamson, developed what is now known as Diffie–Hellman key exchange. The scheme was also passed to the US National Security Agency. Both organisations had a military focus and only limited computing power was available in any case; the potential of public key cryptography remained unrealised by either organization: I judged it most important for military use ... if you can share your key rapidly and electronically, you have a major advantage over your opponent. Only at the end of the evolution from Berners-Lee designing an open internet architecture for CERN, its adaptation and adoption for the Arpanet ... did public key cryptography realise its full potential. —Ralph Benjamin These discoveries were not publicly acknowledged for 27 years, until the research was declassified by the British government in 1997.
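Jevons's challenge number is no obstacle to a modern computer: even naive trial division, as in the sketch below, recovers its two prime factors almost instantly.

```python
def smallest_factor_pair(n: int) -> tuple[int, int]:
    """Factor an odd composite n by naive trial division."""
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return n, 1  # n itself is prime

print(smallest_factor_pair(8616460799))  # (89681, 96079)
```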
=== Public discovery === In 1976, an asymmetric key cryptosystem was published by Whitfield Diffie and Martin Hellman who, influenced by Ralph Merkle's work on public key distribution, disclosed a method of public key agreement. This method of key exchange, which uses exponentiation in a finite field, came to be known as Diffie–Hellman key exchange. This was the first published practical method for establishing a shared secret key over an authenticated (but not confidential) communications channel without using a prior shared secret. Merkle's "public key-agreement technique" became known as Merkle's Puzzles, and was invented in 1974 and only published in 1978. This makes asymmetric encryption a rather new field in cryptography, although cryptography itself dates back more than 2,000 years. In 1977, a generalization of Cocks's scheme was independently invented by Ron Rivest, Adi Shamir and Leonard Adleman, all then at MIT. The latter authors published their work in 1978, and the algorithm came to be known as RSA, from their initials. RSA uses exponentiation modulo a product of two very large primes, to encrypt and decrypt, performing both public key encryption and public key digital signatures. Its security is connected to the extreme difficulty of factoring large integers, a problem for which there is no known efficient general technique. A description of the algorithm was published in the Mathematical Games column in the August 1977 issue of Scientific American. Since the 1970s, a large number and variety of encryption, digital signature, key agreement, and other techniques have been developed, including the Rabin cryptosystem, ElGamal encryption, DSA and ECC. == Examples == Examples of well-regarded asymmetric key techniques for varied purposes include: the Diffie–Hellman key exchange protocol; DSS (Digital Signature Standard), which incorporates the Digital Signature Algorithm; ElGamal; elliptic-curve cryptography, including the Elliptic Curve Digital Signature Algorithm (ECDSA), elliptic-curve Diffie–Hellman (ECDH), Ed25519 and Ed448 (EdDSA), and X25519 and X448 (ECDH); various password-authenticated key agreement techniques; the Paillier cryptosystem; the RSA encryption algorithm (PKCS#1); the Cramer–Shoup cryptosystem; and the YAK authenticated key agreement protocol. Examples of asymmetric key algorithms not yet widely adopted include the NTRUEncrypt cryptosystem, Kyber, and the McEliece cryptosystem. An example of a notable – yet insecure – asymmetric key algorithm is the Merkle–Hellman knapsack cryptosystem. Examples of protocols using asymmetric key algorithms include S/MIME; GPG, an implementation of OpenPGP and an Internet Standard; EMV (EMV Certificate Authority); IPsec; PGP; ZRTP, a secure VoIP protocol; Transport Layer Security, standardized by the IETF, and its predecessor Secure Socket Layer; SILC; SSH; Bitcoin; and Off-the-Record Messaging.
Wikipedia/Public-key_cryptography
In mathematics, arithmetic combinatorics is a field in the intersection of number theory, combinatorics, ergodic theory and harmonic analysis. == Scope == Arithmetic combinatorics is about combinatorial estimates associated with arithmetic operations (addition, subtraction, multiplication, and division). Additive combinatorics is the special case when only the operations of addition and subtraction are involved. Ben Green explains arithmetic combinatorics in his review of "Additive Combinatorics" by Tao and Vu. == Important results == === Szemerédi's theorem === Szemerédi's theorem is a result in arithmetic combinatorics concerning arithmetic progressions in subsets of the integers. In 1936, Erdős and Turán conjectured that every set of integers A with positive natural density contains a k-term arithmetic progression for every k. This conjecture, which became Szemerédi's theorem, generalizes the statement of van der Waerden's theorem. === Green–Tao theorem and extensions === The Green–Tao theorem, proved by Ben Green and Terence Tao in 2004, states that the sequence of prime numbers contains arbitrarily long arithmetic progressions. In other words, there exist arithmetic progressions of primes with k terms, where k can be any natural number. The proof is an extension of Szemerédi's theorem. In 2006, Terence Tao and Tamar Ziegler extended the result to cover polynomial progressions. More precisely, given any integer-valued polynomials $P_1, \dots, P_k$ in one unknown m, all with constant term 0, there are infinitely many integers x, m such that $x + P_1(m), \dots, x + P_k(m)$ are simultaneously prime. The special case when the polynomials are $m, 2m, \dots, km$ implies the previous result that there are length-k arithmetic progressions of primes. === Breuillard–Green–Tao theorem === The Breuillard–Green–Tao theorem, proved by Emmanuel Breuillard, Ben Green, and Terence Tao in 2011, gives a complete classification of approximate groups. This result can be seen as a nonabelian version of Freiman's theorem, and a generalization of Gromov's theorem on groups of polynomial growth. == Example == If A is a set of N integers, how large or small can the sumset $A + A := \{x + y : x, y \in A\}$, the difference set $A - A := \{x - y : x, y \in A\}$, and the product set $A \cdot A := \{xy : x, y \in A\}$ be, and how are the sizes of these sets related? (Not to be confused: the terms difference set and product set can have other meanings.) == Extensions == The sets being studied may also be subsets of algebraic structures other than the integers, for example, groups, rings and fields. == References == Łaba, Izabella (2008). "From harmonic analysis to arithmetic combinatorics". Bull. Amer. Math. Soc. 45 (1): 77–115. doi:10.1090/S0273-0979-07-01189-5. Additive Combinatorics and Theoretical Computer Science, Luca Trevisan, SIGACT News, June 2009. Bibak, Khodakhast (2013). "Additive combinatorics with a view towards computer science and cryptography". In Borwein, Jonathan M.; Shparlinski, Igor E.; Zudilin, Wadim (eds.). Number Theory and Related Fields: In Memory of Alf van der Poorten. Vol. 43. New York: Springer Proceedings in Mathematics & Statistics. pp. 99–128. arXiv:1108.3790. doi:10.1007/978-1-4614-6642-0_4. ISBN 978-1-4614-6642-0. S2CID 14979158.
Open problems in additive combinatorics, E. Croot, V. Lev. From Rotating Needles to Stability of Waves: Emerging Connections between Combinatorics, Analysis, and PDE, Terence Tao, AMS Notices, March 2001. Tao, Terence; Vu, Van H. (2006). Additive combinatorics. Cambridge Studies in Advanced Mathematics. Vol. 105. Cambridge: Cambridge University Press. ISBN 0-521-85386-9. MR 2289012. Zbl 1127.11002. Granville, Andrew; Nathanson, Melvyn B.; Solymosi, József, eds. (2007). Additive Combinatorics. CRM Proceedings & Lecture Notes. Vol. 43. American Mathematical Society. ISBN 978-0-8218-4351-2. Zbl 1124.11003. Mann, Henry (1976). Addition Theorems: The Addition Theorems of Group Theory and Number Theory (Corrected reprint of 1965 Wiley ed.). Huntington, New York: Robert E. Krieger Publishing Company. ISBN 0-88275-418-1. Nathanson, Melvyn B. (1996). Additive Number Theory: the Classical Bases. Graduate Texts in Mathematics. Vol. 164. New York: Springer-Verlag. ISBN 0-387-94656-X. MR 1395371. Nathanson, Melvyn B. (1996). Additive Number Theory: Inverse Problems and the Geometry of Sumsets. Graduate Texts in Mathematics. Vol. 165. New York: Springer-Verlag. ISBN 0-387-94655-1. MR 1477155. == Further reading == Some Highlights of Arithmetic Combinatorics, resources by Terence Tao. Additive Combinatorics: Winter 2007, K. Soundararajan. Earliest Connections of Additive Combinatorics and Computer Science, Luca Trevisan.
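Returning to the Example section above, a tiny computation makes the three sets concrete. The sketch below uses an arbitrary four-element set; note that $|A + A| \geq 2|A| - 1$ always holds for finite sets of integers, with equality exactly when A is an arithmetic progression.

```python
A = {1, 2, 3, 5}
sumset = {x + y for x in A for y in A}          # A + A
difference_set = {x - y for x in A for y in A}  # A - A
product_set = {x * y for x in A for y in A}     # A . A

print(sorted(sumset))  # [2, 3, 4, 5, 6, 7, 8, 10]
print(len(sumset), len(difference_set), len(product_set))  # 8 9 10
```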
Wikipedia/Combinatorial_number_theory
The chakravala method (Sanskrit: चक्रवाल विधि) is a cyclic algorithm to solve indeterminate quadratic equations, including Pell's equation. It is commonly attributed to Bhāskara II (c. 1114 – 1185 CE), although some attribute it to Jayadeva (c. 950 – 1000 CE). Jayadeva pointed out that Brahmagupta's approach to solving equations of this type could be generalized, and he then described this general method, which was later refined by Bhāskara II in his Bijaganita treatise. He called it the chakravala method: chakra meaning "wheel" in Sanskrit, a reference to the cyclic nature of the algorithm. C.-O. Selenius held that no European performances at the time of Bhāskara, nor much later, exceeded its marvellous height of mathematical complexity. This method is also known as the cyclic method and contains traces of mathematical induction. == History == Chakra in Sanskrit means cycle. As per popular legend, chakravala indicates a mythical range of mountains which orbits around the Earth like a wall, not limited by light and darkness. Brahmagupta in 628 CE studied indeterminate quadratic equations, including Pell's equation $x^2 = Ny^2 + 1$, for minimum integers x and y. Brahmagupta could solve it for several N, but not all. Jayadeva and Bhaskara offered the first complete solution to the equation, using the chakravala method to find for $x^2 = 61y^2 + 1$ the solution $x = 1766319049,\ y = 226153980$. This case was notorious for its difficulty, and was first solved in Europe by Brouncker in 1657–58 in response to a challenge by Fermat, using continued fractions. A method for the general problem was first completely described rigorously by Lagrange in 1766. Lagrange's method, however, requires the calculation of 21 successive convergents of the simple continued fraction for the square root of 61, while the chakravala method is much simpler. Selenius, in his assessment of the chakravala method, states "The method represents a best approximation algorithm of minimal length that, owing to several minimization properties, with minimal effort and avoiding large numbers automatically produces the best solutions to the equation. The chakravala method anticipated the European methods by more than a thousand years. But no European performances in the whole field of algebra at a time much later than Bhaskara's, nay nearly equal up to our times, equalled the marvellous complexity and ingenuity of chakravala." Hermann Hankel calls the chakravala method "the finest thing achieved in the theory of numbers before Lagrange." == The method == From Brahmagupta's identity, we observe that for given N, $(x_1 x_2 + N y_1 y_2)^2 - N (x_1 y_2 + x_2 y_1)^2 = (x_1^2 - N y_1^2)(x_2^2 - N y_2^2).$ For the equation $x^2 - N y^2 = k$, this allows the "composition" (samāsa) of two solution triples $(x_1, y_1, k_1)$ and $(x_2, y_2, k_2)$ into a new triple $(x_1 x_2 + N y_1 y_2,\ x_1 y_2 + x_2 y_1,\ k_1 k_2).$ In the general method, the main idea is that any triple $(a, b, k)$ (that is, one which satisfies $a^2 - N b^2 = k$) can be composed with the trivial triple $(m, 1, m^2 - N)$ to get the new triple $(am + Nb,\ a + bm,\ k(m^2 - N))$ for any m. Assuming we started with a triple for which $\gcd(a, b) = 1$, this can be scaled down by k (this is Bhaskara's lemma): $a^2 - N b^2 = k \Rightarrow \left(\frac{am + Nb}{k}\right)^2 - N \left(\frac{a + bm}{k}\right)^2 = \frac{m^2 - N}{k}.$ Since the signs inside the squares do not matter, the following substitutions are possible: $a \leftarrow \frac{am + Nb}{|k|},\quad b \leftarrow \frac{a + bm}{|k|},\quad k \leftarrow \frac{m^2 - N}{k}.$ When a positive integer m is chosen so that (a + bm)/k is an integer, so are the other two numbers in the triple. Among such m, the method chooses one that minimizes the absolute value of $m^2 - N$ and hence that of $(m^2 - N)/k$. Then the substitution relations are applied for m equal to the chosen value. This results in a new triple (a, b, k). The process is repeated until a triple with $k = 1$ is found. This method always terminates with a solution (proved by Lagrange in 1768). Optionally, we can stop when k is ±1, ±2, or ±4, as Brahmagupta's approach gives a solution for those cases. == Brahmagupta's composition method == In AD 628, Brahmagupta discovered a general way to find x and y satisfying $x^2 = N y^2 + 1$ when given $a^2 = N b^2 + k$ with k equal to ±1, ±2, or ±4. === k = ±1 === Using Brahmagupta's identity to compose the triple $(a, b, k)$ with itself: $(a^2 + N b^2)^2 - N (2ab)^2 = k^2 \Rightarrow (2a^2 - k)^2 - N (2ab)^2 = k^2.$ The new triple can be expressed as $(2a^2 - k,\ 2ab,\ k^2)$. Substituting $k = -1$ gives a solution: $x = 2a^2 + 1,\ y = 2ab$. For $k = 1$, the original $(a, b)$ was already a solution. Substituting $k = 1$ yields a second: $x = 2a^2 - 1,\ y = 2ab$. === k = ±2 === Again using the equation, $(2a^2 - k)^2 - N (2ab)^2 = k^2 \Rightarrow \left(\frac{2a^2 - k}{k}\right)^2 - N \left(\frac{2ab}{k}\right)^2 = 1.$ Substituting $k = 2$: $x = a^2 - 1,\ y = ab$. Substituting $k = -2$: $x = a^2 + 1,\ y = ab$. === k = 4 === Substituting $k = 4$ into the equation $\left(\frac{2a^2 - k}{k}\right)^2 - N \left(\frac{2ab}{k}\right)^2 = 1$ creates the triple $\left(\frac{a^2 - 2}{2},\ \frac{ab}{2},\ 1\right),$
which is a solution if $a$ is even: $x = \frac{a^2 - 2}{2},\ y = \frac{ab}{2}$. If a is odd, start with the equations $\left(\frac{a}{2}\right)^2 - N \left(\frac{b}{2}\right)^2 = 1$ and $\left(\frac{2a^2 - 4}{4}\right)^2 - N \left(\frac{2ab}{4}\right)^2 = 1,$ leading to the triples $\left(\frac{a}{2}, \frac{b}{2}, 1\right)$ and $\left(\frac{a^2 - 2}{2}, \frac{ab}{2}, 1\right)$. Composing the triples gives $\left(\frac{a}{2}(a^2 - 3)\right)^2 - N \left(\frac{b}{2}(a^2 - 1)\right)^2 = 1.$ When $a$ is odd, $x = \frac{a}{2}(a^2 - 3),\ y = \frac{b}{2}(a^2 - 1)$. === k = −4 === When $k = -4$, then $\left(\frac{a}{2}\right)^2 - N \left(\frac{b}{2}\right)^2 = -1$. Composing with itself yields $\left(\frac{a^2 + N b^2}{4}\right)^2 - N \left(\frac{ab}{2}\right)^2 = 1 \Rightarrow \left(\frac{a^2 + 2}{2}\right)^2 - N \left(\frac{ab}{2}\right)^2 = 1.$ Composing with itself again yields $\left(\frac{(a^2 + 2)^2 + N a^2 b^2}{4}\right)^2 - N \left(\frac{ab(a^2 + 2)}{2}\right)^2 = 1 \Rightarrow \left(\frac{a^4 + 4a^2 + 2}{2}\right)^2 - N \left(\frac{ab(a^2 + 2)}{2}\right)^2 = 1.$ Finally, from the earlier equations, compose the triples $\left(\frac{a^2 + 2}{2}, \frac{ab}{2}, 1\right)$ and $\left(\frac{a^4 + 4a^2 + 2}{2}, \frac{ab(a^2 + 2)}{2}, 1\right)$ to get $\left(\frac{(a^2 + 2)(a^4 + 4a^2 + 2) + N a^2 b^2 (a^2 + 2)}{4}\right)^2 - N \left(\frac{ab(a^4 + 4a^2 + 3)}{2}\right)^2 = 1 \Rightarrow \left(\frac{(a^2 + 2)(a^4 + 4a^2 + 1)}{2}\right)^2 - N \left(\frac{ab(a^2 + 3)(a^2 + 1)}{2}\right)^2 = 1 \Rightarrow \left(\frac{(a^2 + 2)\left[(a^2 + 1)(a^2 + 3) - 2\right]}{2}\right)^2 - N \left(\frac{ab(a^2 + 3)(a^2 + 1)}{2}\right)^2 = 1.$ This gives us the solutions $x = \frac{(a^2 + 2)\left[(a^2 + 1)(a^2 + 3) - 2\right]}{2},\quad y = \frac{ab(a^2 + 3)(a^2 + 1)}{2}.$ (Note that $k = -4$ is useful for finding a solution to Pell's equation, but it does not always yield the smallest integer pair; e.g. $36^2 - 52 \cdot 5^2 = -4$. The equations above give $x = 1093435849,\ y = 151632270$, which when put into Pell's equation yields $1195601955878350801 - 1195601955878350800 = 1$, which works, but so does $x = 649,\ y = 90$ for $N = 52$.)
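The cycle described above is short enough to state as a program. The following is a minimal sketch (not taken from the historical sources) that follows the rule "pick the positive m with k dividing a + bm that minimizes |m² − N|"; it assumes N is a positive nonsquare and uses Python's three-argument pow for the modular inverse (Python 3.8+). The worked examples that follow trace the same cycle by hand.

```python
from math import isqrt

def chakravala(N: int) -> tuple[int, int]:
    """Minimal positive solution (x, y) of x^2 - N*y^2 = 1, N a positive nonsquare."""
    # Initial triple (a, b, k): b = 1, a the integer nearest sqrt(N), k = a^2 - N.
    r = isqrt(N)
    a = r if N - r * r <= (r + 1) ** 2 - N else r + 1
    b, k = 1, a * a - N
    while k != 1:
        # m must satisfy a + b*m ≡ 0 (mod |k|); gcd(b, k) = 1 throughout,
        # so m lies in the single residue class m ≡ -a * b^(-1) (mod |k|).
        m0 = (-a * pow(b, -1, abs(k))) % abs(k)
        # The two members of that class straddling sqrt(N); pick the one
        # minimizing |m^2 - N| (Bhaskara's minimization rule).
        q = (r - m0) // abs(k)
        candidates = [m0 + abs(k) * q, m0 + abs(k) * (q + 1)]
        m = min((c for c in candidates if c > 0), key=lambda c: abs(c * c - N))
        # Bhaskara's lemma: scale the composed triple down by |k|.
        a, b, k = (a * m + N * b) // abs(k), (a + b * m) // abs(k), (m * m - N) // k
    return a, b

print(chakravala(61))  # (1766319049, 226153980)
print(chakravala(67))  # (48842, 5967)
```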
== Examples == === n = 61 === The n = 61 case (determining an integer solution satisfying {\displaystyle a^{2}-61b^{2}=1} ) was given by Bhaskara as an example; many centuries later, the same case was issued as a challenge by Fermat. We start with a solution {\displaystyle a^{2}-61b^{2}=k} for any k found by any means. In this case we can let b be 1; since {\displaystyle 8^{2}-61\cdot 1^{2}=3} , we have the triple {\displaystyle (a,b,k)=(8,1,3)} . Composing it with {\displaystyle (m,1,m^{2}-61)} gives the triple {\displaystyle (8m+61,8+m,3(m^{2}-61))} , which is scaled down (or Bhaskara's lemma is directly used) to get: {\displaystyle \left({\frac {8m+61}{3}},{\frac {8+m}{3}},{\frac {m^{2}-61}{3}}\right).} For 3 to divide {\displaystyle 8+m} and {\displaystyle |m^{2}-61|} to be minimal, we choose {\displaystyle m=7} , so that we have the triple {\displaystyle (39,5,-4)} . Now that k is −4, we can use Brahmagupta's idea: the triple can be scaled down to the rational triple {\displaystyle (39/2,5/2,-1)} , which composed with itself gives {\displaystyle (1523/2,195/2,1)} , and composing that triple with itself twice more yields the integer triple {\displaystyle (1766319049,\,226153980,\,1)} . (Equivalently, Brahmagupta's formulas for {\displaystyle k=-4} applied to {\displaystyle (a,b)=(39,5)} give the same x and y directly.) This is the minimal integer solution. === n = 67 === Suppose we are to solve {\displaystyle x^{2}-67y^{2}=1} for x and y. We start with a solution {\displaystyle a^{2}-67b^{2}=k} for any k found by any means; in this case we can let b be 1, thus producing {\displaystyle 8^{2}-67\cdot 1^{2}=-3} . At each step, we find an m > 0 such that k divides a + bm, and |m² − 67| is minimal. We then update a, b, and k to {\displaystyle {\frac {am+Nb}{|k|}},{\frac {a+bm}{|k|}}} and {\displaystyle {\frac {m^{2}-N}{k}}} respectively. First iteration We have {\displaystyle (a,b,k)=(8,1,-3)} . We want a positive integer m such that k divides a + bm, i.e. 3 divides 8 + m, and |m² − 67| is minimal. The first condition implies that m is of the form 3t + 1 (i.e. 1, 4, 7, 10, …), and among such m, the minimal value is attained for m = 7. Replacing (a, b, k) with {\displaystyle \left({\frac {am+Nb}{|k|}},{\frac {a+bm}{|k|}},{\frac {m^{2}-N}{k}}\right)} , we get the new values {\displaystyle a=(8\cdot 7+67\cdot 1)/3=41,\ b=(8+1\cdot 7)/3=5,\ k=(7^{2}-67)/(-3)=6} . That is, we have the new solution: {\displaystyle 41^{2}-67\cdot 5^{2}=6.} At this point, one round of the cyclic algorithm is complete. Second iteration We now repeat the process. We have {\displaystyle (a,b,k)=(41,5,6)} . We want an m > 0 such that k divides a + bm, i.e. 6 divides 41 + 5m, and |m² − 67| is minimal. The first condition implies that m is of the form 6t + 5 (i.e. 5, 11, 17, …), and among such m, |m² − 67| is minimal for m = 5.
This leads to the new solution a = (41⋅5 + 67⋅5)/6 = 90, b = (41 + 5⋅5)/6 = 11, k = (5² − 67)/6 = −7: {\displaystyle 90^{2}-67\cdot 11^{2}=-7.} Third iteration For 7 to divide 90 + 11m, we must have m = 2 + 7t (i.e. 2, 9, 16, …), and among such m, we pick m = 9. This gives {\displaystyle 221^{2}-67\cdot 27^{2}=-2.} Final solution At this point, we could continue with the cyclic method (and it would end after seven iterations), but since the right-hand side is among ±1, ±2, ±4, we can also use Brahmagupta's observation directly. Composing the triple (221, 27, −2) with itself, we get {\displaystyle \left({\frac {221^{2}+67\cdot 27^{2}}{2}}\right)^{2}-67\cdot (221\cdot 27)^{2}=1,} that is, we have the integer solution: {\displaystyle 48842^{2}-67\cdot 5967^{2}=1.} This equation approximates {\displaystyle {\sqrt {67}}} as {\displaystyle {\frac {48842}{5967}}} to within a margin of about {\displaystyle 2\times 10^{-9}} .
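The final identity and the quality of the approximation are easy to verify directly; a two-line Python check (illustrative only):

x, y = 48842, 5967
assert x * x - 67 * y * y == 1    # the Pell identity above
print(x / y - 67 ** 0.5)          # ≈ 1.7e-09, the approximation error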
In arithmetic and computer programming, the extended Euclidean algorithm is an extension of the Euclidean algorithm that computes, in addition to the greatest common divisor (gcd) of integers a and b, the coefficients of Bézout's identity, which are integers x and y such that {\displaystyle ax+by=\gcd(a,b).} This is a certifying algorithm, because the gcd is the only number that can simultaneously satisfy this equation and divide the inputs. It also allows one to compute, at almost no extra cost, the quotients of a and b by their greatest common divisor. Extended Euclidean algorithm also refers to a very similar algorithm for computing the polynomial greatest common divisor and the coefficients of Bézout's identity of two univariate polynomials. The extended Euclidean algorithm is particularly useful when a and b are coprime. With that provision, x is the modular multiplicative inverse of a modulo b, and y is the modular multiplicative inverse of b modulo a. Similarly, the polynomial extended Euclidean algorithm allows one to compute the multiplicative inverse in algebraic field extensions and, in particular, in finite fields of non-prime order. It follows that both extended Euclidean algorithms are widely used in cryptography. In particular, the computation of the modular multiplicative inverse is an essential step in the derivation of key pairs in the RSA public-key encryption method. == Description == The standard Euclidean algorithm proceeds by a succession of Euclidean divisions whose quotients are not used; only the remainders are kept. For the extended algorithm, the successive quotients are used as well. More precisely, the standard Euclidean algorithm with a and b as input consists of computing a sequence {\displaystyle q_{1},\ldots ,q_{k}} of quotients and a sequence {\displaystyle r_{0},\ldots ,r_{k+1}} of remainders such that {\displaystyle {\begin{aligned}r_{0}&=a\\r_{1}&=b\\&\,\,\,\vdots \\r_{i+1}&=r_{i-1}-q_{i}r_{i}\quad {\text{and}}\quad 0\leq r_{i+1}<|r_{i}|\quad {\text{(this defines }}q_{i}{\text{)}}\\&\,\,\,\vdots \end{aligned}}} It is the main property of Euclidean division that the inequalities on the right define {\displaystyle q_{i}} and {\displaystyle r_{i+1}} uniquely from {\displaystyle r_{i-1}} and {\displaystyle r_{i}.} The computation stops when one reaches a remainder {\displaystyle r_{k+1}} which is zero; the greatest common divisor is then the last nonzero remainder {\displaystyle r_{k}.} The extended Euclidean algorithm proceeds similarly, but adds two other sequences, as follows: {\displaystyle {\begin{aligned}r_{0}&=a&r_{1}&=b\\s_{0}&=1&s_{1}&=0\\t_{0}&=0&t_{1}&=1\\&\,\,\,\vdots &&\,\,\,\vdots \\r_{i+1}&=r_{i-1}-q_{i}r_{i}&{\text{and }}0&\leq r_{i+1}<|r_{i}|&{\text{(this defines }}q_{i}{\text{)}}\\s_{i+1}&=s_{i-1}-q_{i}s_{i}\\t_{i+1}&=t_{i-1}-q_{i}t_{i}\\&\,\,\,\vdots \end{aligned}}} The computation also stops when {\displaystyle r_{k+1}=0} , and then {\displaystyle r_{k}} is the greatest common divisor of the input {\displaystyle a=r_{0}} and {\displaystyle b=r_{1}.}
The Bézout coefficients are {\displaystyle s_{k}} and {\displaystyle t_{k}} , that is, {\displaystyle \gcd(a,b)=r_{k}=as_{k}+bt_{k}} The quotients of a and b by their greatest common divisor are given by {\displaystyle s_{k+1}=\pm {\frac {b}{\gcd(a,b)}}} and {\displaystyle t_{k+1}=\pm {\frac {a}{\gcd(a,b)}}} Moreover, if a and b are both positive and {\displaystyle \gcd(a,b)\neq \min(a,b)} , then {\displaystyle |s_{i}|\leq \left\lfloor {\frac {b}{2\gcd(a,b)}}\right\rfloor \quad {\text{and}}\quad |t_{i}|\leq \left\lfloor {\frac {a}{2\gcd(a,b)}}\right\rfloor } for {\displaystyle 0\leq i\leq k,} where {\displaystyle \lfloor x\rfloor } denotes the integral part of x, that is, the greatest integer not greater than x. This implies that the pair of Bézout's coefficients provided by the extended Euclidean algorithm is the minimal pair of Bézout coefficients, as being the unique pair satisfying both above inequalities. It also means that the algorithm can be done without integer overflow by a computer program using integers of a fixed size that is larger than that of a and b. === Example === The following table shows how the extended Euclidean algorithm proceeds with input 240 and 46:

i   quotient   remainder r_i   s_i   t_i
0   —          240             1     0
1   —          46              0     1
2   5          10              1     −5
3   4          6               −4    21
4   1          4               5     −26
5   1          2               −9    47
6   2          0               23    −120

The greatest common divisor is the last nonzero entry, 2, in the "remainder" column. The computation stops at row 6, because the remainder in it is 0. Bézout coefficients appear in the last two columns of the second-to-last row. In fact, it is easy to verify that −9 × 240 + 47 × 46 = 2. Finally, the last two entries 23 and −120 of the last row are, up to the sign, the quotients of the input 46 and 240 by the greatest common divisor 2. === Proof === As {\displaystyle 0\leq r_{i+1}<|r_{i}|,} the sequence of the {\displaystyle r_{i}} is a decreasing sequence of nonnegative integers (from i = 2 on). Thus it must stop with some {\displaystyle r_{k+1}=0.} This proves that the algorithm stops eventually. As {\displaystyle r_{i+1}=r_{i-1}-r_{i}q_{i},} the greatest common divisor is the same for {\displaystyle (r_{i-1},r_{i})} and {\displaystyle (r_{i},r_{i+1}).} This shows that the greatest common divisor of the input {\displaystyle a=r_{0},b=r_{1}} is the same as that of {\displaystyle r_{k},r_{k+1}=0.} This proves that {\displaystyle r_{k}} is the greatest common divisor of a and b. (Until this point, the proof is the same as that of the classical Euclidean algorithm.) As {\displaystyle a=r_{0}} and {\displaystyle b=r_{1},} we have {\displaystyle as_{i}+bt_{i}=r_{i}} for i = 0 and 1. The relation follows by induction for all {\displaystyle i>1} : {\displaystyle r_{i+1}=r_{i-1}-r_{i}q_{i}=(as_{i-1}+bt_{i-1})-(as_{i}+bt_{i})q_{i}=(as_{i-1}-as_{i}q_{i})+(bt_{i-1}-bt_{i}q_{i})=as_{i+1}+bt_{i+1}.} Thus {\displaystyle s_{k}} and {\displaystyle t_{k}} are Bézout coefficients. Consider the matrix {\displaystyle A_{i}={\begin{pmatrix}s_{i-1}&s_{i}\\t_{i-1}&t_{i}\end{pmatrix}}.}
The recurrence relation may be rewritten in matrix form: {\displaystyle A_{i+1}=A_{i}\cdot {\begin{pmatrix}0&1\\1&-q_{i}\end{pmatrix}}.} The matrix {\displaystyle A_{1}} is the identity matrix and its determinant is one. The determinant of the rightmost matrix in the preceding formula is −1. It follows that the determinant of {\displaystyle A_{i}} is {\displaystyle (-1)^{i-1}.} In particular, for {\displaystyle i=k+1,} we have {\displaystyle s_{k}t_{k+1}-t_{k}s_{k+1}=(-1)^{k}.} Viewing this as a Bézout identity, this shows that {\displaystyle s_{k+1}} and {\displaystyle t_{k+1}} are coprime. The relation {\displaystyle as_{k+1}+bt_{k+1}=0} that has been proved above and Euclid's lemma show that {\displaystyle s_{k+1}} divides b, that is, {\displaystyle b=ds_{k+1}} for some integer d. Dividing the relation {\displaystyle as_{k+1}+bt_{k+1}=0} by {\displaystyle s_{k+1}} gives {\displaystyle a=-dt_{k+1}.} So {\displaystyle s_{k+1}} and {\displaystyle -t_{k+1}} are coprime integers that are the quotients of a and b by a common factor, which is thus their greatest common divisor or its opposite. To prove the last assertion, assume that a and b are both positive and {\displaystyle \gcd(a,b)\neq \min(a,b)} . Then {\displaystyle a\neq b} , and if {\displaystyle a<b} , it can be seen that the s and t sequences for (a, b) under the extended Euclidean algorithm are, up to initial 0s and 1s, the t and s sequences for (b, a). The definitions then show that the (a, b) case reduces to the (b, a) case, so assume {\displaystyle a>b} without loss of generality. It can be seen that {\displaystyle s_{2}} is 1 and {\displaystyle s_{3}} (which exists by {\displaystyle \gcd(a,b)\neq \min(a,b)} ) is a negative integer. Thereafter, the {\displaystyle s_{i}} alternate in sign and strictly increase in magnitude, which follows inductively from the definitions and the fact that {\displaystyle q_{i}\geq 1} for {\displaystyle 1\leq i\leq k} ; the case {\displaystyle i=1} holds because {\displaystyle a>b} . The same is true for the {\displaystyle t_{i}} after the first few terms, for the same reason. Furthermore, it is easy to see that {\displaystyle q_{k}\geq 2} (when a and b are both positive and {\displaystyle \gcd(a,b)\neq \min(a,b)} ). Thus, noticing that {\displaystyle |s_{k+1}|=|s_{k-1}|+q_{k}|s_{k}|} , we obtain {\displaystyle |s_{k+1}|=\left|{\frac {b}{\gcd(a,b)}}\right|\geq 2|s_{k}|\qquad {\text{and}}\qquad |t_{k+1}|=\left|{\frac {a}{\gcd(a,b)}}\right|\geq 2|t_{k}|.} This, together with the fact that {\displaystyle s_{k},t_{k}} are greater than or equal in absolute value to any previous {\displaystyle s_{i}} or {\displaystyle t_{i}} respectively, completes the proof. == Polynomial extended Euclidean algorithm == For univariate polynomials with coefficients in a field, everything works similarly: Euclidean division, Bézout's identity and the extended Euclidean algorithm.
The first difference is that, in the Euclidean division and the algorithm, the inequality {\displaystyle 0\leq r_{i+1}<|r_{i}|} has to be replaced by an inequality on the degrees: {\displaystyle \deg r_{i+1}<\deg r_{i}.} Otherwise, everything which precedes in this article remains the same, simply by replacing integers by polynomials. A second difference lies in the bound on the size of the Bézout coefficients provided by the extended Euclidean algorithm, which is more accurate in the polynomial case, leading to the following theorem. If a and b are two nonzero polynomials, then the extended Euclidean algorithm produces the unique pair of polynomials (s, t) such that {\displaystyle as+bt=\gcd(a,b)} and {\displaystyle \deg s<\deg b-\deg(\gcd(a,b)),\quad \deg t<\deg a-\deg(\gcd(a,b)).} A third difference is that, in the polynomial case, the greatest common divisor is defined only up to multiplication by a nonzero constant. There are several ways to define a greatest common divisor unambiguously. In mathematics, it is common to require that the greatest common divisor be a monic polynomial. To get this, it suffices to divide every element of the output by the leading coefficient of {\displaystyle r_{k}.} This ensures that, if a and b are coprime, one gets 1 on the right-hand side of Bézout's identity; otherwise, one may get any nonzero constant. In computer algebra, the polynomials commonly have integer coefficients, and this way of normalizing the greatest common divisor introduces too many fractions to be convenient. The second way to normalize the greatest common divisor in the case of polynomials with integer coefficients is to divide every output by the content of {\displaystyle r_{k},} to get a primitive greatest common divisor. If the input polynomials are coprime, this normalisation also provides a greatest common divisor equal to 1. The drawback of this approach is that a lot of fractions must be computed and simplified during the computation. A third approach consists in extending the algorithm of subresultant pseudo-remainder sequences in a way that is similar to the extension of the Euclidean algorithm to the extended Euclidean algorithm. This ensures that, when starting with polynomials with integer coefficients, all polynomials that are computed have integer coefficients. Moreover, every computed remainder {\displaystyle r_{i}} is a subresultant polynomial. In particular, if the input polynomials are coprime, then Bézout's identity becomes {\displaystyle as+bt=\operatorname {Res} (a,b),} where {\displaystyle \operatorname {Res} (a,b)} denotes the resultant of a and b. In this form of Bézout's identity, there is no denominator in the formula. If one divides everything by the resultant, one gets the classical Bézout's identity, with an explicit common denominator for the rational numbers that appear in it. == Pseudocode == To implement the algorithm that is described above, one should first remark that only the last two values of the indexed variables are needed at each step. Thus, for saving memory, each indexed variable must be replaced by just two variables. For simplicity, the following algorithm (and the other algorithms in this article) uses parallel assignments.
In a programming language which does not have this feature, the parallel assignments need to be simulated with an auxiliary variable. For example, the first one,

(old_r, r) := (r, old_r − quotient × r)

is equivalent to

prov := r;
r := old_r − quotient × prov;
old_r := prov;

and similarly for the other parallel assignments. This leads to the following code:

function extended_gcd(a, b)
    (old_r, r) := (a, b)
    (old_s, s) := (1, 0)
    (old_t, t) := (0, 1)

    while r ≠ 0 do
        quotient := old_r div r
        (old_r, r) := (r, old_r − quotient × r)
        (old_s, s) := (s, old_s − quotient × s)
        (old_t, t) := (t, old_t − quotient × t)

    output "Bézout coefficients:", (old_s, old_t)
    output "greatest common divisor:", old_r
    output "quotients by the gcd:", (t, s)

The quotients of a and b by their greatest common divisor, which are output, may have an incorrect sign. This is easy to correct at the end of the computation but has not been done here to keep the code simple. Similarly, if either a or b is zero and the other is negative, the greatest common divisor that is output is negative, and all the signs of the output must be changed. Finally, notice that in Bézout's identity, {\displaystyle ax+by=\gcd(a,b)} , one can solve for {\displaystyle y} given {\displaystyle a,b,x,\gcd(a,b)} . Thus, an optimization to the above algorithm is to compute only the {\displaystyle s_{k}} sequence (which yields the Bézout coefficient {\displaystyle x} ), and then compute {\displaystyle y} at the end:

function extended_gcd(a, b)
    s := 0; old_s := 1
    r := b; old_r := a

    while r ≠ 0 do
        quotient := old_r div r
        (old_r, r) := (r, old_r − quotient × r)
        (old_s, s) := (s, old_s − quotient × s)

    if b ≠ 0 then
        bezout_t := (old_r − old_s × a) div b
    else
        bezout_t := 0

    output "Bézout coefficients:", (old_s, bezout_t)
    output "greatest common divisor:", old_r

However, in many cases this is not really an optimization: whereas the former algorithm is not susceptible to overflow when used with machine integers (that is, integers with a fixed upper bound of digits), the multiplication old_s × a in the computation of bezout_t can overflow, limiting this optimization to inputs which can be represented in less than half the maximal size. When using integers of unbounded size, the time needed for multiplication and division grows quadratically with the size of the integers. This implies that the "optimisation" replaces a sequence of multiplications/divisions of small integers by a single multiplication/division, which requires more computing time than the operations that it replaces, taken together. == Simplification of fractions == A fraction ⁠a/b⁠ is in canonical simplified form if a and b are coprime and b is positive. This canonical simplified form can be obtained by replacing the three output lines of the preceding pseudocode by

if s = 0 then output "Division by zero"
if s < 0 then s := −s; t := −t   (for avoiding negative denominators)
if s = 1 then output −t          (for avoiding denominators equal to 1)
output ⁠−t/s⁠

The proof of this algorithm relies on the fact that s and t are two coprime integers such that as + bt = 0, and thus {\displaystyle {\frac {a}{b}}=-{\frac {t}{s}}} . To get the canonical simplified form, it suffices to move the minus sign for having a positive denominator. If b divides a evenly, the algorithm executes only one iteration, and we have s = 1 at the end of the algorithm. It is the only case where the output is an integer.
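For concreteness, the first pseudocode above can be transcribed almost line for line into Python. This is only a sketch; it assumes nonnegative inputs, so that Python's floor division (//) agrees with the Euclidean quotient div:

def extended_gcd(a, b):
    # Transcription of the pseudocode above; tuple assignment plays the
    # role of the parallel assignments. Assumes a, b >= 0.
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        quotient = old_r // r
        old_r, r = r, old_r - quotient * r
        old_s, s = s, old_s - quotient * s
        old_t, t = t, old_t - quotient * t
    # gcd, Bezout coefficients, quotients by the gcd (up to sign)
    return old_r, (old_s, old_t), (t, s)

print(extended_gcd(240, 46))
# (2, (-9, 47), (-120, 23)): indeed -9*240 + 47*46 = 2

The output reproduces the worked example above: the gcd 2, the Bézout pair (−9, 47), and, up to sign, the quotients 120 and 23.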
== Computing multiplicative inverses in modular structures == The extended Euclidean algorithm is the essential tool for computing multiplicative inverses in modular structures, typically the modular integers and the algebraic field extensions. A notable instance of the latter case is that of the finite fields of non-prime order. === Modular integers === If n is a positive integer, the ring Z/nZ may be identified with the set {0, 1, ..., n−1} of the remainders of Euclidean division by n, the addition and the multiplication consisting in taking the remainder by n of the result of the addition and the multiplication of integers. An element a of Z/nZ has a multiplicative inverse (that is, it is a unit) if and only if it is coprime to n. In particular, if n is prime, a has a multiplicative inverse if it is not zero (modulo n). Thus Z/nZ is a field if and only if n is prime. Bézout's identity asserts that a and n are coprime if and only if there exist integers s and t such that {\displaystyle ns+at=1} Reducing this identity modulo n gives {\displaystyle at\equiv 1{\bmod {n}}.} Thus t, or, more exactly, the remainder of the division of t by n, is the multiplicative inverse of a modulo n. To adapt the extended Euclidean algorithm to this problem, one should remark that the Bézout coefficient of n is not needed, and thus does not need to be computed. Also, for getting a result which is positive and lower than n, one may use the fact that the integer t provided by the algorithm satisfies |t| < n. That is, if t < 0, one must add n to it at the end. This results in the following pseudocode, in which the input n is an integer larger than 1:

function inverse(a, n)
    t := 0; newt := 1
    r := n; newr := a

    while newr ≠ 0 do
        quotient := r div newr
        (t, newt) := (newt, t − quotient × newt)
        (r, newr) := (newr, r − quotient × newr)

    if r > 1 then
        return "a is not invertible"
    if t < 0 then
        t := t + n

    return t

=== Simple algebraic field extensions === The extended Euclidean algorithm is also the main tool for computing multiplicative inverses in simple algebraic field extensions. An important case, widely used in cryptography and coding theory, is that of finite fields of non-prime order. In fact, if p is a prime number, and q = p^d, the field of order q is a simple algebraic extension of the prime field of p elements, generated by a root of an irreducible polynomial of degree d. A simple algebraic extension L of a field K, generated by the root of an irreducible polynomial p of degree d, may be identified with the quotient ring {\displaystyle K[X]/\langle p\rangle } , and its elements are in bijective correspondence with the polynomials of degree less than d. The addition in L is the addition of polynomials. The multiplication in L is the remainder of the Euclidean division by p of the product of polynomials. Thus, to complete the arithmetic in L, it remains only to define how to compute multiplicative inverses. This is done by the extended Euclidean algorithm. The algorithm is very similar to that provided above for computing the modular multiplicative inverse. There are two main differences: firstly, the last-but-one line is not needed, because the Bézout coefficient that is provided always has a degree less than d. Secondly, the greatest common divisor which is provided, when the input polynomials are coprime, may be any nonzero element of K; this Bézout coefficient (a polynomial generally of positive degree) has thus to be multiplied by the inverse of this element of K.
In the pseudocode which follows, p is a polynomial of degree greater than one, and a is a polynomial:

function inverse(a, p)
    t := 0; newt := 1
    r := p; newr := a

    while newr ≠ 0 do
        quotient := r div newr
        (r, newr) := (newr, r − quotient × newr)
        (t, newt) := (newt, t − quotient × newt)

    if degree(r) > 0 then
        return "Either p is not irreducible or a is a multiple of p"

    return (1/r) × t

==== Example ==== For example, if the polynomial used to define the finite field GF(2^8) is p = x^8 + x^4 + x^3 + x + 1, and a = x^6 + x^4 + x + 1 is the element whose inverse is desired, then performing the algorithm results in the computation described in the following table (recall that in fields of order 2^n, one has −z = z and z + z = 0 for every element z in the field):

i   quotient        remainder r_i       t_i
0   —               x^8+x^4+x^3+x+1     0
1   —               x^6+x^4+x+1         1
2   x^2+1           x^2                 x^2+1
3   x^4+x^2         x+1                 x^6+x^2+1
4   x+1             1                   x^7+x^6+x^3+x
5   x+1             0                   x^8+x^4+x^3+x+1

Since 1 is the only nonzero element of GF(2), the adjustment in the last line of the pseudocode is not needed. Thus, the inverse is x^7 + x^6 + x^3 + x, as can be confirmed by multiplying the two elements together and taking the remainder by p of the result. == The case of more than two numbers == One can handle the case of more than two numbers iteratively. First we show that {\displaystyle \gcd(a,b,c)=\gcd(\gcd(a,b),c)} . To prove this, let {\displaystyle d=\gcd(a,b,c)} . By definition of gcd, {\displaystyle d} is a divisor of {\displaystyle a} and {\displaystyle b} , so {\displaystyle \gcd(a,b)=kd} for some {\displaystyle k} . Similarly, {\displaystyle d} is a divisor of {\displaystyle c} , so {\displaystyle c=jd} for some {\displaystyle j} . Let {\displaystyle u=\gcd(k,j)} . By this construction, {\displaystyle ud} divides {\displaystyle a,b,c} ; but since {\displaystyle d} is the greatest common divisor, {\displaystyle u} is a unit. And since {\displaystyle ud=\gcd(\gcd(a,b),c)} , the result is proven. So if {\displaystyle na+mb=\gcd(a,b)} , then there are {\displaystyle x} and {\displaystyle y} such that {\displaystyle x\gcd(a,b)+yc=\gcd(a,b,c)} , so the final equation will be {\displaystyle x(na+mb)+yc=(xn)a+(xm)b+yc=\gcd(a,b,c).} To apply this to n numbers, one uses induction: {\displaystyle \gcd(a_{1},a_{2},\dots ,a_{n})=\gcd(a_{1},\,\gcd(a_{2},\,\gcd(a_{3},\dots ,\gcd(a_{n-1},a_{n})\dots )))} with the equations following directly. == See also == Euclidean domain Linear congruence theorem Kuṭṭaka
Number theory is a branch of pure mathematics devoted primarily to the study of the integers and arithmetic functions. Number theorists study prime numbers as well as the properties of mathematical objects constructed from integers (for example, rational numbers), or defined as generalizations of the integers (for example, algebraic integers). Integers can be considered either in themselves or as solutions to equations (Diophantine geometry). Questions in number theory can often be understood through the study of analytical objects, such as the Riemann zeta function, that encode properties of the integers, primes or other number-theoretic objects in some fashion (analytic number theory). One may also study real numbers in relation to rational numbers, as for instance how irrational numbers can be approximated by fractions (Diophantine approximation). Number theory is one of the oldest branches of mathematics, alongside geometry. One quirk of number theory is that it deals with statements that are simple to understand but very difficult to solve. Examples of this are Fermat's Last Theorem, which was proved 358 years after its original formulation, and Goldbach's conjecture, which has remained unsolved since the 18th century. German mathematician Carl Friedrich Gauss (1777–1855) said, "Mathematics is the queen of the sciences—and number theory is the queen of mathematics." Number theory was regarded as the archetypal example of pure mathematics with no applications outside mathematics until the 1970s, when it became known that prime numbers could be used as the basis for the creation of public-key cryptography algorithms. == History == Number theory is the branch of mathematics that studies integers and their properties and relations. The integers comprise a set that extends the set of natural numbers {\displaystyle \{1,2,3,\dots \}} to include the number {\displaystyle 0} and the negations of the natural numbers {\displaystyle \{-1,-2,-3,\dots \}} . Number theory is closely related to arithmetic, and some authors use the terms as synonyms. However, the word "arithmetic" is used today to mean the study of numerical operations and extends to the real numbers. In a more specific sense, number theory is restricted to the study of integers and focuses on their properties and relationships. Traditionally, it is known as higher arithmetic. By the early twentieth century, the term number theory had been widely adopted. The term number means whole numbers, which refers to either the natural numbers or the integers. Elementary number theory studies aspects of integers that can be investigated using elementary methods such as elementary proofs. Analytic number theory, by contrast, relies on complex numbers and techniques from analysis and calculus. Algebraic number theory employs algebraic structures such as fields and rings to analyze the properties of and relations between numbers. Geometric number theory uses concepts from geometry to study numbers. Further branches of number theory are probabilistic number theory, combinatorial number theory, computational number theory, and applied number theory, which examines the application of number theory to science and technology.
=== Origins === ==== Ancient Mesopotamia ==== The earliest historical find of an arithmetical nature is a fragment of a table: Plimpton 322 (Larsa, Mesopotamia, c. 1800 BC), a broken clay tablet, contains a list of "Pythagorean triples", that is, integers {\displaystyle (a,b,c)} such that {\displaystyle a^{2}+b^{2}=c^{2}} . The triples are too numerous and too large to have been obtained by brute force. The heading over the first column reads: "The takiltum of the diagonal which has been subtracted such that the width..." The table's layout suggests that it was constructed by means of what amounts, in modern language, to the identity {\displaystyle \left({\frac {1}{2}}\left(x-{\frac {1}{x}}\right)\right)^{2}+1=\left({\frac {1}{2}}\left(x+{\frac {1}{x}}\right)\right)^{2},} which is implicit in routine Old Babylonian exercises. If some other method was used, the triples were first constructed and then reordered by {\displaystyle c/a} , presumably for actual use as a "table", for example, with a view to applications. It is not known what these applications may have been, or whether there could have been any; Babylonian astronomy, for example, truly came into its own only many centuries later. It has been suggested instead that the table was a source of numerical examples for school problems. The Plimpton 322 tablet is the only surviving evidence of what today would be called number theory within Babylonian mathematics, though a kind of Babylonian algebra was much more developed. ==== Ancient Greece ==== Although other civilizations probably influenced Greek mathematics at the beginning, all evidence of such borrowings appears relatively late, and it is likely that Greek arithmētikḗ (the theoretical or philosophical study of numbers) is an indigenous tradition. Aside from a few fragments, most of what is known about Greek mathematics in the 6th to 4th centuries BC (the Archaic and Classical periods) comes through either the reports of contemporary non-mathematicians or references from mathematical works in the early Hellenistic period. In the case of number theory, this means largely Plato, Aristotle, and Euclid. Plato had a keen interest in mathematics, and distinguished clearly between arithmētikḗ and calculation (logistikē). Plato reports in his dialogue Theaetetus that Theodorus had proven that {\displaystyle {\sqrt {3}},{\sqrt {5}},\dots ,{\sqrt {17}}} are irrational. Theaetetus, a disciple of Theodorus's, worked on distinguishing different kinds of incommensurables, and was thus arguably a pioneer in the study of number systems. Aristotle further claimed that the philosophy of Plato closely followed the teachings of the Pythagoreans, and Cicero repeats this claim: Platonem ferunt didicisse Pythagorea omnia ("They say Plato learned all things Pythagorean"). Euclid devoted part of his Elements (Books VII–IX) to topics that belong to elementary number theory, including prime numbers and divisibility. He gave an algorithm, the Euclidean algorithm, for computing the greatest common divisor of two numbers (Prop. VII.2) and a proof implying the infinitude of primes (Prop. IX.20). There is also older material likely based on Pythagorean teachings (Prop. IX.21–34), such as "odd times even is even" and "if an odd number measures [= divides] an even number, then it also measures [= divides] half of it". This is all that is needed to prove that {\displaystyle {\sqrt {2}}} is irrational.
Pythagoreans apparently gave great importance to the odd and the even. The discovery that {\displaystyle {\sqrt {2}}} is irrational is credited to the early Pythagoreans, sometimes assigned to Hippasus, who was expelled or split from the Pythagorean community as a result. This forced a distinction between numbers (integers and the rationals—the subjects of arithmetic) and lengths and proportions (which may be identified with real numbers, whether rational or not). The Pythagorean tradition also spoke of so-called polygonal or figurate numbers. While square numbers, cubic numbers, etc., are seen now as more natural than triangular numbers, pentagonal numbers, etc., the study of the sums of triangular and pentagonal numbers would prove fruitful in the early modern period (17th to early 19th centuries). An epigram published by Lessing in 1773 appears to be a letter sent by Archimedes to Eratosthenes. The epigram proposed what has become known as Archimedes's cattle problem; its solution (absent from the manuscript) requires solving an indeterminate quadratic equation (which reduces to what would later be misnamed Pell's equation). As far as is known, such equations were first successfully treated by Indian mathematicians. It is not known whether Archimedes himself had a method of solution. ===== Late Antiquity ===== Aside from the elementary work of Neopythagoreans such as Nicomachus and Theon of Smyrna, the foremost authority in arithmētikḗ in Late Antiquity was Diophantus of Alexandria, who probably lived in the 3rd century AD, approximately five hundred years after Euclid. Little is known about his life, but he wrote two works that are extant: On Polygonal Numbers, a short treatise written in the Euclidean manner on the subject, and the Arithmetica, a work on pre-modern algebra (namely, the use of algebra to solve numerical problems). Six of the thirteen books of Diophantus's Arithmetica survive in the original Greek and four more survive in an Arabic translation. The Arithmetica is a collection of worked-out problems where the task is invariably to find rational solutions to a system of polynomial equations, usually of the form {\displaystyle f(x,y)=z^{2}} or {\displaystyle f(x,y,z)=w^{2}} . In modern parlance, Diophantine equations are polynomial equations to which rational or integer solutions are sought. ==== Asia ==== The Chinese remainder theorem appears as an exercise in Sunzi Suanjing (between the third and fifth centuries). (There is one important step glossed over in Sunzi's solution: it is the problem that was later solved by Āryabhaṭa's kuṭṭaka – see below.) The result was later generalized with a complete solution called Da-yan-shu (大衍術) in Qin Jiushao's 1247 Mathematical Treatise in Nine Sections, which was translated into English in the early nineteenth century by the British missionary Alexander Wylie. There is also some numerical mysticism in Chinese mathematics, but, unlike that of the Pythagoreans, it seems to have led nowhere. While Greek astronomy probably influenced Indian learning, to the point of introducing trigonometry, it seems to be the case that Indian mathematics is otherwise an autochthonous tradition; in particular, there is no evidence that Euclid's Elements reached India before the eighteenth century.
Āryabhaṭa (476–550 AD) showed that pairs of simultaneous congruences {\displaystyle n\equiv a_{1}{\bmod {m}}_{1}} , {\displaystyle n\equiv a_{2}{\bmod {m}}_{2}} could be solved by a method he called kuṭṭaka, or pulveriser; this is a procedure close to (a generalization of) the Euclidean algorithm, which was probably discovered independently in India. Āryabhaṭa seems to have had in mind applications to astronomical calculations. Brahmagupta (628 AD) started the systematic study of indefinite quadratic equations—in particular, the misnamed Pell equation, in which Archimedes may have first been interested, and which did not start to be solved in the West until the time of Fermat and Euler. Later Sanskrit authors would follow, using Brahmagupta's technical terminology. A general procedure (the chakravala, or "cyclic method") for solving Pell's equation was finally found by Jayadeva (cited in the eleventh century; his work is otherwise lost); the earliest surviving exposition appears in Bhāskara II's Bīja-gaṇita (twelfth century). Indian mathematics remained largely unknown in Europe until the late eighteenth century; Brahmagupta and Bhāskara's work was translated into English in 1817 by Henry Colebrooke. ==== Arithmetic in the Islamic golden age ==== In the early ninth century, the caliph al-Ma'mun ordered translations of many Greek mathematical works and at least one Sanskrit work (the Sindhind, which may or may not be Brahmagupta's Brāhmasphuṭasiddhānta). Diophantus's main work, the Arithmetica, was translated into Arabic by Qusta ibn Luqa (820–912). Part of the treatise al-Fakhri (by al-Karajī, 953 – c. 1029) builds on it to some extent. According to Roshdi Rashed, al-Karajī's contemporary Ibn al-Haytham knew what would later be called Wilson's theorem. ==== Western Europe in the Middle Ages ==== Other than a treatise on squares in arithmetic progression by Fibonacci—who traveled and studied in North Africa and Constantinople—no number theory to speak of was done in western Europe during the Middle Ages. Matters started to change in Europe in the late Renaissance, thanks to a renewed study of the works of Greek antiquity. A catalyst was the textual emendation and translation into Latin of Diophantus's Arithmetica. === Early modern number theory === ==== Fermat ==== Pierre de Fermat (1607–1665) never published his writings but communicated through correspondence instead. Accordingly, his work on number theory is contained almost entirely in letters to mathematicians and in private marginal notes. Although he drew inspiration from classical sources, in his notes and letters Fermat scarcely wrote any proofs—he had no models in the area. Over his lifetime, Fermat made the following contributions to the field: One of Fermat's first interests was perfect numbers (which appear in Euclid, Elements IX) and amicable numbers; these topics led him to work on integer divisors, which were from the beginning among the subjects of the correspondence (1636 onwards) that put him in touch with the mathematical community of the day. In 1638, Fermat claimed, without proof, that all whole numbers can be expressed as the sum of four squares or fewer. Fermat's little theorem (1640): if a is not divisible by a prime p, then {\displaystyle a^{p-1}\equiv 1{\bmod {p}}.}
If a and b are coprime, then {\displaystyle a^{2}+b^{2}} is not divisible by any prime congruent to −1 modulo 4; and every prime congruent to 1 modulo 4 can be written in the form {\displaystyle a^{2}+b^{2}} . These two statements also date from 1640; in 1659, Fermat stated to Huygens that he had proven the latter statement by the method of infinite descent. In 1657, Fermat posed the problem of solving {\displaystyle x^{2}-Ny^{2}=1} as a challenge to English mathematicians. The problem was solved in a few months by Wallis and Brouncker. Fermat considered their solution valid, but pointed out they had provided an algorithm without a proof (as had Jayadeva and Bhaskara, though Fermat was not aware of this). He stated that a proof could be found by infinite descent. Fermat stated and proved (by infinite descent) in the appendix to Observations on Diophantus (Obs. XLV) that {\displaystyle x^{4}+y^{4}=z^{4}} has no non-trivial solutions in the integers. Fermat also mentioned to his correspondents that {\displaystyle x^{3}+y^{3}=z^{3}} has no non-trivial solutions, and that this could also be proven by infinite descent. The first known proof is due to Euler (1753; indeed by infinite descent). Fermat claimed (Fermat's Last Theorem) to have shown there are no solutions to {\displaystyle x^{n}+y^{n}=z^{n}} for all {\displaystyle n\geq 3} ; this claim appears in his annotations in the margins of his copy of Diophantus. ==== Euler ==== The interest of Leonhard Euler (1707–1783) in number theory was first spurred in 1729, when a friend of his, the amateur Goldbach, pointed him towards some of Fermat's work on the subject. This has been called the "rebirth" of modern number theory, after Fermat's relative lack of success in getting his contemporaries' attention for the subject. Euler's work on number theory includes the following: Proofs for Fermat's statements. This includes Fermat's little theorem (generalised by Euler to non-prime moduli); the fact that {\displaystyle p=x^{2}+y^{2}} if and only if {\displaystyle p\equiv 1{\bmod {4}}} ; initial work towards a proof that every integer is the sum of four squares (the first complete proof is by Joseph-Louis Lagrange (1770), soon improved by Euler himself); and the lack of non-zero integer solutions to {\displaystyle x^{4}+y^{4}=z^{2}} (implying the case n = 4 of Fermat's Last Theorem, the case n = 3 of which Euler also proved by a related method). Pell's equation, first misnamed by Euler; he wrote on the link between continued fractions and Pell's equation. First steps towards analytic number theory. In his work on sums of four squares, partitions, pentagonal numbers, and the distribution of prime numbers, Euler pioneered the use of what can be seen as analysis (in particular, infinite series) in number theory. Since he lived before the development of complex analysis, most of his work is restricted to the formal manipulation of power series. He did, however, do some very notable (though not fully rigorous) early work on what would later be called the Riemann zeta function. Quadratic forms. Following Fermat's lead, Euler did further research on the question of which primes can be expressed in the form {\displaystyle x^{2}+Ny^{2}} , some of it prefiguring quadratic reciprocity. Diophantine equations. Euler worked on some Diophantine equations of genus 0 and 1.
In particular, he studied Diophantus's work; he tried to systematise it, but the time was not yet ripe for such an endeavour—algebraic geometry was still in its infancy. He did notice there was a connection between Diophantine problems and elliptic integrals, whose study he had himself initiated. ==== Lagrange, Legendre, and Gauss ==== Joseph-Louis Lagrange (1736–1813) was the first to give full proofs of some of Fermat's and Euler's work and observations; for instance, the four-square theorem and the basic theory of the misnamed "Pell's equation" (for which an algorithmic solution was found by Fermat and his contemporaries, and also by Jayadeva and Bhaskara II before them). He also studied quadratic forms in full generality (as opposed to {\displaystyle mX^{2}+nY^{2}} ), including defining their equivalence relation, showing how to put them in reduced form, etc. Adrien-Marie Legendre (1752–1833) was the first to state the law of quadratic reciprocity. He also conjectured what amounts to the prime number theorem and Dirichlet's theorem on arithmetic progressions. He gave a full treatment of the equation {\displaystyle ax^{2}+by^{2}+cz^{2}=0} and worked on quadratic forms along the lines later developed fully by Gauss. In his old age, he was the first to prove Fermat's Last Theorem for {\displaystyle n=5} (completing work by Peter Gustav Lejeune Dirichlet, and crediting both him and Sophie Germain). Carl Friedrich Gauss (1777–1855) worked in a wide variety of fields in both mathematics and physics, including number theory, analysis, differential geometry, geodesy, magnetism, astronomy and optics. The Disquisitiones Arithmeticae (1801), which he had written three years earlier, when he was 21, had an immense influence in the area of number theory and set its agenda for much of the 19th century. Gauss proved in this work the law of quadratic reciprocity and developed the theory of quadratic forms (in particular, defining their composition). He also introduced some basic notation (congruences) and devoted a section to computational matters, including primality tests. The last section of the Disquisitiones established a link between roots of unity and number theory: The theory of the division of the circle...which is treated in sec. 7 does not belong by itself to arithmetic, but its principles can only be drawn from higher arithmetic. In this way, Gauss arguably made forays towards Évariste Galois's work and the area of algebraic number theory. === Maturity and division into subfields === Starting early in the nineteenth century, the following developments gradually took place: The rise to self-consciousness of number theory (or higher arithmetic) as a field of study. The development of much of modern mathematics necessary for basic modern number theory: complex analysis, group theory, Galois theory—accompanied by greater rigor in analysis and abstraction in algebra. The rough subdivision of number theory into its modern subfields—in particular, analytic and algebraic number theory. Algebraic number theory may be said to start with the study of reciprocity and cyclotomy, but truly came into its own with the development of abstract algebra and early ideal theory and valuation theory; see below. A conventional starting point for analytic number theory is Dirichlet's theorem on arithmetic progressions (1837), whose proof introduced L-functions and involved some asymptotic analysis and a limiting process on a real variable.
The first use of analytic ideas in number theory actually goes back to Euler (1730s), who used formal power series and non-rigorous (or implicit) limiting arguments. The use of complex analysis in number theory comes later: the work of Bernhard Riemann (1859) on the zeta function is the canonical starting point; Jacobi's four-square theorem (1839), which predates it, belongs to an initially different strand that has by now taken a leading role in analytic number theory (modular forms). The American Mathematical Society awards the Cole Prize in Number Theory. Moreover, number theory is one of the three mathematical subdisciplines rewarded by the Fermat Prize. == Main subdivisions == === Elementary number theory === Elementary number theory deals with the topics in number theory by means of basic methods in arithmetic. Its primary subjects of study are divisibility, factorization, and primality, as well as congruences in modular arithmetic. Other topics in elementary number theory include Diophantine equations, continued fractions, integer partitions, and Diophantine approximations. Arithmetic is the study of numerical operations and investigates how numbers are combined and transformed using the arithmetic operations of addition, subtraction, multiplication, division, exponentiation, extraction of roots, and logarithms. Multiplication, for instance, is an operation that combines two numbers, referred to as factors, to form a single number, termed the product, such as {\displaystyle 2\times 3=6} . Divisibility is a property between two nonzero integers related to division. An integer {\displaystyle a} is said to be divisible by a nonzero integer {\displaystyle b} if {\displaystyle a} is a multiple of {\displaystyle b} ; that is, if there exists an integer {\displaystyle q} such that {\displaystyle a=bq} . An equivalent formulation is that {\displaystyle b} divides {\displaystyle a} , denoted with a vertical bar as {\displaystyle b|a} . Conversely, if this were not the case, then {\displaystyle a} would not be divided evenly by {\displaystyle b} , resulting in a remainder. Euclid's division lemma asserts that {\displaystyle a} and {\displaystyle b} can generally be written as {\displaystyle a=bq+r} , where the remainder {\displaystyle r} , satisfying {\displaystyle 0\leq r<|b|} , accounts for the leftover quantity. Elementary number theory studies divisibility rules in order to quickly identify whether a given integer is divisible by a fixed divisor. For instance, it is known that an integer is divisible by 3 if and only if its decimal digit sum is divisible by 3. A common divisor of several nonzero integers is an integer that divides all of them. The greatest common divisor (gcd) is the largest of such divisors. Two integers are said to be coprime or relatively prime to one another if their greatest common divisor, which is then their only common positive divisor, is 1. The Euclidean algorithm computes the greatest common divisor of two integers {\displaystyle a,b} by means of repeatedly applying the division lemma and shifting the divisor and remainder after every step. The algorithm can be extended to solve a special case of linear Diophantine equations {\displaystyle ax+by=1} . A Diophantine equation is an equation with several unknowns and integer coefficients.
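These notions are easy to experiment with. The following Python sketch (illustrative only) checks the digit-sum rule for divisibility by 3 and computes a greatest common divisor by repeated application of the division lemma:

from math import gcd   # used only to cross-check the hand-rolled version

def digit_sum(n):
    # sum of the decimal digits of a nonnegative integer
    return sum(int(d) for d in str(n))

# divisibility-by-3 rule: n is divisible by 3 exactly when its digit sum is
for n in (123, 124, 98765432):
    assert (n % 3 == 0) == (digit_sum(n) % 3 == 0)

def euclid_gcd(a, b):
    # repeated division lemma a = b*q + r: replace (a, b) by (b, r)
    while b != 0:
        a, b = b, a % b
    return a

assert euclid_gcd(252, 105) == gcd(252, 105) == 21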
Another kind of Diophantine equation is described in the Pythagorean theorem, {\displaystyle x^{2}+y^{2}=z^{2}} , whose solutions are called Pythagorean triples if they are all integers. Elementary number theory studies the divisibility properties of integers such as parity (even and odd numbers), prime numbers, and perfect numbers. Important number-theoretic functions include the divisor-counting function, the divisor summatory function and its modifications, and Euler's totient function. A prime number is an integer greater than 1 whose only positive divisors are 1 and the prime itself. A positive integer greater than 1 that is not prime is called a composite number. Euclid's theorem demonstrates that there are infinitely many prime numbers, which comprise the set {2, 3, 5, 7, 11, ...}. The sieve of Eratosthenes was devised as an efficient algorithm for identifying all primes up to a given natural number by eliminating all composite numbers. Factorization is a method of expressing a number as a product. Specifically in number theory, integer factorization is the decomposition of an integer into a product of integers. The process of repeatedly applying this procedure until all factors are prime is known as prime factorization. A fundamental property of primes is shown in Euclid's lemma. It is a consequence of the lemma that if a prime divides a product of integers, then that prime divides at least one of the factors in the product. The unique factorization theorem is the fundamental theorem of arithmetic that relates to prime factorization. The theorem states that every integer greater than 1 can be factorised into a product of prime numbers and that this factorisation is unique up to the order of the factors. For example, {\displaystyle 120} is expressed uniquely as {\displaystyle 2\times 2\times 2\times 3\times 5} or simply {\displaystyle 2^{3}\times 3\times 5} . Modular arithmetic works with finite sets of integers and introduces the concepts of congruence and residue classes. A congruence of two integers {\displaystyle a,b} modulo {\displaystyle n} (a positive integer called the modulus) is an equivalence relation whereby {\displaystyle n|(a-b)} is true. Performing Euclidean division on both {\displaystyle a} and {\displaystyle n} , and on {\displaystyle b} and {\displaystyle n} , yields the same remainder. This is written as {\displaystyle a\equiv b{\pmod {n}}} . In a manner analogous to the 12-hour clock, the sum of 4 and 9 is equal to 13, yet congruent to 1 modulo 12. A residue class modulo {\displaystyle n} is a set that contains all integers congruent to a specified {\displaystyle r} modulo {\displaystyle n} . For example, {\displaystyle 6\mathbb {Z} +1} contains all multiples of 6 incremented by 1. Modular arithmetic provides a range of formulas for rapidly solving congruences of very large powers. An influential theorem is Fermat's little theorem, which states that if a prime {\displaystyle p} is coprime to some integer {\displaystyle a} , then {\displaystyle a^{p-1}\equiv 1{\pmod {p}}} is true. Euler's theorem extends this to assert that every integer {\displaystyle a} coprime to {\displaystyle n} satisfies the congruence {\displaystyle a^{\varphi (n)}\equiv 1{\pmod {n}},} where Euler's totient function {\displaystyle \varphi } counts all positive integers up to {\displaystyle n} that are coprime to {\displaystyle n} .
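Two of the algorithms just described are short enough to sketch in Python (illustrative, not optimised): the sieve of Eratosthenes, and a brute-force totient used to spot-check Euler's theorem:

from math import gcd, isqrt

def sieve(limit):
    # sieve of Eratosthenes: cross off the multiples of each prime
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, isqrt(limit) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]

def totient(n):
    # Euler's totient by direct counting (fine for small n)
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print(sieve(30))        # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
n, a = 10, 3            # gcd(3, 10) = 1 and totient(10) = 4
assert pow(a, totient(n), n) == 1   # 3^4 = 81, congruent to 1 mod 10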
Modular arithmetic also provides formulas that are used to solve congruences with unknowns in a similar vein to equation solving in algebra, such as the Chinese remainder theorem. === Analytic number theory === Analytic number theory, in contrast to elementary number theory, relies on complex numbers and techniques from analysis and calculus. Analytic number theory may be defined in terms of its tools, as the study of the integers by means of tools from real and complex analysis; or in terms of its concerns, as the study within number theory of estimates on the size and density of certain numbers (e.g., primes), as opposed to identities. It studies the distribution of primes, behavior of number-theoretic functions, and irrational numbers. Number theory has the reputation of being a field many of whose results can be stated to the layperson. At the same time, many of the proofs of these results are not particularly accessible, in part because the range of tools they use is, if anything, unusually broad within mathematics. The following are examples of problems in analytic number theory: the prime number theorem, the Goldbach conjecture, the twin prime conjecture, the Hardy–Littlewood conjectures, the Waring problem and the Riemann hypothesis. Some of the most important tools of analytic number theory are the circle method, sieve methods and L-functions (or, rather, the study of their properties). The theory of modular forms (and, more generally, automorphic forms) also occupies an increasingly central place in the toolbox of analytic number theory. Analysis is the branch of mathematics that studies the limit, defined as the value to which a sequence or function tends as the argument (or index) approaches a specific value. For example, the limit of the sequence 0.9, 0.99, 0.999, ... is 1. In the context of functions, the limit of 1 x {\textstyle {\frac {1}{x}}} as x {\displaystyle x} approaches infinity is 0. The complex numbers extend the real numbers with the imaginary unit i {\displaystyle i} defined as the solution to i 2 = − 1 {\displaystyle i^{2}=-1} . Every complex number can be expressed as x + i y {\displaystyle x+iy} , where x {\displaystyle x} is called the real part and y {\displaystyle y} is called the imaginary part. The distribution of primes, described by the function π {\displaystyle \pi } that counts all primes up to a given real number, is unpredictable and is a major subject of study in number theory. Elementary formulas for a partial sequence of primes, including Euler's prime-generating polynomials, have been developed. However, these cease to function as the primes become too large. The prime number theorem in analytic number theory provides a formalisation of the notion that prime numbers appear less commonly as their numerical value increases. One formulation states, informally, that the function x log ⁡ ( x ) {\displaystyle {\frac {x}{\log(x)}}} approximates π ( x ) {\displaystyle \pi (x)} . Another formulation involves the offset logarithmic integral, which approximates π ( x ) {\displaystyle \pi (x)} more accurately. The zeta function has been demonstrated to be connected to the distribution of primes. It is defined as the series ζ ( s ) = ∑ n = 1 ∞ 1 n s = 1 1 s + 1 2 s + 1 3 s + ⋯ {\displaystyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}={\frac {1}{1^{s}}}+{\frac {1}{2^{s}}}+{\frac {1}{3^{s}}}+\cdots } that converges if s {\displaystyle s} is greater than 1.
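Both of these claims, the x/log(x) approximation to π(x) and the convergence of the zeta series for s > 1, can be checked numerically. A minimal Python sketch (a simple sieve is used to count primes; names are illustrative):

```python
import math

def prime_count(x):
    # pi(x): count the primes up to x with a simple sieve.
    is_prime = [True] * (x + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(x ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = [False] * len(is_prime[p * p :: p])
    return sum(is_prime)

for x in (10**3, 10**4, 10**5):
    ratio = prime_count(x) / (x / math.log(x))
    print(x, prime_count(x), round(ratio, 3))  # the ratio drifts slowly toward 1

# The zeta series at s = 2 converges to pi^2 / 6, Euler's classical value:
partial = sum(1 / n**2 for n in range(1, 100001))
print(partial, math.pi**2 / 6)  # agree to several decimal places
```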
Euler demonstrated a link involving the infinite product over all prime numbers, expressed as the identity ζ ( s ) = ∏ p prime ( 1 − 1 p s ) − 1 . {\displaystyle \zeta (s)=\prod _{p{\text{ prime}}}\left(1-{\frac {1}{p^{s}}}\right)^{-1}.} Riemann extended the definition to a complex variable and conjectured that the nontrivial zeroes of the function (the cases with 0 < ℜ ( s ) < 1 {\displaystyle 0<\Re (s)<1} ) are all those in which the real part of s {\displaystyle s} is equal to 1 2 {\textstyle {\frac {1}{2}}} . He established a connection between the nontrivial zeroes and the prime-counting function. This conjecture, now recognised as the Riemann hypothesis, remains unsolved; a proof would have direct consequences for the understanding of the distribution of primes. One may ask analytic questions about algebraic numbers, and use analytic means to answer such questions; it is thus that algebraic and analytic number theory intersect. For example, one may define prime ideals (generalizations of prime numbers in the field of algebraic numbers) and ask how many prime ideals there are up to a certain size. This question can be answered by means of an examination of Dedekind zeta functions, which are generalizations of the Riemann zeta function, a key analytic object at the roots of the subject. This is an example of a general procedure in analytic number theory: deriving information about the distribution of a sequence (here, prime ideals or prime numbers) from the analytic behavior of an appropriately constructed complex-valued function. Elementary number theory works with elementary proofs, a term that excludes the use of complex numbers but may include basic analysis. For example, the prime number theorem was first proven using complex analysis in 1896, but an elementary proof was found only in 1949 by Erdős and Selberg. The term is somewhat ambiguous. For example, proofs based on complex Tauberian theorems, such as Wiener–Ikehara, are often seen as quite enlightening but not elementary despite using Fourier analysis, not complex analysis. Here as elsewhere, an elementary proof may be longer and more difficult for most readers than a more advanced proof. Some subjects generally considered to be part of analytic number theory (e.g., sieve theory) are better covered by the second rather than the first definition. Small sieves, for instance, use little analysis and yet still belong to analytic number theory. === Algebraic number theory === An algebraic number is any complex number that is a solution to some polynomial equation f ( x ) = 0 {\displaystyle f(x)=0} with rational coefficients; for example, every solution x {\displaystyle x} of x 5 + ( 11 / 2 ) x 3 − 7 x 2 + 9 = 0 {\displaystyle x^{5}+(11/2)x^{3}-7x^{2}+9=0} is an algebraic number. Fields of algebraic numbers are also called algebraic number fields, or shortly number fields. Algebraic number theory studies algebraic number fields. It could be argued that the simplest kind of number fields, namely quadratic fields, were already studied by Gauss, as the discussion of quadratic forms in Disquisitiones Arithmeticae can be restated in terms of ideals and norms in quadratic fields. (A quadratic field consists of all numbers of the form a + b d {\displaystyle a+b{\sqrt {d}}} , where a {\displaystyle a} and b {\displaystyle b} are rational numbers and d {\displaystyle d} is a fixed rational number whose square root is not rational.)
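Arithmetic in a quadratic field is concrete enough to sketch directly: an element a + b√d can be represented as a pair of rationals, and the norm N(a + b√d) = a² − db² is multiplicative, a fact used throughout algebraic number theory. A minimal Python sketch under these conventions (the helper names are illustrative):

```python
from fractions import Fraction

d = 5  # a fixed rational whose square root is irrational

def multiply(x, y):
    a, b = x
    c, e = y
    # (a + b*sqrt(d)) * (c + e*sqrt(d)) = (ac + d*be) + (ae + bc)*sqrt(d)
    return (a * c + d * b * e, a * e + b * c)

def norm(x):
    # N(a + b*sqrt(d)) = a^2 - d*b^2; multiplicative: N(xy) = N(x)N(y).
    a, b = x
    return a * a - d * b * b

x = (Fraction(1), Fraction(2))   # 1 + 2*sqrt(5)
y = (Fraction(3), Fraction(-1))  # 3 - sqrt(5)
assert norm(multiply(x, y)) == norm(x) * norm(y)
```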
For that matter, the eleventh-century chakravala method amounts—in modern terms—to an algorithm for finding the units of a real quadratic number field. However, neither Bhāskara nor Gauss knew of number fields as such. The grounds of the subject were set in the late nineteenth century, when ideal numbers, the theory of ideals and valuation theory were introduced; these are three complementary ways of dealing with the lack of unique factorization in algebraic number fields. (For example, in the field generated by the rationals and − 5 {\displaystyle {\sqrt {-5}}} , the number 6 {\displaystyle 6} can be factorised both as 6 = 2 ⋅ 3 {\displaystyle 6=2\cdot 3} and 6 = ( 1 + − 5 ) ( 1 − − 5 ) {\displaystyle 6=(1+{\sqrt {-5}})(1-{\sqrt {-5}})} ; all of 2 {\displaystyle 2} , 3 {\displaystyle 3} , 1 + − 5 {\displaystyle 1+{\sqrt {-5}}} and 1 − − 5 {\displaystyle 1-{\sqrt {-5}}} are irreducible, and thus, in a naïve sense, analogous to primes among the integers.) The initial impetus for the development of ideal numbers (by Kummer) seems to have come from the study of higher reciprocity laws, that is, generalizations of quadratic reciprocity. Number fields are often studied as extensions of smaller number fields: a field L is said to be an extension of a field K if L contains K. (For example, the complex numbers C are an extension of the reals R, and the reals R are an extension of the rationals Q.) Classifying the possible extensions of a given number field is a difficult and partially open problem. Abelian extensions—that is, extensions L of K such that the Galois group Gal(L/K) of L over K is an abelian group—are relatively well understood. Their classification was the object of the programme of class field theory, which was initiated in the late nineteenth century (partly by Kronecker and Eisenstein) and carried out largely in 1900–1950. An example of an active area of research in algebraic number theory is Iwasawa theory. The Langlands program, one of the main current large-scale research plans in mathematics, is sometimes described as an attempt to generalise class field theory to non-abelian extensions of number fields. === Diophantine geometry === The central problem of Diophantine geometry is to determine when a Diophantine equation has integer or rational solutions, and if it does, how many. The approach taken is to think of the solutions of an equation as a geometric object. For example, an equation in two variables defines a curve in the plane. More generally, an equation or system of equations in two or more variables defines a curve, a surface, or some other such object in n-dimensional space. In Diophantine geometry, one asks whether there are any rational points (points all of whose coordinates are rationals) or integral points (points all of whose coordinates are integers) on the curve or surface. If there are any such points, the next step is to ask how many there are and how they are distributed. A basic question in this direction is whether there are finitely or infinitely many rational points on a given curve or surface. Consider, for instance, the Pythagorean equation x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} . One would like to know its rational solutions, namely ( x , y ) {\displaystyle (x,y)} such that x and y are both rational. This is the same as asking for all integer solutions to a 2 + b 2 = c 2 {\displaystyle a^{2}+b^{2}=c^{2}} ; any solution to the latter equation gives us a solution x = a / c {\displaystyle x=a/c} , y = b / c {\displaystyle y=b/c} to the former. 
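This correspondence between integer triples and rational points can be made concrete with a brute-force search, a small illustrative Python sketch rather than an efficient enumeration:

```python
from fractions import Fraction

# Search for integer solutions of a^2 + b^2 = c^2 and form the corresponding
# rational points (a/c, b/c) on the unit circle.
points = []
for c in range(1, 30):
    for a in range(1, c):
        for b in range(a, c):
            if a * a + b * b == c * c:
                points.append((Fraction(a, c), Fraction(b, c)))

for x, y in points:
    assert x * x + y * y == 1   # every scaled triple is a rational point

# Non-primitive triples such as (6, 8, 10) reduce to the same point (3/5, 4/5).
print(points[:3])
```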
It is also the same as asking for all points with rational coordinates on the curve described by x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} (a circle of radius 1 centered on the origin). The rephrasing of questions on equations in terms of points on curves is felicitous. The finiteness or not of the number of rational or integer points on an algebraic curve (that is, rational or integer solutions to an equation f ( x , y ) = 0 {\displaystyle f(x,y)=0} , where f {\displaystyle f} is a polynomial in two variables) depends crucially on the genus of the curve. A major achievement of this approach is Wiles's proof of Fermat's Last Theorem, for which other geometrical notions are just as crucial. There is also the closely linked area of Diophantine approximations: given a number x {\displaystyle x} , determine how well it can be approximated by rational numbers. One seeks approximations that are good relative to the amount of space required to write the rational number: call a / q {\displaystyle a/q} (with gcd ( a , q ) = 1 {\displaystyle \gcd(a,q)=1} ) a good approximation to x {\displaystyle x} if | x − a / q | < 1 q c {\displaystyle |x-a/q|<{\frac {1}{q^{c}}}} , where c {\displaystyle c} is large. This question is of special interest if x {\displaystyle x} is an algebraic number. If x {\displaystyle x} cannot be approximated well, then some equations do not have integer or rational solutions. Moreover, several concepts (especially that of height) are critical both in Diophantine geometry and in the study of Diophantine approximations. This question is also of special interest in transcendental number theory: if a number can be approximated better than any algebraic number, then it is a transcendental number. It is by this argument that π and e have been shown to be transcendental. Diophantine geometry should not be confused with the geometry of numbers, which is a collection of graphical methods for answering certain questions in algebraic number theory. Arithmetic geometry is a contemporary term for the same domain covered by Diophantine geometry, particularly when one wishes to emphasize the connections to modern algebraic geometry (for example, in Faltings's theorem) rather than to techniques in Diophantine approximations. === Other subfields === Probabilistic number theory starts with questions such as the following: Take an integer n at random between one and a million. How likely is it to be prime? (this is just another way of asking how many primes there are between one and a million). How many prime divisors will n have on average? What is the probability that it will have many more or many fewer divisors or prime divisors than the average? Combinatorics in number theory starts with questions like the following: Does a fairly "thick" infinite set A {\displaystyle A} contain many elements in arithmetic progression: a {\displaystyle a} , a + b , a + 2 b , a + 3 b , … , a + 10 b {\displaystyle a+b,a+2b,a+3b,\ldots ,a+10b} ? Should it be possible to write large integers as sums of elements of A {\displaystyle A} ? Computational number theory, in turn, centers on two main questions: "Can this be computed?" and "Can it be computed rapidly?" Anyone can test whether a number is prime or, if it is not, split it into prime factors; doing so rapidly is another matter. Fast algorithms for testing primality are now known, but, in spite of much work (both theoretical and practical), no truly fast algorithm for factoring is known.
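This asymmetry between fast primality testing and slow factoring is exactly what the RSA scheme discussed in the Applications section below relies on. A toy Python sketch with deliberately tiny primes (real systems use primes hundreds of digits long together with padding schemes; all values here are illustrative):

```python
p, q = 61, 53                 # toy primes; real keys use very large primes
n = p * q                     # public modulus: easy to publish, hard to factor
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent: d*e ≡ 1 (mod phi); Python 3.8+

message = 42
ciphertext = pow(message, e, n)    # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)  # decrypt: c^d mod n
assert recovered == message
# Anyone who can factor n recovers p and q, hence phi and d: the scheme's
# security rests on the absence of a truly fast factoring algorithm.
```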
== Applications == For a long time, number theory in general, and the study of prime numbers in particular, was seen as the canonical example of pure mathematics, with no applications outside of mathematics other than the use of prime-numbered gear teeth to distribute wear evenly. In particular, number theorists such as British mathematician G. H. Hardy prided themselves on doing work that had absolutely no military significance. The number-theorist Leonard Dickson (1874–1954) said "Thank God that number theory is unsullied by any application". Such a view is no longer applicable to number theory. This vision of the purity of number theory was shattered in the 1970s, when it was publicly announced that prime numbers could be used as the basis for the creation of public-key cryptography algorithms. Schemes such as RSA are based on the difficulty of factoring large composite numbers into their prime factors. These applications have led to significant study of algorithms for computing with prime numbers, and in particular of primality testing, methods for determining whether a given number is prime. Prime numbers are also used in computing for checksums, hash tables, and pseudorandom number generators. In 1974, Donald Knuth said "virtually every theorem in elementary number theory arises in a natural, motivated way in connection with the problem of making computers do high-speed numerical calculations". Elementary number theory is taught in discrete mathematics courses for computer scientists. It also has applications to the continuous in numerical analysis. Number theory now has several modern applications spanning diverse areas such as: Computer science: The fast Fourier transform (FFT) algorithm, which is used to efficiently compute the discrete Fourier transform, has important applications in signal processing and data analysis. Physics: The Riemann hypothesis has connections to the distribution of prime numbers and has been studied for its potential implications in physics. Error correction codes: The theory of finite fields and algebraic geometry have been used to construct efficient error-correcting codes. Communications: The design of cellular telephone networks requires knowledge of the theory of modular forms, which is a part of analytic number theory. Study of musical scales: the concept of "equal temperament", which is the basis for most modern Western music, involves dividing the octave into 12 equal parts. This has been studied using number theory and in particular the properties of the 12th root of 2. == See also == Arithmetic dynamics Algebraic function field Arithmetic topology Finite field p-adic number List of number theoretic algorithms == Notes == == References == === Sources === This article incorporates material from the Citizendium article "Number theory", which is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License but not under the GFDL. == Further reading == Two of the most popular introductions to the subject are: Hardy, G. H.; Wright, E. M. (2008) [1938]. An introduction to the theory of numbers (rev. by D. R. Heath-Brown and J. H. Silverman, 6th ed.). Oxford University Press. ISBN 978-0-19-921986-5. Vinogradov, I. M. (2003) [1954]. Elements of Number Theory (reprint of the 1954 ed.). Mineola, NY: Dover Publications. Hardy and Wright's book is a comprehensive classic, though its clarity sometimes suffers due to the authors' insistence on elementary methods (Apostol 1981).
Vinogradov's main attraction consists in its set of problems, which quickly lead to Vinogradov's own research interests; the text itself is very basic and close to minimal. Other popular first introductions are: Ivan M. Niven; Herbert S. Zuckerman; Hugh L. Montgomery (2008) [1960]. An introduction to the theory of numbers (reprint of the 5th 1991 ed.). John Wiley & Sons. ISBN 978-81-265-1811-1. Retrieved 2016-02-28. Rosen, Kenneth H. (2010). Elementary Number Theory (6th ed.). Pearson Education. ISBN 978-0-321-71775-7. Retrieved 2016-02-28. Popular choices for a second textbook include: Borevich, A. I.; Shafarevich, Igor R. (1966). Number theory. Pure and Applied Mathematics. Vol. 20. Boston, MA: Academic Press. ISBN 978-0-12-117850-5. MR 0195803. Serre, Jean-Pierre (1996) [1973]. A course in arithmetic. Graduate Texts in Mathematics. Vol. 7. Springer. ISBN 978-0-387-90040-7. == External links == Number Theory entry in the Encyclopedia of Mathematics Number Theory Web
In mathematics, the Pythagorean theorem or Pythagoras' theorem is a fundamental relation in Euclidean geometry between the three sides of a right triangle. It states that the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares on the other two sides. The theorem can be written as an equation relating the lengths of the sides a, b and the hypotenuse c, sometimes called the Pythagorean equation: a 2 + b 2 = c 2 . {\displaystyle a^{2}+b^{2}=c^{2}.} The theorem is named for the Greek philosopher Pythagoras, born around 570 BC. The theorem has been proved numerous times by many different methods – possibly the most for any mathematical theorem. The proofs are diverse, including both geometric proofs and algebraic proofs, with some dating back thousands of years. When Euclidean space is represented by a Cartesian coordinate system in analytic geometry, Euclidean distance satisfies the Pythagorean relation: the squared distance between two points equals the sum of squares of the difference in each coordinate between the points. The theorem can be generalized in various ways: to higher-dimensional spaces, to spaces that are not Euclidean, to objects that are not right triangles, and to objects that are not triangles at all but n-dimensional solids. == Proofs using constructed squares == === Rearrangement proofs === In one rearrangement proof, two squares are used whose sides have a measure of a + b {\displaystyle a+b} and which contain four right triangles whose sides are a, b and c, with the hypotenuse being c. In the square on the right side, the triangles are placed such that the corners of the square correspond to the corners of the right angle in the triangles, forming a square in the center whose sides are length c. Each outer square has an area of ( a + b ) 2 {\displaystyle (a+b)^{2}} as well as 2 a b + c 2 {\displaystyle 2ab+c^{2}} , with 2 a b {\displaystyle 2ab} representing the total area of the four triangles. Within the big square on the left side, the four triangles are moved to form two similar rectangles with sides of length a and b. These rectangles in their new position delineate two new squares: one of side length a in the bottom-left corner, and another of side length b in the top-right corner. In this new position, the left square likewise has an area of ( a + b ) 2 {\displaystyle (a+b)^{2}} as well as 2 a b + a 2 + b 2 {\displaystyle 2ab+a^{2}+b^{2}} . Since both squares have the area ( a + b ) 2 {\displaystyle (a+b)^{2}} , the two other expressions for their areas must also equal each other, so that 2 a b + c 2 {\displaystyle 2ab+c^{2}} = 2 a b + a 2 + b 2 {\displaystyle 2ab+a^{2}+b^{2}} . With the area of the four triangles removed from both sides of the equation, what remains is a 2 + b 2 = c 2 . {\displaystyle a^{2}+b^{2}=c^{2}.} In another proof, the rectangles in the second box can also be placed such that each has one corner coinciding with consecutive corners of the square. In this way they also form two boxes, this time in consecutive corners, with areas a 2 {\displaystyle a^{2}} and b 2 {\displaystyle b^{2}} , which again leads to a second square with the area 2 a b + a 2 + b 2 {\displaystyle 2ab+a^{2}+b^{2}} .
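The algebra behind these rearrangements reduces to a polynomial identity, which can be checked symbolically. A minimal Python sketch using the sympy library (assumed to be available; the symbol names mirror the text):

```python
import sympy as sp

a, b, c = sp.symbols('a b c', positive=True)

# The two expressions for the outer square's area must agree:
# (a + b)^2 = 2ab + c^2 and (a + b)^2 = 2ab + a^2 + b^2,
# so subtracting 2ab from each side yields c^2 = a^2 + b^2.
assert sp.expand((a + b)**2 - 2*a*b) == a**2 + b**2
assert sp.expand((a + b)**2) == sp.expand(2*a*b + a**2 + b**2)
```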
English mathematician Sir Thomas Heath gives this proof in his commentary on Proposition I.47 in Euclid's Elements, and mentions the proposals of German mathematicians Carl Anton Bretschneider and Hermann Hankel that Pythagoras may have known this proof. Heath himself favors a different proposal for a Pythagorean proof, but acknowledges from the outset of his discussion "that the Greek literature which we possess belonging to the first five centuries after Pythagoras contains no statement specifying this or any other particular great geometric discovery to him." Recent scholarship has cast increasing doubt on any sort of role for Pythagoras as a creator of mathematics, although debate about this continues. === Algebraic proofs === The theorem can be proved algebraically using four copies of the same triangle arranged symmetrically around a square with side c, as shown in the lower part of the diagram. This results in a larger square, with side a + b and area (a + b)2. The four triangles and the square with side c must have the same area as the larger square, ( b + a ) 2 = c 2 + 4 a b 2 = c 2 + 2 a b , {\displaystyle (b+a)^{2}=c^{2}+4{\frac {ab}{2}}=c^{2}+2ab,} giving c 2 = ( b + a ) 2 − 2 a b = b 2 + 2 a b + a 2 − 2 a b = a 2 + b 2 . {\displaystyle c^{2}=(b+a)^{2}-2ab=b^{2}+2ab+a^{2}-2ab=a^{2}+b^{2}.} A similar proof uses four copies of a right triangle with sides a, b and c, arranged inside a square with side c as in the top half of the diagram. The triangles are similar with area 1 2 a b {\displaystyle {\tfrac {1}{2}}ab} , while the small square has side b − a and area (b − a)2. The area of the large square is therefore ( b − a ) 2 + 4 a b 2 = ( b − a ) 2 + 2 a b = b 2 − 2 a b + a 2 + 2 a b = a 2 + b 2 . {\displaystyle (b-a)^{2}+4{\frac {ab}{2}}=(b-a)^{2}+2ab=b^{2}-2ab+a^{2}+2ab=a^{2}+b^{2}.} But this is a square with side c and area c2, so c 2 = a 2 + b 2 . {\displaystyle c^{2}=a^{2}+b^{2}.} == Other proofs of the theorem == This theorem may have more known proofs than any other (the law of quadratic reciprocity being another contender for that distinction); the book The Pythagorean Proposition contains 370 proofs. === Proof using similar triangles === This proof is based on the proportionality of the sides of three similar triangles, that is, upon the fact that the ratio of any two corresponding sides of similar triangles is the same regardless of the size of the triangles. Let ABC represent a right triangle, with the right angle located at C, as shown on the figure. Draw the altitude from point C, and call H its intersection with the side AB. Point H divides the length of the hypotenuse c into parts d and e. The new triangle, ACH, is similar to triangle ABC, because they both have a right angle (by definition of the altitude), and they share the angle at A, meaning that the third angle will be the same in both triangles as well, marked as θ in the figure. By a similar reasoning, the triangle CBH is also similar to ABC. The proof of similarity of the triangles requires the triangle postulate: The sum of the angles in a triangle is two right angles, and is equivalent to the parallel postulate. Similarity of the triangles leads to the equality of ratios of corresponding sides: B C A B = B H B C and A C A B = A H A C . {\displaystyle {\frac {BC}{AB}}={\frac {BH}{BC}}{\text{ and }}{\frac {AC}{AB}}={\frac {AH}{AC}}.} The first result equates the cosines of the angles θ, whereas the second result equates their sines. These ratios can be written as B C 2 = A B × B H and A C 2 = A B × A H .
{\displaystyle BC^{2}=AB\times BH{\text{ and }}AC^{2}=AB\times AH.} Summing these two equalities results in B C 2 + A C 2 = A B × B H + A B × A H = A B ( A H + B H ) = A B 2 , {\displaystyle BC^{2}+AC^{2}=AB\times BH+AB\times AH=AB(AH+BH)=AB^{2},} which, after simplification, demonstrates the Pythagorean theorem: B C 2 + A C 2 = A B 2 . {\displaystyle BC^{2}+AC^{2}=AB^{2}.} The role of this proof in history is the subject of much speculation. The underlying question is why Euclid did not use this proof, but invented another. One conjecture is that the proof by similar triangles involved a theory of proportions, a topic not discussed until later in the Elements, and that the theory of proportions needed further development at that time. === Einstein's proof by dissection without rearrangement === Albert Einstein gave a proof by dissection in which the pieces do not need to be moved. Instead of using a square on the hypotenuse and two squares on the legs, one can use any other shape that includes the hypotenuse, and two similar shapes that each include one of two legs instead of the hypotenuse (see Similar figures on the three sides). In Einstein's proof, the shape that includes the hypotenuse is the right triangle itself. The dissection consists of dropping a perpendicular from the vertex of the right angle of the triangle to the hypotenuse, thus splitting the whole triangle into two parts. Those two parts have the same shape as the original right triangle, and have the legs of the original triangle as their hypotenuses, and the sum of their areas is that of the original triangle. Because the ratio of the area of a right triangle to the square of its hypotenuse is the same for similar triangles, the relationship between the areas of the three triangles holds for the squares of the sides of the large triangle as well. === Euclid's proof === In outline, here is how the proof in Euclid's Elements proceeds. The large square is divided into a left and right rectangle. A triangle is constructed that has half the area of the left rectangle. Then another triangle is constructed that has half the area of the square on the left-most side. These two triangles are shown to be congruent, proving this square has the same area as the left rectangle. This argument is followed by a similar version for the right rectangle and the remaining square. Putting the two rectangles together to reform the square on the hypotenuse, its area is the same as the sum of the area of the other two squares. The details follow. Let A, B, C be the vertices of a right triangle, with a right angle at A. Drop a perpendicular from A to the side opposite the hypotenuse in the square on the hypotenuse. That line divides the square on the hypotenuse into two rectangles, each having the same area as one of the two squares on the legs. For the formal proof, we require four elementary lemmata: If two triangles have two sides of the one equal to two sides of the other, each to each, and the angles included by those sides equal, then the triangles are congruent (side-angle-side). The area of a triangle is half the area of any parallelogram on the same base and having the same altitude. The area of a rectangle is equal to the product of two adjacent sides. The area of a square is equal to the product of two of its sides (follows from 3). Next, each top square is related to a triangle congruent with another triangle related in turn to one of two rectangles making up the lower square. 
The proof is as follows: Let ACB be a right-angled triangle with right angle CAB. On each of the sides BC, AB, and CA, squares are drawn, CBDE, BAGF, and ACIH, in that order. The construction of squares requires the immediately preceding theorems in Euclid, and depends upon the parallel postulate. From A, draw a line parallel to BD and CE. It will perpendicularly intersect BC and DE at K and L, respectively. Join CF and AD, to form the triangles BCF and BDA. Angles CAB and BAG are both right angles; therefore C, A, and G are collinear. Angles CBD and FBA are both right angles; therefore angle ABD equals angle FBC, since both are the sum of a right angle and angle ABC. Since AB is equal to FB, BD is equal to BC and angle ABD equals angle FBC, triangle ABD must be congruent to triangle FBC. Since A-K-L is a straight line, parallel to BD, then rectangle BDLK has twice the area of triangle ABD because they share the base BD and have the same altitude BK, i.e., a line normal to their common base, connecting the parallel lines BD and AL. (lemma 2) Since C is collinear with A and G, and this line is parallel to FB, then square BAGF must be twice in area to triangle FBC. Therefore, rectangle BDLK must have the same area as square BAGF = AB2. By applying steps 3 to 10 to the other side of the figure, it can be similarly shown that rectangle CKLE must have the same area as square ACIH = AC2. Adding these two results, AB2 + AC2 = BD × BK + KL × KC Since BD = KL, BD × BK + KL × KC = BD(BK + KC) = BD × BC Therefore, AB2 + AC2 = BC2, since CBDE is a square. This proof, which appears in Euclid's Elements as that of Proposition 47 in Book 1, demonstrates that the area of the square on the hypotenuse is the sum of the areas of the other two squares. This is quite distinct from the proof by similarity of triangles, which is conjectured to be the proof that Pythagoras used. === Proofs by dissection and rearrangement === Another by rearrangement is given by the middle animation. A large square is formed with area c2, from four identical right triangles with sides a, b and c, fitted around a small central square. Then two rectangles are formed with sides a and b by moving the triangles. Combining the smaller square with these rectangles produces two squares of areas a2 and b2, which must have the same area as the initial large square. The third, rightmost image also gives a proof. The upper two squares are divided as shown by the blue and green shading, into pieces that when rearranged can be made to fit in the lower square on the hypotenuse – or conversely the large square can be divided as shown into pieces that fill the other two. This way of cutting one figure into pieces and rearranging them to get another figure is called dissection. This shows the area of the large square equals that of the two smaller ones. === Proof by area-preserving shearing === As shown in the accompanying animation, area-preserving shear mappings and translations can transform the squares on the sides adjacent to the right-angle onto the square on the hypotenuse, together covering it exactly. Each shear leaves the base and height unchanged, thus leaving the area unchanged too. The translations also leave the area unchanged, as they do not alter the shapes at all. Each square is first sheared into a parallelogram, and then into a rectangle which can be translated onto one section of the square on the hypotenuse. === Other algebraic proofs === A related proof by U.S. President James A. 
Garfield was published before he was elected president, while he was a U.S. Representative. Instead of a square it uses a trapezoid, which can be constructed from the square in the second of the above proofs by bisecting along a diagonal of the inner square, to give the trapezoid as shown in the diagram. The area of the trapezoid can be calculated to be half the area of the square, that is 1 2 ( b + a ) 2 . {\displaystyle {\frac {1}{2}}(b+a)^{2}.} The inner square is similarly halved, and there are only two triangles, so the proof proceeds as above except for a factor of 1 2 {\displaystyle {\frac {1}{2}}} , which is removed by multiplying by two to give the result. === Proof using differentials === One can arrive at the Pythagorean theorem by studying how changes in a side produce a change in the hypotenuse and employing calculus. The triangle ABC is a right triangle, as shown in the upper part of the diagram, with BC the hypotenuse. At the same time the triangle lengths are measured as shown, with the hypotenuse of length y, the side AC of length x and the side AB of length a, as seen in the lower diagram part. If x is increased by a small amount dx by extending the side AC slightly to D, then y also increases by dy. These form two sides of a triangle, CDE, which (with E chosen so CE is perpendicular to the hypotenuse) is a right triangle approximately similar to ABC. Therefore, the ratios of their sides must be the same, that is: d y d x = x y . {\displaystyle {\frac {dy}{dx}}={\frac {x}{y}}.} This can be rewritten as y d y = x d x {\displaystyle y\,dy=x\,dx} , which is a differential equation that can be solved by direct integration: ∫ y d y = ∫ x d x , {\displaystyle \int y\,dy=\int x\,dx\,,} giving y 2 = x 2 + C . {\displaystyle y^{2}=x^{2}+C.} The constant can be deduced from x = 0, y = a to give the equation y 2 = x 2 + a 2 . {\displaystyle y^{2}=x^{2}+a^{2}.} This is more of an intuitive proof than a formal one: it can be made more rigorous if proper limits are used in place of dx and dy. == Converse == The converse of the theorem is also true: Given a triangle with sides of length a, b, and c, if a2 + b2 = c2, then the angle between sides a and b is a right angle. For any three positive real numbers a, b, and c such that a2 + b2 = c2, there exists a triangle with sides a, b and c as a consequence of the converse of the triangle inequality. This converse appears in Euclid's Elements (Book I, Proposition 48): "If in a triangle the square on one of the sides equals the sum of the squares on the remaining two sides of the triangle, then the angle contained by the remaining two sides of the triangle is right." It can be proved using the law of cosines or as follows: Let ABC be a triangle with side lengths a, b, and c, with a2 + b2 = c2. Construct a second triangle with sides of length a and b containing a right angle. By the Pythagorean theorem, it follows that the hypotenuse of this triangle has length c = √(a2 + b2), the same as the hypotenuse of the first triangle. Since both triangles' sides are the same lengths a, b and c, the triangles are congruent and must have the same angles. Therefore, the angle between the side of lengths a and b in the original triangle is a right angle. The above proof of the converse makes use of the Pythagorean theorem itself. The converse can also be proved without assuming the Pythagorean theorem. A corollary of the Pythagorean theorem's converse is a simple means of determining whether a triangle is right, obtuse, or acute, as follows.
Let c be chosen to be the longest of the three sides and a + b > c (otherwise there is no triangle according to the triangle inequality). The following statements apply: If a2 + b2 = c2, then the triangle is right. If a2 + b2 > c2, then the triangle is acute. If a2 + b2 < c2, then the triangle is obtuse. Edsger W. Dijkstra has stated this proposition about acute, right, and obtuse triangles in this language: sgn(α + β − γ) = sgn(a2 + b2 − c2), where α is the angle opposite to side a, β is the angle opposite to side b, γ is the angle opposite to side c, and sgn is the sign function. == Consequences and uses of the theorem == === Pythagorean triples === A Pythagorean triple has three positive integers a, b, and c, such that a2 + b2 = c2. In other words, a Pythagorean triple represents the lengths of the sides of a right triangle where all three sides have integer lengths. Such a triple is commonly written (a, b, c). Some well-known examples are (3, 4, 5) and (5, 12, 13). A primitive Pythagorean triple is one in which a, b and c are coprime (the greatest common divisor of a, b and c is 1). The following is a list of primitive Pythagorean triples with values less than 100: (3, 4, 5), (5, 12, 13), (7, 24, 25), (8, 15, 17), (9, 40, 41), (11, 60, 61), (12, 35, 37), (13, 84, 85), (16, 63, 65), (20, 21, 29), (28, 45, 53), (33, 56, 65), (36, 77, 85), (39, 80, 89), (48, 55, 73), (65, 72, 97) There are many formulas for generating Pythagorean triples. Of these, Euclid's formula is the most well-known: given arbitrary positive integers m and n with m > n, the formula states that the integers a = m 2 − n 2 , b = 2 m n , c = m 2 + n 2 {\displaystyle a=m^{2}-n^{2},\quad \,b=2mn,\quad \,c=m^{2}+n^{2}} form a Pythagorean triple. === Inverse Pythagorean theorem === Given a right triangle with sides a , b , c {\displaystyle a,b,c} and altitude d {\displaystyle d} (a line from the right angle and perpendicular to the hypotenuse c {\displaystyle c} ), the Pythagorean theorem gives a 2 + b 2 = c 2 {\displaystyle a^{2}+b^{2}=c^{2}} , while the inverse Pythagorean theorem relates the two legs a , b {\displaystyle a,b} to the altitude d {\displaystyle d} , 1 a 2 + 1 b 2 = 1 d 2 {\displaystyle {\frac {1}{a^{2}}}+{\frac {1}{b^{2}}}={\frac {1}{d^{2}}}} The equation can be transformed to, 1 ( x z ) 2 + 1 ( y z ) 2 = 1 ( x y ) 2 {\displaystyle {\frac {1}{(xz)^{2}}}+{\frac {1}{(yz)^{2}}}={\frac {1}{(xy)^{2}}}} where x 2 + y 2 = z 2 {\displaystyle x^{2}+y^{2}=z^{2}} for any non-zero real x , y , z {\displaystyle x,y,z} . If a , b , d {\displaystyle a,b,d} are to be integers, the smallest solution a > b > d {\displaystyle a>b>d} is then 1 20 2 + 1 15 2 = 1 12 2 {\displaystyle {\frac {1}{20^{2}}}+{\frac {1}{15^{2}}}={\frac {1}{12^{2}}}} using the smallest Pythagorean triple 3 , 4 , 5 {\displaystyle 3,4,5} . The reciprocal Pythagorean theorem is a special case of the optic equation 1 p + 1 q = 1 r {\displaystyle {\frac {1}{p}}+{\frac {1}{q}}={\frac {1}{r}}} where the denominators are squares and also for a heptagonal triangle whose sides p , q , r {\displaystyle p,q,r} are square numbers. === Incommensurable lengths === One of the consequences of the Pythagorean theorem is that line segments whose lengths are incommensurable (so the ratio of which is not a rational number) can be constructed using a straightedge and compass. Pythagoras' theorem enables construction of incommensurable lengths because the hypotenuse of a triangle is related to the sides by the square root operation.
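This square-root construction can be imitated numerically: each new right triangle takes the previous hypotenuse as one leg and a unit length as the other, so the successive hypotenuses are √2, √3, √4, and so on. A minimal Python sketch of the idea (the figure described in the next paragraph realizes the same construction geometrically):

```python
import math

# Start with a unit leg; each step reuses the previous hypotenuse as a leg
# together with a new unit leg, so the hypotenuses are sqrt(2), sqrt(3), ...
leg = 1.0
for n in range(2, 7):
    hyp = math.hypot(leg, 1.0)
    print(n, hyp, math.sqrt(n))   # the two computed columns agree
    leg = hyp
```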
The figure on the right shows how to construct line segments whose lengths are in the ratio of the square root of any positive integer. Each triangle has a side (labeled "1") that is the chosen unit for measurement. In each right triangle, Pythagoras' theorem establishes the length of the hypotenuse in terms of this unit. If a hypotenuse is related to the unit by the square root of a positive integer that is not a perfect square, it is a realization of a length incommensurable with the unit, such as √2, √3, √5. For more detail, see Quadratic irrational. Incommensurable lengths conflicted with the Pythagorean school's concept of numbers as only whole numbers. The Pythagorean school dealt with proportions by comparison of integer multiples of a common subunit. According to one legend, Hippasus of Metapontum (ca. 470 B.C.) was drowned at sea for making known the existence of the irrational or incommensurable. A careful discussion of Hippasus's contributions is found in Fritz. === Complex numbers === For any complex number z = x + i y , {\displaystyle z=x+iy,} the absolute value or modulus is given by r = | z | = x 2 + y 2 . {\displaystyle r=|z|={\sqrt {x^{2}+y^{2}}}.} So the three quantities, r, x and y are related by the Pythagorean equation, r 2 = x 2 + y 2 . {\displaystyle r^{2}=x^{2}+y^{2}.} Note that r is defined to be a positive number or zero but x and y can be negative as well as positive. Geometrically, r is the distance of z from zero or the origin O in the complex plane. This can be generalised to find the distance between two points, z1 and z2 say. The required distance is given by | z 1 − z 2 | = ( x 1 − x 2 ) 2 + ( y 1 − y 2 ) 2 , {\displaystyle |z_{1}-z_{2}|={\sqrt {(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}}},} so again they are related by a version of the Pythagorean equation, | z 1 − z 2 | 2 = ( x 1 − x 2 ) 2 + ( y 1 − y 2 ) 2 . {\displaystyle |z_{1}-z_{2}|^{2}=(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}.} === Euclidean distance === The distance formula in Cartesian coordinates is derived from the Pythagorean theorem. If (x1, y1) and (x2, y2) are points in the plane, then the distance between them, also called the Euclidean distance, is given by ( x 1 − x 2 ) 2 + ( y 1 − y 2 ) 2 . {\displaystyle {\sqrt {(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}}}.} More generally, in Euclidean n-space, the Euclidean distance between two points, A = ( a 1 , a 2 , … , a n ) {\displaystyle A\,=\,(a_{1},a_{2},\dots ,a_{n})} and B = ( b 1 , b 2 , … , b n ) {\displaystyle B\,=\,(b_{1},b_{2},\dots ,b_{n})} , is defined, by generalization of the Pythagorean theorem, as: ( a 1 − b 1 ) 2 + ( a 2 − b 2 ) 2 + ⋯ + ( a n − b n ) 2 = ∑ i = 1 n ( a i − b i ) 2 . {\displaystyle {\sqrt {(a_{1}-b_{1})^{2}+(a_{2}-b_{2})^{2}+\cdots +(a_{n}-b_{n})^{2}}}={\sqrt {\sum _{i=1}^{n}(a_{i}-b_{i})^{2}}}.} If instead of Euclidean distance, the square of this value (the squared Euclidean distance, or SED) is used, the resulting equation avoids square roots and is simply a sum of the SED of the coordinates: ( a 1 − b 1 ) 2 + ( a 2 − b 2 ) 2 + ⋯ + ( a n − b n ) 2 = ∑ i = 1 n ( a i − b i ) 2 . {\displaystyle (a_{1}-b_{1})^{2}+(a_{2}-b_{2})^{2}+\cdots +(a_{n}-b_{n})^{2}=\sum _{i=1}^{n}(a_{i}-b_{i})^{2}.} The squared form is a smooth, convex function of both points, and is widely used in optimization theory and statistics, forming the basis of least squares.
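Both the complex modulus and the n-dimensional distance formula are direct to compute with the standard library. A minimal Python sketch (the values are illustrative):

```python
import math

# The complex modulus and the distance formula are the Pythagorean
# relation in disguise.
z1, z2 = complex(3, 4), complex(0, 0)
assert abs(z1) == 5.0                        # |3 + 4i| = sqrt(3^2 + 4^2)
assert abs(z1 - z2) == math.dist((3, 4), (0, 0))

# In n dimensions the same sum-of-squares pattern applies:
A, B = (1, 2, 3, 4), (5, 6, 7, 8)
d = math.sqrt(sum((a - b) ** 2 for a, b in zip(A, B)))
assert d == math.dist(A, B)                  # sqrt(4 * 16) = 8.0
```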
=== Euclidean distance in other coordinate systems === If Cartesian coordinates are not used, for example, if polar coordinates are used in two dimensions or, in more general terms, if curvilinear coordinates are used, the formulas expressing the Euclidean distance are more complicated than the Pythagorean theorem, but can be derived from it. A typical example where the straight-line distance between two points is converted to curvilinear coordinates can be found in the applications of Legendre polynomials in physics. The formulas can be discovered by using Pythagoras' theorem with the equations relating the curvilinear coordinates to Cartesian coordinates. For example, the polar coordinates (r, θ) can be introduced as: x = r cos ⁡ θ , y = r sin ⁡ θ . {\displaystyle x=r\cos \theta ,\ y=r\sin \theta .} Then two points with locations (r1, θ1) and (r2, θ2) are separated by a distance s: s 2 = ( x 1 − x 2 ) 2 + ( y 1 − y 2 ) 2 = ( r 1 cos ⁡ θ 1 − r 2 cos ⁡ θ 2 ) 2 + ( r 1 sin ⁡ θ 1 − r 2 sin ⁡ θ 2 ) 2 . {\displaystyle s^{2}=(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}=(r_{1}\cos \theta _{1}-r_{2}\cos \theta _{2})^{2}+(r_{1}\sin \theta _{1}-r_{2}\sin \theta _{2})^{2}.} Performing the squares and combining terms, the Pythagorean formula for distance in Cartesian coordinates produces the separation in polar coordinates as: s 2 = r 1 2 + r 2 2 − 2 r 1 r 2 ( cos ⁡ θ 1 cos ⁡ θ 2 + sin ⁡ θ 1 sin ⁡ θ 2 ) = r 1 2 + r 2 2 − 2 r 1 r 2 cos ⁡ ( θ 1 − θ 2 ) = r 1 2 + r 2 2 − 2 r 1 r 2 cos ⁡ Δ θ , {\displaystyle {\begin{aligned}s^{2}&=r_{1}^{2}+r_{2}^{2}-2r_{1}r_{2}\left(\cos \theta _{1}\cos \theta _{2}+\sin \theta _{1}\sin \theta _{2}\right)\\&=r_{1}^{2}+r_{2}^{2}-2r_{1}r_{2}\cos \left(\theta _{1}-\theta _{2}\right)\\&=r_{1}^{2}+r_{2}^{2}-2r_{1}r_{2}\cos \Delta \theta ,\end{aligned}}} using the trigonometric product-to-sum formulas. This formula is the law of cosines, sometimes called the generalized Pythagorean theorem. From this result, for the case where the radii to the two locations are at right angles, the enclosed angle Δθ = π/2, and the form corresponding to Pythagoras' theorem is regained: s 2 = r 1 2 + r 2 2 . {\displaystyle s^{2}=r_{1}^{2}+r_{2}^{2}.} The Pythagorean theorem, valid for right triangles, therefore is a special case of the more general law of cosines, valid for arbitrary triangles. === Pythagorean trigonometric identity === In a right triangle with sides a, b and hypotenuse c, trigonometry determines the sine and cosine of the angle θ between side a and the hypotenuse as: sin ⁡ θ = b c , cos ⁡ θ = a c . {\displaystyle \sin \theta ={\frac {b}{c}},\quad \cos \theta ={\frac {a}{c}}.} From that it follows: cos 2 θ + sin 2 θ = a 2 + b 2 c 2 = 1 , {\displaystyle {\cos }^{2}\theta +{\sin }^{2}\theta ={\frac {a^{2}+b^{2}}{c^{2}}}=1,} where the last step applies Pythagoras' theorem. This relation between sine and cosine is sometimes called the fundamental Pythagorean trigonometric identity. In similar triangles, the ratios of the sides are the same regardless of the size of the triangles, and depend upon the angles. Consequently, in the figure, the triangle with hypotenuse of unit size has opposite side of size sin θ and adjacent side of size cos θ in units of the hypotenuse. === Relation to the cross product === The Pythagorean theorem relates the cross product and dot product in a similar way: ‖ a × b ‖ 2 + ( a ⋅ b ) 2 = ‖ a ‖ 2 ‖ b ‖ 2 . 
{\displaystyle \|\mathbf {a} \times \mathbf {b} \|^{2}+(\mathbf {a} \cdot \mathbf {b} )^{2}=\|\mathbf {a} \|^{2}\|\mathbf {b} \|^{2}.} This can be seen from the definitions of the cross product and dot product, as a × b = a b n sin ⁡ θ a ⋅ b = a b cos ⁡ θ , {\displaystyle {\begin{aligned}\mathbf {a} \times \mathbf {b} &=ab\mathbf {n} \sin {\theta }\\\mathbf {a} \cdot \mathbf {b} &=ab\cos {\theta },\end{aligned}}} with n a unit vector normal to both a and b. The relationship follows from these definitions and the Pythagorean trigonometric identity. This can also be used to define the cross product. By rearranging the following equation is obtained ‖ a × b ‖ 2 = ‖ a ‖ 2 ‖ b ‖ 2 − ( a ⋅ b ) 2 . {\displaystyle \|\mathbf {a} \times \mathbf {b} \|^{2}=\|\mathbf {a} \|^{2}\|\mathbf {b} \|^{2}-(\mathbf {a} \cdot \mathbf {b} )^{2}.} This can be considered as a condition on the cross product and so part of its definition, for example in seven dimensions. === As an axiom === If the first four of the Euclidean geometry axioms are assumed to be true then the Pythagorean theorem is equivalent to the fifth. That is, Euclid's fifth postulate implies the Pythagorean theorem and vice-versa. == Generalizations == === Similar figures on the three sides === The Pythagorean theorem generalizes beyond the areas of squares on the three sides to any similar figures. This was known by Hippocrates of Chios in the 5th century BC, and was included by Euclid in his Elements: If one erects similar figures (see Euclidean geometry) with corresponding sides on the sides of a right triangle, then the sum of the areas of the ones on the two smaller sides equals the area of the one on the larger side. This extension assumes that the sides of the original triangle are the corresponding sides of the three congruent figures (so the common ratios of sides between the similar figures are a:b:c). While Euclid's proof only applied to convex polygons, the theorem also applies to concave polygons and even to similar figures that have curved boundaries (but still with part of a figure's boundary being the side of the original triangle). The basic idea behind this generalization is that the area of a plane figure is proportional to the square of any linear dimension, and in particular is proportional to the square of the length of any side. Thus, if similar figures with areas A, B and C are erected on sides with corresponding lengths a, b and c then: A a 2 = B b 2 = C c 2 , {\displaystyle {\frac {A}{a^{2}}}={\frac {B}{b^{2}}}={\frac {C}{c^{2}}}\,,} ⇒ A + B = a 2 c 2 C + b 2 c 2 C . {\displaystyle \Rightarrow A+B={\frac {a^{2}}{c^{2}}}C+{\frac {b^{2}}{c^{2}}}C\,.} But, by the Pythagorean theorem, a2 + b2 = c2, so A + B = C. Conversely, if we can prove that A + B = C for three similar figures without using the Pythagorean theorem, then we can work backwards to construct a proof of the theorem. For example, the starting center triangle can be replicated and used as a triangle C on its hypotenuse, and two similar right triangles (A and B ) constructed on the other two sides, formed by dividing the central triangle by its altitude. The sum of the areas of the two smaller triangles therefore is that of the third, thus A + B = C and reversing the above logic leads to the Pythagorean theorem a2 + b2 = c2. 
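The proportionality argument can be checked numerically with any family of similar figures; here semicircles are erected on the sides of a 3-4-5 right triangle, their areas being proportional to the square of the diameter. An illustrative Python sketch:

```python
import math

def semicircle_area(side):
    # A semicircle on a side of length s has area (pi/8) * s^2, so the areas
    # of these similar figures are proportional to the squares of the sides.
    return math.pi * side**2 / 8

a, b, c = 3.0, 4.0, 5.0
A, B, C = semicircle_area(a), semicircle_area(b), semicircle_area(c)
assert math.isclose(A + B, C)   # A + B = C, equivalent to a^2 + b^2 = c^2
```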
(See also Einstein's proof by dissection without rearrangement) === Law of cosines === The Pythagorean theorem is a special case of the more general theorem relating the lengths of sides in any triangle, the law of cosines, which states that a 2 + b 2 − 2 a b cos ⁡ θ = c 2 {\displaystyle a^{2}+b^{2}-2ab\cos {\theta }=c^{2}} where θ {\displaystyle \theta } is the angle between sides a {\displaystyle a} and b {\displaystyle b} . When θ {\displaystyle \theta } is π 2 {\displaystyle {\frac {\pi }{2}}} radians or 90°, then cos ⁡ θ = 0 {\displaystyle \cos {\theta }=0} , and the formula reduces to the usual Pythagorean theorem. === Arbitrary triangle === At any selected angle of a general triangle of sides a, b, c, inscribe an isosceles triangle such that the equal angles at its base θ are the same as the selected angle. Suppose the selected angle θ is opposite the side labeled c. Inscribing the isosceles triangle forms triangle CAD with angle θ opposite side b and with side r along c. A second triangle is formed with angle θ opposite side a and a side with length s along c, as shown in the figure. Thābit ibn Qurra stated that the sides of the three triangles were related as: a 2 + b 2 = c ( r + s ) . {\displaystyle a^{2}+b^{2}=c(r+s)\ .} As the angle θ approaches π/2, the base of the isosceles triangle narrows, and lengths r and s overlap less and less. When θ = π/2, ADB becomes a right triangle, r + s = c, and the original Pythagorean theorem is regained. One proof observes that triangle ABC has the same angles as triangle CAD, but in opposite order. (The two triangles share the angle at vertex A, both contain the angle θ, and so also have the same third angle by the triangle postulate.) Consequently, ABC is similar to the reflection of CAD, the triangle DAC in the lower panel. Taking the ratio of sides opposite and adjacent to θ, c b = b r . {\displaystyle {\frac {c}{b}}={\frac {b}{r}}\ .} Likewise, for the reflection of the other triangle, c a = a s . {\displaystyle {\frac {c}{a}}={\frac {a}{s}}\ .} Clearing fractions and adding these two relations: c s + c r = a 2 + b 2 , {\displaystyle cs+cr=a^{2}+b^{2}\ ,} the required result. The theorem remains valid if the angle θ {\displaystyle \theta } is obtuse so the lengths r and s are non-overlapping. === General triangles using parallelograms === Pappus's area theorem is a further generalization, that applies to triangles that are not right triangles, using parallelograms on the three sides in place of squares (squares are a special case, of course). The upper figure shows that for a scalene triangle, the area of the parallelogram on the longest side is the sum of the areas of the parallelograms on the other two sides, provided the parallelogram on the long side is constructed as indicated (the dimensions labeled with arrows are the same, and determine the sides of the bottom parallelogram). This replacement of squares with parallelograms bears a clear resemblance to the original Pythagoras' theorem, and was considered a generalization by Pappus of Alexandria in the fourth century AD. The lower figure shows the elements of the proof. Focus on the left side of the figure. The left green parallelogram has the same area as the left, blue portion of the bottom parallelogram because both have the same base b and height h. However, the left green parallelogram also has the same area as the left green parallelogram of the upper figure, because they have the same base (the upper left side of the triangle) and the same height normal to that side of the triangle.
Repeating the argument for the right side of the figure, the bottom parallelogram has the same area as the sum of the two green parallelograms. === Solid geometry === In terms of solid geometry, Pythagoras' theorem can be applied to three dimensions as follows. Consider the cuboid shown in the figure. The length of face diagonal AC is found from Pythagoras' theorem as: A C ¯ 2 = A B ¯ 2 + B C ¯ 2 , {\displaystyle {\overline {AC}}^{\,2}={\overline {AB}}^{\,2}+{\overline {BC}}^{\,2}\,,} where these three sides form a right triangle. Using diagonal AC and the horizontal edge CD, the length of body diagonal AD then is found by a second application of Pythagoras' theorem as: A D ¯ 2 = A C ¯ 2 + C D ¯ 2 , {\displaystyle {\overline {AD}}^{\,2}={\overline {AC}}^{\,2}+{\overline {CD}}^{\,2}\,,} or, doing it all in one step: A D ¯ 2 = A B ¯ 2 + B C ¯ 2 + C D ¯ 2 . {\displaystyle {\overline {AD}}^{\,2}={\overline {AB}}^{\,2}+{\overline {BC}}^{\,2}+{\overline {CD}}^{\,2}\,.} This result is the three-dimensional expression for the magnitude of a vector v (the diagonal AD) in terms of its orthogonal components {vk} (the three mutually perpendicular sides): ‖ v ‖ 2 = ∑ k = 1 3 ‖ v k ‖ 2 . {\displaystyle \|\mathbf {v} \|^{2}=\sum _{k=1}^{3}\|\mathbf {v} _{k}\|^{2}.} This one-step formulation may be viewed as a generalization of Pythagoras' theorem to higher dimensions. However, this result is really just the repeated application of the original Pythagoras' theorem to a succession of right triangles in a sequence of orthogonal planes. A substantial generalization of the Pythagorean theorem to three dimensions is de Gua's theorem, named for Jean Paul de Gua de Malves: If a tetrahedron has a right angle corner (like a corner of a cube), then the square of the area of the face opposite the right angle corner is the sum of the squares of the areas of the other three faces. This result can be generalized as in the "n-dimensional Pythagorean theorem": Let x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\ldots ,x_{n}} be orthogonal vectors in Rn. Consider the n-dimensional simplex S with vertices 0 , x 1 , … , x n {\displaystyle 0,x_{1},\ldots ,x_{n}} . (Think of the (n − 1)-dimensional simplex with vertices x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} not including the origin as the "hypotenuse" of S and the remaining (n − 1)-dimensional faces of S as its "legs".) Then the square of the volume of the hypotenuse of S is the sum of the squares of the volumes of the n legs. This statement is illustrated in three dimensions by the tetrahedron in the figure. The "hypotenuse" is the base of the tetrahedron at the back of the figure, and the "legs" are the three sides emanating from the vertex in the foreground. As the depth of the base from the vertex increases, the area of the "legs" increases, while that of the base is fixed. The theorem suggests that when this depth is at the value creating a right vertex, the generalization of Pythagoras' theorem applies. In a different wording: Given an n-rectangular n-dimensional simplex, the square of the (n − 1)-content of the facet opposing the right vertex will equal the sum of the squares of the (n − 1)-contents of the remaining facets. === Inner product spaces === The Pythagorean theorem can be generalized to inner product spaces, which are generalizations of the familiar 2-dimensional and 3-dimensional Euclidean spaces. For example, a function may be considered as a vector with infinitely many components in an inner product space, as in functional analysis. 
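Before developing the inner-product abstraction, the repeated application described above can be checked numerically for a cuboid. A minimal Python sketch (the side lengths are illustrative):

```python
import math

AB, BC, CD = 1.0, 2.0, 2.0
AC = math.hypot(AB, BC)                         # face diagonal, first application
AD_two_step = math.hypot(AC, CD)                # body diagonal, second application
AD_one_step = math.sqrt(AB**2 + BC**2 + CD**2)  # one-step sum of squares
assert math.isclose(AD_two_step, AD_one_step)
print(AD_one_step)   # 3.0, since 1 + 4 + 4 = 9
```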
In an inner product space, the concept of perpendicularity is replaced by the concept of orthogonality: two vectors v and w are orthogonal if their inner product ⟨ v , w ⟩ {\displaystyle \langle \mathbf {v} ,\mathbf {w} \rangle } is zero. The inner product is a generalization of the dot product of vectors. The dot product is called the standard inner product or the Euclidean inner product. However, other inner products are possible. The concept of length is replaced by the concept of the norm ‖v‖ of a vector v, defined as: ‖ v ‖ ≡ ⟨ v , v ⟩ . {\displaystyle \lVert \mathbf {v} \rVert \equiv {\sqrt {\langle \mathbf {v} ,\mathbf {v} \rangle }}\,.} In an inner-product space, the Pythagorean theorem states that for any two orthogonal vectors v and w we have ‖ v + w ‖ 2 = ‖ v ‖ 2 + ‖ w ‖ 2 . {\displaystyle \left\|\mathbf {v} +\mathbf {w} \right\|^{2}=\left\|\mathbf {v} \right\|^{2}+\left\|\mathbf {w} \right\|^{2}.} Here the vectors v and w are akin to the sides of a right triangle with hypotenuse given by the vector sum v + w. This form of the Pythagorean theorem is a consequence of the properties of the inner product: ‖ v + w ‖ 2 = ⟨ v + w , v + w ⟩ = ⟨ v , v ⟩ + ⟨ w , w ⟩ + ⟨ v , w ⟩ + ⟨ w , v ⟩ = ‖ v ‖ 2 + ‖ w ‖ 2 , {\displaystyle {\begin{aligned}\left\|\mathbf {v} +\mathbf {w} \right\|^{2}&=\langle \mathbf {v+w} ,\ \mathbf {v+w} \rangle \\[3mu]&=\langle \mathbf {v} ,\ \mathbf {v} \rangle +\langle \mathbf {w} ,\ \mathbf {w} \rangle +\langle \mathbf {v,\ w} \rangle +\langle \mathbf {w,\ v} \rangle \\[3mu]&=\left\|\mathbf {v} \right\|^{2}+\left\|\mathbf {w} \right\|^{2},\end{aligned}}} where ⟨ v , w ⟩ = ⟨ w , v ⟩ = 0 {\displaystyle \langle \mathbf {v,\ w} \rangle =\langle \mathbf {w,\ v} \rangle =0} because of orthogonality. A further generalization of the Pythagorean theorem in an inner product space to non-orthogonal vectors is the parallelogram law: 2 ‖ v ‖ 2 + 2 ‖ w ‖ 2 = ‖ v + w ‖ 2 + ‖ v − w ‖ 2 , {\displaystyle 2\|\mathbf {v} \|^{2}+2\|\mathbf {w} \|^{2}=\|\mathbf {v+w} \|^{2}+\|\mathbf {v-w} \|^{2}\ ,} which says that twice the sum of the squares of the lengths of the sides of a parallelogram is the sum of the squares of the lengths of the diagonals. Any norm that satisfies this equality is ipso facto a norm corresponding to an inner product. The Pythagorean identity can be extended to sums of more than two orthogonal vectors. If v1, v2, ..., vn are pairwise-orthogonal vectors in an inner-product space, then application of the Pythagorean theorem to successive pairs of these vectors (as described for 3-dimensions in the section on solid geometry) results in the equation ‖ ∑ k = 1 n v k ‖ 2 = ∑ k = 1 n ‖ v k ‖ 2 {\displaystyle {\biggl \|}\sum _{k=1}^{n}\mathbf {v} _{k}{\biggr \|}^{2}=\sum _{k=1}^{n}\|\mathbf {v} _{k}\|^{2}} === Sets of m-dimensional objects in n-dimensional space === Another generalization of the Pythagorean theorem applies to Lebesgue-measurable sets of objects in any number of dimensions. Specifically, the square of the measure of an m-dimensional set of objects in one or more parallel m-dimensional flats in n-dimensional Euclidean space is equal to the sum of the squares of the measures of the orthogonal projections of the object(s) onto all m-dimensional coordinate subspaces. In mathematical terms: μ m s 2 = ∑ i = 1 x μ 2 m p i {\displaystyle \mu _{ms}^{2}=\sum _{i=1}^{x}\mathbf {\mu ^{2}} _{mp_{i}}} where: μ m {\displaystyle \mu _{m}} is a measure in m-dimensions (a length in one dimension, an area in two dimensions, a volume in three dimensions, etc.). 
s {\displaystyle s} is a set of one or more non-overlapping m-dimensional objects in one or more parallel m-dimensional flats in n-dimensional Euclidean space. μ m s {\displaystyle \mu _{ms}} is the total measure (sum) of the set of m-dimensional objects. p {\displaystyle p} represents an m-dimensional projection of the original set onto an orthogonal coordinate subspace. μ m p i {\displaystyle \mu _{mp_{i}}} is the measure of the m-dimensional set projection onto m-dimensional coordinate subspace i {\displaystyle i} . Because object projections can overlap on a coordinate subspace, the measure of each object projection in the set must be calculated individually, then measures of all projections added together to provide the total measure for the set of projections on the given coordinate subspace. x {\displaystyle x} is the number of orthogonal, m-dimensional coordinate subspaces in n-dimensional space (Rn) onto which the m-dimensional objects are projected (m ≤ n): x = ( n m ) = n ! m ! ( n − m ) ! {\displaystyle x={\binom {n}{m}}={\frac {n!}{m!(n-m)!}}} === Non-Euclidean geometry === The Pythagorean theorem is derived from the axioms of Euclidean geometry, and in fact, were the Pythagorean theorem to fail for some right triangle, then the plane in which this triangle is contained cannot be Euclidean. More precisely, the Pythagorean theorem implies, and is implied by, Euclid's Parallel (Fifth) Postulate. Thus, right triangles in a non-Euclidean geometry do not satisfy the Pythagorean theorem. For example, in spherical geometry, all three sides of the right triangle (say a, b, and c) bounding an octant of the unit sphere have length equal to π/2, and all its angles are right angles, which violates the Pythagorean theorem because a 2 + b 2 = 2 c 2 > c 2 {\displaystyle a^{2}+b^{2}=2c^{2}>c^{2}} . Here two cases of non-Euclidean geometry are considered—spherical geometry and hyperbolic plane geometry; in each case, as in the Euclidean case for non-right triangles, the result replacing the Pythagorean theorem follows from the appropriate law of cosines. However, the Pythagorean theorem remains true in hyperbolic geometry and elliptic geometry if the condition that the triangle be right is replaced with the condition that two of the angles sum to the third, say A+B = C. The sides are then related as follows: the sum of the areas of the circles with diameters a and b equals the area of the circle with diameter c. ==== Spherical geometry ==== For any right triangle on a sphere of radius R (for example, if γ in the figure is a right angle), with sides a, b, c, the relation between the sides takes the form: cos ⁡ c R = cos ⁡ a R cos ⁡ b R . {\displaystyle \cos {\frac {c}{R}}=\cos {\frac {a}{R}}\,\cos {\frac {b}{R}}.} This equation can be derived as a special case of the spherical law of cosines that applies to all spherical triangles: cos ⁡ c R = cos ⁡ a R cos ⁡ b R + sin ⁡ a R sin ⁡ b R cos ⁡ γ . {\displaystyle \cos {\frac {c}{R}}=\cos {\frac {a}{R}}\,\cos {\frac {b}{R}}+\sin {\frac {a}{R}}\,\sin {\frac {b}{R}}\,\cos {\gamma }.} For infinitesimal triangles on the sphere (or equivalently, for finite spherical triangles on a sphere of infinite radius), the spherical relation between the sides of a right triangle reduces to the Euclidean form of the Pythagorean theorem. To see how, assume we have a spherical triangle of fixed side lengths a, b, and c on a sphere with expanding radius R. 
As R approaches infinity the quantities a/R, b/R, and c/R tend to zero and the spherical Pythagorean identity reduces to 1 = 1 , {\displaystyle 1=1,} so we must look at its asymptotic expansion. The Maclaurin series for the cosine function can be written as cos ⁡ x = 1 − 1 2 x 2 + O ( x 4 ) {\textstyle \cos x=1-{\tfrac {1}{2}}x^{2}+O{\left(x^{4}\right)}} with the remainder term in big O notation. Letting x = c / R {\displaystyle x=c/R} be a side of the triangle, and treating the expression as an asymptotic expansion in terms of R for a fixed c, cos ⁡ c R = 1 − c 2 2 R 2 + O ( R − 4 ) {\displaystyle {\begin{aligned}\cos {\frac {c}{R}}=1-{\frac {c^{2}}{2R^{2}}}+O{\left(R^{-4}\right)}\end{aligned}}} and likewise for a and b. Substituting the asymptotic expansion for each of the cosines into the spherical relation for a right triangle yields 1 − c 2 2 R 2 + O ( R − 4 ) = ( 1 − a 2 2 R 2 + O ( R − 4 ) ) ( 1 − b 2 2 R 2 + O ( R − 4 ) ) = 1 − a 2 2 R 2 − b 2 2 R 2 + O ( R − 4 ) . {\displaystyle {\begin{aligned}1-{\frac {c^{2}}{2R^{2}}}+O{\left(R^{-4}\right)}&=\left(1-{\frac {a^{2}}{2R^{2}}}+O{\left(R^{-4}\right)}\right)\left(1-{\frac {b^{2}}{2R^{2}}}+O{\left(R^{-4}\right)}\right)\\&=1-{\frac {a^{2}}{2R^{2}}}-{\frac {b^{2}}{2R^{2}}}+O{\left(R^{-4}\right)}.\end{aligned}}} Subtracting 1 and then negating each side, c 2 2 R 2 = a 2 2 R 2 + b 2 2 R 2 + O ( R − 4 ) . {\displaystyle {\frac {c^{2}}{2R^{2}}}={\frac {a^{2}}{2R^{2}}}+{\frac {b^{2}}{2R^{2}}}+O{\left(R^{-4}\right)}.} Multiplying through by 2R2, the asymptotic expansion for c in terms of fixed a, b and variable R is c 2 = a 2 + b 2 + O ( R − 2 ) . {\displaystyle c^{2}=a^{2}+b^{2}+O{\left(R^{-2}\right)}.} The Euclidean Pythagorean relationship c 2 = a 2 + b 2 {\textstyle c^{2}=a^{2}+b^{2}} is recovered in the limit, as the remainder vanishes when the radius R approaches infinity. For practical computation in spherical trigonometry with small right triangles, cosines can be replaced with sines using the double-angle identity cos ⁡ 2 θ = 1 − 2 sin 2 ⁡ θ {\displaystyle \cos {2\theta }=1-2\sin ^{2}{\theta }} to avoid loss of significance. Then the spherical Pythagorean theorem can alternately be written as sin 2 ⁡ c 2 R = sin 2 ⁡ a 2 R + sin 2 ⁡ b 2 R − 2 sin 2 ⁡ a 2 R sin 2 ⁡ b 2 R . {\displaystyle \sin ^{2}{\frac {c}{2R}}=\sin ^{2}{\frac {a}{2R}}+\sin ^{2}{\frac {b}{2R}}-2\sin ^{2}{\frac {a}{2R}}\,\sin ^{2}{\frac {b}{2R}}.} ==== Hyperbolic geometry ==== In a hyperbolic space with uniform Gaussian curvature −1/R2, for a right triangle with legs a, b, and hypotenuse c, the relation between the sides takes the form: cosh ⁡ c R = cosh ⁡ a R cosh ⁡ b R {\displaystyle \cosh {\frac {c}{R}}=\cosh {\frac {a}{R}}\,\cosh {\frac {b}{R}}} where cosh is the hyperbolic cosine. This formula is a special form of the hyperbolic law of cosines that applies to all hyperbolic triangles: cosh ⁡ c R = cosh ⁡ a R cosh ⁡ b R − sinh ⁡ a R sinh ⁡ b R cos ⁡ γ , {\displaystyle \cosh {\frac {c}{R}}=\cosh {\frac {a}{R}}\ \cosh {\frac {b}{R}}-\sinh {\frac {a}{R}}\ \sinh {\frac {b}{R}}\ \cos \gamma \ ,} with γ the angle at the vertex opposite the side c. By using the Maclaurin series for the hyperbolic cosine, cosh x ≈ 1 + x2/2, it can be shown that as a hyperbolic triangle becomes very small (that is, as a, b, and c all approach zero), the hyperbolic relation for a right triangle approaches the form of Pythagoras' theorem. 
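Both limiting statements are easy to check numerically: solve each relation for the hypotenuse c and let R grow. A minimal sketch in Python (standard library only; the legs a = 3, b = 4 are arbitrary test values):

```python
import math

a, b = 3.0, 4.0  # fixed legs of the right triangle

for R in (10.0, 100.0, 10_000.0):
    # Spherical: cos(c/R) = cos(a/R) * cos(b/R)
    c_sph = R * math.acos(math.cos(a / R) * math.cos(b / R))
    # Hyperbolic: cosh(c/R) = cosh(a/R) * cosh(b/R)
    c_hyp = R * math.acosh(math.cosh(a / R) * math.cosh(b / R))
    print(R, c_sph, c_hyp)  # both approach sqrt(a^2 + b^2) = 5 as R grows
```

The spherical hypotenuse approaches 5 from below and the hyperbolic one from above, consistent with positive and negative curvature respectively.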
For small right triangles (a, b << R), the hyperbolic cosines can be eliminated to avoid loss of significance, giving sinh 2 ⁡ c 2 R = sinh 2 ⁡ a 2 R + sinh 2 ⁡ b 2 R + 2 sinh 2 ⁡ a 2 R sinh 2 ⁡ b 2 R . {\displaystyle \sinh ^{2}{\frac {c}{2R}}=\sinh ^{2}{\frac {a}{2R}}+\sinh ^{2}{\frac {b}{2R}}+2\sinh ^{2}{\frac {a}{2R}}\sinh ^{2}{\frac {b}{2R}}\,.} ==== Very small triangles ==== For any uniform curvature K (positive, zero, or negative), in very small right triangles (|K|a2, |K|b2 << 1) with hypotenuse c, it can be shown that c 2 = a 2 + b 2 − K 3 a 2 b 2 − K 2 45 a 2 b 2 ( a 2 + b 2 ) − 2 K 3 945 a 2 b 2 ( a 2 − b 2 ) 2 + O ( K 4 c 10 ) . {\displaystyle c^{2}=a^{2}+b^{2}-{\frac {K}{3}}a^{2}b^{2}-{\frac {K^{2}}{45}}a^{2}b^{2}(a^{2}+b^{2})-{\frac {2K^{3}}{945}}a^{2}b^{2}(a^{2}-b^{2})^{2}+O(K^{4}c^{10})\,.} === Differential geometry === The Pythagorean theorem applies to infinitesimal triangles seen in differential geometry. In three dimensional space, the distance between two infinitesimally separated points satisfies d s 2 = d x 2 + d y 2 + d z 2 , {\displaystyle ds^{2}=dx^{2}+dy^{2}+dz^{2},} with ds the element of distance and (dx, dy, dz) the components of the vector separating the two points. Such a space is called a Euclidean space. However, in Riemannian geometry, a generalization of this expression useful for general coordinates (not just Cartesian) and general spaces (not just Euclidean) takes the form: d s 2 = ∑ i , j n g i j d x i d x j {\displaystyle ds^{2}=\sum _{i,j}^{n}g_{ij}\,dx_{i}\,dx_{j}} which is called the metric tensor. (Sometimes, by abuse of language, the same term is applied to the set of coefficients gij.) It may be a function of position, and often describes curved space. A simple example is Euclidean (flat) space expressed in curvilinear coordinates. For example, in polar coordinates: d s 2 = d r 2 + r 2 d θ 2 . {\displaystyle ds^{2}=dr^{2}+r^{2}d\theta ^{2}\ .} == History == There is debate whether the Pythagorean theorem was discovered once, or many times in many places, and the date of first discovery is uncertain, as is the date of the first proof. Historians of Mesopotamian mathematics have concluded that the Pythagorean rule was in widespread use during the Old Babylonian period (20th to 16th centuries BC), over a thousand years before Pythagoras was born. The history of the theorem can be divided into four parts: knowledge of Pythagorean triples, knowledge of the relationship among the sides of a right triangle, knowledge of the relationships among adjacent angles, and proofs of the theorem within some deductive system. Written c. 1800 BC, the Egyptian Middle Kingdom Berlin Papyrus 6619 includes a problem whose solution is the Pythagorean triple 6:8:10, but the problem does not mention a triangle. The Mesopotamian tablet Plimpton 322, written near Larsa also c. 1800 BC, contains many entries closely related to Pythagorean triples. In India, the Baudhayana Shulba Sutra, the dates of which are given variously as between the 8th and 5th century BC, contains a list of Pythagorean triples and a statement of the Pythagorean theorem, both in the special case of the isosceles right triangle and in the general case, as does the Apastamba Shulba Sutra (c. 600 BC). Byzantine Neoplatonic philosopher and mathematician Proclus, writing in the fifth century AD, states two arithmetic rules, "one of them attributed to Plato, the other to Pythagoras", for generating special Pythagorean triples. The rule attributed to Pythagoras (c. 570 – c. 
495 BC) starts from an odd number and produces a triple with leg and hypotenuse differing by one unit; the rule attributed to Plato (428/427 or 424/423 – 348/347 BC) starts from an even number and produces a triple with leg and hypotenuse differing by two units. According to Thomas L. Heath (1861–1940), no specific attribution of the theorem to Pythagoras exists in the surviving Greek literature from the five centuries after Pythagoras lived. However, when authors such as Plutarch and Cicero attributed the theorem to Pythagoras, they did so in a way which suggests that the attribution was widely known and undoubted. Classicist Kurt von Fritz wrote, "Whether this formula is rightly attributed to Pythagoras personally ... one can safely assume that it belongs to the very oldest period of Pythagorean mathematics." Around 300 BC, in Euclid's Elements, the oldest extant axiomatic proof of the theorem is presented. With contents known much earlier, but in surviving texts dating from roughly the 1st century BC, the Chinese text Zhoubi Suanjing (周髀算经), (The Arithmetical Classic of the Gnomon and the Circular Paths of Heaven) gives a reasoning for the Pythagorean theorem for the (3, 4, 5) triangle — in China it is called the "Gougu theorem" (勾股定理). During the Han Dynasty (202 BC to 220 AD), Pythagorean triples appear in The Nine Chapters on the Mathematical Art, together with a mention of right triangles. Some believe the theorem arose first in China in the 11th century BC, where it is alternatively known as the "Shang Gao theorem" (商高定理), named after the Duke of Zhou's astronomer and mathematician, whose reasoning composed most of what was in the Zhoubi Suanjing. == See also == == Notes and references == === Notes === === References === === Works cited === == External links == Euclid (1997) [c. 300 BC]. David E. Joyce (ed.). Elements. Retrieved 2006-08-30. In HTML with Java-based interactive figures. "Pythagorean theorem". Encyclopedia of Mathematics. EMS Press. 2001 [1994]. History topic: Pythagoras's theorem in Babylonian mathematics Interactive links: Interactive proof in Java of the Pythagorean theorem Another interactive proof in Java of the Pythagorean theorem Pythagorean theorem with interactive animation Animated, non-algebraic, and user-paced Pythagorean theorem Pythagorean theorem water demo on YouTube Pythagorean theorem (more than 70 proofs from cut-the-knot) Weisstein, Eric W. "Pythagorean theorem". MathWorld.
Wikipedia/Pythagorean_equation
In number theory, the first Hardy–Littlewood conjecture states the asymptotic formula for the number of prime k-tuples less than a given magnitude by generalizing the prime number theorem. It was first proposed by G. H. Hardy and John Edensor Littlewood in 1923. == Statement == Let m 1 , m 2 , … , m k {\displaystyle m_{1},m_{2},\ldots ,m_{k}} be positive even integers such that the numbers of the sequence P = ( p , p + m 1 , p + m 2 , … , p + m k ) {\displaystyle P=(p,p+m_{1},p+m_{2},\ldots ,p+m_{k})} do not form a complete residue class with respect to any prime and let π P ( n ) {\displaystyle \pi _{P}(n)} denote the number of primes p {\displaystyle p} less than n {\displaystyle n} such that p + m 1 , p + m 2 , … , p + m k {\displaystyle p+m_{1},p+m_{2},\ldots ,p+m_{k}} are all prime. Then π P ( n ) ∼ C P ∫ 2 n d t log k + 1 ⁡ t , {\displaystyle \pi _{P}(n)\sim C_{P}\int _{2}^{n}{\frac {dt}{\log ^{k+1}t}},} where C P = 2 k ∏ q prime, q ≥ 3 1 − w ( q ; m 1 , m 2 , … , m k ) q ( 1 − 1 q ) k + 1 {\displaystyle C_{P}=2^{k}\prod _{q{\text{ prime,}} \atop q\geq 3}{\frac {1-{\frac {w(q;m_{1},m_{2},\ldots ,m_{k})}{q}}}{\left(1-{\frac {1}{q}}\right)^{k+1}}}} is a product over odd primes and w ( q ; m 1 , m 2 , … , m k ) {\displaystyle w(q;m_{1},m_{2},\ldots ,m_{k})} denotes the number of distinct residues of 0 , m 1 , m 2 , … , m k {\displaystyle 0,m_{1},m_{2},\ldots ,m_{k}} modulo q {\displaystyle q} . The case k = 1 {\displaystyle k=1} and m 1 = 2 {\displaystyle m_{1}=2} is related to the twin prime conjecture. Specifically if π 2 ( n ) {\displaystyle \pi _{2}(n)} denotes the number of twin primes less than n then π 2 ( n ) ∼ C 2 ∫ 2 n d t log 2 ⁡ t , {\displaystyle \pi _{2}(n)\sim C_{2}\int _{2}^{n}{\frac {dt}{\log ^{2}t}},} where C 2 = 2 ∏ q prime, q ≥ 3 ( 1 − 1 ( q − 1 ) 2 ) ≈ 1.320323632 … {\displaystyle C_{2}=2\prod _{\textstyle {q{\text{ prime,}} \atop q\geq 3}}\left(1-{\frac {1}{(q-1)^{2}}}\right)\approx 1.320323632\ldots } is the twin prime constant. == Skewes' number == The Skewes' numbers for prime k-tuples are an extension of the definition of Skewes' number to prime k-tuples based on the first Hardy–Littlewood conjecture. The first prime p that violates the Hardy–Littlewood inequality for the k-tuple P, i.e., such that π P ( p ) > C P li P ⁡ ( p ) , {\displaystyle \pi _{P}(p)>C_{P}\operatorname {li} _{P}(p),} (if such a prime exists) is the Skewes number for P. == Consequences == The conjecture has been shown to be inconsistent with the second Hardy–Littlewood conjecture. == Generalizations == The Bateman–Horn conjecture generalizes the first Hardy–Littlewood conjecture to polynomials of degree higher than 1. == Notes == == References == Aletheia-Zomlefer, Soren Laing; Fukshansky, Lenny; Garcia, Stephan Ramon (2020). "The Bateman–Horn conjecture: Heuristic, history, and applications". Expositiones Mathematicae. 38 (4): 430–479. doi:10.1016/j.exmath.2019.04.005. ISSN 0723-0869. Tóth, László (January 2019). "On the Asymptotic Density of Prime k-tuples and a Conjecture of Hardy and Littlewood". Computational Methods in Science and Technology. 25 (3): 143–148. arXiv:1910.02636. doi:10.12921/cmst.2019.0000033.
Wikipedia/Hardy–Littlewood_conjecture
In mathematics, an algebraic surface is an algebraic variety of dimension two. In the case of geometry over the field of complex numbers, an algebraic surface has complex dimension two (as a complex manifold, when it is non-singular) and so is of dimension four as a smooth manifold. The theory of algebraic surfaces is much more complicated than that of algebraic curves (including the compact Riemann surfaces, which are genuine surfaces of (real) dimension two). Many results were obtained, however, in the Italian school of algebraic geometry, and are up to 100 years old. == Classification by the Kodaira dimension == In the case of dimension one, varieties are classified by only the topological genus, but, in dimension two, one needs to distinguish the arithmetic genus p a {\displaystyle p_{a}} and the geometric genus p g {\displaystyle p_{g}} because the topological genus alone does not distinguish surfaces up to birational equivalence. Then, irregularity is introduced for the classification of varieties. A summary of the results follows (for details, see the article on each kind of surface): Examples of algebraic surfaces include (κ is the Kodaira dimension): κ = −∞: the projective plane, quadrics in P3, cubic surfaces, Veronese surface, del Pezzo surfaces, ruled surfaces κ = 0 : K3 surfaces, abelian surfaces, Enriques surfaces, hyperelliptic surfaces κ = 1: elliptic surfaces κ = 2: surfaces of general type. For more examples see the list of algebraic surfaces. The first five examples are in fact birationally equivalent. That is, for example, a cubic surface has a function field isomorphic to that of the projective plane, being the rational functions in two indeterminates. The Cartesian product of two curves also provides examples. == Birational geometry of surfaces == The birational geometry of algebraic surfaces is rich, because of blowing up (also known as a monoidal transformation), under which a point is replaced by the curve of all limiting tangent directions coming into it (a projective line). Certain curves may also be blown down, but there is a restriction (the self-intersection number must be −1). === Castelnuovo's Theorem === One of the fundamental theorems for the birational geometry of surfaces is Castelnuovo's theorem. This states that any birational map between algebraic surfaces is given by a finite sequence of blowups and blowdowns. == Properties == The Nakai criterion says that a divisor D on a surface S is ample if and only if D ⋅ D > 0 and D ⋅ C > 0 for every irreducible curve C on S. Ample divisors have nice properties, such as being the pullback of a hyperplane bundle of projective space, whose properties are very well known. Let D ( S ) {\displaystyle {\mathcal {D}}(S)} be the abelian group consisting of all the divisors on S. Then, due to the intersection theorem, D ( S ) × D ( S ) → Z : ( X , Y ) ↦ X ⋅ Y {\displaystyle {\mathcal {D}}(S)\times {\mathcal {D}}(S)\rightarrow \mathbb {Z} :(X,Y)\mapsto X\cdot Y} is viewed as a quadratic form.
Let D 0 ( S ) := { D ∈ D ( S ) | D ⋅ X = 0 , for all X ∈ D ( S ) } {\displaystyle {\mathcal {D}}_{0}(S):=\{D\in {\mathcal {D}}(S)|D\cdot X=0,{\text{for all }}X\in {\mathcal {D}}(S)\}} ; then D / D 0 ( S ) := N u m ( S ) {\displaystyle {\mathcal {D}}/{\mathcal {D}}_{0}(S):=Num(S)} becomes the group of numerical equivalence classes of S, and N u m ( S ) × N u m ( S ) → Z : ( D ¯ , E ¯ ) ↦ D ⋅ E {\displaystyle Num(S)\times Num(S)\rightarrow \mathbb {Z} :({\bar {D}},{\bar {E}})\mapsto D\cdot E} also becomes a quadratic form on N u m ( S ) {\displaystyle Num(S)} , where D ¯ {\displaystyle {\bar {D}}} is the image of a divisor D on S. (In what follows, the image D ¯ {\displaystyle {\bar {D}}} is abbreviated to D.) For an ample line bundle H on S, the definition { H } ⊥ := { D ∈ N u m ( S ) | D ⋅ H = 0 } . {\displaystyle \{H\}^{\perp }:=\{D\in Num(S)|D\cdot H=0\}.} is used in the surface version of the Hodge index theorem: for D ∈ { { H } ⊥ | D ≠ 0 } , D ⋅ D < 0 {\displaystyle D\in \{\{H\}^{\perp }|D\neq 0\},D\cdot D<0} , i.e. the restriction of the intersection form to { H } ⊥ {\displaystyle \{H\}^{\perp }} is a negative definite quadratic form. This theorem is proven using the Nakai criterion and the Riemann-Roch theorem for surfaces. The Hodge index theorem is used in Deligne's proof of the Weil conjecture. Basic results on algebraic surfaces include the Hodge index theorem, and the division into five groups of birational equivalence classes called the classification of algebraic surfaces. The general type class, of Kodaira dimension 2, is very large (degree 5 or larger for a non-singular surface in P3 lies in it, for example). There are essentially three Hodge number invariants of a surface. Of those, h1,0 was classically called the irregularity and denoted by q; and h2,0 was called the geometric genus pg. The third, h1,1, is not a birational invariant, because blowing up can add whole curves, with classes in H1,1. It is known that Hodge cycles are algebraic and that algebraic equivalence coincides with homological equivalence, so that h1,1 is an upper bound for ρ, the rank of the Néron-Severi group. The arithmetic genus pa is the difference pg − q, the geometric genus minus the irregularity. This explains why the irregularity got its name, as a kind of 'error term'. == Riemann-Roch theorem for surfaces == The Riemann-Roch theorem for surfaces was first formulated by Max Noether. The families of curves on surfaces can be classified, in a sense, and give rise to much of their interesting geometry. == References == Dolgachev, I.V. (2001) [1994], "Algebraic surface", Encyclopedia of Mathematics, EMS Press Zariski, Oscar (1995), Algebraic surfaces, Classics in Mathematics, Berlin, New York: Springer-Verlag, ISBN 978-3-540-58658-6, MR 1336146 == External links == Free program SURFER to visualize algebraic surfaces in real-time, including a user gallery. SingSurf an interactive 3D viewer for algebraic surfaces. Page on Algebraic Surfaces started in 2008 Overview and thoughts on designing Algebraic surfaces
Wikipedia/Algebraic_surface
A twin prime is a prime number that is either 2 less or 2 more than another prime number—for example, either member of the twin prime pair (17, 19) or (41, 43). In other words, a twin prime is a prime that has a prime gap of two. Sometimes the term twin prime is used for a pair of twin primes; an alternative name for this is prime twin or prime pair. Twin primes become increasingly rare as one examines larger ranges, in keeping with the general tendency of gaps between adjacent primes to become larger as the numbers themselves get larger. However, it is unknown whether there are infinitely many twin primes (the so-called twin prime conjecture) or if there is a largest pair. The breakthrough work of Yitang Zhang in 2013, as well as work by James Maynard, Terence Tao and others, has made substantial progress towards proving that there are infinitely many twin primes, but at present this remains unsolved. == Properties == Usually the pair (2, 3) is not considered to be a pair of twin primes. Since 2 is the only even prime, this pair is the only pair of prime numbers that differ by one; thus twin primes are as closely spaced as possible for any other two primes. The first several twin prime pairs are (3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73), (101, 103), (107, 109), (137, 139), ... OEIS: A077800. Five is the only prime that belongs to two pairs, as every twin prime pair greater than (3, 5) is of the form ( 6 n − 1 , 6 n + 1 ) {\displaystyle (6n-1,6n+1)} for some natural number n; that is, the number between the two primes is a multiple of 6. As a result, the sum of any pair of twin primes (other than 3 and 5) is divisible by 12. === Brun's theorem === In 1915, Viggo Brun showed that the sum of reciprocals of the twin primes was convergent. This famous result, called Brun's theorem, was the first use of the Brun sieve and helped initiate the development of modern sieve theory. The modern version of Brun's argument can be used to show that the number of twin primes less than N does not exceed C N ( log ⁡ N ) 2 {\displaystyle {\frac {CN}{(\log N)^{2}}}} for some absolute constant C > 0. In fact, it is bounded above by 8 C 2 N ( log ⁡ N ) 2 [ 1 + O ⁡ ( log ⁡ log ⁡ N log ⁡ N ) ] , {\displaystyle {\frac {8C_{2}N}{(\log N)^{2}}}\left[1+\operatorname {\mathcal {O}} \left({\frac {\log \log N}{\log N}}\right)\right],} where C 2 {\displaystyle C_{2}} is the twin prime constant (slightly less than 2/3), given below. == Twin prime conjecture == The question of whether there exist infinitely many twin primes has been one of the great open questions in number theory for many years. This is the content of the twin prime conjecture, which states that there are infinitely many primes p such that p + 2 is also prime. In 1849, de Polignac made the more general conjecture that for every natural number k, there are infinitely many primes p such that p + 2k is also prime. The case k = 1 of de Polignac's conjecture is the twin prime conjecture. A stronger form of the twin prime conjecture, the Hardy–Littlewood conjecture, postulates a distribution law for twin primes akin to the prime number theorem. On 17 April 2013, Yitang Zhang announced a proof that there exists an integer N that is less than 70 million, where there are infinitely many pairs of primes that differ by N. Zhang's paper was accepted in early May 2013. Terence Tao subsequently proposed a Polymath Project collaborative effort to optimize Zhang's bound. 
One year after Zhang's announcement, the bound had been reduced to 246, where it remains. These improved bounds were discovered using a different approach that was simpler than Zhang's and was discovered independently by James Maynard and Terence Tao. This second approach also gave bounds for the smallest f (m) needed to guarantee that infinitely many intervals of width f (m) contain at least m primes. Moreover (see also the next section) assuming the Elliott–Halberstam conjecture and its generalized form, the Polymath Project wiki states that the bound is 12 and 6, respectively. A strengthening of Goldbach’s conjecture, if proved, would also prove there is an infinite number of twin primes, as would the existence of Siegel zeroes. == Other theorems weaker than the twin prime conjecture == In 1940, Paul Erdős showed that there is a constant c < 1 and infinitely many primes p such that p′ − p < c ln p where p′ denotes the next prime after p. What this means is that we can find infinitely many intervals that contain two primes (p, p′) as long as we let these intervals grow slowly in size as we move to bigger and bigger primes. Here, "grow slowly" means that the length of these intervals can grow logarithmically. This result was successively improved; in 1986 Helmut Maier showed that a constant c < 0.25 can be used. In 2004 Daniel Goldston and Cem Yıldırım showed that the constant could be improved further to c = 0.085786... . In 2005, Goldston, Pintz, and Yıldırım established that c can be chosen to be arbitrarily small, i.e. lim inf n → ∞ ( p n + 1 − p n log ⁡ p n ) = 0 . {\displaystyle \liminf _{n\to \infty }\left({\frac {p_{n+1}-p_{n}}{\log p_{n}}}\right)=0~.} On the other hand, this result does not rule out that there may not be infinitely many intervals that contain two primes if we only allow the intervals to grow in size as, for example, c ln ln p . By assuming the Elliott–Halberstam conjecture or a slightly weaker version, they were able to show that there are infinitely many n such that at least two of n, n + 2, n + 6, n + 8, n + 12, n + 18, or n + 20 are prime. Under a stronger hypothesis they showed that for infinitely many n, at least two of n, n + 2, n + 4, and n + 6 are prime. The result of Yitang Zhang, lim inf n → ∞ ( p n + 1 − p n ) < N w i t h N = 7 × 10 7 , {\displaystyle \liminf _{n\to \infty }(p_{n+1}-p_{n})<N~\mathrm {with} ~N=7\times 10^{7},} is a major improvement on the Goldston–Graham–Pintz–Yıldırım result. The Polymath Project optimization of Zhang's bound and the work of Maynard have reduced the bound: the limit inferior is at most 246. == Conjectures == === First Hardy–Littlewood conjecture === The first Hardy–Littlewood conjecture (named after G. H. Hardy and John Littlewood) is a generalization of the twin prime conjecture. It is concerned with the distribution of prime constellations, including twin primes, in analogy to the prime number theorem. Let ⁠ π 2 ( x ) {\displaystyle \pi _{2}(x)} ⁠ denote the number of primes p ≤ x such that p + 2 is also prime. Define the twin prime constant C2 as C 2 = ∏ p p r i m e , p ≥ 3 ( 1 − 1 ( p − 1 ) 2 ) ≈ 0.660161815846869573927812110014 … . {\displaystyle C_{2}=\prod _{\textstyle {p\;\mathrm {prime,} \atop p\geq 3}}\left(1-{\frac {1}{(p-1)^{2}}}\right)\approx 0.660161815846869573927812110014\ldots .} (Here the product extends over all prime numbers p ≥ 3.) 
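The product defining the twin prime constant converges quickly, so it can be checked directly before stating the asymptotic. A minimal sketch in Python (standard library only; the cutoff 10^6 is an arbitrary choice):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

# Truncated product over odd primes: C2 = prod(1 - 1/(p-1)^2).
c2 = 1.0
for p in primes_up_to(10**6):
    if p > 2:
        c2 *= 1.0 - 1.0 / (p - 1) ** 2
print(c2)  # ~0.6601618, matching the decimal expansion above
```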
Then a special case of the first Hardy–Littlewood conjecture is that π 2 ( x ) ∼ 2 C 2 x ( ln ⁡ x ) 2 ∼ 2 C 2 ∫ 2 x d t ( ln ⁡ t ) 2 {\displaystyle \pi _{2}(x)\sim 2C_{2}{\frac {x}{(\ln x)^{2}}}\sim 2C_{2}\int _{2}^{x}{\mathrm {d} t \over (\ln t)^{2}}} in the sense that the quotient of the two expressions tends to 1 as x approaches infinity. (The second ~ is not part of the conjecture and is proven by integration by parts.) The conjecture can be justified (but not proven) by assuming that 1 ln ⁡ t {\displaystyle {\tfrac {1}{\ln t}}} describes the density function of the prime distribution. This assumption, which is suggested by the prime number theorem, implies the twin prime conjecture, as shown in the formula for π 2 ( x ) {\displaystyle \pi _{2}(x)} above. The fully general first Hardy–Littlewood conjecture on prime k-tuples (not given here) implies that the second Hardy–Littlewood conjecture is false. This conjecture has been extended by Dickson's conjecture. === Polignac's conjecture === Polignac's conjecture from 1849 states that for every positive even integer k, there are infinitely many consecutive prime pairs p and p′ such that p′ − p = k (i.e. there are infinitely many prime gaps of size k). The case k = 2 is the twin prime conjecture. The conjecture has not yet been proven or disproven for any specific value of k, but Zhang's result proves that it is true for at least one (currently unknown) value of k. Indeed, if such a k did not exist, then for any positive even natural number N there are at most finitely many n such that p n + 1 − p n = m {\displaystyle p_{n+1}-p_{n}=m} for all m < N and so for n large enough we have p n + 1 − p n > N , {\displaystyle p_{n+1}-p_{n}>N,} which would contradict Zhang's result. == Large twin primes == Beginning in 2007, two distributed computing projects, Twin Prime Search and PrimeGrid, have produced several record-largest twin primes. As of January 2025, the current largest twin prime pair known is 2996863034895 × 2^1290000 ± 1, with 388,342 decimal digits. It was discovered in September 2016. There are 808,675,888,577,436 twin prime pairs below 10^18. An empirical analysis of all prime pairs up to 4.35 × 10^15 shows that if the number of such pairs less than x is f(x)·x/(log x)^2 then f(x) is about 1.7 for small x and decreases towards about 1.3 as x tends to infinity. The limiting value of f(x) is conjectured to equal twice the twin prime constant (OEIS: A114907) (not to be confused with Brun's constant), according to the Hardy–Littlewood conjecture. == Other elementary properties == Every third odd number is divisible by 3, and therefore no three successive odd numbers can be prime unless one of them is 3. Therefore, 5 is the only prime that is part of two twin prime pairs. The lower member of a pair is by definition a Chen prime. If m − 4 or m + 6 is also prime then the three primes are called a prime triplet. It has been proven that the pair (m, m + 2) is a twin prime if and only if 4 ( ( m − 1 ) ! + 1 ) ≡ − m ( mod m ( m + 2 ) ) . {\displaystyle 4((m-1)!+1)\equiv -m{\pmod {m(m+2)}}.} For a twin prime pair of the form (6n − 1, 6n + 1) for some natural number n > 1, n must end in the digit 0, 2, 3, 5, 7, or 8 (OEIS: A002822). If n were to end in 1 or 6, 6n would end in 6, and 6n − 1 would be a multiple of 5. This is not prime unless n = 1. Likewise, if n were to end in 4 or 9, 6n would end in 4, and 6n + 1 would be a multiple of 5.
The same rule applies modulo any prime p ≥ 5: If n ≡ ±6^−1 (mod p), then one of the pair will be divisible by p and will not be a twin prime pair unless 6n = p ± 1. p = 5 just happens to produce particularly simple patterns in base 10. == Isolated prime == An isolated prime (also known as single prime or non-twin prime) is a prime number p such that neither p − 2 nor p + 2 is prime. In other words, p is not part of a twin prime pair. For example, 23 is an isolated prime, since 21 and 25 are both composite. The first few isolated primes are 2, 23, 37, 47, 53, 67, 79, 83, 89, 97, ... OEIS: A007510. It follows from Brun's theorem that almost all primes are isolated in the sense that the ratio of the number of isolated primes less than a given threshold n and the number of all primes less than n tends to 1 as n tends to infinity. == See also == Cousin prime Prime gap Prime k-tuple Prime quadruplet Prime triplet Sexy prime == References == == Further reading == Sloane, Neil; Plouffe, Simon (1995). The Encyclopedia of Integer Sequences. San Diego, CA: Academic Press. ISBN 0-12-558630-2. == External links == "Twins", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Top-20 Twin Primes at Chris Caldwell's Prime Pages Xavier Gourdon, Pascal Sebah: Introduction to Twin Primes and Brun's Constant "Official press release" of 58711-digit twin prime record Weisstein, Eric W. "Twin Primes". MathWorld. The 20 000 first twin primes Polymath: Bounded gaps between primes Sudden Progress on Prime Number Problem Has Mathematicians Buzzing
Wikipedia/Twin_prime_conjecture
In number theory, Euler's totient function counts the positive integers up to a given integer n that are relatively prime to n. It is written using the Greek letter phi as φ ( n ) {\displaystyle \varphi (n)} or ϕ ( n ) {\displaystyle \phi (n)} , and may also be called Euler's phi function. In other words, it is the number of integers k in the range 1 ≤ k ≤ n for which the greatest common divisor gcd(n, k) is equal to 1. The integers k of this form are sometimes referred to as totatives of n. For example, the totatives of n = 9 are the six numbers 1, 2, 4, 5, 7 and 8. They are all relatively prime to 9, but the other three numbers in this range, 3, 6, and 9 are not, since gcd(9, 3) = gcd(9, 6) = 3 and gcd(9, 9) = 9. Therefore, φ(9) = 6. As another example, φ(1) = 1 since for n = 1 the only integer in the range from 1 to n is 1 itself, and gcd(1, 1) = 1. Euler's totient function is a multiplicative function, meaning that if two numbers m and n are relatively prime, then φ(mn) = φ(m)φ(n). This function gives the order of the multiplicative group of integers modulo n (the group of units of the ring Z / n Z {\displaystyle \mathbb {Z} /n\mathbb {Z} } ). It is also used for defining the RSA encryption system. == History, terminology, and notation == Leonhard Euler introduced the function in 1763. However, he did not at that time choose any specific symbol to denote it. In a 1784 publication, Euler studied the function further, choosing the Greek letter π to denote it: he wrote πD for "the multitude of numbers less than D, and which have no common divisor with it". This definition varies from the current definition for the totient function at D = 1 but is otherwise the same. The now-standard notation φ(A) comes from Gauss's 1801 treatise Disquisitiones Arithmeticae, although Gauss did not use parentheses around the argument and wrote φA. Thus, it is often called Euler's phi function or simply the phi function. In 1879, J. J. Sylvester coined the term totient for this function, so it is also referred to as Euler's totient function, the Euler totient, or Euler's totient. Jordan's totient is a generalization of Euler's. The cototient of n is defined as n − φ(n). It counts the number of positive integers less than or equal to n that have at least one prime factor in common with n. == Computing Euler's totient function == There are several formulae for computing φ(n). === Euler's product formula === It states φ ( n ) = n ∏ p ∣ n ( 1 − 1 p ) , {\displaystyle \varphi (n)=n\prod _{p\mid n}\left(1-{\frac {1}{p}}\right),} where the product is over the distinct prime numbers dividing n. An equivalent formulation is φ ( n ) = p 1 k 1 − 1 ( p 1 − 1 ) p 2 k 2 − 1 ( p 2 − 1 ) ⋯ p r k r − 1 ( p r − 1 ) , {\displaystyle \varphi (n)=p_{1}^{k_{1}-1}(p_{1}{-}1)\,p_{2}^{k_{2}-1}(p_{2}{-}1)\cdots p_{r}^{k_{r}-1}(p_{r}{-}1),} where n = p 1 k 1 p 2 k 2 ⋯ p r k r {\displaystyle n=p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{r}^{k_{r}}} is the prime factorization of n {\displaystyle n} (that is, p 1 , p 2 , … , p r {\displaystyle p_{1},p_{2},\ldots ,p_{r}} are distinct prime numbers). The proof of these formulae depends on two important facts. ==== Phi is a multiplicative function ==== This means that if gcd(m, n) = 1, then φ(m) φ(n) = φ(mn). Proof outline: Let A, B, C be the sets of positive integers which are coprime to and less than m, n, mn, respectively, so that |A| = φ(m), etc. Then there is a bijection between A × B and C by the Chinese remainder theorem. 
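The product formula and multiplicativity together give a practical algorithm: factor n by trial division and, for each distinct prime p found, multiply the running result by (1 − 1/p), anticipating the prime-power case proved below. A minimal sketch in Python (standard library only):

```python
from math import gcd

def phi(n):
    """Euler's totient via the product formula phi(n) = n * prod_{p|n} (1 - 1/p)."""
    result, m = n, n
    p = 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p   # multiply result by (1 - 1/p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                        # one prime factor > sqrt(n) may remain
        result -= result // m
    return result

# Cross-check against the definition: count of k in 1..n coprime to n.
assert all(phi(n) == sum(gcd(n, k) == 1 for k in range(1, n + 1))
           for n in range(1, 200))
print(phi(9), phi(20))  # 6 8, matching the examples in the text
```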
==== Value of phi for a prime power argument ==== If p is prime and k ≥ 1, then φ ( p k ) = p k − p k − 1 = p k − 1 ( p − 1 ) = p k ( 1 − 1 p ) . {\displaystyle \varphi \left(p^{k}\right)=p^{k}-p^{k-1}=p^{k-1}(p-1)=p^{k}\left(1-{\tfrac {1}{p}}\right).} Proof: Since p is a prime number, the only possible values of gcd(pk, m) are 1, p, p2, ..., pk, and the only way to have gcd(pk, m) > 1 is if m is a multiple of p, that is, m ∈ {p, 2p, 3p, ..., pk − 1p = pk}, and there are pk − 1 such multiples not greater than pk. Therefore, the other pk − pk − 1 numbers are all relatively prime to pk. ==== Proof of Euler's product formula ==== The fundamental theorem of arithmetic states that if n > 1 there is a unique expression n = p 1 k 1 p 2 k 2 ⋯ p r k r , {\displaystyle n=p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{r}^{k_{r}},} where p1 < p2 < ... < pr are prime numbers and each ki ≥ 1. (The case n = 1 corresponds to the empty product.) Repeatedly using the multiplicative property of φ and the formula for φ(pk) gives φ ( n ) = φ ( p 1 k 1 ) φ ( p 2 k 2 ) ⋯ φ ( p r k r ) = p 1 k 1 ( 1 − 1 p 1 ) p 2 k 2 ( 1 − 1 p 2 ) ⋯ p r k r ( 1 − 1 p r ) = p 1 k 1 p 2 k 2 ⋯ p r k r ( 1 − 1 p 1 ) ( 1 − 1 p 2 ) ⋯ ( 1 − 1 p r ) = n ( 1 − 1 p 1 ) ( 1 − 1 p 2 ) ⋯ ( 1 − 1 p r ) . {\displaystyle {\begin{array}{rcl}\varphi (n)&=&\varphi (p_{1}^{k_{1}})\,\varphi (p_{2}^{k_{2}})\cdots \varphi (p_{r}^{k_{r}})\\[.1em]&=&p_{1}^{k_{1}}\left(1-{\frac {1}{p_{1}}}\right)p_{2}^{k_{2}}\left(1-{\frac {1}{p_{2}}}\right)\cdots p_{r}^{k_{r}}\left(1-{\frac {1}{p_{r}}}\right)\\[.1em]&=&p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{r}^{k_{r}}\left(1-{\frac {1}{p_{1}}}\right)\left(1-{\frac {1}{p_{2}}}\right)\cdots \left(1-{\frac {1}{p_{r}}}\right)\\[.1em]&=&n\left(1-{\frac {1}{p_{1}}}\right)\left(1-{\frac {1}{p_{2}}}\right)\cdots \left(1-{\frac {1}{p_{r}}}\right).\end{array}}} This gives both versions of Euler's product formula. An alternative proof that does not require the multiplicative property instead uses the inclusion-exclusion principle applied to the set { 1 , 2 , … , n } {\displaystyle \{1,2,\ldots ,n\}} , excluding the sets of integers divisible by the prime divisors. ==== Example ==== φ ( 20 ) = φ ( 2 2 5 ) = 20 ( 1 − 1 2 ) ( 1 − 1 5 ) = 20 ⋅ 1 2 ⋅ 4 5 = 8. {\displaystyle \varphi (20)=\varphi (2^{2}5)=20\,(1-{\tfrac {1}{2}})\,(1-{\tfrac {1}{5}})=20\cdot {\tfrac {1}{2}}\cdot {\tfrac {4}{5}}=8.} In words: the distinct prime factors of 20 are 2 and 5; half of the twenty integers from 1 to 20 are divisible by 2, leaving ten; a fifth of those are divisible by 5, leaving eight numbers coprime to 20; these are: 1, 3, 7, 9, 11, 13, 17, 19. The alternative formula uses only integers: φ ( 20 ) = φ ( 2 2 5 1 ) = 2 2 − 1 ( 2 − 1 ) 5 1 − 1 ( 5 − 1 ) = 2 ⋅ 1 ⋅ 1 ⋅ 4 = 8. {\displaystyle \varphi (20)=\varphi (2^{2}5^{1})=2^{2-1}(2{-}1)\,5^{1-1}(5{-}1)=2\cdot 1\cdot 1\cdot 4=8.} === Fourier transform === The totient is the discrete Fourier transform of the gcd, evaluated at 1. Let F { x } [ m ] = ∑ k = 1 n x k ⋅ e − 2 π i m k n {\displaystyle {\mathcal {F}}\{\mathbf {x} \}[m]=\sum \limits _{k=1}^{n}x_{k}\cdot e^{{-2\pi i}{\frac {mk}{n}}}} where xk = gcd(k,n) for k ∈ {1, ..., n}. Then φ ( n ) = F { x } [ 1 ] = ∑ k = 1 n gcd ( k , n ) e − 2 π i k n . {\displaystyle \varphi (n)={\mathcal {F}}\{\mathbf {x} \}[1]=\sum \limits _{k=1}^{n}\gcd(k,n)e^{-2\pi i{\frac {k}{n}}}.} The real part of this formula is φ ( n ) = ∑ k = 1 n gcd ( k , n ) cos ⁡ 2 π k n . 
{\displaystyle \varphi (n)=\sum \limits _{k=1}^{n}\gcd(k,n)\cos {\tfrac {2\pi k}{n}}.} For example, using cos ⁡ π 5 = 5 + 1 4 {\displaystyle \cos {\tfrac {\pi }{5}}={\tfrac {{\sqrt {5}}+1}{4}}} and cos ⁡ 2 π 5 = 5 − 1 4 {\displaystyle \cos {\tfrac {2\pi }{5}}={\tfrac {{\sqrt {5}}-1}{4}}} : φ ( 10 ) = gcd ( 1 , 10 ) cos ⁡ 2 π 10 + gcd ( 2 , 10 ) cos ⁡ 4 π 10 + gcd ( 3 , 10 ) cos ⁡ 6 π 10 + ⋯ + gcd ( 10 , 10 ) cos ⁡ 20 π 10 = 1 ⋅ ( 5 + 1 4 ) + 2 ⋅ ( 5 − 1 4 ) + 1 ⋅ ( − 5 − 1 4 ) + 2 ⋅ ( − 5 + 1 4 ) + 5 ⋅ ( − 1 ) + 2 ⋅ ( − 5 + 1 4 ) + 1 ⋅ ( − 5 − 1 4 ) + 2 ⋅ ( 5 − 1 4 ) + 1 ⋅ ( 5 + 1 4 ) + 10 ⋅ ( 1 ) = 4. {\displaystyle {\begin{array}{rcl}\varphi (10)&=&\gcd(1,10)\cos {\tfrac {2\pi }{10}}+\gcd(2,10)\cos {\tfrac {4\pi }{10}}+\gcd(3,10)\cos {\tfrac {6\pi }{10}}+\cdots +\gcd(10,10)\cos {\tfrac {20\pi }{10}}\\&=&1\cdot ({\tfrac {{\sqrt {5}}+1}{4}})+2\cdot ({\tfrac {{\sqrt {5}}-1}{4}})+1\cdot (-{\tfrac {{\sqrt {5}}-1}{4}})+2\cdot (-{\tfrac {{\sqrt {5}}+1}{4}})+5\cdot (-1)\\&&+\ 2\cdot (-{\tfrac {{\sqrt {5}}+1}{4}})+1\cdot (-{\tfrac {{\sqrt {5}}-1}{4}})+2\cdot ({\tfrac {{\sqrt {5}}-1}{4}})+1\cdot ({\tfrac {{\sqrt {5}}+1}{4}})+10\cdot (1)\\&=&4.\end{array}}} Unlike the Euler product and the divisor sum formula, this one does not require knowing the factors of n. However, it does involve the calculation of the greatest common divisor of n and every positive integer less than n, which suffices to provide the factorization anyway. === Divisor sum === The property established by Gauss, that ∑ d ∣ n φ ( d ) = n , {\displaystyle \sum _{d\mid n}\varphi (d)=n,} where the sum is over all positive divisors d of n, can be proven in several ways. (See Arithmetical function for notational conventions.) One proof is to note that φ(d) is also equal to the number of possible generators of the cyclic group Cd ; specifically, if Cd = ⟨g⟩ with gd = 1, then gk is a generator for every k coprime to d. Since every element of Cn generates a cyclic subgroup, and each subgroup Cd ⊆ Cn is generated by precisely φ(d) elements of Cn, the formula follows. Equivalently, the formula can be derived by the same argument applied to the multiplicative group of the nth roots of unity and the primitive dth roots of unity. The formula can also be derived from elementary arithmetic. For example, let n = 20 and consider the positive fractions up to 1 with denominator 20: 1 20 , 2 20 , 3 20 , 4 20 , 5 20 , 6 20 , 7 20 , 8 20 , 9 20 , 10 20 , 11 20 , 12 20 , 13 20 , 14 20 , 15 20 , 16 20 , 17 20 , 18 20 , 19 20 , 20 20 . 
{\displaystyle {\tfrac {1}{20}},\,{\tfrac {2}{20}},\,{\tfrac {3}{20}},\,{\tfrac {4}{20}},\,{\tfrac {5}{20}},\,{\tfrac {6}{20}},\,{\tfrac {7}{20}},\,{\tfrac {8}{20}},\,{\tfrac {9}{20}},\,{\tfrac {10}{20}},\,{\tfrac {11}{20}},\,{\tfrac {12}{20}},\,{\tfrac {13}{20}},\,{\tfrac {14}{20}},\,{\tfrac {15}{20}},\,{\tfrac {16}{20}},\,{\tfrac {17}{20}},\,{\tfrac {18}{20}},\,{\tfrac {19}{20}},\,{\tfrac {20}{20}}.} Put them into lowest terms: 1 20 , 1 10 , 3 20 , 1 5 , 1 4 , 3 10 , 7 20 , 2 5 , 9 20 , 1 2 , 11 20 , 3 5 , 13 20 , 7 10 , 3 4 , 4 5 , 17 20 , 9 10 , 19 20 , 1 1 {\displaystyle {\tfrac {1}{20}},\,{\tfrac {1}{10}},\,{\tfrac {3}{20}},\,{\tfrac {1}{5}},\,{\tfrac {1}{4}},\,{\tfrac {3}{10}},\,{\tfrac {7}{20}},\,{\tfrac {2}{5}},\,{\tfrac {9}{20}},\,{\tfrac {1}{2}},\,{\tfrac {11}{20}},\,{\tfrac {3}{5}},\,{\tfrac {13}{20}},\,{\tfrac {7}{10}},\,{\tfrac {3}{4}},\,{\tfrac {4}{5}},\,{\tfrac {17}{20}},\,{\tfrac {9}{10}},\,{\tfrac {19}{20}},\,{\tfrac {1}{1}}} These twenty fractions are all the positive ⁠k/d⁠ ≤ 1 whose denominators are the divisors d = 1, 2, 4, 5, 10, 20. The fractions with 20 as denominator are those with numerators relatively prime to 20, namely ⁠1/20⁠, ⁠3/20⁠, ⁠7/20⁠, ⁠9/20⁠, ⁠11/20⁠, ⁠13/20⁠, ⁠17/20⁠, ⁠19/20⁠; by definition this is φ(20) fractions. Similarly, there are φ(10) fractions with denominator 10, and φ(5) fractions with denominator 5, etc. Thus the set of twenty fractions is split into subsets of size φ(d) for each d dividing 20. A similar argument applies for any n. Möbius inversion applied to the divisor sum formula gives φ ( n ) = ∑ d ∣ n μ ( d ) ⋅ n d = n ∑ d ∣ n μ ( d ) d , {\displaystyle \varphi (n)=\sum _{d\mid n}\mu \left(d\right)\cdot {\frac {n}{d}}=n\sum _{d\mid n}{\frac {\mu (d)}{d}},} where μ is the Möbius function, the multiplicative function defined by μ ( p ) = − 1 {\displaystyle \mu (p)=-1} and μ ( p k ) = 0 {\displaystyle \mu (p^{k})=0} for each prime p and k ≥ 2. This formula may also be derived from the product formula by multiplying out ∏ p ∣ n ( 1 − 1 p ) {\textstyle \prod _{p\mid n}(1-{\frac {1}{p}})} to get ∑ d ∣ n μ ( d ) d . {\textstyle \sum _{d\mid n}{\frac {\mu (d)}{d}}.} An example: φ ( 20 ) = μ ( 1 ) ⋅ 20 + μ ( 2 ) ⋅ 10 + μ ( 4 ) ⋅ 5 + μ ( 5 ) ⋅ 4 + μ ( 10 ) ⋅ 2 + μ ( 20 ) ⋅ 1 = 1 ⋅ 20 − 1 ⋅ 10 + 0 ⋅ 5 − 1 ⋅ 4 + 1 ⋅ 2 + 0 ⋅ 1 = 8. {\displaystyle {\begin{aligned}\varphi (20)&=\mu (1)\cdot 20+\mu (2)\cdot 10+\mu (4)\cdot 5+\mu (5)\cdot 4+\mu (10)\cdot 2+\mu (20)\cdot 1\\[.5em]&=1\cdot 20-1\cdot 10+0\cdot 5-1\cdot 4+1\cdot 2+0\cdot 1=8.\end{aligned}}} == Some values == The first 100 values (sequence A000010 in the OEIS) are shown in the table and graph below: In the graph at right the top line y = n − 1 is an upper bound valid for all n other than one, and attained if and only if n is a prime number. A simple lower bound is φ ( n ) ≥ n / 2 {\displaystyle \varphi (n)\geq {\sqrt {n/2}}} , which is rather loose: in fact, the lower limit of the graph is proportional to ⁠n/log log n⁠. == Euler's theorem == This states that if a and n are relatively prime then a φ ( n ) ≡ 1 mod n . {\displaystyle a^{\varphi (n)}\equiv 1\mod n.} The special case where n is prime is known as Fermat's little theorem. This follows from Lagrange's theorem and the fact that φ(n) is the order of the multiplicative group of integers modulo n. 
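Euler's theorem is easy to verify by brute force with Python's built-in modular exponentiation. A small sketch (the range of n is an arbitrary choice, and the totient here is computed by direct count, which is fine for small n):

```python
from math import gcd

def phi(n):
    """Totient by direct count; adequate for small n."""
    return sum(gcd(n, k) == 1 for k in range(1, n + 1))

# a^phi(n) == 1 (mod n) whenever gcd(a, n) == 1.
for n in range(2, 60):
    for a in range(1, n):
        if gcd(a, n) == 1:
            assert pow(a, phi(n), n) == 1
print("Euler's theorem holds for all n < 60")
```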
The RSA cryptosystem is based on this theorem: it implies that the inverse of the function a ↦ ae mod n, where e is the (public) encryption exponent, is the function b ↦ bd mod n, where d, the (private) decryption exponent, is the multiplicative inverse of e modulo φ(n). The difficulty of computing φ(n) without knowing the factorization of n is thus the difficulty of computing d: this is known as the RSA problem which can be solved by factoring n. The owner of the private key knows the factorization, since an RSA private key is constructed by choosing n as the product of two (randomly chosen) large primes p and q. Only n is publicly disclosed, and given the difficulty to factor large numbers we have the guarantee that no one else knows the factorization. == Other formulae == a ∣ b ⟹ φ ( a ) ∣ φ ( b ) {\displaystyle a\mid b\implies \varphi (a)\mid \varphi (b)} m ∣ φ ( a m − 1 ) {\displaystyle m\mid \varphi (a^{m}-1)} φ ( m n ) = φ ( m ) φ ( n ) ⋅ d φ ( d ) where d = gcd ⁡ ( m , n ) {\displaystyle \varphi (mn)=\varphi (m)\varphi (n)\cdot {\frac {d}{\varphi (d)}}\quad {\text{where }}d=\operatorname {gcd} (m,n)} In particular: φ ( 2 m ) = { 2 φ ( m ) if m is even φ ( m ) if m is odd {\displaystyle \varphi (2m)={\begin{cases}2\varphi (m)&{\text{ if }}m{\text{ is even}}\\\varphi (m)&{\text{ if }}m{\text{ is odd}}\end{cases}}} φ ( n m ) = n m − 1 φ ( n ) {\displaystyle \varphi \left(n^{m}\right)=n^{m-1}\varphi (n)} φ ( lcm ⁡ ( m , n ) ) ⋅ φ ( gcd ⁡ ( m , n ) ) = φ ( m ) ⋅ φ ( n ) {\displaystyle \varphi (\operatorname {lcm} (m,n))\cdot \varphi (\operatorname {gcd} (m,n))=\varphi (m)\cdot \varphi (n)} Compare this to the formula lcm ⁡ ( m , n ) ⋅ gcd ⁡ ( m , n ) = m ⋅ n {\textstyle \operatorname {lcm} (m,n)\cdot \operatorname {gcd} (m,n)=m\cdot n} (see least common multiple). φ(n) is even for n ≥ 3. Moreover, if n has r distinct odd prime factors, 2r | φ(n) For any a > 1 and n > 6 such that 4 ∤ n there exists an l ≥ 2n such that l | φ(an − 1). φ ( n ) n = φ ( rad ⁡ ( n ) ) rad ⁡ ( n ) {\displaystyle {\frac {\varphi (n)}{n}}={\frac {\varphi (\operatorname {rad} (n))}{\operatorname {rad} (n)}}} where rad(n) is the radical of n (the product of all distinct primes dividing n). 
∑ d ∣ n μ 2 ( d ) φ ( d ) = n φ ( n ) {\displaystyle \sum _{d\mid n}{\frac {\mu ^{2}(d)}{\varphi (d)}}={\frac {n}{\varphi (n)}}} ∑ 1 ≤ k ≤ n − 1 g c d ( k , n ) = 1 k = 1 2 n φ ( n ) for n > 1 {\displaystyle \sum _{1\leq k\leq n-1 \atop gcd(k,n)=1}\!\!k={\tfrac {1}{2}}n\varphi (n)\quad {\text{for }}n>1} ∑ k = 1 n φ ( k ) = 1 2 ( 1 + ∑ k = 1 n μ ( k ) ⌊ n k ⌋ 2 ) = 3 π 2 n 2 + O ( n ( log ⁡ n ) 2 3 ( log ⁡ log ⁡ n ) 4 3 ) {\displaystyle \sum _{k=1}^{n}\varphi (k)={\tfrac {1}{2}}\left(1+\sum _{k=1}^{n}\mu (k)\left\lfloor {\frac {n}{k}}\right\rfloor ^{2}\right)={\frac {3}{\pi ^{2}}}n^{2}+O\left(n(\log n)^{\frac {2}{3}}(\log \log n)^{\frac {4}{3}}\right)} ( cited in) ∑ k = 1 n φ ( k ) = 3 π 2 n 2 + O ( n ( log ⁡ n ) 2 3 ( log ⁡ log ⁡ n ) 1 3 ) {\displaystyle \sum _{k=1}^{n}\varphi (k)={\frac {3}{\pi ^{2}}}n^{2}+O\left(n(\log n)^{\frac {2}{3}}(\log \log n)^{\frac {1}{3}}\right)} [Liu (2016)] ∑ k = 1 n φ ( k ) k = ∑ k = 1 n μ ( k ) k ⌊ n k ⌋ = 6 π 2 n + O ( ( log ⁡ n ) 2 3 ( log ⁡ log ⁡ n ) 4 3 ) {\displaystyle \sum _{k=1}^{n}{\frac {\varphi (k)}{k}}=\sum _{k=1}^{n}{\frac {\mu (k)}{k}}\left\lfloor {\frac {n}{k}}\right\rfloor ={\frac {6}{\pi ^{2}}}n+O\left((\log n)^{\frac {2}{3}}(\log \log n)^{\frac {4}{3}}\right)} ∑ k = 1 n k φ ( k ) = 315 ζ ( 3 ) 2 π 4 n − log ⁡ n 2 + O ( ( log ⁡ n ) 2 3 ) {\displaystyle \sum _{k=1}^{n}{\frac {k}{\varphi (k)}}={\frac {315\,\zeta (3)}{2\pi ^{4}}}n-{\frac {\log n}{2}}+O\left((\log n)^{\frac {2}{3}}\right)} ∑ k = 1 n 1 φ ( k ) = 315 ζ ( 3 ) 2 π 4 ( log ⁡ n + γ − ∑ p prime log ⁡ p p 2 − p + 1 ) + O ( ( log ⁡ n ) 2 3 n ) {\displaystyle \sum _{k=1}^{n}{\frac {1}{\varphi (k)}}={\frac {315\,\zeta (3)}{2\pi ^{4}}}\left(\log n+\gamma -\sum _{p{\text{ prime}}}{\frac {\log p}{p^{2}-p+1}}\right)+O\left({\frac {(\log n)^{\frac {2}{3}}}{n}}\right)} (where γ is the Euler–Mascheroni constant). === Menon's identity === In 1965 P. Kesava Menon proved ∑ gcd ( k , n ) = 1 1 ≤ k ≤ n gcd ( k − 1 , n ) = φ ( n ) d ( n ) , {\displaystyle \sum _{\stackrel {1\leq k\leq n}{\gcd(k,n)=1}}\!\!\!\!\gcd(k-1,n)=\varphi (n)d(n),} where d(n) = σ0(n) is the number of divisors of n. === Divisibility by any fixed positive integer === The following property, which is part of the « folklore » (i.e., apparently unpublished as a specific result: see the introduction of this article in which it is stated as having « long been known ») has important consequences. For instance it rules out uniform distribution of the values of φ ( n ) {\displaystyle \varphi (n)} in the arithmetic progressions modulo q {\displaystyle q} for any integer q > 1 {\displaystyle q>1} . For every fixed positive integer q {\displaystyle q} , the relation q | φ ( n ) {\displaystyle q|\varphi (n)} holds for almost all n {\displaystyle n} , meaning for all but o ( x ) {\displaystyle o(x)} values of n ≤ x {\displaystyle n\leq x} as x → ∞ {\displaystyle x\rightarrow \infty } . This is an elementary consequence of the fact that the sum of the reciprocals of the primes congruent to 1 modulo q {\displaystyle q} diverges, which itself is a corollary of the proof of Dirichlet's theorem on arithmetic progressions. == Generating functions == The Dirichlet series for φ(n) may be written in terms of the Riemann zeta function as: ∑ n = 1 ∞ φ ( n ) n s = ζ ( s − 1 ) ζ ( s ) {\displaystyle \sum _{n=1}^{\infty }{\frac {\varphi (n)}{n^{s}}}={\frac {\zeta (s-1)}{\zeta (s)}}} where the left-hand side converges for ℜ ( s ) > 2 {\displaystyle \Re (s)>2} . 
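The Dirichlet series identity lends itself to a quick numerical sanity check with truncated sums. A minimal sketch in Python (standard library only; s = 3 and the cutoff 10^5 are arbitrary choices, and the truncation error is ignored):

```python
def totients_up_to(n):
    """Sieve-style computation of phi(1), ..., phi(n)."""
    phi = list(range(n + 1))
    for p in range(2, n + 1):
        if phi[p] == p:                # p is prime: untouched so far
            for k in range(p, n + 1, p):
                phi[k] -= phi[k] // p  # multiply by (1 - 1/p)
    return phi

N, s = 10**5, 3
phi = totients_up_to(N)
lhs = sum(phi[n] / n**s for n in range(1, N + 1))
zeta = lambda t: sum(1.0 / n**t for n in range(1, N + 1))
print(lhs, zeta(s - 1) / zeta(s))  # both ~1.3684, i.e. zeta(2)/zeta(3)
```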
The Lambert series generating function is ∑ n = 1 ∞ φ ( n ) q n 1 − q n = q ( 1 − q ) 2 {\displaystyle \sum _{n=1}^{\infty }{\frac {\varphi (n)q^{n}}{1-q^{n}}}={\frac {q}{(1-q)^{2}}}} which converges for |q| < 1. Both of these are proved by elementary series manipulations and the formulae for φ(n). == Growth rate == In the words of Hardy & Wright, the order of φ(n) is "always 'nearly n'." First lim sup φ ( n ) n = 1 , {\displaystyle \lim \sup {\frac {\varphi (n)}{n}}=1,} but as n goes to infinity, for all δ > 0 φ ( n ) n 1 − δ → ∞ . {\displaystyle {\frac {\varphi (n)}{n^{1-\delta }}}\rightarrow \infty .} These two formulae can be proved by using little more than the formulae for φ(n) and the divisor sum function σ(n). In fact, during the proof of the second formula, the inequality 6 π 2 < φ ( n ) σ ( n ) n 2 < 1 , {\displaystyle {\frac {6}{\pi ^{2}}}<{\frac {\varphi (n)\sigma (n)}{n^{2}}}<1,} true for n > 1, is proved. We also have lim inf φ ( n ) n log ⁡ log ⁡ n = e − γ . {\displaystyle \lim \inf {\frac {\varphi (n)}{n}}\log \log n=e^{-\gamma }.} Here γ is Euler's constant, γ = 0.577215665..., so eγ = 1.7810724... and e−γ = 0.56145948.... Proving this does not quite require the prime number theorem. Since log log n goes to infinity, this formula shows that lim inf φ ( n ) n = 0. {\displaystyle \lim \inf {\frac {\varphi (n)}{n}}=0.} In fact, more is true. φ ( n ) > n e γ log ⁡ log ⁡ n + 3 log ⁡ log ⁡ n for n > 2 {\displaystyle \varphi (n)>{\frac {n}{e^{\gamma }\;\log \log n+{\frac {3}{\log \log n}}}}\quad {\text{for }}n>2} and φ ( n ) < n e γ log ⁡ log ⁡ n for infinitely many n . {\displaystyle \varphi (n)<{\frac {n}{e^{\gamma }\log \log n}}\quad {\text{for infinitely many }}n.} The second inequality was shown by Jean-Louis Nicolas. Ribenboim says "The method of proof is interesting, in that the inequality is shown first under the assumption that the Riemann hypothesis is true, secondly under the contrary assumption.": 173  For the average order, we have φ ( 1 ) + φ ( 2 ) + ⋯ + φ ( n ) = 3 n 2 π 2 + O ( n ( log ⁡ n ) 2 3 ( log ⁡ log ⁡ n ) 4 3 ) as n → ∞ , {\displaystyle \varphi (1)+\varphi (2)+\cdots +\varphi (n)={\frac {3n^{2}}{\pi ^{2}}}+O\left(n(\log n)^{\frac {2}{3}}(\log \log n)^{\frac {4}{3}}\right)\quad {\text{as }}n\rightarrow \infty ,} due to Arnold Walfisz, its proof exploiting estimates on exponential sums due to I. M. Vinogradov and N. M. Korobov. By a combination of van der Corput's and Vinogradov's methods, H.-Q. Liu (On Euler's function.Proc. Roy. Soc. Edinburgh Sect. A 146 (2016), no. 4, 769–775) improved the error term to O ( n ( log ⁡ n ) 2 3 ( log ⁡ log ⁡ n ) 1 3 ) {\displaystyle O\left(n(\log n)^{\frac {2}{3}}(\log \log n)^{\frac {1}{3}}\right)} (this is currently the best known estimate of this type). The "Big O" stands for a quantity that is bounded by a constant times the function of n inside the parentheses (which is small compared to n2). This result can be used to prove that the probability of two randomly chosen numbers being relatively prime is ⁠6/π2⁠. == Ratio of consecutive values == In 1950 Somayajulu proved lim inf φ ( n + 1 ) φ ( n ) = 0 and lim sup φ ( n + 1 ) φ ( n ) = ∞ . 
{\displaystyle {\begin{aligned}\lim \inf {\frac {\varphi (n+1)}{\varphi (n)}}&=0\quad {\text{and}}\\[5px]\lim \sup {\frac {\varphi (n+1)}{\varphi (n)}}&=\infty .\end{aligned}}} In 1954 Schinzel and Sierpiński strengthened this, proving that the set { φ ( n + 1 ) φ ( n ) , n = 1 , 2 , … } {\displaystyle \left\{{\frac {\varphi (n+1)}{\varphi (n)}},\;\;n=1,2,\ldots \right\}} is dense in the positive real numbers. They also proved that the set { φ ( n ) n , n = 1 , 2 , … } {\displaystyle \left\{{\frac {\varphi (n)}{n}},\;\;n=1,2,\ldots \right\}} is dense in the interval (0,1). == Totient number == A totient number is a value of Euler's totient function: that is, an m for which there is at least one n for which φ(n) = m. The valency or multiplicity of a totient number m is the number of solutions to this equation. A nontotient is a natural number which is not a totient number. Every odd integer exceeding 1 is trivially a nontotient. There are also infinitely many even nontotients, and indeed every positive integer has a multiple which is an even nontotient. The number of totient numbers up to a given limit x is x log ⁡ x e ( C + o ( 1 ) ) ( log ⁡ log ⁡ log ⁡ x ) 2 {\displaystyle {\frac {x}{\log x}}e^{{\big (}C+o(1){\big )}(\log \log \log x)^{2}}} for a constant C = 0.8178146.... If counted according to multiplicity, the number of totient numbers up to a given limit x is | { n : φ ( n ) ≤ x } | = ζ ( 2 ) ζ ( 3 ) ζ ( 6 ) ⋅ x + R ( x ) {\displaystyle {\Big \vert }\{n:\varphi (n)\leq x\}{\Big \vert }={\frac {\zeta (2)\zeta (3)}{\zeta (6)}}\cdot x+R(x)} where the error term R is of order at most x/(log x)^k for any positive k. It is known that the multiplicity of m exceeds m^δ infinitely often for any δ < 0.55655. === Ford's theorem === Ford (1999) proved that for every integer k ≥ 2 there is a totient number m of multiplicity k: that is, for which the equation φ(n) = m has exactly k solutions; this result had previously been conjectured by Wacław Sierpiński, and it had been obtained as a consequence of Schinzel's hypothesis H. Indeed, each multiplicity that occurs does so infinitely often. However, no number m is known with multiplicity k = 1. Carmichael's totient function conjecture is the statement that there is no such m. === Perfect totient numbers === A perfect totient number is an integer that is equal to the sum of its iterated totients. That is, we apply the totient function to a number n, apply it again to the resulting totient, and so on, until the number 1 is reached, and add together the resulting sequence of numbers; if the sum equals n, then n is a perfect totient number. == Applications == === Cyclotomy === In the last section of the Disquisitiones Gauss proves that a regular n-gon can be constructed with straightedge and compass if φ(n) is a power of 2. If n is a power of an odd prime number the formula for the totient says its totient can be a power of two only if n is a first power and n − 1 is a power of 2. The primes that are one more than a power of 2 are called Fermat primes, and only five are known: 3, 5, 17, 257, and 65537. Fermat and Gauss knew of these. Nobody has been able to prove whether there are any more. Thus, a regular n-gon has a straightedge-and-compass construction if n is a product of distinct Fermat primes and any power of 2. The first few such n are 2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40,... (sequence A003401 in the OEIS).
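The construction criterion just stated, equivalent to the Fermat-prime description, is easy to reproduce: keep exactly those n for which φ(n) is a power of two. A small sketch in Python (brute-force totient, so small n only):

```python
from math import gcd

def phi(n):
    return sum(gcd(n, k) == 1 for k in range(1, n + 1))

def power_of_two(m):
    return m & (m - 1) == 0  # valid for m >= 1

constructible = [n for n in range(2, 41) if power_of_two(phi(n))]
print(constructible)
# [2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40],
# reproducing the list of constructible n-gons above
```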
=== Prime number theorem for arithmetic progressions === By Dirichlet's theorem on arithmetic progressions, the primes are asymptotically equidistributed among the φ(n) residue classes a (mod n) with gcd(a, n) = 1, so each such class contains a proportion 1/φ(n) of all primes. === The RSA cryptosystem === Setting up an RSA system involves choosing large prime numbers p and q, computing n = pq and k = φ(n), and finding two numbers e and d such that ed ≡ 1 (mod k). The numbers n and e (the "encryption key") are released to the public, and d (the "decryption key") is kept private. A message, represented by an integer m, where 0 < m < n, is encrypted by computing S = m^e (mod n). It is decrypted by computing t = S^d (mod n). Euler's theorem can be used to show that if 0 < t < n, then t = m. (A toy numeric sketch of this setup appears after the external links below.) The security of an RSA system would be compromised if the number n could be efficiently factored or if φ(n) could be efficiently computed without factoring n. == Unsolved problems == === Lehmer's conjecture === If p is prime, then φ(p) = p − 1. In 1932 D. H. Lehmer asked if there are any composite numbers n such that φ(n) divides n − 1. None are known. In 1933 he proved that if any such n exists, it must be odd, square-free, and divisible by at least seven primes (i.e. ω(n) ≥ 7). In 1980 Cohen and Hagis proved that n > 10^20 and that ω(n) ≥ 14. Further, Hagis showed that if 3 divides n then n > 10^1937042 and ω(n) ≥ 298848. === Carmichael's conjecture === This states that there is no number n with the property that for all other numbers m, m ≠ n, φ(m) ≠ φ(n). See Ford's theorem above. As stated in the main article, if there is a single counterexample to this conjecture, there must be infinitely many counterexamples, and the smallest one has at least ten billion digits in base 10. === Riemann hypothesis === The Riemann hypothesis is true if and only if the inequality n φ ( n ) < e γ log ⁡ log ⁡ n + e γ ( 4 + γ − log ⁡ 4 π ) log ⁡ n {\displaystyle {\frac {n}{\varphi (n)}}<e^{\gamma }\log \log n+{\frac {e^{\gamma }(4+\gamma -\log 4\pi )}{\sqrt {\log n}}}} is true for all n ≥ p_120569#, where γ is Euler's constant and p_120569# is the product of the first 120569 primes. == See also == Carmichael function (λ) Dedekind psi function (𝜓) Divisor function (σ) Duffin–Schaeffer conjecture Generalizations of Fermat's little theorem Highly composite number Multiplicative group of integers modulo n Ramanujan sum Totient summatory function (𝛷) == Notes == == References == == External links == "Totient function", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Euler's Phi Function and the Chinese Remainder Theorem — proof that φ(n) is multiplicative Archived 2021-02-28 at the Wayback Machine Euler's totient function calculator in JavaScript — up to 20 digits Dineva, Rosica, The Euler Totient, the Möbius, and the Divisor Functions Archived 2021-01-16 at the Wayback Machine Plytage, Loomis, Polhill Summing Up The Euler Phi Function
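The toy RSA walk-through referenced above: a hedged sketch with deliberately tiny primes (the values of p, q, e, and the message are illustrative choices, not from the article; real deployments use primes of hundreds of digits).

```python
p, q = 61, 53                  # toy primes; real RSA uses very large ones
n = p * q                      # 3233, made public
k = (p - 1) * (q - 1)          # phi(n) = phi(p) * phi(q) = 3120, kept secret

e = 17                         # public encryption exponent, gcd(e, k) = 1
d = pow(e, -1, k)              # private exponent with e*d ≡ 1 (mod k)

m = 1234                       # message, 0 < m < n
S = pow(m, e, n)               # encrypt: S = m^e mod n
t = pow(S, d, n)               # decrypt: t = S^d mod n
print(d, S, t, t == m)         # Euler's theorem guarantees t == m here
```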
Wikipedia/Euler's_totient_function
In mathematics, the prime-counting function is the function counting the number of prime numbers less than or equal to some real number x. It is denoted by π(x) (unrelated to the number π). A symmetric variant sometimes seen is π0(x), which is equal to π(x) − 1⁄2 if x is exactly a prime number, and equal to π(x) otherwise. That is, the number of prime numbers less than x, plus half if x equals a prime. == Growth rate == Of great interest in number theory is the growth rate of the prime-counting function. It was conjectured at the end of the 18th century by Gauss and by Legendre to be approximately x log ⁡ x {\displaystyle {\frac {x}{\log x}}} where log is the natural logarithm, in the sense that lim x → ∞ π ( x ) x / log ⁡ x = 1. {\displaystyle \lim _{x\rightarrow \infty }{\frac {\pi (x)}{x/\log x}}=1.} This statement is the prime number theorem. An equivalent statement is lim x → ∞ π ( x ) li ⁡ ( x ) = 1 {\displaystyle \lim _{x\rightarrow \infty }{\frac {\pi (x)}{\operatorname {li} (x)}}=1} where li is the logarithmic integral function. The prime number theorem was first proved in 1896 by Jacques Hadamard and by Charles de la Vallée Poussin independently, using properties of the Riemann zeta function introduced by Riemann in 1859. Proofs of the prime number theorem not using the zeta function or complex analysis were found around 1948 by Atle Selberg and by Paul Erdős (for the most part independently). === More precise estimates === In 1899, de la Vallée Poussin proved that π ( x ) = li ⁡ ( x ) + O ( x e − a log ⁡ x ) as x → ∞ {\displaystyle \pi (x)=\operatorname {li} (x)+O\left(xe^{-a{\sqrt {\log x}}}\right)\quad {\text{as }}x\to \infty } for some positive constant a. Here, O(...) is the big O notation. More precise estimates of π(x) are now known. For example, in 2002, Kevin Ford proved that π ( x ) = li ⁡ ( x ) + O ( x exp ⁡ ( − 0.2098 ( log ⁡ x ) 3 / 5 ( log ⁡ log ⁡ x ) − 1 / 5 ) ) . {\displaystyle \pi (x)=\operatorname {li} (x)+O\left(x\exp \left(-0.2098(\log x)^{3/5}(\log \log x)^{-1/5}\right)\right).} Mossinghoff and Trudgian proved an explicit upper bound for the difference between π(x) and li(x): | π ( x ) − li ⁡ ( x ) | ≤ 0.2593 x ( log ⁡ x ) 3 / 4 exp ⁡ ( − log ⁡ x 6.315 ) for x ≥ 229. {\displaystyle {\bigl |}\pi (x)-\operatorname {li} (x){\bigr |}\leq 0.2593{\frac {x}{(\log x)^{3/4}}}\exp \left(-{\sqrt {\frac {\log x}{6.315}}}\right)\quad {\text{for }}x\geq 229.} For values of x that are not unreasonably large, li(x) is greater than π(x). However, π(x) − li(x) is known to change sign infinitely many times. For a discussion of this, see Skewes' number. === Exact form === For x > 1 let π0(x) = π(x) − 1/2 when x is a prime number, and π0(x) = π(x) otherwise. Bernhard Riemann, in his work On the Number of Primes Less Than a Given Magnitude, proved that π0(x) is equal to π 0 ( x ) = R ⁡ ( x ) − ∑ ρ R ⁡ ( x ρ ) , {\displaystyle \pi _{0}(x)=\operatorname {R} (x)-\sum _{\rho }\operatorname {R} (x^{\rho }),} where R ⁡ ( x ) = ∑ n = 1 ∞ μ ( n ) n li ⁡ ( x 1 / n ) , {\displaystyle \operatorname {R} (x)=\sum _{n=1}^{\infty }{\frac {\mu (n)}{n}}\operatorname {li} \left(x^{1/n}\right),} μ(n) is the Möbius function, li(x) is the logarithmic integral function, ρ indexes every zero of the Riemann zeta function, and li(x^(ρ/n)) is not evaluated with a branch cut but instead considered as Ei((ρ/n) log x) where Ei(x) is the exponential integral. 
If the trivial zeros are collected and the sum is taken only over the non-trivial zeros ρ of the Riemann zeta function, then π0(x) may be approximated by π 0 ( x ) ≈ R ⁡ ( x ) − ∑ ρ R ⁡ ( x ρ ) − 1 log ⁡ x + 1 π arctan ⁡ π log ⁡ x . {\displaystyle \pi _{0}(x)\approx \operatorname {R} (x)-\sum _{\rho }\operatorname {R} \left(x^{\rho }\right)-{\frac {1}{\log x}}+{\frac {1}{\pi }}\arctan {\frac {\pi }{\log x}}.} The Riemann hypothesis suggests that every such non-trivial zero lies along Re(s) = 1/2. == Table of π(x), x/log x, and li(x) == The table shows how the three functions π(x), x/log x, and li(x) compare at powers of 10. In the On-Line Encyclopedia of Integer Sequences, the π(x) column is sequence OEIS: A006880, π(x) − x/log x is sequence OEIS: A057835, and li(x) − π(x) is sequence OEIS: A057752. The value for π(10^24) was originally computed by J. Buethe, J. Franke, A. Jost, and T. Kleinjung assuming the Riemann hypothesis. It was later verified unconditionally in a computation by D. J. Platt. The value for π(10^25) is by the same four authors. The value for π(10^26) was computed by D. B. Staple. All other prior entries in this table were also verified as part of that work. The values for 10^27, 10^28, and 10^29 were announced by David Baugh and Kim Walisch in 2015, 2020, and 2022, respectively. == Algorithms for evaluating π(x) == A simple way to find π(x), if x is not too large, is to use the sieve of Eratosthenes to produce the primes less than or equal to x and then to count them. A more elaborate way of finding π(x) is due to Legendre (using the inclusion–exclusion principle): given x, if p1, p2,…, pn are distinct prime numbers, then the number of integers less than or equal to x which are divisible by no pi is ⌊ x ⌋ − ∑ i ⌊ x p i ⌋ + ∑ i < j ⌊ x p i p j ⌋ − ∑ i < j < k ⌊ x p i p j p k ⌋ + ⋯ {\displaystyle \lfloor x\rfloor -\sum _{i}\left\lfloor {\frac {x}{p_{i}}}\right\rfloor +\sum _{i<j}\left\lfloor {\frac {x}{p_{i}p_{j}}}\right\rfloor -\sum _{i<j<k}\left\lfloor {\frac {x}{p_{i}p_{j}p_{k}}}\right\rfloor +\cdots } (where ⌊x⌋ denotes the floor function). This number is therefore equal to π ( x ) − π ( x ) + 1 {\displaystyle \pi (x)-\pi \left({\sqrt {x}}\right)+1} when the numbers p1, p2,…, pn are the prime numbers less than or equal to the square root of x. === The Meissel–Lehmer algorithm === In a series of articles published between 1870 and 1885, Ernst Meissel described (and used) a practical combinatorial way of evaluating π(x): Let p1, p2,…, pn be the first n primes and denote by Φ(m,n) the number of natural numbers not greater than m which are divisible by none of the pi for any i ≤ n. Then Φ ( m , n ) = Φ ( m , n − 1 ) − Φ ( m p n , n − 1 ) . {\displaystyle \Phi (m,n)=\Phi (m,n-1)-\Phi \left({\frac {m}{p_{n}}},n-1\right).} Given a natural number m, if n = π(∛m) and if μ = π(√m) − n, then π ( m ) = Φ ( m , n ) + n ( μ + 1 ) + μ 2 − μ 2 − 1 − ∑ k = 1 μ π ( m p n + k ) . {\displaystyle \pi (m)=\Phi (m,n)+n(\mu +1)+{\frac {\mu ^{2}-\mu }{2}}-1-\sum _{k=1}^{\mu }\pi \left({\frac {m}{p_{n+k}}}\right).} Using this approach, Meissel computed π(x), for x equal to 5×10^5, 10^6, 10^7, and 10^8. In 1959, Derrick Henry Lehmer extended and simplified Meissel's method. Define, for real m and for natural numbers n and k, Pk(m,n) as the number of numbers not greater than m with exactly k prime factors, all greater than pn. Furthermore, set P0(m,n) = 1. 
Then Φ ( m , n ) = ∑ k = 0 + ∞ P k ( m , n ) {\displaystyle \Phi (m,n)=\sum _{k=0}^{+\infty }P_{k}(m,n)} where the sum actually has only finitely many nonzero terms. Let y denote an integer such that ∛m ≤ y ≤ √m, and set n = π(y). Then P1(m,n) = π(m) − n and Pk(m,n) = 0 when k ≥ 3. Therefore, π ( m ) = Φ ( m , n ) + n − 1 − P 2 ( m , n ) {\displaystyle \pi (m)=\Phi (m,n)+n-1-P_{2}(m,n)} The computation of P2(m,n) can be obtained this way: P 2 ( m , n ) = ∑ y < p ≤ m ( π ( m p ) − π ( p ) + 1 ) {\displaystyle P_{2}(m,n)=\sum _{y<p\leq {\sqrt {m}}}\left(\pi \left({\frac {m}{p}}\right)-\pi (p)+1\right)} where the sum is over prime numbers. On the other hand, the computation of Φ(m,n) can be done using the following rules: Φ ( m , 0 ) = ⌊ m ⌋ {\displaystyle \Phi (m,0)=\lfloor m\rfloor } Φ ( m , b ) = Φ ( m , b − 1 ) − Φ ( m p b , b − 1 ) {\displaystyle \Phi (m,b)=\Phi (m,b-1)-\Phi \left({\frac {m}{p_{b}}},b-1\right)} Using his method and an IBM 701, Lehmer was able to compute the correct value of π(10^9) and missed the correct value of π(10^10) by 1. Further improvements to this method were made by Lagarias, Miller, Odlyzko, Deléglise, and Rivat. == Other prime-counting functions == Other prime-counting functions are also used because they are more convenient to work with. === Riemann's prime-power counting function === Riemann's prime-power counting function is usually denoted as Π0(x) or J0(x). It has jumps of 1/n at prime powers p^n and it takes a value halfway between the two sides at the discontinuities of π(x). That added detail is used because the function may then be defined by an inverse Mellin transform. Formally, we may define Π0(x) by Π 0 ( x ) = 1 2 ( ∑ p n < x 1 n + ∑ p n ≤ x 1 n ) {\displaystyle \Pi _{0}(x)={\frac {1}{2}}\left(\sum _{p^{n}<x}{\frac {1}{n}}+\sum _{p^{n}\leq x}{\frac {1}{n}}\right)\ } where the variable p in each sum ranges over all primes within the specified limits. We may also write Π 0 ( x ) = ∑ n = 2 x Λ ( n ) log ⁡ n − Λ ( x ) 2 log ⁡ x = ∑ n = 1 ∞ 1 n π 0 ( x 1 / n ) {\displaystyle \ \Pi _{0}(x)=\sum _{n=2}^{x}{\frac {\Lambda (n)}{\log n}}-{\frac {\Lambda (x)}{2\log x}}=\sum _{n=1}^{\infty }{\frac {1}{n}}\pi _{0}\left(x^{1/n}\right)} where Λ is the von Mangoldt function and π 0 ( x ) = lim ε → 0 π ( x − ε ) + π ( x + ε ) 2 . {\displaystyle \pi _{0}(x)=\lim _{\varepsilon \to 0}{\frac {\pi (x-\varepsilon )+\pi (x+\varepsilon )}{2}}.} The Möbius inversion formula then gives π 0 ( x ) = ∑ n = 1 ∞ μ ( n ) n Π 0 ( x 1 / n ) , {\displaystyle \pi _{0}(x)=\sum _{n=1}^{\infty }{\frac {\mu (n)}{n}}\ \Pi _{0}\left(x^{1/n}\right),} where μ(n) is the Möbius function. Knowing the relationship between the logarithm of the Riemann zeta function and the von Mangoldt function Λ, and using the Perron formula we have log ⁡ ζ ( s ) = s ∫ 0 ∞ Π 0 ( x ) x − s − 1 d x {\displaystyle \log \zeta (s)=s\int _{0}^{\infty }\Pi _{0}(x)x^{-s-1}\,\mathrm {d} x} === Chebyshev's function === The Chebyshev function weights primes or prime powers p^n by log p: ϑ ( x ) = ∑ p ≤ x log ⁡ p ψ ( x ) = ∑ p n ≤ x log ⁡ p = ∑ n = 1 ∞ ϑ ( x 1 / n ) = ∑ n ≤ x Λ ( n ) . {\displaystyle {\begin{aligned}\vartheta (x)&=\sum _{p\leq x}\log p\\\psi (x)&=\sum _{p^{n}\leq x}\log p=\sum _{n=1}^{\infty }\vartheta \left(x^{1/n}\right)=\sum _{n\leq x}\Lambda (n).\end{aligned}}} For x ≥ 2, ϑ ( x ) = π ( x ) log ⁡ x − ∫ 2 x π ( t ) t d t {\displaystyle \vartheta (x)=\pi (x)\log x-\int _{2}^{x}{\frac {\pi (t)}{t}}\,\mathrm {d} t} and π ( x ) = ϑ ( x ) log ⁡ x + ∫ 2 x ϑ ( t ) t log 2 ⁡ ( t ) d t . 
{\displaystyle \pi (x)={\frac {\vartheta (x)}{\log x}}+\int _{2}^{x}{\frac {\vartheta (t)}{t\log ^{2}(t)}}\mathrm {d} t.} == Formulas for prime-counting functions == Formulas for prime-counting functions come in two kinds: arithmetic formulas and analytic formulas. Analytic formulas for prime-counting were the first used to prove the prime number theorem. They stem from the work of Riemann and von Mangoldt, and are generally known as explicit formulae. We have the following expression for the second Chebyshev function ψ: ψ 0 ( x ) = x − ∑ ρ x ρ ρ − log ⁡ 2 π − 1 2 log ⁡ ( 1 − x − 2 ) , {\displaystyle \psi _{0}(x)=x-\sum _{\rho }{\frac {x^{\rho }}{\rho }}-\log 2\pi -{\frac {1}{2}}\log \left(1-x^{-2}\right),} where ψ 0 ( x ) = lim ε → 0 ψ ( x − ε ) + ψ ( x + ε ) 2 . {\displaystyle \psi _{0}(x)=\lim _{\varepsilon \to 0}{\frac {\psi (x-\varepsilon )+\psi (x+\varepsilon )}{2}}.} Here ρ are the zeros of the Riemann zeta function in the critical strip, where the real part of ρ is between zero and one. The formula is valid for values of x greater than one, which is the region of interest. The sum over the roots is conditionally convergent, and should be taken in order of increasing absolute value of the imaginary part. Note that the same sum over the trivial roots gives the last subtrahend in the formula. For Π0(x) we have a more complicated formula Π 0 ( x ) = li ⁡ ( x ) − ∑ ρ li ⁡ ( x ρ ) − log ⁡ 2 + ∫ x ∞ d t t ( t 2 − 1 ) log ⁡ t . {\displaystyle \Pi _{0}(x)=\operatorname {li} (x)-\sum _{\rho }\operatorname {li} \left(x^{\rho }\right)-\log 2+\int _{x}^{\infty }{\frac {\mathrm {d} t}{t\left(t^{2}-1\right)\log t}}.} Again, the formula is valid for x > 1, while ρ are the nontrivial zeros of the zeta function ordered according to their absolute value. The first term li(x) is the usual logarithmic integral function; the expression li(x^ρ) in the second term should be considered as Ei(ρ log x), where Ei is the analytic continuation of the exponential integral function from negative reals to the complex plane with branch cut along the positive reals. The final integral is equal to the series over the trivial zeros: ∫ x ∞ d t t ( t 2 − 1 ) log ⁡ t = ∫ x ∞ 1 t log ⁡ t ( ∑ m t − 2 m ) d t = ∑ m ∫ x ∞ t − 2 m t log ⁡ t d t = ( u = t − 2 m ) − ∑ m li ⁡ ( x − 2 m ) {\displaystyle \int _{x}^{\infty }{\frac {\mathrm {d} t}{t\left(t^{2}-1\right)\log t}}=\int _{x}^{\infty }{\frac {1}{t\log t}}\left(\sum _{m}t^{-2m}\right)\,\mathrm {d} t=\sum _{m}\int _{x}^{\infty }{\frac {t^{-2m}}{t\log t}}\,\mathrm {d} t\,\,{\overset {\left(u=t^{-2m}\right)}{=}}-\sum _{m}\operatorname {li} \left(x^{-2m}\right)} Thus, the Möbius inversion formula gives us π 0 ( x ) = R ⁡ ( x ) − ∑ ρ R ⁡ ( x ρ ) − ∑ m R ⁡ ( x − 2 m ) {\displaystyle \pi _{0}(x)=\operatorname {R} (x)-\sum _{\rho }\operatorname {R} \left(x^{\rho }\right)-\sum _{m}\operatorname {R} \left(x^{-2m}\right)} valid for x > 1, where R ⁡ ( x ) = ∑ n = 1 ∞ μ ( n ) n li ⁡ ( x 1 / n ) = 1 + ∑ k = 1 ∞ ( log ⁡ x ) k k ! k ζ ( k + 1 ) {\displaystyle \operatorname {R} (x)=\sum _{n=1}^{\infty }{\frac {\mu (n)}{n}}\operatorname {li} \left(x^{1/n}\right)=1+\sum _{k=1}^{\infty }{\frac {\left(\log x\right)^{k}}{k!k\zeta (k+1)}}} is Riemann's R-function and μ(n) is the Möbius function. The latter series for it is known as the Gram series. Because log x < x for all x > 0, this series converges for all positive x by comparison with the series for e^x. The logarithm in the Gram series of the sum over the non-trivial zero contribution should be evaluated as ρ log x and not log x^ρ. 
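To see how sharp the Gram series is in practice, the following sketch (an illustration added here, not part of the article; it assumes the mpmath library for the zeta values) compares an exact sieve count π(x) with Riemann's R(x) computed from the Gram series and with the cruder x/log x.

```python
import math
from mpmath import mp, zeta, factorial, log as mlog

mp.dps = 30  # working precision for the zeta values

def pi_exact(x):
    """pi(x) by a plain sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(x) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, x + 1, p)))
    return sum(sieve)

def riemann_R(x, terms=120):
    """Gram series: R(x) = 1 + sum_{k>=1} (log x)^k / (k! * k * zeta(k+1))."""
    lx = mlog(x)
    return 1 + sum(lx ** k / (factorial(k) * k * zeta(k + 1))
                   for k in range(1, terms + 1))

for x in (10 ** 3, 10 ** 4, 10 ** 5):
    print(x, pi_exact(x), float(riemann_R(x)), round(x / math.log(x), 1))
# pi(10^3) = 168, pi(10^4) = 1229, pi(10^5) = 9592; R(x) lands within a few
# units of pi(x), while x/log x undercounts by several percent.
```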
Folkmar Bornemann proved, when assuming the conjecture that all zeros of the Riemann zeta function are simple, that R ⁡ ( e − 2 π t ) = 1 π ∑ k = 1 ∞ ( − 1 ) k − 1 t − 2 k − 1 ( 2 k + 1 ) ζ ( 2 k + 1 ) + 1 2 ∑ ρ t − ρ ρ cos ⁡ π ρ 2 ζ ′ ( ρ ) {\displaystyle \operatorname {R} \left(e^{-2\pi t}\right)={\frac {1}{\pi }}\sum _{k=1}^{\infty }{\frac {(-1)^{k-1}t^{-2k-1}}{(2k+1)\zeta (2k+1)}}+{\frac {1}{2}}\sum _{\rho }{\frac {t^{-\rho }}{\rho \cos {\frac {\pi \rho }{2}}\zeta '(\rho )}}} where ρ runs over the non-trivial zeros of the Riemann zeta function and t > 0. The sum over non-trivial zeta zeros in the formula for π0(x) describes the fluctuations of π0(x) while the remaining terms give the "smooth" part of prime-counting function, so one can use R ⁡ ( x ) − ∑ m = 1 ∞ R ⁡ ( x − 2 m ) {\displaystyle \operatorname {R} (x)-\sum _{m=1}^{\infty }\operatorname {R} \left(x^{-2m}\right)} as a good estimator of π(x) for x > 1. In fact, since the second term approaches 0 as x → ∞, while the amplitude of the "noisy" part is heuristically about ⁠√x/log x⁠, estimating π(x) by R(x) alone is just as good, and fluctuations of the distribution of primes may be clearly represented with the function ( π 0 ( x ) − R ⁡ ( x ) ) log ⁡ x x . {\displaystyle {\bigl (}\pi _{0}(x)-\operatorname {R} (x){\bigr )}{\frac {\log x}{\sqrt {x}}}.} == Inequalities == Ramanujan proved that the inequality π ( x ) 2 < e x log ⁡ x π ( x e ) {\displaystyle \pi (x)^{2}<{\frac {ex}{\log x}}\pi \left({\frac {x}{e}}\right)} holds for all sufficiently large values of x. Here are some useful inequalities for π(x). x log ⁡ x < π ( x ) < 1.25506 x log ⁡ x for x ≥ 17. {\displaystyle {\frac {x}{\log x}}<\pi (x)<1.25506{\frac {x}{\log x}}\quad {\text{for }}x\geq 17.} The left inequality holds for x ≥ 17 and the right inequality holds for x > 1. The constant 1.25506 is 30⁠log 113/113⁠ to 5 decimal places, as π(x) ⁠log x/x⁠ has its maximum value at x = p30 = 113. Pierre Dusart proved in 2010: x log ⁡ x − 1 < π ( x ) < x log ⁡ x − 1.1 for x ≥ 5393 and x ≥ 60184 , respectively. {\displaystyle {\frac {x}{\log x-1}}<\pi (x)<{\frac {x}{\log x-1.1}}\quad {\text{for }}x\geq 5393{\text{ and }}x\geq 60184,{\text{ respectively.}}} More recently, Dusart has proved (Theorem 5.1) that x log ⁡ x ( 1 + 1 log ⁡ x + 2 log 2 ⁡ x ) ≤ π ( x ) ≤ x log ⁡ x ( 1 + 1 log ⁡ x + 2 log 2 ⁡ x + 7.59 log 3 ⁡ x ) , {\displaystyle {\frac {x}{\log x}}\left(1+{\frac {1}{\log x}}+{\frac {2}{\log ^{2}x}}\right)\leq \pi (x)\leq {\frac {x}{\log x}}\left(1+{\frac {1}{\log x}}+{\frac {2}{\log ^{2}x}}+{\frac {7.59}{\log ^{3}x}}\right),} for x ≥ 88789 and x > 1, respectively. Going in the other direction, an approximation for the nth prime, pn, is p n = n ( log ⁡ n + log ⁡ log ⁡ n − 1 + log ⁡ log ⁡ n − 2 log ⁡ n + O ( ( log ⁡ log ⁡ n ) 2 ( log ⁡ n ) 2 ) ) . {\displaystyle p_{n}=n\left(\log n+\log \log n-1+{\frac {\log \log n-2}{\log n}}+O\left({\frac {(\log \log n)^{2}}{(\log n)^{2}}}\right)\right).} Here are some inequalities for the nth prime. The lower bound is due to Dusart (1999) and the upper bound to Rosser (1941). n ( log ⁡ n + log ⁡ log ⁡ n − 1 ) < p n < n ( log ⁡ n + log ⁡ log ⁡ n ) for n ≥ 6. {\displaystyle n(\log n+\log \log n-1)<p_{n}<n(\log n+\log \log n)\quad {\text{for }}n\geq 6.} The left inequality holds for n ≥ 2 and the right inequality holds for n ≥ 6. A variant form sometimes seen substitutes log ⁡ n + log ⁡ log ⁡ n = log ⁡ ( n log ⁡ n ) . 
{\displaystyle \log n+\log \log n=\log(n\log n).} An even simpler lower bound is n log ⁡ n < p n , {\displaystyle n\log n<p_{n},} which holds for all n ≥ 1, but the lower bound above is tighter for n > e^e ≈ 15.154. In 2010 Dusart proved (Propositions 6.7 and 6.6) that n ( log ⁡ n + log ⁡ log ⁡ n − 1 + log ⁡ log ⁡ n − 2.1 log ⁡ n ) ≤ p n ≤ n ( log ⁡ n + log ⁡ log ⁡ n − 1 + log ⁡ log ⁡ n − 2 log ⁡ n ) , {\displaystyle n\left(\log n+\log \log n-1+{\frac {\log \log n-2.1}{\log n}}\right)\leq p_{n}\leq n\left(\log n+\log \log n-1+{\frac {\log \log n-2}{\log n}}\right),} for n ≥ 3 and n ≥ 688383, respectively. In 2024, Axler further tightened this (equations 1.12 and 1.13) using bounds of the form f ( n , g ( w ) ) = n ( log ⁡ n + log ⁡ log ⁡ n − 1 + log ⁡ log ⁡ n − 2 log ⁡ n − g ( log ⁡ log ⁡ n ) 2 log 2 ⁡ n ) {\displaystyle f(n,g(w))=n\left(\log n+\log \log n-1+{\frac {\log \log n-2}{\log n}}-{\frac {g(\log \log n)}{2\log ^{2}n}}\right)} proving that f ( n , w 2 − 6 w + 11.321 ) ≤ p n ≤ f ( n , w 2 − 6 w ) {\displaystyle f(n,w^{2}-6w+11.321)\leq p_{n}\leq f(n,w^{2}-6w)} for n ≥ 2 and n ≥ 3468, respectively. The lower bound may also be simplified to f(n, w^2) without altering its validity. The upper bound may be tightened to f(n, w^2 − 6w + 10.667) if n ≥ 46254381. There are additional bounds of varying complexity. (A numerical check of the simpler nth-prime bounds displayed above is sketched at the end of this article.) == The Riemann hypothesis == The Riemann hypothesis implies a much tighter bound on the error in the estimate for π(x), and hence a more regular distribution of prime numbers, π ( x ) = li ⁡ ( x ) + O ( x log ⁡ x ) . {\displaystyle \pi (x)=\operatorname {li} (x)+O({\sqrt {x}}\log {x}).} Specifically, | π ( x ) − li ⁡ ( x ) | < x 8 π log ⁡ x , for all x ≥ 2657. {\displaystyle |\pi (x)-\operatorname {li} (x)|<{\frac {\sqrt {x}}{8\pi }}\,\log {x},\quad {\text{for all }}x\geq 2657.} Dudek (2015) proved that the Riemann hypothesis implies that for all x ≥ 2 there is a prime p satisfying x − 4 π x log ⁡ x < p ≤ x . {\displaystyle x-{\frac {4}{\pi }}{\sqrt {x}}\log x<p\leq x.} == See also == Bertrand's postulate Oppermann's conjecture Ramanujan prime == References == === Notes === == External links == Chris Caldwell, The Nth Prime Page at The Prime Pages. Tomás Oliveira e Silva, Tables of prime-counting functions. Dudek, Adrian W. (2015), "On the Riemann hypothesis and the difference between primes", International Journal of Number Theory, 11 (3): 771–778, arXiv:1402.6417, Bibcode:2014arXiv1402.6417D, doi:10.1142/S1793042115500426, ISSN 1793-0421, S2CID 119321107
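The numerical check referenced above: a hedged sketch (not from the article; the sieve limit is an arbitrary choice sized to reach the 100000th prime, whose value 1299709 is well known) testing the displayed n(log n + log log n − 1) < p_n < n(log n + log log n) bounds.

```python
import math

def primes_up_to(limit):
    """All primes <= limit via a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(limit) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

primes = primes_up_to(1_400_000)        # comfortably past p_100000 = 1299709
for n in (6, 100, 10_000, 100_000):
    pn = primes[n - 1]
    lo = n * (math.log(n) + math.log(math.log(n)) - 1)
    hi = n * (math.log(n) + math.log(math.log(n)))
    print(n, lo < pn < hi, round(lo), pn, round(hi))
# the bounds hold at every tested n >= 6, as the text asserts
```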
Wikipedia/Prime-counting_function
Geometry of numbers is the part of number theory which uses geometry for the study of algebraic numbers. Typically, a ring of algebraic integers is viewed as a lattice in R n , {\displaystyle \mathbb {R} ^{n},} and the study of these lattices provides fundamental information on algebraic numbers. Hermann Minkowski (1896) initiated this line of research at the age of 26 in his work The Geometry of Numbers. The geometry of numbers has a close relationship with other fields of mathematics, especially functional analysis and Diophantine approximation, the problem of finding rational numbers that approximate an irrational quantity. == Minkowski's results == Suppose that Γ {\displaystyle \Gamma } is a lattice in n {\displaystyle n} -dimensional Euclidean space R n {\displaystyle \mathbb {R} ^{n}} and K {\displaystyle K} is a convex centrally symmetric body. Minkowski's theorem, sometimes called Minkowski's first theorem, states that if vol ⁡ ( K ) > 2 n vol ⁡ ( R n / Γ ) {\displaystyle \operatorname {vol} (K)>2^{n}\operatorname {vol} (\mathbb {R} ^{n}/\Gamma )} , then K {\displaystyle K} contains a nonzero vector in Γ {\displaystyle \Gamma } . (A brute-force numerical illustration of this theorem is sketched after the bibliography below.) The successive minimum λ k {\displaystyle \lambda _{k}} is defined to be the infimum of the numbers λ {\displaystyle \lambda } such that λ K {\displaystyle \lambda K} contains k {\displaystyle k} linearly independent vectors of Γ {\displaystyle \Gamma } . Minkowski's theorem on successive minima, sometimes called Minkowski's second theorem, is a strengthening of his first theorem and states that λ 1 λ 2 ⋯ λ n vol ⁡ ( K ) ≤ 2 n vol ⁡ ( R n / Γ ) . {\displaystyle \lambda _{1}\lambda _{2}\cdots \lambda _{n}\operatorname {vol} (K)\leq 2^{n}\operatorname {vol} (\mathbb {R} ^{n}/\Gamma ).} == Later research in the geometry of numbers == From 1930 to 1960, research on the geometry of numbers was conducted by many number theorists (including Louis Mordell, Harold Davenport and Carl Ludwig Siegel). In recent years, Lenstra, Brion, and Barvinok have developed combinatorial theories that enumerate the lattice points in some convex bodies. === Subspace theorem of W. M. Schmidt === In the geometry of numbers, the subspace theorem was obtained by Wolfgang M. Schmidt in 1972. It states that if n is a positive integer, and L1,...,Ln are linearly independent linear forms in n variables with algebraic coefficients and if ε > 0 is any given real number, then the non-zero integer points x in n coordinates with | L 1 ( x ) ⋯ L n ( x ) | < | x | − ε {\displaystyle |L_{1}(x)\cdots L_{n}(x)|<|x|^{-\varepsilon }} lie in a finite number of proper subspaces of Q^n. == Influence on functional analysis == Minkowski's geometry of numbers had a profound influence on functional analysis. Minkowski proved that symmetric convex bodies induce norms in finite-dimensional vector spaces. Minkowski's theorem was generalized to topological vector spaces by Kolmogorov, whose theorem states that the symmetric convex sets that are closed and bounded generate the topology of a Banach space. Researchers continue to study generalizations to star-shaped sets and other non-convex sets. == References == == Bibliography == Matthias Beck, Sinai Robins. Computing the continuous discretely: Integer-point enumeration in polyhedra, Undergraduate Texts in Mathematics, Springer, 2007. Enrico Bombieri; Vaaler, J. (Feb 1983). "On Siegel's lemma". Inventiones Mathematicae. 73 (1): 11–32. Bibcode:1983InMat..73...11B. doi:10.1007/BF01393823. S2CID 121274024. Enrico Bombieri & Walter Gubler (2006). Heights in Diophantine Geometry. 
Cambridge U. P. J. W. S. Cassels. An Introduction to the Geometry of Numbers. Springer Classics in Mathematics, Springer-Verlag 1997 (reprint of 1959 and 1971 Springer-Verlag editions). John Horton Conway and N. J. A. Sloane, Sphere Packings, Lattices and Groups, Springer-Verlag, NY, 3rd ed., 1998. R. J. Gardner, Geometric tomography, Cambridge University Press, New York, 1995. Second edition: 2006. P. M. Gruber, Convex and discrete geometry, Springer-Verlag, New York, 2007. P. M. Gruber, J. M. Wills (editors), Handbook of convex geometry. Vol. A. B, North-Holland, Amsterdam, 1993. M. Grötschel, Lovász, L., A. Schrijver: Geometric Algorithms and Combinatorial Optimization, Springer, 1988 Hancock, Harris (1939). Development of the Minkowski Geometry of Numbers. Macmillan. (Republished in 1964 by Dover.) Edmund Hlawka, Johannes Schoißengeier, Rudolf Taschner. Geometric and Analytic Number Theory. Universitext. Springer-Verlag, 1991. Kalton, Nigel J.; Peck, N. Tenney; Roberts, James W. (1984), An F-space sampler, London Mathematical Society Lecture Note Series, 89, Cambridge: Cambridge University Press, pp. xii+240, ISBN 0-521-27585-7, MR 0808777 C. G. Lekkerkerker. Geometry of Numbers. Wolters-Noordhoff, North Holland, Wiley. 1969. Lenstra, A. K.; Lenstra, H. W. Jr.; Lovász, L. (1982). "Factoring polynomials with rational coefficients" (PDF). Mathematische Annalen. 261 (4): 515–534. doi:10.1007/BF01457454. hdl:1887/3810. MR 0682664. S2CID 5701340. Lovász, L.: An Algorithmic Theory of Numbers, Graphs, and Convexity, CBMS-NSF Regional Conference Series in Applied Mathematics 50, SIAM, Philadelphia, Pennsylvania, 1986 Malyshev, A.V. (2001) [1994], "Geometry of numbers", Encyclopedia of Mathematics, EMS Press Minkowski, Hermann (1910), Geometrie der Zahlen, Leipzig and Berlin: B. G. Teubner, JFM 41.0239.03, MR 0249269, retrieved 2016-02-28 Wolfgang M. Schmidt. Diophantine approximation. Lecture Notes in Mathematics 785. Springer. (1980 [1996 with minor corrections]) Schmidt, Wolfgang M. (1996). Diophantine approximations and Diophantine equations. Lecture Notes in Mathematics. Vol. 1467 (2nd ed.). Springer-Verlag. ISBN 3-540-54058-X. Zbl 0754.11020. Siegel, Carl Ludwig (1989). Lectures on the Geometry of Numbers. Springer-Verlag. Rolf Schneider, Convex bodies: the Brunn-Minkowski theory, Cambridge University Press, Cambridge, 1993. Anthony C. Thompson, Minkowski geometry, Cambridge University Press, Cambridge, 1996. Hermann Weyl. Theory of reduction for arithmetical equivalence. Trans. Amer. Math. Soc. 48 (1940) 126–164. doi:10.1090/S0002-9947-1940-0002345-2 Hermann Weyl. Theory of reduction for arithmetical equivalence. II. Trans. Amer. Math. Soc. 51 (1942) 203–231. doi:10.2307/1989946
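The numerical illustration of Minkowski's first theorem referenced above: a hedged sketch (the lattice matrix, disk radius, and search range are arbitrary choices, not from the text). A disk, being convex and centrally symmetric, qualifies as the body K; once its area exceeds 2² times the covolume of the lattice, brute force finds the promised nonzero lattice point.

```python
import itertools
import math

A = [[2, 1], [0, 3]]                       # lattice basis, covolume |det A| = 6
det = abs(A[0][0] * A[1][1] - A[0][1] * A[1][0])
R = math.sqrt(4 * det / math.pi) * 1.001   # disk area just above 2^2 * covolume

hits = []
for u, v in itertools.product(range(-10, 11), repeat=2):
    if (u, v) == (0, 0):
        continue                           # the theorem promises a NONZERO point
    x, y = A[0][0] * u + A[0][1] * v, A[1][0] * u + A[1][1] * v
    if x * x + y * y <= R * R:
        hits.append((x, y))

print(round(R, 3), hits)                   # e.g. (2, 0) and (-2, 0) lie in the disk
```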
Wikipedia/Geometric_number_theory
In number theory, the study of Diophantine approximation deals with the approximation of real numbers by rational numbers. It is named after Diophantus of Alexandria. The first problem was to know how well a real number can be approximated by rational numbers. For this problem, a rational number p/q is a "good" approximation of a real number α if the absolute value of the difference between p/q and α may not decrease if p/q is replaced by another rational number with a smaller denominator. This problem was solved during the 18th century by means of simple continued fractions. Knowing the "best" approximations of a given number, the main problem of the field is to find sharp upper and lower bounds of the above difference, expressed as a function of the denominator. It appears that these bounds depend on the nature of the real numbers to be approximated: the lower bound for the approximation of a rational number by another rational number is larger than the lower bound for algebraic numbers, which is itself larger than the lower bound for all real numbers. Thus a real number that may be better approximated than the bound for algebraic numbers is certainly a transcendental number. This knowledge enabled Liouville, in 1844, to produce the first explicit transcendental number. Later, the proofs that π and e are transcendental were obtained by a similar method. Diophantine approximations and transcendental number theory are very close areas that share many theorems and methods. Diophantine approximations also have important applications in the study of Diophantine equations. The 2022 Fields Medal was awarded to James Maynard, in part for his work on Diophantine approximation. == Best Diophantine approximations of a real number == Given a real number α, there are two ways to define a best Diophantine approximation of α. For the first definition, the rational number p/q is a best Diophantine approximation of α if | α − p q | < | α − p ′ q ′ | , {\displaystyle \left|\alpha -{\frac {p}{q}}\right|<\left|\alpha -{\frac {p'}{q'}}\right|,} for every rational number p'/q' different from p/q such that 0 < q′ ≤ q. For the second definition, the above inequality is replaced by | q α − p | < | q ′ α − p ′ | . {\displaystyle \left|q\alpha -p\right|<\left|q^{\prime }\alpha -p^{\prime }\right|.} A best approximation for the second definition is also a best approximation for the first one, but the converse is not true in general. The theory of continued fractions allows us to compute the best approximations of a real number: for the second definition, they are the convergents of its expression as a regular continued fraction. For the first definition, one has to consider also the semiconvergents. For example, the constant e = 2.718281828459045235... has the (regular) continued fraction representation [ 2 ; 1 , 2 , 1 , 1 , 4 , 1 , 1 , 6 , 1 , 1 , 8 , 1 , … ] . {\displaystyle [2;1,2,1,1,4,1,1,6,1,1,8,1,\ldots \;].} Its best approximations for the second definition are 3 , 8 3 , 11 4 , 19 7 , 87 32 , … , {\displaystyle 3,{\tfrac {8}{3}},{\tfrac {11}{4}},{\tfrac {19}{7}},{\tfrac {87}{32}},\ldots \,,} while, for the first definition, they are 3 , 5 2 , 8 3 , 11 4 , 19 7 , 49 18 , 68 25 , 87 32 , 106 39 , … . 
{\displaystyle 3,{\tfrac {5}{2}},{\tfrac {8}{3}},{\tfrac {11}{4}},{\tfrac {19}{7}},{\tfrac {49}{18}},{\tfrac {68}{25}},{\tfrac {87}{32}},{\tfrac {106}{39}},\ldots \,.} == Measure of the accuracy of approximations == The obvious measure of the accuracy of a Diophantine approximation of a real number α by a rational number p/q is | α − p q | . {\textstyle \left|\alpha -{\frac {p}{q}}\right|.} However, this quantity can always be made arbitrarily small by increasing the absolute values of p and q; thus the accuracy of the approximation is usually estimated by comparing this quantity to some function φ of the denominator q, typically a negative power of it. For such a comparison, one may want upper bounds or lower bounds of the accuracy. A lower bound is typically described by a theorem like "for every element α of some subset of the real numbers and every rational number p/q, we have | α − p q | > ϕ ( q ) {\textstyle \left|\alpha -{\frac {p}{q}}\right|>\phi (q)} ". In some cases, "every rational number" may be replaced by "all rational numbers except a finite number of them", which amounts to multiplying φ by some constant depending on α. For upper bounds, one has to take into account that not all the "best" Diophantine approximations provided by the convergents may have the desired accuracy. Therefore, the theorems take the form "for every element α of some subset of the real numbers, there are infinitely many rational numbers p/q such that | α − p q | < ϕ ( q ) {\textstyle \left|\alpha -{\frac {p}{q}}\right|<\phi (q)} ". === Badly approximable numbers === A badly approximable number is an x for which there is a positive constant c such that for all rational p/q we have | x − p q | > c q 2 . {\displaystyle \left|{x-{\frac {p}{q}}}\right|>{\frac {c}{q^{2}}}\ .} The badly approximable numbers are precisely those with bounded partial quotients. Equivalently, a number is badly approximable if and only if its Markov constant is finite or equivalently its simple continued fraction is bounded. == Lower bounds for Diophantine approximations == === Approximation of a rational by other rationals === A rational number α = a b {\textstyle \alpha ={\frac {a}{b}}} may be obviously and perfectly approximated by p i q i = i a i b {\textstyle {\frac {p_{i}}{q_{i}}}={\frac {i\,a}{i\,b}}} for every positive integer i. If p q ≠ α = a b , {\textstyle {\frac {p}{q}}\not =\alpha ={\frac {a}{b}}\,,} we have | a b − p q | = | a q − b p b q | ≥ 1 b q , {\displaystyle \left|{\frac {a}{b}}-{\frac {p}{q}}\right|=\left|{\frac {aq-bp}{bq}}\right|\geq {\frac {1}{bq}},} because | a q − b p | {\displaystyle |aq-bp|} is a positive integer and is thus not lower than 1. Thus the accuracy of the approximation is bad relative to irrational numbers (see next sections). It may be remarked that the preceding proof uses a variant of the pigeonhole principle: a non-negative integer that is not 0 is not smaller than 1. This apparently trivial remark is used in almost every proof of lower bounds for Diophantine approximations, even the most sophisticated ones. In summary, a rational number is perfectly approximated by itself, but is badly approximated by any other rational number. 
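A short computation ties the threads above together (an illustrative sketch, not part of the article): the convergents of the continued fraction of e reproduce the best approximations listed earlier, and the quantity q²·|α − p/q| stays bounded away from 0 for the golden ratio, whose partial quotients are all 1 and which is therefore badly approximable, while dipping much lower for e, whose partial quotients are unbounded.

```python
from fractions import Fraction
import math

def convergents(cf):
    """Yield the convergents h/k of a continued fraction [a0; a1, a2, ...]."""
    h_prev, k_prev, h, k = 1, 0, cf[0], 1
    yield Fraction(h, k)
    for a in cf[1:]:
        h_prev, k_prev, h, k = h, k, a * h + h_prev, a * k + k_prev
        yield Fraction(h, k)

e_cf = [2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1]     # start of e's known pattern
phi_cf = [1] * 20                                   # golden ratio: all quotients 1

print([str(f) for f in convergents(e_cf)][:8])
# -> ['2', '3', '8/3', '11/4', '19/7', '87/32', '106/39', '193/71'],
# matching the best approximations of e listed earlier

for alpha, cf in ((math.e, e_cf), ((1 + math.sqrt(5)) / 2, phi_cf)):
    prods = [f.denominator ** 2 * abs(alpha - f) for f in convergents(cf)]
    print([round(p, 4) for p in prods[-4:]])
# for the golden ratio the products hover near 1/sqrt(5) ≈ 0.447;
# for e they dip far lower whenever a large partial quotient follows
```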
=== Approximation of algebraic numbers, Liouville's result === In the 1840s, Joseph Liouville obtained the first lower bound for the approximation of algebraic numbers: If x is an irrational algebraic number of degree n over the rational numbers, then there exists a constant c(x) > 0 such that | x − p q | > c ( x ) q n {\displaystyle \left|x-{\frac {p}{q}}\right|>{\frac {c(x)}{q^{n}}}} holds for all integers p and q where q > 0. This result allowed him to produce the first proven example of a transcendental number, the Liouville constant ∑ j = 1 ∞ 10 − j ! = 0.110001000000000000000001000 … , {\displaystyle \sum _{j=1}^{\infty }10^{-j!}=0.110001000000000000000001000\ldots \,,} which does not satisfy Liouville's theorem, whichever degree n is chosen. This link between Diophantine approximations and transcendental number theory continues to the present day. Many of the proof techniques are shared between the two areas. === Approximation of algebraic numbers, Thue–Siegel–Roth theorem === Over more than a century, there were many efforts to improve Liouville's theorem: every improvement of the bound enables us to prove that more numbers are transcendental. The main improvements are due to Axel Thue (1909), Siegel (1921), Freeman Dyson (1947), and Klaus Roth (1955), leading finally to the Thue–Siegel–Roth theorem: If x is an irrational algebraic number and ε > 0, then there exists a positive real number c(x, ε) such that | x − p q | > c ( x , ε ) q 2 + ε {\displaystyle \left|x-{\frac {p}{q}}\right|>{\frac {c(x,\varepsilon )}{q^{2+\varepsilon }}}} holds for every integer p and q such that q > 0. In some sense, this result is optimal, as the theorem would be false with ε = 0. This is an immediate consequence of the upper bounds described below. === Simultaneous approximations of algebraic numbers === Subsequently, Wolfgang M. Schmidt generalized this to the case of simultaneous approximations, proving that: If x1, ..., xn are algebraic numbers such that 1, x1, ..., xn are linearly independent over the rational numbers and ε is any given positive real number, then there are only finitely many rational n-tuples (p1/q, ..., pn/q) such that | x i − p i q | < q − ( 1 + 1 / n + ε ) , i = 1 , … , n . {\displaystyle \left|x_{i}-{\frac {p_{i}}{q}}\right|<q^{-(1+1/n+\varepsilon )},\quad i=1,\ldots ,n.} Again, this result is optimal in the sense that one may not remove ε from the exponent. === Effective bounds === None of the preceding lower bounds is effective, in the sense that the proofs do not provide any way to compute the constant implied in the statements. This means that one cannot use the results or their proofs to obtain bounds on the size of solutions of related Diophantine equations. However, these techniques and results can often be used to bound the number of solutions of such equations. Nevertheless, a refinement of Baker's theorem by Feldman provides an effective bound: if x is an algebraic number of degree n over the rational numbers, then there exist effectively computable constants c(x) > 0 and 0 < d(x) < n such that | x − p q | > c ( x ) | q | d ( x ) {\displaystyle \left|x-{\frac {p}{q}}\right|>{\frac {c(x)}{|q|^{d(x)}}}} holds for all rational integers p and q with q ≠ 0. However, as for every effective version of Baker's theorem, the constants d and 1/c are so large that this effective result cannot be used in practice. 
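Liouville's inequality can be watched failing numerically for his constant (an illustrative sketch; the truncation depth 6 is an arbitrary choice standing in for the full constant): the partial sums p/q with q = 10^(k!) approximate it so well that qⁿ·|x − p/q| collapses to 0 for every fixed n, which Liouville's theorem forbids for an algebraic number of degree n.

```python
import math
from fractions import Fraction

def liouville_partial(k):
    """Exact k-th partial sum of sum_{j>=1} 10^(-j!)."""
    return sum(Fraction(1, 10 ** math.factorial(j)) for j in range(1, k + 1))

x = liouville_partial(6)            # deep truncation, proxy for the full constant
for k in range(1, 5):
    pq = liouville_partial(k)       # the approximation p/q with q = 10^(k!)
    q = 10 ** math.factorial(k)
    err = x - pq                    # exact rational error
    print(k, [float(err * q ** n) for n in (2, 3, 4)])
# each column tends to 0 as k grows, so |x - p/q| > c/q^n fails for every n
```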
== Upper bounds for Diophantine approximations == === General upper bound === The first important result about upper bounds for Diophantine approximations is Dirichlet's approximation theorem, which implies that, for every irrational number α, there are infinitely many fractions p q {\displaystyle {\tfrac {p}{q}}\;} such that | α − p q | < 1 q 2 . {\displaystyle \left|\alpha -{\frac {p}{q}}\right|<{\frac {1}{q^{2}}}\,.} This implies immediately that one cannot suppress the ε in the statement of the Thue–Siegel–Roth theorem. Adolf Hurwitz (1891) strengthened this result, proving that for every irrational number α, there are infinitely many fractions p q {\displaystyle {\tfrac {p}{q}}\;} such that | α − p q | < 1 5 q 2 . {\displaystyle \left|\alpha -{\frac {p}{q}}\right|<{\frac {1}{{\sqrt {5}}q^{2}}}\,.} Therefore, 1 5 q 2 {\displaystyle {\frac {1}{{\sqrt {5}}\,q^{2}}}} is an upper bound for the Diophantine approximations of any irrational number. The constant in this result may not be further improved without excluding some irrational numbers (see below). Émile Borel (1903) showed that, in fact, given any irrational number α, and given three consecutive convergents of α, at least one must satisfy the inequality given in Hurwitz's theorem. === Equivalent real numbers === Definition: Two real numbers x , y {\displaystyle x,y} are called equivalent if there are integers a , b , c , d {\displaystyle a,b,c,d\;} with a d − b c = ± 1 {\displaystyle ad-bc=\pm 1\;} such that: y = a x + b c x + d . {\displaystyle y={\frac {ax+b}{cx+d}}\,.} So equivalence is defined by an integer Möbius transformation on the real numbers, or by a member of the modular group SL 2 ± ( Z ) {\displaystyle {\text{SL}}_{2}^{\pm }(\mathbb {Z} )} , the set of invertible 2 × 2 matrices over the integers. Each rational number is equivalent to 0; thus the rational numbers are an equivalence class for this relation. The equivalence may be read on the regular continued fraction representation, as shown by the following theorem of Serret: Theorem: Two irrational numbers x and y are equivalent if and only if there exist two positive integers h and k such that the regular continued fraction representations of x and y x = [ u 0 ; u 1 , u 2 , … ] , y = [ v 0 ; v 1 , v 2 , … ] , {\displaystyle {\begin{aligned}x&=[u_{0};u_{1},u_{2},\ldots ]\,,\\y&=[v_{0};v_{1},v_{2},\ldots ]\,,\end{aligned}}} satisfy u h + i = v k + i {\displaystyle u_{h+i}=v_{k+i}} for every non-negative integer i. Thus, except for a finite initial sequence, equivalent numbers have the same continued fraction representation. Equivalent numbers are approximable to the same degree, in the sense that they have the same Markov constant. === Lagrange spectrum === As said above, the constant in Borel's theorem may not be improved, as shown by Adolf Hurwitz in 1891. Let ϕ = 1 + 5 2 {\displaystyle \phi ={\tfrac {1+{\sqrt {5}}}{2}}} be the golden ratio. Then for any real constant c with c > 5 {\displaystyle c>{\sqrt {5}}\;} there are only a finite number of rational numbers p/q such that | ϕ − p q | < 1 c q 2 . {\displaystyle \left|\phi -{\frac {p}{q}}\right|<{\frac {1}{c\,q^{2}}}.} Hence an improvement can only be achieved if the numbers which are equivalent to ϕ {\displaystyle \phi } are excluded. More precisely: For every irrational number α {\displaystyle \alpha } , which is not equivalent to ϕ {\displaystyle \phi } , there are infinitely many fractions p q {\displaystyle {\tfrac {p}{q}}\;} such that | α − p q | < 1 8 q 2 . 
{\displaystyle \left|\alpha -{\frac {p}{q}}\right|<{\frac {1}{{\sqrt {8}}q^{2}}}.} By successively excluding more and more equivalence classes (next, the numbers equivalent to 2 {\displaystyle {\sqrt {2}}} must be excluded), the lower bound can be further enlarged. The values which may be generated in this way are Lagrange numbers, which are part of the Lagrange spectrum. They converge to the number 3 and are related to the Markov numbers. == Khinchin's theorem on metric Diophantine approximation and extensions == Let ψ {\displaystyle \psi } be a positive real-valued function on positive integers (i.e., a positive sequence) such that q ψ ( q ) {\displaystyle q\psi (q)} is non-increasing. A real number x (not necessarily algebraic) is called ψ {\displaystyle \psi } -approximable if there exist infinitely many rational numbers p/q such that | x − p q | < ψ ( q ) | q | . {\displaystyle \left|x-{\frac {p}{q}}\right|<{\frac {\psi (q)}{|q|}}.} Aleksandr Khinchin proved in 1926 that if the series ∑ q ψ ( q ) {\textstyle \sum _{q}\psi (q)} diverges, then almost every real number (in the sense of Lebesgue measure) is ψ {\displaystyle \psi } -approximable, and if the series converges, then almost every real number is not ψ {\displaystyle \psi } -approximable. The circle of ideas surrounding this theorem and its relatives is known as metric Diophantine approximation or the metric theory of Diophantine approximation (not to be confused with height "metrics" in Diophantine geometry) or metric number theory. Duffin & Schaeffer (1941) proved a generalization of Khinchin's result, and posed what is now known as the Duffin–Schaeffer conjecture on the analogue of Khinchin's dichotomy for general, not necessarily decreasing, sequences ψ {\displaystyle \psi } . Beresnevich & Velani (2006) proved that a Hausdorff measure analogue of the Duffin–Schaeffer conjecture is equivalent to the original Duffin–Schaeffer conjecture, which is a priori weaker. In July 2019, Dimitris Koukoulopoulos and James Maynard announced a proof of the conjecture. === Hausdorff dimension of exceptional sets === An important example of a function ψ {\displaystyle \psi } to which Khinchin's theorem can be applied is the function ψ c ( q ) = q − c {\displaystyle \psi _{c}(q)=q^{-c}} , where c > 1 is a real number. For this function, the relevant series converges and so Khinchin's theorem tells us that almost every point is not ψ c {\displaystyle \psi _{c}} -approximable. Thus, the set of numbers which are ψ c {\displaystyle \psi _{c}} -approximable forms a subset of the real line of Lebesgue measure zero. The Jarník-Besicovitch theorem, due to V. Jarník and A. S. Besicovitch, states that the Hausdorff dimension of this set is equal to 1 / c {\displaystyle 1/c} . In particular, the set of numbers which are ψ c {\displaystyle \psi _{c}} -approximable for some c > 1 {\displaystyle c>1} (known as the set of very well approximable numbers) has Hausdorff dimension one, while the set of numbers which are ψ c {\displaystyle \psi _{c}} -approximable for all c > 1 {\displaystyle c>1} (known as the set of Liouville numbers) has Hausdorff dimension zero. Another important example is the function ψ ε ( q ) = ε q − 1 {\displaystyle \psi _{\varepsilon }(q)=\varepsilon q^{-1}} , where ε > 0 {\displaystyle \varepsilon >0} is a real number. For this function, the relevant series diverges and so Khinchin's theorem tells us that almost every number is ψ ε {\displaystyle \psi _{\varepsilon }} -approximable. 
This is the same as saying that every such number is well approximable, where a number is called well approximable if it is not badly approximable. So an appropriate analogue of the Jarník-Besicovitch theorem should concern the Hausdorff dimension of the set of badly approximable numbers. And indeed, V. Jarník proved that the Hausdorff dimension of this set is equal to one. This result was improved by W. M. Schmidt, who showed that the set of badly approximable numbers is incompressible, meaning that if f 1 , f 2 , … {\displaystyle f_{1},f_{2},\ldots } is a sequence of bi-Lipschitz maps, then the set of numbers x for which f 1 ( x ) , f 2 ( x ) , … {\displaystyle f_{1}(x),f_{2}(x),\ldots } are all badly approximable has Hausdorff dimension one. Schmidt also generalized Jarník's theorem to higher dimensions, a significant achievement because Jarník's argument is essentially one-dimensional, depending on the apparatus of continued fractions. == Uniform distribution == Another topic that has seen a thorough development is the theory of uniform distribution mod 1. Take a sequence a1, a2, ... of real numbers and consider their fractional parts. That is, more abstractly, look at the sequence in R / Z {\displaystyle \mathbb {R} /\mathbb {Z} } , which is a circle. For any interval I on the circle we look at the proportion of the sequence's elements that lie in it, up to some integer N, and compare it to the proportion of the circumference occupied by I. Uniform distribution means that in the limit, as N grows, the proportion of hits on the interval tends to the 'expected' value. Hermann Weyl proved a basic result showing that this was equivalent to bounds for exponential sums formed from the sequence. This showed that Diophantine approximation results were closely related to the general problem of cancellation in exponential sums, which occurs throughout analytic number theory in the bounding of error terms. Related to uniform distribution is the topic of irregularities of distribution, which is of a combinatorial nature. == Algorithms == Grötschel, Lovász and Schrijver describe algorithms for finding approximately-best Diophantine approximations, both for individual real numbers and for sets of real numbers. The latter problem is called simultaneous Diophantine approximation (Sec. 5.2); a brute-force illustration is sketched below, just before the See also section. == Unsolved problems == There are still simply stated unsolved problems remaining in Diophantine approximation, for example the Littlewood conjecture and the lonely runner conjecture. It is also unknown if there are algebraic numbers with unbounded coefficients in their continued fraction expansion. == Recent developments == In his plenary address at the International Congress of Mathematicians in Kyoto (1990), Grigory Margulis outlined a broad program rooted in ergodic theory that allows one to prove number-theoretic results using the dynamical and ergodic properties of actions of subgroups of semisimple Lie groups. The work of D. Kleinbock, G. Margulis and their collaborators demonstrated the power of this novel approach to classical problems in Diophantine approximation. Among its notable successes are the proof of the decades-old Oppenheim conjecture by Margulis, with later extensions by Dani and Margulis and Eskin–Margulis–Mozes, and the proof of the Baker and Sprindzhuk conjectures in the Diophantine approximations on manifolds by Kleinbock and Margulis. Various generalizations of the above results of Aleksandr Khinchin in metric Diophantine approximation have also been obtained within this framework. 
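The brute-force illustration of simultaneous Diophantine approximation referenced in the Algorithms section: a hedged sketch only (real algorithms such as those of Grötschel, Lovász and Schrijver use lattice basis reduction rather than enumeration; the target numbers and the bound Q are arbitrary choices).

```python
import math

def simultaneous_approx(xs, Q):
    """Search q <= Q minimizing max_i ||q*x_i||, the distance to the nearest integer.
    Dirichlet's theorem guarantees some q <= Q with error at most about Q**(-1/len(xs))."""
    best = None
    for q in range(1, Q + 1):
        err = max(abs(q * x - round(q * x)) for x in xs)
        if best is None or err < best[0]:
            best = (err, q, [round(q * x) for x in xs])
    return best

err, q, ps = simultaneous_approx([math.sqrt(2), math.e], 1000)
print(q, ps, err, err < 1000 ** -0.5)   # common denominator q for both numbers
```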
== See also == Davenport–Schmidt theorem Duffin–Schaeffer theorem Heilbronn set Low-discrepancy sequence == Notes == == References == == External links == Diophantine Approximation: historical survey Archived 2012-02-14 at the Wayback Machine. From Introduction to Diophantine methods course by Michel Waldschmidt. "Diophantine approximations", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Diophantine_approximations
Pell's equation, also called the Pell–Fermat equation, is any Diophantine equation of the form x 2 − n y 2 = 1 , {\displaystyle x^{2}-ny^{2}=1,} where n is a given positive nonsquare integer, and integer solutions are sought for x and y. In Cartesian coordinates, the equation is represented by a hyperbola; solutions occur wherever the curve passes through a point whose x and y coordinates are both integers, such as the trivial solution with x = 1 and y = 0. Joseph Louis Lagrange proved that, as long as n is not a perfect square, Pell's equation has infinitely many distinct integer solutions. These solutions may be used to accurately approximate the square root of n by rational numbers of the form x/y. This equation was first studied extensively in India starting with Brahmagupta, who found an integer solution to 92 x 2 + 1 = y 2 {\displaystyle 92x^{2}+1=y^{2}} in his Brāhmasphuṭasiddhānta circa 628. Bhaskara II in the 12th century and Narayana Pandit in the 14th century both found general solutions to Pell's equation and other quadratic indeterminate equations. Bhaskara II is generally credited with developing the chakravala method, building on the work of Jayadeva and Brahmagupta. Solutions to specific examples of Pell's equation, such as the Pell numbers arising from the equation with n = 2, had been known for much longer, since the time of Pythagoras in Greece and a similar date in India. William Brouncker was the first European to solve Pell's equation. The name of Pell's equation arose from Leonhard Euler mistakenly attributing Brouncker's solution of the equation to John Pell. == History == As early as 400 BC in India and Greece, mathematicians studied the numbers arising from the n = 2 case of Pell's equation, x 2 − 2 y 2 = 1 , {\displaystyle x^{2}-2y^{2}=1,} and from the closely related equation x 2 − 2 y 2 = − 1 {\displaystyle x^{2}-2y^{2}=-1} because of the connection of these equations to the square root of 2. Indeed, if x and y are positive integers satisfying this equation, then x/y is an approximation of √2. The numbers x and y appearing in these approximations, called side and diameter numbers, were known to the Pythagoreans, and Proclus observed that in the opposite direction these numbers obeyed one of these two equations. Similarly, Baudhayana discovered that x = 17, y = 12 and x = 577, y = 408 are two solutions to the Pell equation, and that 17/12 and 577/408 are very close approximations to the square root of 2. Later, Archimedes approximated the square root of 3 by the rational number 1351/780. Although he did not explain his methods, this approximation may be obtained in the same way, as a solution to Pell's equation. Likewise, Archimedes's cattle problem — an ancient word problem about finding the number of cattle belonging to the sun god Helios — can be solved by reformulating it as a Pell's equation. The manuscript containing the problem states that it was devised by Archimedes and recorded in a letter to Eratosthenes, and the attribution to Archimedes is generally accepted today. Around AD 250, Diophantus considered the equation a 2 x 2 + c = y 2 , {\displaystyle a^{2}x^{2}+c=y^{2},} where a and c are fixed numbers, and x and y are the variables to be solved for. This equation is different in form from Pell's equation but equivalent to it. Diophantus solved the equation for (a, c) equal to (1, 1), (1, −1), (1, 12), and (3, 9). Al-Karaji, a 10th-century Persian mathematician, worked on similar problems to Diophantus. 
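The side and diameter numbers mentioned above obey a simple coupled recurrence, which makes the √2 connection easy to watch numerically before the history continues (an illustrative sketch; the step count is arbitrary): starting from s = d = 1, the update (s, d) → (s + d, d + 2s) alternates between solutions of d² − 2s² = −1 and +1, and d/s tends to √2.

```python
s, d = 1, 1                      # side & diameter numbers: d^2 - 2*s^2 = -1 here
for _ in range(7):
    s, d = s + d, d + 2 * s      # tuple assignment uses the old s, d on the right
    print(s, d, d * d - 2 * s * s, round(d / s, 8))
# the third column alternates between +1 and -1; the rows include (12, 17) and
# (408, 577), i.e. Baudhayana's approximations 17/12 and 577/408, and d/s -> sqrt(2)
```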
In Indian mathematics, Brahmagupta discovered that ( x 1 2 − N y 1 2 ) ( x 2 2 − N y 2 2 ) = ( x 1 x 2 + N y 1 y 2 ) 2 − N ( x 1 y 2 + x 2 y 1 ) 2 , {\displaystyle (x_{1}^{2}-Ny_{1}^{2})(x_{2}^{2}-Ny_{2}^{2})=(x_{1}x_{2}+Ny_{1}y_{2})^{2}-N(x_{1}y_{2}+x_{2}y_{1})^{2},} a form of what is now known as Brahmagupta's identity. Using this, he was able to "compose" triples ( x 1 , y 1 , k 1 ) {\displaystyle (x_{1},y_{1},k_{1})} and ( x 2 , y 2 , k 2 ) {\displaystyle (x_{2},y_{2},k_{2})} that were solutions of x 2 − N y 2 = k {\displaystyle x^{2}-Ny^{2}=k} , to generate the new triples ( x 1 x 2 + N y 1 y 2 , x 1 y 2 + x 2 y 1 , k 1 k 2 ) {\displaystyle (x_{1}x_{2}+Ny_{1}y_{2},x_{1}y_{2}+x_{2}y_{1},k_{1}k_{2})} and ( x 1 x 2 − N y 1 y 2 , x 1 y 2 − x 2 y 1 , k 1 k 2 ) . {\displaystyle (x_{1}x_{2}-Ny_{1}y_{2},x_{1}y_{2}-x_{2}y_{1},k_{1}k_{2}).} Not only did this give a way to generate infinitely many solutions to x 2 − N y 2 = 1 {\displaystyle x^{2}-Ny^{2}=1} starting with one solution, but also, by dividing such a composition by k 1 k 2 {\displaystyle k_{1}k_{2}} , integer or "nearly integer" solutions could often be obtained. For instance, for N = 92 {\displaystyle N=92} , Brahmagupta composed the triple (10, 1, 8) (since 10 2 − 92 ( 1 2 ) = 8 {\displaystyle 10^{2}-92(1^{2})=8} ) with itself to get the new triple (192, 20, 64). Dividing throughout by 64 ("8" for x {\displaystyle x} and y {\displaystyle y} ) gave the triple (24, 5/2, 1), which when composed with itself gave the desired integer solution (1151, 120, 1). Brahmagupta solved many Pell's equations with this method, proving that it gives solutions starting from an integer solution of x 2 − N y 2 = k {\displaystyle x^{2}-Ny^{2}=k} for k = ±1, ±2, or ±4. The first general method for solving the Pell's equation (for all N) was given by Bhāskara II in 1150, extending the methods of Brahmagupta. Called the chakravala (cyclic) method, it starts by choosing two relatively prime integers a {\displaystyle a} and b {\displaystyle b} , then composing the triple ( a , b , k ) {\displaystyle (a,b,k)} (that is, one which satisfies a 2 − N b 2 = k {\displaystyle a^{2}-Nb^{2}=k} ) with the trivial triple ( m , 1 , m 2 − N ) {\displaystyle (m,1,m^{2}-N)} to get the triple ( a m + N b , a + b m , k ( m 2 − N ) ) {\displaystyle {\big (}am+Nb,a+bm,k(m^{2}-N){\big )}} , which can be scaled down to ( a m + N b k , a + b m k , m 2 − N k ) . {\displaystyle \left({\frac {am+Nb}{k}},{\frac {a+bm}{k}},{\frac {m^{2}-N}{k}}\right).} When m {\displaystyle m} is chosen so that a + b m k {\displaystyle {\frac {a+bm}{k}}} is an integer, so are the other two numbers in the triple. Among such m {\displaystyle m} , the method chooses one that minimizes m 2 − N k {\displaystyle {\frac {m^{2}-N}{k}}} and repeats the process. This method always terminates with a solution. Bhaskara used it to give the solution x = 1766319049, y = 226153980 to the N = 61 case. Several European mathematicians rediscovered how to solve Pell's equation in the 17th century. Pierre de Fermat found how to solve the equation and in a 1657 letter issued it as a challenge to English mathematicians. In a letter to Kenelm Digby, Bernard Frénicle de Bessy said that Fermat found the smallest solution for N up to 150 and challenged John Wallis to solve the cases N = 151 or 313. Both Wallis and William Brouncker gave solutions to these problems, though Wallis suggests in a letter that the solution was due to Brouncker. 
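A compact rendering of the chakravala cycle just described is below: a sketch under stated assumptions, not a definitive implementation. It relies on Python's modular inverse pow(b, -1, m) (so it needs the method's standard invariant gcd(b, k) = 1), picks m greedily to minimize |m² − N| among the admissible residue class, and omits Brahmagupta's k = ±2, ±4 shortcuts, relying instead on the fact quoted above that the plain cycle always reaches k = 1.

```python
import math

def chakravala(N):
    """Solve x^2 - N*y^2 = 1 for nonsquare N > 0 by the chakravala method."""
    r = math.isqrt(N)
    if r * r == N:
        raise ValueError("N must not be a perfect square")
    a = r if N - r * r <= (r + 1) ** 2 - N else r + 1   # start with small |a^2 - N|
    b, k = 1, a * a - N
    while k != 1:
        ak = abs(k)
        # admissible m satisfy a + b*m ≡ 0 (mod |k|); scan the ones nearest sqrt(N)
        m0 = (-a * pow(b, -1, ak)) % ak if ak > 1 else 0
        t = (r - m0) // ak if ak > 1 else r
        candidates = [m0 + s * ak for s in (t - 1, t, t + 1, t + 2)]
        m = min((c for c in candidates if c > 0), key=lambda c: abs(c * c - N))
        # scaled Brahmagupta composition with (m, 1, m^2 - N); all divisions are exact
        a, b, k = (a * m + N * b) // ak, (a + b * m) // ak, (m * m - N) // k
    return a, b

print(chakravala(7))     # (8, 3)
print(chakravala(61))    # (1766319049, 226153980), Bhaskara's celebrated case
```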
John Pell's connection with the equation is that he revised Thomas Branker's translation of Johann Rahn's 1659 book Teutsche Algebra into English, with a discussion of Brouncker's solution of the equation. Leonhard Euler mistakenly thought that this solution was due to Pell, as a result of which he named the equation after Pell. The general theory of Pell's equation, based on continued fractions and algebraic manipulations with numbers of the form P + Q a , {\displaystyle P+Q{\sqrt {a}},} was developed by Lagrange in 1766–1769. In particular, Lagrange gave a proof that the Brouncker–Wallis algorithm always terminates. == Solutions == === Fundamental solution via continued fractions === Let h i / k i {\displaystyle h_{i}/k_{i}} denote the unique sequence of convergents of the regular continued fraction for n {\displaystyle {\sqrt {n}}} . Then the pair of positive integers ( x 1 , y 1 ) {\displaystyle (x_{1},y_{1})} solving Pell's equation and minimizing x satisfies x1 = hi and y1 = ki for some i. This pair is called the fundamental solution. The sequence of integers [ a 0 ; a 1 , a 2 , … ] {\displaystyle [a_{0};a_{1},a_{2},\ldots ]} in the regular continued fraction of n {\displaystyle {\sqrt {n}}} is always eventually periodic. It can be written in the form [ ⌊ n ⌋ ; a 1 , a 2 , … , a r − 1 , 2 ⌊ n ⌋ ¯ ] {\displaystyle \left[\lfloor {\sqrt {n}}\rfloor ;\;{\overline {a_{1},a_{2},\ldots ,a_{r-1},2\lfloor {\sqrt {n}}\rfloor }}\right]} , where ⌊ ⋅ ⌋ {\displaystyle \lfloor \,\cdot \,\rfloor } denotes integer floor, and the sequence a 1 , a 2 , … , a r − 1 , 2 ⌊ n ⌋ {\displaystyle a_{1},a_{2},\ldots ,a_{r-1},2\lfloor {\sqrt {n}}\rfloor } repeats infinitely. Moreover, the tuple ( a 1 , a 2 , … , a r − 1 ) {\displaystyle (a_{1},a_{2},\ldots ,a_{r-1})} is palindromic, the same left-to-right or right-to-left. The fundamental solution is ( x 1 , y 1 ) = { ( h r − 1 , k r − 1 ) , for r even ( h 2 r − 1 , k 2 r − 1 ) , for r odd {\displaystyle (x_{1},y_{1})={\begin{cases}(h_{r-1},k_{r-1}),&{\text{ for }}r{\text{ even}}\\(h_{2r-1},k_{2r-1}),&{\text{ for }}r{\text{ odd}}\end{cases}}} The computation time for finding the fundamental solution using the continued fraction method, with the aid of the Schönhage–Strassen algorithm for fast integer multiplication, is within a logarithmic factor of the solution size, the number of digits in the pair ( x 1 , y 1 ) {\displaystyle (x_{1},y_{1})} . However, this is not a polynomial-time algorithm because the number of digits in the solution may be as large as √n, far larger than a polynomial in the number of digits in the input value n. === Additional solutions from the fundamental solution === Once the fundamental solution is found, all remaining solutions may be calculated algebraically from x k + y k n = ( x 1 + y 1 n ) k , {\displaystyle x_{k}+y_{k}{\sqrt {n}}=(x_{1}+y_{1}{\sqrt {n}})^{k},} expanding the right side, equating coefficients of n {\displaystyle {\sqrt {n}}} on both sides, and equating the other terms on both sides. This yields the recurrence relations x k + 1 = x 1 x k + n y 1 y k , {\displaystyle x_{k+1}=x_{1}x_{k}+ny_{1}y_{k},} y k + 1 = x 1 y k + y 1 x k . 
{\displaystyle y_{k+1}=x_{1}y_{k}+y_{1}x_{k}.} === Concise representation and faster algorithms === Although writing out the fundamental solution (x1, y1) as a pair of binary numbers may require a large number of bits, it may in many cases be represented more compactly in the form x 1 + y 1 n = ∏ i = 1 t ( a i + b i n ) c i {\displaystyle x_{1}+y_{1}{\sqrt {n}}=\prod _{i=1}^{t}\left(a_{i}+b_{i}{\sqrt {n}}\right)^{c_{i}}} using much smaller integers ai, bi, and ci. For instance, Archimedes' cattle problem is equivalent to the Pell equation x 2 − 410 286 423 278 424 y 2 = 1 {\displaystyle x^{2}-410\,286\,423\,278\,424\ y^{2}=1} , the fundamental solution of which has 206545 digits if written out explicitly. However, the solution is also equal to x 1 + y 1 n = u 2329 , {\displaystyle x_{1}+y_{1}{\sqrt {n}}=u^{2329},} where u = x 1 ′ + y 1 ′ 4 729 494 = ( 300 426 607 914 281 713 365 609 + 84 129 507 677 858 393 258 7766 ) 2 {\displaystyle u=x'_{1}+y'_{1}{\sqrt {4\,729\,494}}=(300\,426\,607\,914\,281\,713\,365\ {\sqrt {609}}+84\,129\,507\,677\,858\,393\,258\ {\sqrt {7766}})^{2}} and x 1 ′ {\displaystyle x'_{1}} and y 1 ′ {\displaystyle y'_{1}} only have 45 and 41 decimal digits respectively. Methods related to the quadratic sieve approach for integer factorization may be used to collect relations between prime numbers in the number field generated by √n and to combine these relations to find a product representation of this type. The resulting algorithm for solving Pell's equation is more efficient than the continued fraction method, though it still takes more than polynomial time. Under the assumption of the generalized Riemann hypothesis, it can be shown to take time exp ⁡ O ( log ⁡ N ⋅ log ⁡ log ⁡ N ) , {\displaystyle \exp O\left({\sqrt {\log N\cdot \log \log N}}\right),} where N = log n is the input size, similarly to the quadratic sieve. === Quantum algorithms === Hallgren showed that a quantum computer can find a product representation, as described above, for the solution to Pell's equation in polynomial time. Hallgren's algorithm, which can be interpreted as an algorithm for finding the group of units of a real quadratic number field, was extended to more general fields by Schmidt and Völlmer. == Example == As an example, consider the instance of Pell's equation for n = 7; that is, x 2 − 7 y 2 = 1. {\displaystyle x^{2}-7y^{2}=1.} The continued fraction of 7 {\displaystyle {\sqrt {7}}} has the form [ 2 ; 1 , 1 , 1 , 4 ¯ ] {\displaystyle [2;\ {\overline {1,1,1,4}}]} . Since the period has length 4 {\displaystyle 4} , which is an even number, the convergent producing the fundamental solution is obtained by truncating the continued fraction right before the end of the first occurrence of the period: [ 2 ; 1 , 1 , 1 ] = 8 3 {\displaystyle [2;\ 1,1,1]={\frac {8}{3}}} . The first few convergents for the square root of seven are 2/1, 3/1, 5/2, 8/3, 37/14, 45/17, 82/31, 127/48, … Applying the recurrence formula to the fundamental solution (8, 3) generates the infinite sequence of solutions (1, 0); (8, 3); (127, 48); (2024, 765); (32257, 12192); (514088, 194307); (8193151, 3096720); (130576328, 49353213); ... (sequence A001081 (x) and A001080 (y) in OEIS) For the Pell equation x 2 − 13 y 2 = 1 , {\displaystyle x^{2}-13y^{2}=1,} the continued fraction 13 = [ 3 ; 1 , 1 , 1 , 1 , 6 ¯ ] {\displaystyle {\sqrt {13}}=[3;\ {\overline {1,1,1,1,6}}]} has a period of odd length.
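The whole procedure of this section (expand √n as a periodic continued fraction, read off the convergent indicated by the parity of the period, then iterate the recurrence) can be summarized in a short program. A minimal Python sketch, with helper names of our own choosing, which reproduces the n = 7 and n = 13 examples discussed here:

```python
from math import isqrt

def sqrt_cf(n):
    """One full period [a0; a1, ..., ar] of the continued fraction of sqrt(n),
    for nonsquare n, using the standard recurrence; the period ends at 2*a0."""
    a0 = isqrt(n)
    terms, (m, d, a) = [a0], (0, 1, a0)
    while a != 2 * a0:
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        terms.append(a)
    return terms                      # e.g. sqrt_cf(7) == [2, 1, 1, 1, 4]

def fundamental_solution(n):
    """Smallest (x1, y1) with x1^2 - n*y1^2 = 1, read off the convergents."""
    terms = sqrt_cf(n)
    r = len(terms) - 1                # period length
    needed = [terms[0]] + terms[1:] * 2   # enough terms of the periodic expansion
    h_prev, h, k_prev, k = 1, terms[0], 0, 1
    convergents = [(h, k)]
    for a in needed[1:]:
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
        convergents.append((h, k))
    i = r - 1 if r % 2 == 0 else 2 * r - 1
    return convergents[i]

def solutions(n, count):
    """First solutions via x_{k+1} = x1*x_k + n*y1*y_k, y_{k+1} = x1*y_k + y1*x_k."""
    x1, y1 = fundamental_solution(n)
    x, y, out = x1, y1, []
    for _ in range(count):
        out.append((x, y))
        x, y = x1 * x + n * y1 * y, x1 * y + y1 * x
    return out

print(fundamental_solution(7))    # (8, 3)
print(fundamental_solution(13))   # (649, 180)
print(solutions(7, 4))            # (8, 3), (127, 48), (2024, 765), (32257, 12192)
```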
For this odd-length period, the fundamental solution is obtained by truncating the continued fraction right before the second occurrence of the period [ 3 ; 1 , 1 , 1 , 1 , 6 , 1 , 1 , 1 , 1 ] = 649 180 {\displaystyle [3;\ 1,1,1,1,6,1,1,1,1]={\frac {649}{180}}} . Thus, the fundamental solution is ( x 1 , y 1 ) = ( 649 , 180 ) {\displaystyle (x_{1},y_{1})=(649,180)} . The smallest solution can be very large. For example, the smallest solution to x 2 − 313 y 2 = 1 {\displaystyle x^{2}-313y^{2}=1} is (32188120829134849, 1819380158564160), and this is the equation which Frénicle challenged Wallis to solve. Values of n such that the smallest solution of x 2 − n y 2 = 1 {\displaystyle x^{2}-ny^{2}=1} is greater than the smallest solution for any smaller value of n are 1, 2, 5, 10, 13, 29, 46, 53, 61, 109, 181, 277, 397, 409, 421, 541, 661, 1021, 1069, 1381, 1549, 1621, 2389, 3061, 3469, 4621, 4789, 4909, 5581, 6301, 6829, 8269, 8941, 9949, ... (sequence A033316 in the OEIS). (For these records, see OEIS: A033315 for x and OEIS: A033319 for y.) == List of fundamental solutions of Pell's equations == The fundamental solutions of x 2 − n y 2 = 1 {\displaystyle x^{2}-ny^{2}=1} have been tabulated for n ≤ 128. When n is an integer square, there is no solution except for the trivial solution (1, 0). The values of x are sequence A002350 and those of y are sequence A002349 in OEIS. == Connections == Pell's equation has connections to several other important subjects in mathematics. === Algebraic number theory === Pell's equation is closely related to the theory of algebraic numbers, as the formula x 2 − n y 2 = ( x + y n ) ( x − y n ) {\displaystyle x^{2}-ny^{2}=(x+y{\sqrt {n}})(x-y{\sqrt {n}})} is the norm for the ring Z [ n ] {\displaystyle \mathbb {Z} [{\sqrt {n}}]} and for the closely related quadratic field Q ( n ) {\displaystyle \mathbb {Q} ({\sqrt {n}})} . Thus, a pair of integers ( x , y ) {\displaystyle (x,y)} solves Pell's equation if and only if x + y n {\displaystyle x+y{\sqrt {n}}} is a unit with norm 1 in Z [ n ] {\displaystyle \mathbb {Z} [{\sqrt {n}}]} . Dirichlet's unit theorem, that all units of Z [ n ] {\displaystyle \mathbb {Z} [{\sqrt {n}}]} can be expressed as powers of a single fundamental unit (and multiplication by a sign), is an algebraic restatement of the fact that all solutions to the Pell's equation can be generated from the fundamental solution. The fundamental unit can in general be found by solving a Pell-like equation but it does not always correspond directly to the fundamental solution of Pell's equation itself, because the fundamental unit may have norm −1 rather than 1 and its coefficients may be half integers rather than integers. === Chebyshev polynomials === Demeyer mentions a connection between Pell's equation and the Chebyshev polynomials: If T i ( x ) {\displaystyle T_{i}(x)} and U i ( x ) {\displaystyle U_{i}(x)} are the Chebyshev polynomials of the first and second kind respectively, then these polynomials satisfy a form of Pell's equation in any polynomial ring R [ x ] {\displaystyle R[x]} , with n = x 2 − 1 {\displaystyle n=x^{2}-1} : T i 2 − ( x 2 − 1 ) U i − 1 2 = 1. {\displaystyle T_{i}^{2}-(x^{2}-1)U_{i-1}^{2}=1.} Thus, these polynomials can be generated by the standard technique for Pell's equations of taking powers of a fundamental solution: T i + U i − 1 x 2 − 1 = ( x + x 2 − 1 ) i . 
{\displaystyle T_{i}+U_{i-1}{\sqrt {x^{2}-1}}=(x+{\sqrt {x^{2}-1}})^{i}.} It may further be observed that if ( x i , y i ) {\displaystyle (x_{i},y_{i})} are the solutions to any integer Pell's equation, then x i = T i ( x 1 ) {\displaystyle x_{i}=T_{i}(x_{1})} and y i = y 1 U i − 1 ( x 1 ) {\displaystyle y_{i}=y_{1}U_{i-1}(x_{1})} . === Continued fractions === A general development of solutions of Pell's equation x 2 − n y 2 = 1 {\displaystyle x^{2}-ny^{2}=1} in terms of continued fractions of n {\displaystyle {\sqrt {n}}} can be presented, as the solutions x and y are approximations to the square root of n and thus are a special case of continued fraction approximations for quadratic irrationals. The relationship to the continued fractions implies that the solutions to Pell's equation form a semigroup subset of the modular group. Thus, for example, if p and q satisfy Pell's equation, then ( p q n q p ) {\displaystyle {\begin{pmatrix}p&q\\nq&p\end{pmatrix}}} is a matrix of unit determinant. Products of such matrices take exactly the same form, and thus all such products yield solutions to Pell's equation. This can be understood in part to arise from the fact that successive convergents of a continued fraction share the same property: If pk−1/qk−1 and pk/qk are two successive convergents of a continued fraction, then the matrix ( p k − 1 p k q k − 1 q k ) {\displaystyle {\begin{pmatrix}p_{k-1}&p_{k}\\q_{k-1}&q_{k}\end{pmatrix}}} has determinant (−1)k. === Smooth numbers === Størmer's theorem applies Pell equations to find pairs of consecutive smooth numbers, positive integers whose prime factors are all smaller than a given value. As part of this theory, Størmer also investigated divisibility relations among solutions to Pell's equation; in particular, he showed that each solution other than the fundamental solution has a prime factor that does not divide n. == The negative Pell's equation == The negative Pell's equation is given by x 2 − n y 2 = − 1 {\displaystyle x^{2}-ny^{2}=-1} and has also been extensively studied. It can be solved by the same method of continued fractions and has solutions if and only if the period of the continued fraction has odd length. A necessary (but not sufficient) condition for solvability is that n is not divisible by 4 or by a prime of form 4k + 3. Thus, for example, x2 − 3 y2 = −1 is never solvable, but x2 − 5 y2 = −1 is (for example, by x = 2, y = 1). The first few numbers n for which x2 − n y2 = −1 is solvable are 1 (with only one trivial solution) and 2, 5, 10, 13, 17, 26, 29, 37, 41, 50, 53, 58, 61, 65, 73, 74, 82, 85, 89, 97, ... (sequence A031396 in the OEIS); for each such n > 1 there are infinitely many solutions. The solutions of the negative Pell's equation have been tabulated for 1 ≤ n ≤ 298 {\displaystyle 1\leq n\leq 298} . Let α = ∏ j odd ( 1 − 2 − j ) {\displaystyle \alpha =\prod _{j{\text{ odd}}}(1-2^{-j})} . The proportion of square-free n divisible by k primes of the form 4m + 1 for which the negative Pell's equation is solvable is at least α. When the number of prime divisors is not fixed, the proportion is given by 1 − α. If the negative Pell's equation does have a solution for a particular n, its fundamental solution leads to the fundamental one for the positive case by squaring both sides of the defining equation: ( x 2 − n y 2 ) 2 = ( − 1 ) 2 {\displaystyle (x^{2}-ny^{2})^{2}=(-1)^{2}} implies ( x 2 + n y 2 ) 2 − n ( 2 x y ) 2 = 1. 
{\displaystyle (x^{2}+ny^{2})^{2}-n(2xy)^{2}=1.} As stated above, if the negative Pell's equation is solvable, a solution can be found using the method of continued fractions as in the positive Pell's equation. The recursion relation works slightly differently, however. Since ( x + y n ) ( x − y n ) = − 1 {\displaystyle (x+y{\sqrt {n}})(x-y{\sqrt {n}})=-1} , the next solution is determined in terms of i ( x k + y k n ) = ( i ( x + y n ) ) k {\displaystyle i(x_{k}+y_{k}{\sqrt {n}})=(i(x+y{\sqrt {n}}))^{k}} whenever there is a match, that is, when k {\displaystyle k} is odd. The resulting recursion relation is (modulo a minus sign, which is immaterial due to the quadratic nature of the equation) x k = x k − 2 x 1 2 + n x k − 2 y 1 2 + 2 n y k − 2 y 1 x 1 , {\displaystyle x_{k}=x_{k-2}x_{1}^{2}+nx_{k-2}y_{1}^{2}+2ny_{k-2}y_{1}x_{1},} y k = y k − 2 x 1 2 + n y k − 2 y 1 2 + 2 x k − 2 y 1 x 1 , {\displaystyle y_{k}=y_{k-2}x_{1}^{2}+ny_{k-2}y_{1}^{2}+2x_{k-2}y_{1}x_{1},} which gives an infinite tower of solutions to the negative Pell's equation (except for n = 1 {\displaystyle n=1} ). == Generalized Pell's equation == The equation x 2 − n y 2 = N {\displaystyle x^{2}-ny^{2}=N} is called the generalized (or general) Pell's equation. The equation u 2 − n v 2 = 1 {\displaystyle u^{2}-nv^{2}=1} is the corresponding Pell's resolvent. A recursive algorithm was given by Lagrange in 1768 for solving the equation, reducing the problem to the case | N | < n {\displaystyle |N|<{\sqrt {n}}} . Such solutions can be derived using the continued-fractions method as outlined above. If ( x 0 , y 0 ) {\displaystyle (x_{0},y_{0})} is a solution to x 2 − n y 2 = N , {\displaystyle x^{2}-ny^{2}=N,} and ( u k , v k ) {\displaystyle (u_{k},v_{k})} is a solution to u 2 − n v 2 = 1 , {\displaystyle u^{2}-nv^{2}=1,} then ( x k , y k ) {\displaystyle (x_{k},y_{k})} such that x k + y k n = ( x 0 + y 0 n ) ( u k + v k n ) {\displaystyle x_{k}+y_{k}{\sqrt {n}}={\big (}x_{0}+y_{0}{\sqrt {n}}{\big )}{\big (}u_{k}+v_{k}{\sqrt {n}}{\big )}} is a solution to x 2 − n y 2 = N {\displaystyle x^{2}-ny^{2}=N} , a principle named the multiplicative principle. The solution ( x k , y k ) {\displaystyle (x_{k},y_{k})} is called a Pell multiple of the solution ( x 0 , y 0 ) {\displaystyle (x_{0},y_{0})} . There exists a finite set of solutions to x 2 − n y 2 = N {\displaystyle x^{2}-ny^{2}=N} such that every solution is a Pell multiple of a solution from that set. In particular, if ( u , v ) {\displaystyle (u,v)} is the fundamental solution to u 2 − n v 2 = 1 {\displaystyle u^{2}-nv^{2}=1} , then each solution to the equation is a Pell multiple of a solution ( x , y ) {\displaystyle (x,y)} with | x | ≤ 1 2 | N | ( | U | + 1 ) {\displaystyle |x|\leq {\tfrac {1}{2}}{\sqrt {|N|}}\left({\sqrt {|U|}}+1\right)} and | y | ≤ 1 2 n | N | ( | U | + 1 ) {\displaystyle |y|\leq {\tfrac {1}{2{\sqrt {n}}}}{\sqrt {|N|}}\left({\sqrt {|U|}}+1\right)} , where U = u + v n {\displaystyle U=u+v{\sqrt {n}}} . If x and y are positive integer solutions to the generalized equation with | N | < n {\displaystyle |N|<{\sqrt {n}}} , then x / y {\displaystyle x/y} is a convergent to the continued fraction of n {\displaystyle {\sqrt {n}}} . Solutions to the generalized Pell's equation are used for solving certain Diophantine equations and for finding units of certain rings, and they arise in the study of SIC-POVMs in quantum information theory. 
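The multiplicative principle is easy to check numerically. A small Python sketch (the function name is ours; the base solution 3² − 7·1² = 2 and the resolvent solution (8, 3) for n = 7 are taken from the material above) generates a chain of Pell multiples:

```python
def pell_multiples(x0, y0, u, v, n, count):
    """Yield (x_k, y_k) with x_k + y_k*sqrt(n) = (x0 + y0*sqrt(n))*(u + v*sqrt(n))^k,
    where (u, v) solves the resolvent u^2 - n*v^2 = 1."""
    x, y = x0, y0
    for _ in range(count):
        yield x, y
        x, y = x * u + n * y * v, x * v + y * u

n, N = 7, 2
x0, y0 = 3, 1        # base solution: 3^2 - 7*1^2 = 2
u, v = 8, 3          # fundamental solution of the resolvent u^2 - 7*v^2 = 1
for x, y in pell_multiples(x0, y0, u, v, n, 4):
    assert x * x - n * y * y == N     # every Pell multiple solves x^2 - 7*y^2 = 2
    print(x, y)                        # (3, 1), (45, 17), (717, 271), (11427, 4319)
```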
The equation x 2 − n y 2 = 4 {\displaystyle x^{2}-ny^{2}=4} is similar to the resolvent x 2 − n y 2 = 1 {\displaystyle x^{2}-ny^{2}=1} in that if a minimal solution to x 2 − n y 2 = 4 {\displaystyle x^{2}-ny^{2}=4} can be found, then all solutions of the equation can be generated in a similar manner to the case N = 1 {\displaystyle N=1} . For certain n {\displaystyle n} , solutions to x 2 − n y 2 = 1 {\displaystyle x^{2}-ny^{2}=1} can be generated from those with x 2 − n y 2 = 4 {\displaystyle x^{2}-ny^{2}=4} , in that if n ≡ 5 ( mod 8 ) , {\displaystyle n\equiv 5{\pmod {8}},} then every third solution to x 2 − n y 2 = 4 {\displaystyle x^{2}-ny^{2}=4} has x , y {\displaystyle x,y} even, generating a solution to x 2 − n y 2 = 1 {\displaystyle x^{2}-ny^{2}=1} . == Notes == == References == == Further reading == Edwards, Harold M. (1996) [1977]. Fermat's Last Theorem: A Genetic Introduction to Algebraic Number Theory. Graduate Texts in Mathematics. Vol. 50. Springer-Verlag. ISBN 0-387-90230-9. MR 0616635. Pinch, R. G. E. (1988). "Simultaneous Pellian equations". Mathematical Proceedings of the Cambridge Philosophical Society. 103 (1): 35–46. Bibcode:1988MPCPS.103...35P. doi:10.1017/S0305004100064598. S2CID 123098216. Whitford, Edward Everett (1912). The Pell equation (PhD Thesis). Columbia University. Williams, H. C. (2002). "Solving the Pell equation". In Bennett, M. A.; Berndt, B. C.; Boston, N.; Diamond, H. G.; Hildebrand, A. J.; Philipp, W. (eds.). Surveys in number theory: Papers from the millennial conference on number theory. Natick, MA: A K Peters. pp. 325–363. ISBN 1-56881-162-4. Zbl 1043.11027. Andreescu, Titu; Andrica, Dorin (2015). Quadratic Diophantine Equations. New York: Springer. ISBN 978-0-387-35156-8. OCLC 916486370. Jacobson, Michael; Williams, Hugh (2008). Solving the Pell Equation. New York, NY: Springer-Verlag. ISBN 978-0-387-84922-5. OCLC 245561348. == External links == Weisstein, Eric W. "Pell's equation". MathWorld. O'Connor, John J.; Robertson, Edmund F., "Pell's equation", MacTutor History of Mathematics Archive, University of St Andrews Pell equation solver (n has no upper limit) Pell equation solver (n < 1010, can also return the solution to x2 − ny2 = ±1, ±2, ±3, and ±4)
Wikipedia/Pell_equation
In mathematics, a Diophantine equation is an equation, typically a polynomial equation in two or more unknowns with integer coefficients, for which only integer solutions are of interest. A linear Diophantine equation equates the sum of two or more unknowns, each multiplied by an integer coefficient, to a constant. An exponential Diophantine equation is one in which unknowns can appear in exponents. Diophantine problems have fewer equations than unknowns and involve finding integers that solve all equations simultaneously. Because such systems of equations define algebraic curves, algebraic surfaces, or, more generally, algebraic sets, their study is a part of algebraic geometry that is called Diophantine geometry. The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis. While individual equations present a kind of puzzle and have been considered throughout history, the formulation of general theories of Diophantine equations, beyond the case of linear and quadratic equations, was an achievement of the twentieth century. == Examples == Typical examples include the linear equation a x + b y = c {\displaystyle ax+by=c} , the Pell equation x 2 − n y 2 = ± 1 {\displaystyle x^{2}-ny^{2}=\pm 1} , the taxicab equation w 3 + x 3 = y 3 + z 3 {\displaystyle w^{3}+x^{3}=y^{3}+z^{3}} , and the Fermat equation x n + y n = z n {\displaystyle x^{n}+y^{n}=z^{n}} ; in such equations, w, x, y, and z are the unknowns and the other letters are given constants. == Linear Diophantine equations == === One equation === The simplest linear Diophantine equation takes the form a x + b y = c , {\displaystyle ax+by=c,} where a, b and c are given integers. The solutions are described by the following theorem: This Diophantine equation has a solution (where x and y are integers) if and only if c is a multiple of the greatest common divisor of a and b. Moreover, if (x, y) is a solution, then the other solutions have the form (x + kv, y − ku), where k is an arbitrary integer, and u and v are the quotients of a and b (respectively) by the greatest common divisor of a and b. Proof: If d is this greatest common divisor, Bézout's identity asserts the existence of integers e and f such that ae + bf = d. If c is a multiple of d, then c = dh for some integer h, and (eh, fh) is a solution. On the other hand, for every pair of integers x and y, the greatest common divisor d of a and b divides ax + by. Thus, if the equation has a solution, then c must be a multiple of d. If a = ud and b = vd, then for every solution (x, y), we have a ( x + k v ) + b ( y − k u ) = a x + b y + k ( a v − b u ) = a x + b y + k ( u d v − v d u ) = a x + b y , {\displaystyle {\begin{aligned}a(x+kv)+b(y-ku)&=ax+by+k(av-bu)\\&=ax+by+k(udv-vdu)\\&=ax+by,\end{aligned}}} showing that (x + kv, y − ku) is another solution. Finally, given two solutions such that a x 1 + b y 1 = a x 2 + b y 2 = c , {\displaystyle ax_{1}+by_{1}=ax_{2}+by_{2}=c,} one deduces that u ( x 2 − x 1 ) + v ( y 2 − y 1 ) = 0. {\displaystyle u(x_{2}-x_{1})+v(y_{2}-y_{1})=0.} As u and v are coprime, Euclid's lemma shows that v divides x2 − x1, and thus that there exists an integer k such that both x 2 − x 1 = k v , y 2 − y 1 = − k u . {\displaystyle x_{2}-x_{1}=kv,\quad y_{2}-y_{1}=-ku.} Therefore, x 2 = x 1 + k v , y 2 = y 1 − k u , {\displaystyle x_{2}=x_{1}+kv,\quad y_{2}=y_{1}-ku,} which completes the proof. 
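The theorem and its proof are effectively an algorithm: the extended Euclidean algorithm produces the Bézout pair (e, f), scaling by h = c/d gives one solution, and all others follow from the theorem. A minimal Python sketch (function names are ours):

```python
def extended_gcd(a, b):
    """Return (g, e, f) with a*e + b*f = g = gcd(a, b) (Bezout's identity)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def solve_linear_diophantine(a, b, c):
    """One integer solution (x, y) of a*x + b*y = c, or None if c is not a
    multiple of g = gcd(a, b). All solutions are then (x + k*b//g, y - k*a//g)."""
    g, e, f = extended_gcd(a, b)
    if c % g != 0:
        return None
    h = c // g
    return e * h, f * h

print(solve_linear_diophantine(12, 42, 30))   # (-15, 5): 12*(-15) + 42*5 == 30
print(solve_linear_diophantine(12, 42, 31))   # None: 31 is not a multiple of 6
```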
=== Chinese remainder theorem === The Chinese remainder theorem describes an important class of linear Diophantine systems of equations: let n 1 , … , n k {\displaystyle n_{1},\dots ,n_{k}} be k pairwise coprime integers greater than one, a 1 , … , a k {\displaystyle a_{1},\dots ,a_{k}} be k arbitrary integers, and N be the product n 1 ⋯ n k . {\displaystyle n_{1}\cdots n_{k}.} The Chinese remainder theorem asserts that the following linear Diophantine system has exactly one solution ( x , x 1 , … , x k ) {\displaystyle (x,x_{1},\dots ,x_{k})} such that 0 ≤ x < N, and that the other solutions are obtained by adding to x a multiple of N: x = a 1 + n 1 x 1 ⋮ x = a k + n k x k {\displaystyle {\begin{aligned}x&=a_{1}+n_{1}\,x_{1}\\&\;\;\vdots \\x&=a_{k}+n_{k}\,x_{k}\end{aligned}}} === System of linear Diophantine equations === More generally, every system of linear Diophantine equations may be solved by computing the Smith normal form of its matrix, in a way that is similar to the use of the reduced row echelon form to solve a system of linear equations over a field. Using matrix notation, every system of linear Diophantine equations may be written A X = C , {\displaystyle AX=C,} where A is an m × n matrix of integers, X is an n × 1 column matrix of unknowns and C is an m × 1 column matrix of integers. The computation of the Smith normal form of A provides two unimodular matrices (that is, matrices that are invertible over the integers and have ±1 as determinant) U and V of respective dimensions m × m and n × n, such that the matrix B = [ b i , j ] = U A V {\displaystyle B=[b_{i,j}]=UAV} is such that bi,i is not zero for i not greater than some integer k, and all the other entries are zero. The system to be solved may thus be rewritten as B ( V − 1 X ) = U C . {\displaystyle B(V^{-1}X)=UC.} Calling yi the entries of V−1X and di those of D = UC, this leads to the system b i , i y i = d i , 1 ≤ i ≤ k 0 = d i , k < i ≤ m . {\displaystyle {\begin{aligned}&b_{i,i}y_{i}=d_{i},\quad 1\leq i\leq k\\&0=d_{i},\quad k<i\leq m.\end{aligned}}} This system is equivalent to the given one in the following sense: A column matrix of integers x is a solution of the given system if and only if x = Vy for some column matrix of integers y such that By = D. It follows that the system has a solution if and only if bi,i divides di for i ≤ k and di = 0 for i > k. If this condition is fulfilled, the solutions of the given system are V [ d 1 b 1 , 1 ⋮ d k b k , k h k + 1 ⋮ h n ] , {\displaystyle V\,{\begin{bmatrix}{\frac {d_{1}}{b_{1,1}}}\\\vdots \\{\frac {d_{k}}{b_{k,k}}}\\h_{k+1}\\\vdots \\h_{n}\end{bmatrix}}\,,} where hk+1, …, hn are arbitrary integers. Hermite normal form may also be used for solving systems of linear Diophantine equations. However, Hermite normal form does not directly provide the solutions; to get the solutions from the Hermite normal form, one has to successively solve several linear equations. Nevertheless, Richard Zippel wrote that the Smith normal form "is somewhat more than is actually needed to solve linear diophantine equations. Instead of reducing the equation to diagonal form, we only need to make it triangular, which is called the Hermite normal form. The Hermite normal form is substantially easier to compute than the Smith normal form." Integer linear programming amounts to finding some integer solutions (optimal in some sense) of linear systems that also include inequations. 
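Returning to the Chinese remainder theorem above, the Bézout coefficients give a direct construction of the unique solution x modulo N, folding in one congruence at a time. A short Python sketch (function names are ours):

```python
def extended_gcd(a, b):
    """Return (g, e, f) with a*e + b*f = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(residues, moduli):
    """Unique 0 <= x < n1*...*nk with x = a_i (mod n_i),
    for pairwise coprime moduli (Chinese remainder theorem)."""
    x, n = 0, 1
    for a, m in zip(residues, moduli):
        g, p, q = extended_gcd(n, m)      # p*n + q*m = 1 since gcd(n, m) = 1
        x = (x * q * m + a * p * n) % (n * m)
        n *= m
    return x

x = crt([2, 3, 2], [3, 5, 7])
print(x)                                               # 23
print([(x - a) % m for a, m in zip([2, 3, 2], [3, 5, 7])])   # [0, 0, 0]
```

In the notation of the theorem, the auxiliary unknowns are then x_i = (x − a_i) / n_i, and every other solution is x plus a multiple of N = 105.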
Thus systems of linear Diophantine equations are basic in this context, and textbooks on integer programming usually have a treatment of systems of linear Diophantine equations. == Homogeneous equations == A homogeneous Diophantine equation is a Diophantine equation that is defined by a homogeneous polynomial. A typical such equation is the equation of Fermat's Last Theorem x d + y d − z d = 0. {\displaystyle x^{d}+y^{d}-z^{d}=0.} As a homogeneous polynomial in n indeterminates defines a hypersurface in the projective space of dimension n − 1, solving a homogeneous Diophantine equation is the same as finding the rational points of a projective hypersurface. Solving a homogeneous Diophantine equation is generally a very difficult problem, even in the simplest non-trivial case of three indeterminates (in the case of two indeterminates the problem is equivalent to testing if a rational number is the dth power of another rational number). An illustration of the difficulty of the problem is Fermat's Last Theorem (for d > 2, the above equation has no solution in positive integers), which needed more than three centuries of mathematicians' efforts before being solved. For degrees higher than three, most known results are theorems asserting that there are no solutions (for example Fermat's Last Theorem) or that the number of solutions is finite (for example Faltings's theorem). For degree three, there are general solving methods, which work on almost all equations that are encountered in practice, but no algorithm is known that works for every cubic equation. === Degree two === Homogeneous Diophantine equations of degree two are easier to solve. The standard solving method proceeds in two steps. One has first to find one solution, or to prove that there is no solution. When a solution has been found, all solutions are then deduced. To prove that there is no solution, one may reduce the equation modulo p. For example, the Diophantine equation x 2 + y 2 = 3 z 2 , {\displaystyle x^{2}+y^{2}=3z^{2},} does not have any other solution than the trivial solution (0, 0, 0). In fact, by dividing x, y, and z by their greatest common divisor, one may suppose that they are coprime. The squares modulo 4 are congruent to 0 and 1. Thus the left-hand side of the equation is congruent to 0, 1, or 2, and the right-hand side is congruent to 0 or 3. The two sides can therefore be congruent only if both are congruent to 0 modulo 4, which happens only if x, y, and z are all even, and are thus not coprime. Thus the only solution is the trivial solution (0, 0, 0). This shows that there is no rational point on a circle of radius 3 {\displaystyle {\sqrt {3}}} , centered at the origin. More generally, the Hasse principle allows deciding whether a homogeneous Diophantine equation of degree two has an integer solution, and computing a solution if one exists. If a non-trivial integer solution is known, one may produce all other solutions in the following way. ==== Geometric interpretation ==== Let Q ( x 1 , … , x n ) = 0 {\displaystyle Q(x_{1},\ldots ,x_{n})=0} be a homogeneous Diophantine equation, where Q ( x 1 , … , x n ) {\displaystyle Q(x_{1},\ldots ,x_{n})} is a quadratic form (that is, a homogeneous polynomial of degree 2), with integer coefficients. The trivial solution is the solution where all x i {\displaystyle x_{i}} are zero. 
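The mod 4 argument above for x² + y² = 3z² can be verified exhaustively, since only the residues of x, y, and z modulo 4 matter. A few lines of Python:

```python
# Residues of each side of x^2 + y^2 = 3*z^2 modulo 4.
lhs = {(x * x + y * y) % 4 for x in range(4) for y in range(4)}
rhs = {(3 * z * z) % 4 for z in range(4)}
print(lhs, rhs, lhs & rhs)        # {0, 1, 2} {0, 3} {0}  -> both sides must be 0 mod 4

# The left side is 0 mod 4 only when x and y are both even,
# and the right side is 0 mod 4 only when z is even:
print({(x % 2, y % 2) for x in range(4) for y in range(4)
       if (x * x + y * y) % 4 == 0})             # {(0, 0)}
print({z % 2 for z in range(4) if (3 * z * z) % 4 == 0})   # {0}
```

So any solution would have x, y, z all even, contradicting coprimality; only the trivial solution remains.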
If ( a 1 , … , a n ) {\displaystyle (a_{1},\ldots ,a_{n})} is a non-trivial integer solution of this equation, then ( a 1 , … , a n ) {\displaystyle \left(a_{1},\ldots ,a_{n}\right)} are the homogeneous coordinates of a rational point of the hypersurface defined by Q. Conversely, if ( p 1 q , … , p n q ) {\textstyle \left({\frac {p_{1}}{q}},\ldots ,{\frac {p_{n}}{q}}\right)} are homogeneous coordinates of a rational point of this hypersurface, where q , p 1 , … , p n {\displaystyle q,p_{1},\ldots ,p_{n}} are integers, then ( p 1 , … , p n ) {\displaystyle \left(p_{1},\ldots ,p_{n}\right)} is an integer solution of the Diophantine equation. Moreover, the integer solutions that define a given rational point are all sequences of the form ( k p 1 d , … , k p n d ) , {\displaystyle \left(k{\frac {p_{1}}{d}},\ldots ,k{\frac {p_{n}}{d}}\right),} where k is any integer, and d is the greatest common divisor of the p i . {\displaystyle p_{i}.} It follows that solving the Diophantine equation Q ( x 1 , … , x n ) = 0 {\displaystyle Q(x_{1},\ldots ,x_{n})=0} is completely reduced to finding the rational points of the corresponding projective hypersurface. ==== Parameterization ==== Let now A = ( a 1 , … , a n ) {\displaystyle A=\left(a_{1},\ldots ,a_{n}\right)} be an integer solution of the equation Q ( x 1 , … , x n ) = 0. {\displaystyle Q(x_{1},\ldots ,x_{n})=0.} As Q is a polynomial of degree two, a line passing through A crosses the hypersurface at a single other point, which is rational if and only if the line is rational (that is, if the line is defined by rational parameters). This allows parameterizing the hypersurface by the lines passing through A, and the rational points are those that are obtained from rational lines, that is, those that correspond to rational values of the parameters. More precisely, one may proceed as follows. By permuting the indices, one may suppose, without loss of generality, that a n ≠ 0. {\displaystyle a_{n}\neq 0.} Then one may pass to the affine case by considering the affine hypersurface defined by q ( x 1 , … , x n − 1 ) = Q ( x 1 , … , x n − 1 , 1 ) , {\displaystyle q(x_{1},\ldots ,x_{n-1})=Q(x_{1},\ldots ,x_{n-1},1),} which has the rational point R = ( r 1 , … , r n − 1 ) = ( a 1 a n , … , a n − 1 a n ) . {\displaystyle R=(r_{1},\ldots ,r_{n-1})=\left({\frac {a_{1}}{a_{n}}},\ldots ,{\frac {a_{n-1}}{a_{n}}}\right).} If this rational point is a singular point, that is, if all partial derivatives are zero at R, all lines passing through R are contained in the hypersurface, and one has a cone. The change of variables y i = x i − r i {\displaystyle y_{i}=x_{i}-r_{i}} does not change the rational points, and transforms q into a homogeneous polynomial in n − 1 variables. In this case, the problem may thus be solved by applying the method to an equation with fewer variables. If the polynomial q is a product of linear polynomials (possibly with non-rational coefficients), then it defines two hyperplanes. The intersection of these hyperplanes is a rational flat, and contains rational singular points. This case is thus a special instance of the preceding case. In the general case, consider the parametric equation of a line passing through R: x 2 = r 2 + t 2 ( x 1 − r 1 ) ⋮ x n − 1 = r n − 1 + t n − 1 ( x 1 − r 1 ) . {\displaystyle {\begin{aligned}x_{2}&=r_{2}+t_{2}(x_{1}-r_{1})\\&\;\;\vdots \\x_{n-1}&=r_{n-1}+t_{n-1}(x_{1}-r_{1}).\end{aligned}}} Substituting this in q, one gets a polynomial of degree two in x1, that is zero for x1 = r1. It is thus divisible by x1 − r1. 
The quotient is linear in x1, and may be solved for expressing x1 as a quotient of two polynomials of degree at most two in t 2 , … , t n − 1 , {\displaystyle t_{2},\ldots ,t_{n-1},} with integer coefficients: x 1 = f 1 ( t 2 , … , t n − 1 ) f n ( t 2 , … , t n − 1 ) . {\displaystyle x_{1}={\frac {f_{1}(t_{2},\ldots ,t_{n-1})}{f_{n}(t_{2},\ldots ,t_{n-1})}}.} Substituting this in the expressions for x 2 , … , x n − 1 , {\displaystyle x_{2},\ldots ,x_{n-1},} one gets, for i = 1, …, n − 1, x i = f i ( t 2 , … , t n − 1 ) f n ( t 2 , … , t n − 1 ) , {\displaystyle x_{i}={\frac {f_{i}(t_{2},\ldots ,t_{n-1})}{f_{n}(t_{2},\ldots ,t_{n-1})}},} where f 1 , … , f n {\displaystyle f_{1},\ldots ,f_{n}} are polynomials of degree at most two with integer coefficients. Then, one can return to the homogeneous case. Let, for i = 1, …, n, F i ( t 1 , … , t n − 1 ) = t 1 2 f i ( t 2 t 1 , … , t n − 1 t 1 ) , {\displaystyle F_{i}(t_{1},\ldots ,t_{n-1})=t_{1}^{2}f_{i}\left({\frac {t_{2}}{t_{1}}},\ldots ,{\frac {t_{n-1}}{t_{1}}}\right),} be the homogenization of f i . {\displaystyle f_{i}.} These quadratic polynomials with integer coefficients form a parameterization of the projective hypersurface defined by Q: x 1 = F 1 ( t 1 , … , t n − 1 ) ⋮ x n = F n ( t 1 , … , t n − 1 ) . {\displaystyle {\begin{aligned}x_{1}&=F_{1}(t_{1},\ldots ,t_{n-1})\\&\;\;\vdots \\x_{n}&=F_{n}(t_{1},\ldots ,t_{n-1}).\end{aligned}}} A point of the projective hypersurface defined by Q is rational if and only if it may be obtained from rational values of t 1 , … , t n − 1 . {\displaystyle t_{1},\ldots ,t_{n-1}.} As F 1 , … , F n {\displaystyle F_{1},\ldots ,F_{n}} are homogeneous polynomials, the point is not changed if all ti are multiplied by the same rational number. Thus, one may suppose that t 1 , … , t n − 1 {\displaystyle t_{1},\ldots ,t_{n-1}} are coprime integers. It follows that the integer solutions of the Diophantine equation are exactly the sequences ( x 1 , … , x n ) {\displaystyle (x_{1},\ldots ,x_{n})} where, for i = 1, ..., n, x i = k F i ( t 1 , … , t n − 1 ) d , {\displaystyle x_{i}=k\,{\frac {F_{i}(t_{1},\ldots ,t_{n-1})}{d}},} where k is an integer, t 1 , … , t n − 1 {\displaystyle t_{1},\ldots ,t_{n-1}} are coprime integers, and d is the greatest common divisor of the n integers F i ( t 1 , … , t n − 1 ) . {\displaystyle F_{i}(t_{1},\ldots ,t_{n-1}).} One could hope that the coprimality of the ti, could imply that d = 1. Unfortunately this is not the case, as shown in the next section. ==== Example of Pythagorean triples ==== The equation x 2 + y 2 − z 2 = 0 {\displaystyle x^{2}+y^{2}-z^{2}=0} is probably the first homogeneous Diophantine equation of degree two that has been studied. Its solutions are the Pythagorean triples. This is also the homogeneous equation of the unit circle. In this section, we show how the above method allows retrieving Euclid's formula for generating Pythagorean triples. For retrieving exactly Euclid's formula, we start from the solution (−1, 0, 1), corresponding to the point (−1, 0) of the unit circle. A line passing through this point may be parameterized by its slope: y = t ( x + 1 ) . {\displaystyle y=t(x+1).} Putting this in the circle equation x 2 + y 2 − 1 = 0 , {\displaystyle x^{2}+y^{2}-1=0,} one gets x 2 − 1 + t 2 ( x + 1 ) 2 = 0. {\displaystyle x^{2}-1+t^{2}(x+1)^{2}=0.} Dividing by x + 1, results in x − 1 + t 2 ( x + 1 ) = 0 , {\displaystyle x-1+t^{2}(x+1)=0,} which is easy to solve in x: x = 1 − t 2 1 + t 2 . 
{\displaystyle x={\frac {1-t^{2}}{1+t^{2}}}.} It follows y = t ( x + 1 ) = 2 t 1 + t 2 . {\displaystyle y=t(x+1)={\frac {2t}{1+t^{2}}}.} Homogenizing as described above, one gets all solutions as x = k s 2 − t 2 d y = k 2 s t d z = k s 2 + t 2 d , {\displaystyle {\begin{aligned}x&=k\,{\frac {s^{2}-t^{2}}{d}}\\y&=k\,{\frac {2st}{d}}\\z&=k\,{\frac {s^{2}+t^{2}}{d}},\end{aligned}}} where k is any integer, s and t are coprime integers, and d is the greatest common divisor of the three numerators. In fact, d = 2 if s and t are both odd, and d = 1 if one is odd and the other is even. The primitive triples are the solutions where k = 1 and s > t > 0. This description of the solutions differs slightly from Euclid's formula because Euclid's formula considers only the solutions such that x, y, and z are all positive, and does not distinguish between two triples that differ by the exchange of x and y. == Diophantine analysis == === Typical questions === The questions asked in Diophantine analysis include: Are there any solutions? Are there any solutions beyond some that are easily found by inspection? Are there finitely or infinitely many solutions? Can all solutions be found in theory? Can one in practice compute a full list of solutions? These traditional problems often lay unsolved for centuries, and mathematicians gradually came to understand their depth (in some cases), rather than treat them as puzzles. === Typical problem === The given information is that a father's age is 1 less than twice that of his son, and that the digits AB making up the father's age are reversed in the son's age (i.e. BA). This leads to the equation 10A + B = 2(10B + A) − 1, thus 19B − 8A = 1. Inspection gives the result A = 7, B = 3, and thus AB equals 73 years and BA equals 37 years. One may easily show that there is no other solution with A and B positive integers less than 10. Many well-known puzzles in the field of recreational mathematics lead to Diophantine equations. Examples include the cannonball problem, Archimedes's cattle problem and the monkey and the coconuts. === 17th and 18th centuries === In 1637, Pierre de Fermat scribbled on the margin of his copy of Arithmetica: "It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second into two like powers." Stated in more modern language, "The equation an + bn = cn has no solutions for any n higher than 2." Following this, he wrote: "I have discovered a truly marvelous proof of this proposition, which this margin is too narrow to contain." Such a proof eluded mathematicians for centuries, however, and as such his statement became famous as Fermat's Last Theorem. It was not until 1995 that it was proven by the British mathematician Andrew Wiles. In 1657, Fermat attempted to solve the Diophantine equation 61x2 + 1 = y2 (solved by Bhāskara II using the chakravala method about five centuries earlier). The equation was eventually solved by Euler in the early 18th century, who also solved a number of other Diophantine equations. The smallest solution of this equation in positive integers is x = 226153980, y = 1766319049 (see Chakravala method). === Hilbert's tenth problem === In 1900, David Hilbert proposed the solvability of all Diophantine equations as the tenth of his fundamental problems. In 1970, Yuri Matiyasevich solved it negatively, building on work of Julia Robinson, Martin Davis, and Hilary Putnam to prove that a general algorithm for solving all Diophantine equations cannot exist. 
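Returning to the Pythagorean-triple parameterization derived above, the formulas can be checked numerically: for coprime s > t > 0 and k = 1, dividing the three numerators by their greatest common divisor d (2 when s and t are both odd, 1 otherwise) yields primitive triples. A short Python sketch (the function name is ours):

```python
from math import gcd

def primitive_triples(bound):
    """Primitive Pythagorean triples from the parameterization
    (s^2 - t^2, 2*s*t, s^2 + t^2) with s > t > 0 coprime, k = 1,
    divided by d = 2 if s and t are both odd, else d = 1."""
    triples = []
    for s in range(2, bound):
        for t in range(1, s):
            if gcd(s, t) != 1:
                continue
            d = 2 if (s % 2 == 1 and t % 2 == 1) else 1
            x, y, z = (s * s - t * t) // d, 2 * s * t // d, (s * s + t * t) // d
            assert x * x + y * y == z * z
            triples.append((x, y, z))
    return triples

print(primitive_triples(5))
# [(3, 4, 5), (4, 3, 5), (5, 12, 13), (15, 8, 17), (7, 24, 25)]
```

Note that (3, 4, 5) and (4, 3, 5) both appear, illustrating the remark above that this description, unlike Euclid's formula, distinguishes triples that differ by exchanging x and y.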
=== Diophantine geometry === Diophantine geometry is the application of techniques from algebraic geometry to Diophantine problems; it considers equations that also have a geometric meaning. The central idea of Diophantine geometry is that of a rational point, namely a solution to a polynomial equation or a system of polynomial equations whose coordinates lie in a prescribed field K, where K is not algebraically closed. === Modern research === The oldest general method for solving a Diophantine equation—or for proving that there is no solution—is the method of infinite descent, which was introduced by Pierre de Fermat. Another general method is the Hasse principle, which uses modular arithmetic modulo all prime numbers for finding the solutions. Despite many improvements, these methods cannot solve most Diophantine equations. The difficulty of solving Diophantine equations is illustrated by Hilbert's tenth problem, which was set in 1900 by David Hilbert; it was to find an algorithm to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. Matiyasevich's theorem implies that such an algorithm cannot exist. During the 20th century, a new approach was deeply explored, consisting of using algebraic geometry. In fact, a Diophantine equation can be viewed as the equation of a hypersurface, and the solutions of the equation are the points of the hypersurface that have integer coordinates. This approach led eventually to the proof by Andrew Wiles in 1994 of Fermat's Last Theorem, stated without proof around 1637. This is another illustration of the difficulty of solving Diophantine equations. === Infinite Diophantine equations === An example of an infinite Diophantine equation is: n = a 2 + 2 b 2 + 3 c 2 + 4 d 2 + 5 e 2 + ⋯ , {\displaystyle n=a^{2}+2b^{2}+3c^{2}+4d^{2}+5e^{2}+\cdots ,} which can be expressed as "How many ways can a given integer n be written as the sum of a square plus twice a square plus thrice a square and so on?" The number of ways this can be done for each n forms an integer sequence. Infinite Diophantine equations are related to theta functions and infinite dimensional lattices. This equation always has a solution for any positive n. Compare this to: n = a 2 + 4 b 2 + 9 c 2 + 16 d 2 + 25 e 2 + ⋯ , {\displaystyle n=a^{2}+4b^{2}+9c^{2}+16d^{2}+25e^{2}+\cdots ,} which does not always have a solution for positive n. == Exponential Diophantine equations == If a Diophantine equation has one or more additional variables occurring as exponents, it is an exponential Diophantine equation. Examples include: the Ramanujan–Nagell equation, 2n − 7 = x2; the equation of the Fermat–Catalan conjecture and Beal's conjecture, am + bn = ck with inequality restrictions on the exponents; and the Erdős–Moser equation, 1k + 2k + ⋯ + (m − 1)k = mk. A general theory for such equations is not available; particular cases such as Catalan's conjecture and Fermat's Last Theorem have been tackled. However, the majority are solved via ad hoc methods such as Størmer's theorem or even trial and error. == See also == Kuṭṭaka, Aryabhata's algorithm for solving linear Diophantine equations in two unknowns == Notes == == References == Mordell, L. J. (1969). Diophantine equations. Pure and Applied Mathematics. Vol. 30. Academic Press. ISBN 0-12-506250-8. Zbl 0188.34503. Schmidt, Wolfgang M. (1991). Diophantine approximations and Diophantine equations. Lecture Notes in Mathematics. Vol. 1467. Berlin: Springer-Verlag. ISBN 3-540-54058-X. Zbl 0754.11020. Shorey, T. N.; Tijdeman, R. 
(1986). Exponential Diophantine equations. Cambridge Tracts in Mathematics. Vol. 87. Cambridge University Press. ISBN 0-521-26826-5. Zbl 0606.10011. Smart, Nigel P. (1998). The algorithmic resolution of Diophantine equations. London Mathematical Society Student Texts. Vol. 41. Cambridge University Press. ISBN 0-521-64156-X. Zbl 0907.11001. Stillwell, John (2004). Mathematics and its History (Second ed.). Springer Science + Business Media Inc. ISBN 0-387-95336-1. == Further reading == Bachmakova, Isabelle (1966). "Diophante et Fermat". Revue d'Histoire des Sciences et de Leurs Applications. 19 (4): 289–306. doi:10.3406/rhs.1966.2507. JSTOR 23905707. Bashmakova, Izabella G. Diophantus and Diophantine Equations. Moscow: Nauka 1972 [in Russian]. German translation: Diophant und diophantische Gleichungen. Birkhauser, Basel/ Stuttgart, 1974. English translation: Diophantus and Diophantine Equations. Translated by Abe Shenitzer with the editorial assistance of Hardy Grant and updated by Joseph Silverman. The Dolciani Mathematical Expositions, 20. Mathematical Association of America, Washington, DC. 1997. Bashmakova, Izabella G. "Arithmetic of Algebraic Curves from Diophantus to Poincaré" Historia Mathematica 8 (1981), 393–416. Bashmakova, Izabella G., Slavutin, E. I. History of Diophantine Analysis from Diophantus to Fermat. Moscow: Nauka 1984 [in Russian]. Bashmakova, Izabella G. "Diophantine Equations and the Evolution of Algebra", American Mathematical Society Translations 147 (2), 1990, pp. 85–100. Translated by A. Shenitzer and H. Grant. Dickson, Leonard Eugene (2005) [1920]. History of the Theory of Numbers. Volume II: Diophantine analysis. Mineola, NY: Dover Publications. ISBN 978-0-486-44233-4. MR 0245500. Zbl 1214.11002. Bogdan Grechuk (2024). Polynomial Diophantine Equations: A Systematic Approach, Springer. Rashed, Roshdi; Houzel, Christian (2013). Les "Arithmétiques" de Diophante. doi:10.1515/9783110336481. ISBN 978-3-11-033593-4. Rashed, Roshdi, Histoire de l'analyse diophantienne classique : D'Abū Kāmil à Fermat, Berlin, New York : Walter de Gruyter. == External links == Diophantine Equation. From MathWorld at Wolfram Research. "Diophantine equations", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Dario Alpern's Online Calculator. Retrieved 18 March 2009
Wikipedia/Linear_Diophantine_equation
In Indian mathematics, Brahmagupta discovered that ( x 1 2 − N y 1 2 ) ( x 2 2 − N y 2 2 ) = ( x 1 x 2 + N y 1 y 2 ) 2 − N ( x 1 y 2 + x 2 y 1 ) 2 , {\displaystyle (x_{1}^{2}-Ny_{1}^{2})(x_{2}^{2}-Ny_{2}^{2})=(x_{1}x_{2}+Ny_{1}y_{2})^{2}-N(x_{1}y_{2}+x_{2}y_{1})^{2},} a form of what is now known as Brahmagupta's identity. Using this, he was able to "compose" triples ( x 1 , y 1 , k 1 ) {\displaystyle (x_{1},y_{1},k_{1})} and ( x 2 , y 2 , k 2 ) {\displaystyle (x_{2},y_{2},k_{2})} that were solutions of x 2 − N y 2 = k {\displaystyle x^{2}-Ny^{2}=k} , to generate the new triples ( x 1 x 2 + N y 1 y 2 , x 1 y 2 + x 2 y 1 , k 1 k 2 ) {\displaystyle (x_{1}x_{2}+Ny_{1}y_{2},x_{1}y_{2}+x_{2}y_{1},k_{1}k_{2})} and ( x 1 x 2 − N y 1 y 2 , x 1 y 2 − x 2 y 1 , k 1 k 2 ) . {\displaystyle (x_{1}x_{2}-Ny_{1}y_{2},x_{1}y_{2}-x_{2}y_{1},k_{1}k_{2}).} Not only did this give a way to generate infinitely many solutions to x 2 − N y 2 = 1 {\displaystyle x^{2}-Ny^{2}=1} starting with one solution, but also, by dividing such a composition by k 1 k 2 {\displaystyle k_{1}k_{2}} , integer or "nearly integer" solutions could often be obtained. For instance, for N = 92 {\displaystyle N=92} , Brahmagupta composed the triple (10, 1, 8) (since 10 2 − 92 ( 1 2 ) = 8 {\displaystyle 10^{2}-92(1^{2})=8} ) with itself to get the new triple (192, 20, 64). Dividing throughout by 64 ("8" for x {\displaystyle x} and y {\displaystyle y} ) gave the triple (24, 5/2, 1), which when composed with itself gave the desired integer solution (1151, 120, 1). Brahmagupta solved many Pell's equations with this method, proving that it gives solutions starting from an integer solution of x 2 − N y 2 = k {\displaystyle x^{2}-Ny^{2}=k} for k = ±1, ±2, or ±4. The first general method for solving the Pell's equation (for all N) was given by Bhāskara II in 1150, extending the methods of Brahmagupta. Called the chakravala (cyclic) method, it starts by choosing two relatively prime integers a {\displaystyle a} and b {\displaystyle b} , then composing the triple ( a , b , k ) {\displaystyle (a,b,k)} (that is, one which satisfies a 2 − N b 2 = k {\displaystyle a^{2}-Nb^{2}=k} ) with the trivial triple ( m , 1 , m 2 − N ) {\displaystyle (m,1,m^{2}-N)} to get the triple ( a m + N b , a + b m , k ( m 2 − N ) ) {\displaystyle {\big (}am+Nb,a+bm,k(m^{2}-N){\big )}} , which can be scaled down to ( a m + N b k , a + b m k , m 2 − N k ) . {\displaystyle \left({\frac {am+Nb}{k}},{\frac {a+bm}{k}},{\frac {m^{2}-N}{k}}\right).} When m {\displaystyle m} is chosen so that a + b m k {\displaystyle {\frac {a+bm}{k}}} is an integer, so are the other two numbers in the triple. Among such m {\displaystyle m} , the method chooses one that minimizes m 2 − N k {\displaystyle {\frac {m^{2}-N}{k}}} and repeats the process. This method always terminates with a solution. Bhaskara used it to give the solution x = 1766319049, y = 226153980 to the N = 61 case. Several European mathematicians rediscovered how to solve Pell's equation in the 17th century. Pierre de Fermat found how to solve the equation and in a 1657 letter issued it as a challenge to English mathematicians. In a letter to Kenelm Digby, Bernard Frénicle de Bessy said that Fermat found the smallest solution for N up to 150 and challenged John Wallis to solve the cases N = 151 or 313. Both Wallis and William Brouncker gave solutions to these problems, though Wallis suggests in a letter that the solution was due to Brouncker. 
John Pell's connection with the equation is that he revised Thomas Branker's translation of Johann Rahn's 1659 book Teutsche Algebra into English, with a discussion of Brouncker's solution of the equation. Leonhard Euler mistakenly thought that this solution was due to Pell, as a result of which he named the equation after Pell. The general theory of Pell's equation, based on continued fractions and algebraic manipulations with numbers of the form P + Q a , {\displaystyle P+Q{\sqrt {a}},} was developed by Lagrange in 1766–1769. In particular, Lagrange gave a proof that the Brouncker–Wallis algorithm always terminates. == Solutions == === Fundamental solution via continued fractions === Let h i / k i {\displaystyle h_{i}/k_{i}} denote the unique sequence of convergents of the regular continued fraction for n {\displaystyle {\sqrt {n}}} . Then the pair of positive integers ( x 1 , y 1 ) {\displaystyle (x_{1},y_{1})} solving Pell's equation and minimizing x satisfies x1 = hi and y1 = ki for some i. This pair is called the fundamental solution. The sequence of integers [ a 0 ; a 1 , a 2 , … ] {\displaystyle [a_{0};a_{1},a_{2},\ldots ]} in the regular continued fraction of n {\displaystyle {\sqrt {n}}} is always eventually periodic. It can be written in the form [ ⌊ n ⌋ ; a 1 , a 2 , … , a r − 1 , 2 ⌊ n ⌋ ¯ ] {\displaystyle \left[\lfloor {\sqrt {n}}\rfloor ;\;{\overline {a_{1},a_{2},\ldots ,a_{r-1},2\lfloor {\sqrt {n}}\rfloor }}\right]} , where ⌊ ⋅ ⌋ {\displaystyle \lfloor \,\cdot \,\rfloor } denotes integer floor, and the sequence a 1 , a 2 , … , a r − 1 , 2 ⌊ n ⌋ {\displaystyle a_{1},a_{2},\ldots ,a_{r-1},2\lfloor {\sqrt {n}}\rfloor } repeats infinitely. Moreover, the tuple ( a 1 , a 2 , … , a r − 1 ) {\displaystyle (a_{1},a_{2},\ldots ,a_{r-1})} is palindromic, the same left-to-right or right-to-left. The fundamental solution is ( x 1 , y 1 ) = { ( h r − 1 , k r − 1 ) , for r even ( h 2 r − 1 , k 2 r − 1 ) , for r odd {\displaystyle (x_{1},y_{1})={\begin{cases}(h_{r-1},k_{r-1}),&{\text{ for }}r{\text{ even}}\\(h_{2r-1},k_{2r-1}),&{\text{ for }}r{\text{ odd}}\end{cases}}} The computation time for finding the fundamental solution using the continued fraction method, with the aid of the Schönhage–Strassen algorithm for fast integer multiplication, is within a logarithmic factor of the solution size, the number of digits in the pair ( x 1 , y 1 ) {\displaystyle (x_{1},y_{1})} . However, this is not a polynomial-time algorithm because the number of digits in the solution may be as large as √n, far larger than a polynomial in the number of digits in the input value n. === Additional solutions from the fundamental solution === Once the fundamental solution is found, all remaining solutions may be calculated algebraically from x k + y k n = ( x 1 + y 1 n ) k , {\displaystyle x_{k}+y_{k}{\sqrt {n}}=(x_{1}+y_{1}{\sqrt {n}})^{k},} expanding the right side, equating coefficients of n {\displaystyle {\sqrt {n}}} on both sides, and equating the other terms on both sides. This yields the recurrence relations x k + 1 = x 1 x k + n y 1 y k , {\displaystyle x_{k+1}=x_{1}x_{k}+ny_{1}y_{k},} y k + 1 = x 1 y k + y 1 x k . 
{\displaystyle y_{k+1}=x_{1}y_{k}+y_{1}x_{k}.} === Concise representation and faster algorithms === Although writing out the fundamental solution (x1, y1) as a pair of binary numbers may require a large number of bits, it may in many cases be represented more compactly in the form x 1 + y 1 n = ∏ i = 1 t ( a i + b i n ) c i {\displaystyle x_{1}+y_{1}{\sqrt {n}}=\prod _{i=1}^{t}\left(a_{i}+b_{i}{\sqrt {n}}\right)^{c_{i}}} using much smaller integers ai, bi, and ci. For instance, Archimedes' cattle problem is equivalent to the Pell equation x 2 − 410 286 423 278 424 y 2 = 1 {\displaystyle x^{2}-410\,286\,423\,278\,424\ y^{2}=1} , the fundamental solution of which has 206545 digits if written out explicitly. However, the solution is also equal to x 1 + y 1 n = u 2329 , {\displaystyle x_{1}+y_{1}{\sqrt {n}}=u^{2329},} where u = x 1 ′ + y 1 ′ 4 729 494 = ( 300 426 607 914 281 713 365 609 + 84 129 507 677 858 393 258 7766 ) 2 {\displaystyle u=x'_{1}+y'_{1}{\sqrt {4\,729\,494}}=(300\,426\,607\,914\,281\,713\,365\ {\sqrt {609}}+84\,129\,507\,677\,858\,393\,258\ {\sqrt {7766}})^{2}} and x 1 ′ {\displaystyle x'_{1}} and y 1 ′ {\displaystyle y'_{1}} only have 45 and 41 decimal digits respectively. Methods related to the quadratic sieve approach for integer factorization may be used to collect relations between prime numbers in the number field generated by √n and to combine these relations to find a product representation of this type. The resulting algorithm for solving Pell's equation is more efficient than the continued fraction method, though it still takes more than polynomial time. Under the assumption of the generalized Riemann hypothesis, it can be shown to take time exp ⁡ O ( log ⁡ N ⋅ log ⁡ log ⁡ N ) , {\displaystyle \exp O\left({\sqrt {\log N\cdot \log \log N}}\right),} where N = log n is the input size, similarly to the quadratic sieve. === Quantum algorithms === Hallgren showed that a quantum computer can find a product representation, as described above, for the solution to Pell's equation in polynomial time. Hallgren's algorithm, which can be interpreted as an algorithm for finding the group of units of a real quadratic number field, was extended to more general fields by Schmidt and Völlmer. == Example == As an example, consider the instance of Pell's equation for n = 7; that is, x 2 − 7 y 2 = 1. {\displaystyle x^{2}-7y^{2}=1.} The continued fraction of 7 {\displaystyle {\sqrt {7}}} has the form [ 2 ; 1 , 1 , 1 , 4 ¯ ] {\displaystyle [2;\ {\overline {1,1,1,4}}]} . Since the period has length 4 {\displaystyle 4} , which is an even number, the convergent producing the fundamental solution is obtained by truncating the continued fraction right before the end of the first occurrence of the period: [ 2 ; 1 , 1 , 1 ] = 8 3 {\displaystyle [2;\ 1,1,1]={\frac {8}{3}}} . The sequence of convergents for the square root of seven are Applying the recurrence formula to this solution generates the infinite sequence of solutions (1, 0); (8, 3); (127, 48); (2024, 765); (32257, 12192); (514088, 194307); (8193151, 3096720); (130576328, 49353213); ... (sequence A001081 (x) and A001080 (y) in OEIS) For the Pell's equation x 2 − 13 y 2 = 1 , {\displaystyle x^{2}-13y^{2}=1,} the continued fraction 13 = [ 3 ; 1 , 1 , 1 , 1 , 6 ¯ ] {\displaystyle {\sqrt {13}}=[3;\ {\overline {1,1,1,1,6}}]} has a period of odd length. 
For this the fundamental solution is obtained by truncating the continued fraction right before the second occurrence of the period [ 3 ; 1 , 1 , 1 , 1 , 6 , 1 , 1 , 1 , 1 ] = 649 180 {\displaystyle [3;\ 1,1,1,1,6,1,1,1,1]={\frac {649}{180}}} . Thus, the fundamental solution is ( x 1 , y 1 ) = ( 649 , 180 ) {\displaystyle (x_{1},y_{1})=(649,180)} . The smallest solution can be very large. For example, the smallest solution to x 2 − 313 y 2 = 1 {\displaystyle x^{2}-313y^{2}=1} is (32188120829134849, 1819380158564160), and this is the equation which Frenicle challenged Wallis to solve. Values of n such that the smallest solution of x 2 − n y 2 = 1 {\displaystyle x^{2}-ny^{2}=1} is greater than the smallest solution for any smaller value of n are 1, 2, 5, 10, 13, 29, 46, 53, 61, 109, 181, 277, 397, 409, 421, 541, 661, 1021, 1069, 1381, 1549, 1621, 2389, 3061, 3469, 4621, 4789, 4909, 5581, 6301, 6829, 8269, 8941, 9949, ... (sequence A033316 in the OEIS). (For these records, see OEIS: A033315 for x and OEIS: A033319 for y.) == List of fundamental solutions of Pell's equations == The following is a list of the fundamental solution to x 2 − n y 2 = 1 {\displaystyle x^{2}-ny^{2}=1} with n ≤ 128. When n is an integer square, there is no solution except for the trivial solution (1, 0). The values of x are sequence A002350 and those of y are sequence A002349 in OEIS. == Connections == Pell's equation has connections to several other important subjects in mathematics. === Algebraic number theory === Pell's equation is closely related to the theory of algebraic numbers, as the formula x 2 − n y 2 = ( x + y n ) ( x − y n ) {\displaystyle x^{2}-ny^{2}=(x+y{\sqrt {n}})(x-y{\sqrt {n}})} is the norm for the ring Z [ n ] {\displaystyle \mathbb {Z} [{\sqrt {n}}]} and for the closely related quadratic field Q ( n ) {\displaystyle \mathbb {Q} ({\sqrt {n}})} . Thus, a pair of integers ( x , y ) {\displaystyle (x,y)} solves Pell's equation if and only if x + y n {\displaystyle x+y{\sqrt {n}}} is a unit with norm 1 in Z [ n ] {\displaystyle \mathbb {Z} [{\sqrt {n}}]} . Dirichlet's unit theorem, that all units of Z [ n ] {\displaystyle \mathbb {Z} [{\sqrt {n}}]} can be expressed as powers of a single fundamental unit (and multiplication by a sign), is an algebraic restatement of the fact that all solutions to the Pell's equation can be generated from the fundamental solution. The fundamental unit can in general be found by solving a Pell-like equation but it does not always correspond directly to the fundamental solution of Pell's equation itself, because the fundamental unit may have norm −1 rather than 1 and its coefficients may be half integers rather than integers. === Chebyshev polynomials === Demeyer mentions a connection between Pell's equation and the Chebyshev polynomials: If T i ( x ) {\displaystyle T_{i}(x)} and U i ( x ) {\displaystyle U_{i}(x)} are the Chebyshev polynomials of the first and second kind respectively, then these polynomials satisfy a form of Pell's equation in any polynomial ring R [ x ] {\displaystyle R[x]} , with n = x 2 − 1 {\displaystyle n=x^{2}-1} : T i 2 − ( x 2 − 1 ) U i − 1 2 = 1. {\displaystyle T_{i}^{2}-(x^{2}-1)U_{i-1}^{2}=1.} Thus, these polynomials can be generated by the standard technique for Pell's equations of taking powers of a fundamental solution: T i + U i − 1 x 2 − 1 = ( x + x 2 − 1 ) i . 
{\displaystyle T_{i}+U_{i-1}{\sqrt {x^{2}-1}}=(x+{\sqrt {x^{2}-1}})^{i}.} It may further be observed that if ( x i , y i ) {\displaystyle (x_{i},y_{i})} are the solutions to any integer Pell's equation, then x i = T i ( x 1 ) {\displaystyle x_{i}=T_{i}(x_{1})} and y i = y 1 U i − 1 ( x 1 ) {\displaystyle y_{i}=y_{1}U_{i-1}(x_{1})} . === Continued fractions === A general development of solutions of Pell's equation x 2 − n y 2 = 1 {\displaystyle x^{2}-ny^{2}=1} in terms of continued fractions of n {\displaystyle {\sqrt {n}}} can be presented, as the solutions x and y are approximations to the square root of n and thus are a special case of continued fraction approximations for quadratic irrationals. The relationship to the continued fractions implies that the solutions to Pell's equation form a semigroup subset of the modular group. Thus, for example, if p and q satisfy Pell's equation, then ( p q n q p ) {\displaystyle {\begin{pmatrix}p&q\\nq&p\end{pmatrix}}} is a matrix of unit determinant. Products of such matrices take exactly the same form, and thus all such products yield solutions to Pell's equation. This can be understood in part to arise from the fact that successive convergents of a continued fraction share the same property: If pk−1/qk−1 and pk/qk are two successive convergents of a continued fraction, then the matrix ( p k − 1 p k q k − 1 q k ) {\displaystyle {\begin{pmatrix}p_{k-1}&p_{k}\\q_{k-1}&q_{k}\end{pmatrix}}} has determinant (−1)k. === Smooth numbers === Størmer's theorem applies Pell equations to find pairs of consecutive smooth numbers, positive integers whose prime factors are all smaller than a given value. As part of this theory, Størmer also investigated divisibility relations among solutions to Pell's equation; in particular, he showed that each solution other than the fundamental solution has a prime factor that does not divide n. == The negative Pell's equation == The negative Pell's equation is given by x 2 − n y 2 = − 1 {\displaystyle x^{2}-ny^{2}=-1} and has also been extensively studied. It can be solved by the same method of continued fractions and has solutions if and only if the period of the continued fraction has odd length. A necessary (but not sufficient) condition for solvability is that n is not divisible by 4 or by a prime of form 4k + 3. Thus, for example, x2 − 3 y2 = −1 is never solvable, but x2 − 5 y2 = −1 may be. The first few numbers n for which x2 − n y2 = −1 is solvable are 1 (with only one trivial solution) and 2, 5, 10, 13, 17, 26, 29, 37, 41, 50, 53, 58, 61, 65, 73, 74, 82, 85, 89, 97, ... (sequence A031396 in the OEIS) with infinitely many solutions. The solutions of the negative Pell's equation for 1 ≤ n ≤ 298 {\displaystyle 1\leq n\leq 298} are: Let α = Π j is odd ( 1 − 2 − j ) {\displaystyle \alpha =\Pi _{j{\text{ is odd}}}(1-2^{-j})} . The proportion of square-free n divisible by k primes of the form 4m + 1 for which the negative Pell's equation is solvable is at least α. When the number of prime divisors is not fixed, the proportion is given by 1 − α. If the negative Pell's equation does have a solution for a particular n, its fundamental solution leads to the fundamental one for the positive case by squaring both sides of the defining equation: ( x 2 − n y 2 ) 2 = ( − 1 ) 2 {\displaystyle (x^{2}-ny^{2})^{2}=(-1)^{2}} implies ( x 2 + n y 2 ) 2 − n ( 2 x y ) 2 = 1. {\displaystyle (x^{2}+ny^{2})^{2}-n(2xy)^{2}=1.} As stated above, if the negative Pell's equation is solvable, a solution can be found using the method of continued fractions as in the positive Pell's equation. The recursion relation works slightly differently, however. Since ( x + y n ) ( x − y n ) = − 1 {\displaystyle (x+y{\sqrt {n}})(x-y{\sqrt {n}})=-1} , the next solution is determined in terms of i ( x k + y k n ) = ( i ( x + y n ) ) k {\displaystyle i(x_{k}+y_{k}{\sqrt {n}})=(i(x+y{\sqrt {n}}))^{k}} whenever there is a match, that is, when k {\displaystyle k} is odd. The resulting recursion relation is (modulo a minus sign, which is immaterial due to the quadratic nature of the equation) x k = x k − 2 x 1 2 + n x k − 2 y 1 2 + 2 n y k − 2 y 1 x 1 , {\displaystyle x_{k}=x_{k-2}x_{1}^{2}+nx_{k-2}y_{1}^{2}+2ny_{k-2}y_{1}x_{1},} y k = y k − 2 x 1 2 + n y k − 2 y 1 2 + 2 x k − 2 y 1 x 1 , {\displaystyle y_{k}=y_{k-2}x_{1}^{2}+ny_{k-2}y_{1}^{2}+2x_{k-2}y_{1}x_{1},} which gives an infinite tower of solutions to the negative Pell's equation (except for n = 1 {\displaystyle n=1} ). == Generalized Pell's equation == The equation x 2 − n y 2 = N {\displaystyle x^{2}-ny^{2}=N} is called the generalized (or general) Pell's equation. The equation u 2 − n v 2 = 1 {\displaystyle u^{2}-nv^{2}=1} is the corresponding Pell's resolvent. A recursive algorithm was given by Lagrange in 1768 for solving the equation, reducing the problem to the case | N | < n {\displaystyle |N|<{\sqrt {n}}} . Such solutions can be derived using the continued-fractions method as outlined above. If ( x 0 , y 0 ) {\displaystyle (x_{0},y_{0})} is a solution to x 2 − n y 2 = N , {\displaystyle x^{2}-ny^{2}=N,} and ( u k , v k ) {\displaystyle (u_{k},v_{k})} is a solution to u 2 − n v 2 = 1 , {\displaystyle u^{2}-nv^{2}=1,} then ( x k , y k ) {\displaystyle (x_{k},y_{k})} such that x k + y k n = ( x 0 + y 0 n ) ( u k + v k n ) {\displaystyle x_{k}+y_{k}{\sqrt {n}}={\big (}x_{0}+y_{0}{\sqrt {n}}{\big )}{\big (}u_{k}+v_{k}{\sqrt {n}}{\big )}} is a solution to x 2 − n y 2 = N {\displaystyle x^{2}-ny^{2}=N} , a principle named the multiplicative principle. The solution ( x k , y k ) {\displaystyle (x_{k},y_{k})} is called a Pell multiple of the solution ( x 0 , y 0 ) {\displaystyle (x_{0},y_{0})} . There exists a finite set of solutions to x 2 − n y 2 = N {\displaystyle x^{2}-ny^{2}=N} such that every solution is a Pell multiple of a solution from that set. In particular, if ( u , v ) {\displaystyle (u,v)} is the fundamental solution to u 2 − n v 2 = 1 {\displaystyle u^{2}-nv^{2}=1} , then each solution to the equation is a Pell multiple of a solution ( x , y ) {\displaystyle (x,y)} with | x | ≤ 1 2 | N | ( | U | + 1 ) {\displaystyle |x|\leq {\tfrac {1}{2}}{\sqrt {|N|}}\left({\sqrt {|U|}}+1\right)} and | y | ≤ 1 2 n | N | ( | U | + 1 ) {\displaystyle |y|\leq {\tfrac {1}{2{\sqrt {n}}}}{\sqrt {|N|}}\left({\sqrt {|U|}}+1\right)} , where U = u + v n {\displaystyle U=u+v{\sqrt {n}}} . If x and y are positive integer solutions to the Pell's equation with | N | < n {\displaystyle |N|<{\sqrt {n}}} , then x / y {\displaystyle x/y} is a convergent to the continued fraction of n {\displaystyle {\sqrt {n}}} . Solutions to the generalized Pell's equation are used for solving certain Diophantine equations and for finding units of certain rings, and they arise in the study of SIC-POVMs in quantum information theory.
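The multiplicative principle is straightforward to exercise numerically. A minimal sketch (the function name is ours, purely for illustration): multiplying a solution of x2 − ny2 = N by a resolvent solution walks through its Pell multiples, and with N = 1 this is exactly the recurrence used earlier to list the solutions for n = 7:

    def pell_multiple(x0, y0, u, v, n):
        """(x0 + y0*sqrt(n)) * (u + v*sqrt(n)): combines a solution of
        x^2 - n*y^2 = N with a solution of the resolvent u^2 - n*v^2 = 1."""
        return x0 * u + n * y0 * v, x0 * v + y0 * u

    # N = 1, n = 7: iterating from the fundamental solution (8, 3)
    # reproduces (8, 3), (127, 48), (2024, 765), (32257, 12192), ...
    x, y = 8, 3
    for _ in range(4):
        print((x, y), x * x - 7 * y * y)   # the second entry is always 1
        x, y = pell_multiple(x, y, 8, 3, 7)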
The equation x 2 − n y 2 = 4 {\displaystyle x^{2}-ny^{2}=4} is similar to the resolvent x 2 − n y 2 = 1 {\displaystyle x^{2}-ny^{2}=1} in that if a minimal solution to x 2 − n y 2 = 4 {\displaystyle x^{2}-ny^{2}=4} can be found, then all solutions of the equation can be generated in a similar manner to the case N = 1 {\displaystyle N=1} . For certain n {\displaystyle n} , solutions to x 2 − n y 2 = 1 {\displaystyle x^{2}-ny^{2}=1} can be generated from those with x 2 − n y 2 = 4 {\displaystyle x^{2}-ny^{2}=4} , in that if n ≡ 5 ( mod 8 ) , {\displaystyle n\equiv 5{\pmod {8}},} then every third solution to x 2 − n y 2 = 4 {\displaystyle x^{2}-ny^{2}=4} has x , y {\displaystyle x,y} even, generating a solution to x 2 − n y 2 = 1 {\displaystyle x^{2}-ny^{2}=1} . == Notes == == References == == Further reading == Edwards, Harold M. (1996) [1977]. Fermat's Last Theorem: A Genetic Introduction to Algebraic Number Theory. Graduate Texts in Mathematics. Vol. 50. Springer-Verlag. ISBN 0-387-90230-9. MR 0616635. Pinch, R. G. E. (1988). "Simultaneous Pellian equations". Mathematical Proceedings of the Cambridge Philosophical Society. 103 (1): 35–46. Bibcode:1988MPCPS.103...35P. doi:10.1017/S0305004100064598. S2CID 123098216. Whitford, Edward Everett (1912). The Pell equation (PhD Thesis). Columbia University. Williams, H. C. (2002). "Solving the Pell equation". In Bennett, M. A.; Berndt, B. C.; Boston, N.; Diamond, H. G.; Hildebrand, A. J.; Philipp, W. (eds.). Surveys in number theory: Papers from the millennial conference on number theory. Natick, MA: A K Peters. pp. 325–363. ISBN 1-56881-162-4. Zbl 1043.11027. Andreescu, Titu; Andrica, Dorin (2015). Quadratic Diophantine Equations. New York: Springer. ISBN 978-0-387-35156-8. OCLC 916486370. Jacobson, Michael; Williams, Hugh (2008). Solving the Pell Equation. New York, NY: Springer-Verlag. ISBN 978-0-387-84922-5. OCLC 245561348. == External links == Weisstein, Eric W. "Pell's equation". MathWorld. O'Connor, John J.; Robertson, Edmund F., "Pell's equation", MacTutor History of Mathematics Archive, University of St Andrews Pell equation solver (n has no upper limit) Pell equation solver (n < 1010, can also return the solution to x2 − ny2 = ±1, ±2, ±3, and ±4)
Wikipedia/Pell's_equation
Goldbach's conjecture is one of the oldest and best-known unsolved problems in number theory and all of mathematics. It states that every even natural number greater than 2 is the sum of two prime numbers. The conjecture has been shown to hold for all integers less than 4×1018 but remains unproven despite considerable effort. == History == === Origins === On 7 June 1742, the Prussian mathematician Christian Goldbach wrote a letter to Leonhard Euler (letter XLIII), in which he proposed the following conjecture: Every integer that can be written as the sum of two primes can also be written as the sum of as many primes as one wishes, until all terms are units. Goldbach was following the now-abandoned convention of considering 1 to be a prime number, so that a sum of units would be a sum of primes. He then proposed a second conjecture in the margin of his letter, which implies the first: Es scheinet wenigstens, dass eine jede Zahl, die grösser ist als 2, ein aggregatum trium numerorum primorum sey. It seems at least, that every integer greater than 2 can be written as the sum of three primes. Euler replied in a letter dated 30 June 1742 and reminded Goldbach of an earlier conversation they had had ("... so Ew vormals mit mir communicirt haben ..."), in which Goldbach had remarked that the first of those two conjectures would follow from the statement Every positive even integer can be written as the sum of two primes. This is in fact equivalent to his second, marginal conjecture. In the letter dated 30 June 1742, Euler stated: Dass ... ein jeder numerus par eine summa duorum primorum sey, halte ich für ein ganz gewisses theorema, ungeachtet ich dasselbe nicht demonstriren kann. That ... every even integer is a sum of two primes, I regard as a completely certain theorem, although I cannot prove it. === Similar conjecture by Descartes === René Descartes wrote that "Every even number can be expressed as the sum of at most three primes." The proposition is similar to, but weaker than, Goldbach's conjecture. Paul Erdős said that "Descartes actually discovered this before Goldbach... but it is better that the conjecture was named for Goldbach because, mathematically speaking, Descartes was infinitely rich and Goldbach was very poor." === Partial results === The strong Goldbach conjecture is much more difficult than the weak Goldbach conjecture, which says that every odd integer greater than 5 is the sum of three primes. Using Vinogradov's method, Nikolai Chudakov, Johannes van der Corput, and Theodor Estermann showed (1937–1938) that almost all even numbers can be written as the sum of two primes (in the sense that the fraction of even numbers up to some N which can be so written tends towards 1 as N increases). In 1930, Lev Schnirelmann proved that any natural number greater than 1 can be written as the sum of not more than C prime numbers, where C is an effectively computable constant; see Schnirelmann density. Schnirelmann's constant is the lowest number C with this property. Schnirelmann himself obtained C < 800000. This result was subsequently enhanced by many authors, such as Olivier Ramaré, who in 1995 showed that every even number n ≥ 4 is in fact the sum of at most 6 primes. The best known result currently stems from the proof of the weak Goldbach conjecture by Harald Helfgott, which directly implies that every even number n ≥ 4 is the sum of at most 4 primes. In 1924, Hardy and Littlewood showed under the assumption of the generalized Riemann hypothesis that the number of even numbers up to X violating the Goldbach conjecture is much less than X1⁄2 + c for small c.
In 1948, using sieve theory methods, Alfréd Rényi showed that every sufficiently large even number can be written as the sum of a prime and an almost prime with at most K factors. Chen Jingrun showed in 1973 using sieve theory that every sufficiently large even number can be written as the sum of either two primes, or a prime and a semiprime (the product of two primes). See Chen's theorem for further information. In 1975, Hugh Lowell Montgomery and Bob Vaughan showed that "most" even numbers are expressible as the sum of two primes. More precisely, they showed that there exist positive constants c and C such that for all sufficiently large numbers N, every even number less than N is the sum of two primes, with at most CN1 − c exceptions. In particular, the set of even integers that are not the sum of two primes has density zero. In 1951, Yuri Linnik proved the existence of a constant K such that every sufficiently large even number is the sum of two primes and at most K powers of 2. János Pintz and Imre Ruzsa found in 2020 that K = 8 works. Assuming the generalized Riemann hypothesis, K = 7 also works, as shown by Roger Heath-Brown and Jan-Christoph Schlage-Puchta in 2002. A proof for the weak conjecture was submitted in 2013 by Harald Helfgott to Annals of Mathematics Studies series. Although the article was accepted, Helfgott decided to undertake the major modifications suggested by the referee. Despite several revisions, Helfgott's proof has not yet appeared in a peer-reviewed publication. The weak conjecture is implied by the strong conjecture, as, if n − 3 is a sum of two primes, then n is a sum of three primes. However, the converse implication and thus the strong Goldbach conjecture would remain unproven if Helfgott's proof is correct. === Computational results === For small values of n, the strong Goldbach conjecture (and hence the weak Goldbach conjecture) can be verified directly. For instance, in 1938, Nils Pipping laboriously verified the conjecture up to n = 100000. With the advent of computers, many more values of n have been checked; T. Oliveira e Silva ran a distributed computer search that has verified the conjecture for n ≤ 4×1018 (and double-checked up to 4×1017) as of 2013. One record from this search is that 3325581707333960528 is the smallest number that cannot be written as a sum of two primes where one is smaller than 9781. === In popular culture === Goldbach's Conjecture (Chinese: 哥德巴赫猜想) is the title of the biography of Chinese mathematician and number theorist Chen Jingrun, written by Xu Chi. The conjecture is a central point in the plot of the 1992 novel Uncle Petros and Goldbach's Conjecture by Greek author Apostolos Doxiadis, in the short story "Sixty Million Trillion Combinations" by Isaac Asimov and also in the 2008 mystery novel No One You Know by Michelle Richmond. Goldbach's conjecture is part of the plot of the 2007 Spanish film Fermat's Room. Goldbach's conjecture is featured as the main topic of research of the titular character Marguerite in the 2023 French-Swiss film Marguerite's Theorem. == Formal statement == Each of the three conjectures has a natural analog in terms of the modern definition of a prime, under which 1 is excluded. A modern version of the first conjecture is: Every integer that can be written as the sum of two primes can also be written as the sum of as many primes as one wishes, until either all terms are two (if the integer is even) or one term is three and all other terms are two (if the integer is odd). A modern version of the marginal conjecture is: Every integer greater than 5 can be written as the sum of three primes. And a modern version of Goldbach's older conjecture of which Euler reminded him is: Every even integer greater than 2 can be written as the sum of two primes. These modern versions might not be entirely equivalent to the corresponding original statements.
For example, if there were an even integer N = p + 1 larger than 4, for p a prime, that could not be expressed as the sum of two primes in the modern sense, then it would be a counterexample to the modern version of the third conjecture (without being a counterexample to the original version). The modern version is thus probably stronger (but in order to confirm that, one would have to prove that the first version, freely applied to any positive even integer n, could not possibly rule out the existence of such a specific counterexample N). In any case, the modern statements have the same relationships with each other as the older statements did. That is, the second and third modern statements are equivalent, and either implies the first modern statement. The third modern statement (equivalent to the second) is the form in which the conjecture is usually expressed today. It is also known as the "strong", "even", or "binary" Goldbach conjecture. A weaker form of the second modern statement, known as "Goldbach's weak conjecture", the "odd Goldbach conjecture", or the "ternary Goldbach conjecture", asserts that every odd integer greater than 5 can be written as the sum of three primes. == Heuristic justification == Statistical considerations that focus on the probabilistic distribution of prime numbers present informal evidence in favour of the conjecture (in both the weak and strong forms) for sufficiently large integers: the greater the integer, the more ways there are available for that number to be represented as the sum of two or three other numbers, and the more "likely" it becomes that at least one of these representations consists entirely of primes. A very crude version of the heuristic probabilistic argument (for the strong form of the Goldbach conjecture) is as follows. The prime number theorem asserts that an integer m selected at random has roughly a 1/ln m chance of being prime. Thus if n is a large even integer and m is a number between 3 and n/2, then one might expect the probability of m and n − m simultaneously being prime to be 1/(ln m ln(n − m)). If one pursues this heuristic, one might expect the total number of ways to write a large even integer n as the sum of two odd primes to be roughly ∑ m = 3 n 2 1 ln ⁡ m 1 ln ⁡ ( n − m ) ≈ n 2 ( ln ⁡ n ) 2 . {\displaystyle \sum _{m=3}^{\frac {n}{2}}{\frac {1}{\ln m}}{\frac {1}{\ln(n-m)}}\approx {\frac {n}{2(\ln n)^{2}}}.} Since ln n ≪ √n, this quantity goes to infinity as n increases, and one would expect that every large even integer has not just one representation as the sum of two primes, but in fact very many such representations. This heuristic argument is actually somewhat inaccurate because it assumes that the events of m and n − m being prime are statistically independent of each other. For instance, if m is odd, then n − m is also odd, and if m is even, then n − m is even, a non-trivial relation because, besides the number 2, only odd numbers can be prime. Similarly, if n is divisible by 3, and m was already a prime other than 3, then n − m would also be coprime to 3 and thus be slightly more likely to be prime than a general number. Pursuing this type of analysis more carefully, G. H.
Hardy and John Edensor Littlewood in 1923 conjectured (as part of their Hardy–Littlewood prime tuple conjecture) that for any fixed c ≥ 2, the number of representations of a large integer n as the sum of c primes n = p1 + ⋯ + pc with p1 ≤ ⋯ ≤ pc should be asymptotically equal to ( ∏ p p γ c , p ( n ) ( p − 1 ) c ) ∫ 2 ≤ x 1 ≤ ⋯ ≤ x c : x 1 + ⋯ + x c = n d x 1 ⋯ d x c − 1 ln ⁡ x 1 ⋯ ln ⁡ x c , {\displaystyle \left(\prod _{p}{\frac {p\gamma _{c,p}(n)}{(p-1)^{c}}}\right)\int _{2\leq x_{1}\leq \cdots \leq x_{c}:x_{1}+\cdots +x_{c}=n}{\frac {dx_{1}\cdots dx_{c-1}}{\ln x_{1}\cdots \ln x_{c}}},} where the product is over all primes p, and γc,p(n) is the number of solutions to the equation n = q1 + ⋯ + qc mod p in modular arithmetic, subject to the constraints q1, …, qc ≠ 0 mod p. This formula has been rigorously proven to be asymptotically valid for c ≥ 3 from the work of Ivan Matveevich Vinogradov, but is still only a conjecture when c = 2. In the latter case, the above formula simplifies to 0 when n is odd, and to 2 Π 2 ( ∏ p ∣ n ; p ≥ 3 p − 1 p − 2 ) ∫ 2 n d x ( ln ⁡ x ) 2 ≈ 2 Π 2 ( ∏ p ∣ n ; p ≥ 3 p − 1 p − 2 ) n ( ln ⁡ n ) 2 {\displaystyle 2\Pi _{2}\left(\prod _{p\mid n;p\geq 3}{\frac {p-1}{p-2}}\right)\int _{2}^{n}{\frac {dx}{(\ln x)^{2}}}\approx 2\Pi _{2}\left(\prod _{p\mid n;p\geq 3}{\frac {p-1}{p-2}}\right){\frac {n}{(\ln n)^{2}}}} when n is even, where Π2 is Hardy–Littlewood's twin prime constant Π 2 := ∏ p p r i m e ≥ 3 ( 1 − 1 ( p − 1 ) 2 ) ≈ 0.66016 18158 46869 57392 78121 10014 … {\displaystyle \Pi _{2}:=\prod _{p\;{\rm {prime}}\geq 3}\left(1-{\frac {1}{(p-1)^{2}}}\right)\approx 0.66016\,18158\,46869\,57392\,78121\,10014\dots } This is sometimes known as the extended Goldbach conjecture. The strong Goldbach conjecture is in fact very similar to the twin prime conjecture, and the two conjectures are believed to be of roughly comparable difficulty. == Goldbach partition function == Two primes that sum to an even integer are called a Goldbach Partition. The Goldbach partition function is the function that associates to each even integer the number of ways it can be decomposed into a sum of two primes. Its graph looks like a comet and is therefore called Goldbach's comet. Goldbach's comet suggests tight upper and lower bounds on the number of representations of an even number as the sum of two primes, and also that the number of these representations depend strongly on the value modulo 3 of the number. == Related problems == Although Goldbach's conjecture implies that every positive integer greater than one can be written as a sum of at most three primes, it is not always possible to find such a sum using a greedy algorithm that uses the largest possible prime at each step. The Pillai sequence tracks the numbers requiring the largest number of primes in their greedy representations. Similar problems to Goldbach's conjecture exist in which primes are replaced by other particular sets of numbers, such as the squares: It was proven by Lagrange that every positive integer is the sum of four squares. See Waring's problem and the related Waring–Goldbach problem on sums of powers of primes. Hardy and Littlewood listed as their Conjecture I: "Every large odd number (n > 5) is the sum of a prime and the double of a prime". This conjecture is known as Lemoine's conjecture and is also called Levy's conjecture. 
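The partition counts discussed above are cheap to compute for small n, which makes both the positivity asserted by the conjecture and the fluctuations behind Goldbach's comet easy to see. A minimal sketch using a prime sieve (the function name is ours, for illustration only):

    from math import isqrt

    def goldbach_partitions(limit):
        """g(n) = number of ways to write even n as p + q with p <= q both
        prime, computed for all even 4 <= n <= limit with a Sieve of
        Eratosthenes."""
        sieve = [True] * (limit + 1)
        sieve[0] = sieve[1] = False
        for i in range(2, isqrt(limit) + 1):
            if sieve[i]:
                sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
        return {n: sum(1 for p in range(2, n // 2 + 1) if sieve[p] and sieve[n - p])
                for n in range(4, limit + 1, 2)}

    g = goldbach_partitions(10_000)
    assert min(g.values()) > 0   # no counterexample in this range
    print(g[100])                # 6: 3+97, 11+89, 17+83, 29+71, 41+59, 47+53

In such a plot of g(n) against n, the counts split into visible bands according to the small prime divisors of n, in line with the factor over p | n in the conjectured asymptotic formula above.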
The Goldbach conjecture for practical numbers, a prime-like sequence of integers, was stated by Margenstern in 1984, and proved by Melfi in 1996: every even number is a sum of two practical numbers. Harvey Dubner proposed a strengthening of the Goldbach conjecture that states that every even integer greater than 4208 is the sum of two twin primes (not necessarily belonging to the same pair). Only 34 even integers less than 4208 are not the sum of two twin primes; Dubner has verified computationally that this list is complete up to 2 ⋅ 10 10 . {\displaystyle 2\cdot 10^{10}.} A proof of this stronger conjecture would not only imply Goldbach's conjecture, but also the twin prime conjecture. According to Bertrand's postulate, for every integer n > 1 {\displaystyle n>1} , there is always at least one prime p {\displaystyle p} such that n < p < 2 n . {\displaystyle n<p<2n.} If the postulate were false, there would exist some integer n {\displaystyle n} for which no prime numbers lie between n {\displaystyle n} and 2 n {\displaystyle 2n} , making it impossible to express 2 n {\displaystyle 2n} as a sum of two primes. Goldbach's conjecture is used when studying computational complexity. The connection is made through the Busy Beaver function, where BB(n) is the maximum number of steps taken by any n state Turing machine that halts. There is a 27-state Turing machine that halts if and only if Goldbach's conjecture is false. Hence if BB(27) was known, and the Turing machine did not stop in that number of steps, it would be known to run forever and hence no counterexamples exist (which proves the conjecture true). This is a completely impractical way to settle the conjecture; instead it is used to suggest that BB(27) will be very hard to compute, at least as difficult as settling the Goldbach conjecture. == References == == Further reading == Deshouillers, J.-M.; Effinger, G.; te Riele, H.; Zinoviev, D. (1997). "A complete Vinogradov 3-primes theorem under the Riemann hypothesis" (PDF). Electronic Research Announcements of the American Mathematical Society. 3 (15): 99–104. doi:10.1090/S1079-6762-97-00031-0. Montgomery, H. L.; Vaughan, R. C. (1975). "The exceptional set in Goldbach's problem" (PDF). Acta Arithmetica. 27: 353–370. doi:10.4064/aa-27-1-353-370. Terence Tao proved that all odd numbers are at most the sum of five primes. Goldbach Conjecture at MathWorld. == External links == Media related to Goldbach's conjecture at Wikimedia Commons "Goldbach problem", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Goldbach's original letter to Euler — PDF format (in German and Latin) Goldbach's conjecture, part of Chris Caldwell's Prime Pages. Goldbach conjecture verification, Tomás Oliveira e Silva's distributed computer search.
Wikipedia/Goldbach's_conjecture
An Introduction to the Theory of Numbers is a classic textbook in the field of number theory, by G. H. Hardy and E. M. Wright. It is on the list of 173 books essential for undergraduate math libraries. The book grew out of a series of lectures by Hardy and Wright and was first published in 1938. The third edition added an elementary proof of the prime number theorem, and the sixth edition added a chapter on elliptic curves. == See also == List of important publications in mathematics == References == Bell, E. T. (1939), "Book Review: An Introduction to the Theory of Numbers", Bulletin of the American Mathematical Society, 45 (7): 507–509, doi:10.1090/S0002-9904-1939-07025-0, ISSN 0002-9904 Hardy, Godfrey Harold; Wright, E. M. (1938), An introduction to the theory of numbers. (First ed.), Oxford: Clarendon Press, JFM 64.0093.03, Zbl 0020.29201{{citation}}: CS1 maint: publisher location (link) Hardy, Godfrey Harold; Wright, E. M. (1954) [1938], An introduction to the theory of numbers (Third ed.), Oxford, at the Clarendon Press, MR 0067125 Hardy, Godfrey Harold; Wright, E. M. (1971) [1938], An introduction to the theory of numbers (Fourth ed.), The Clarendon Press Oxford University Press Hardy, Godfrey Harold; Wright, E. M. (1979) [1938], An introduction to the theory of numbers (Fifth ed.), The Clarendon Press Oxford University Press, ISBN 978-0-19-853171-5, MR 0568909 Hardy, Godfrey Harold; Wright, E. M. (2008) [1938], Heath-Brown, D. R.; Silverman, J. H. (eds.), An introduction to the theory of numbers (Sixth ed.), Oxford University Press, ISBN 978-0-19-921986-5, MR 2445243 == Reviews == E. T. Bell (July 1939). "Review: G. H. Hardy and E. M. Wright, An Introduction to the Theory of Numbers", Bull. Amer. Math. Soc. 45(7): pp. 507–509 Anderson, Ian (2010). "Reviews - An introduction to the theory of numbers (sixth edition), by G. H. Hardy and E. M. Wright. Pp. 620. 2008. £30 (paperback). ISBN: 978-0-19-921986-5 (Oxford University Press)". The Mathematical Gazette. 94 (529): 184–184. doi:10.1017/S0025557200007464. ISSN 0025-5572. Robertson, Edmund; O'Connor, John (July 2020). "Reviews of G H Hardy and E M Wright's Theory of Numbers". MacTutor History of Mathematics Archive. Retrieved 29 May 2025. == Citations ==
Wikipedia/An_Introduction_to_the_Theory_of_Numbers
Proof by exhaustion, also known as proof by cases, proof by case analysis, complete induction or the brute force method, is a method of mathematical proof in which the statement to be proved is split into a finite number of cases or sets of equivalent cases, and where each type of case is checked to see if the proposition in question holds. This is a method of direct proof. A proof by exhaustion typically contains two stages: A proof that the set of cases is exhaustive; i.e., that each instance of the statement to be proved matches the conditions of (at least) one of the cases. A proof of each of the cases. The prevalence of digital computers has greatly increased the convenience of using the method of exhaustion (e.g., the first computer-assisted proof of four color theorem in 1976), though such approaches can also be challenged on the basis of mathematical elegance. Expert systems can be used to arrive at answers to many of the questions posed to them. In theory, the proof by exhaustion method can be used whenever the number of cases is finite. However, because most mathematical sets are infinite, this method is rarely used to derive general mathematical results. In the Curry–Howard isomorphism, proof by exhaustion and case analysis are related to ML-style pattern matching. == Example == Proof by exhaustion can be used to prove that if an integer is a perfect cube, then it must be either a multiple of 9, 1 more than a multiple of 9, or 1 less than a multiple of 9. Proof: Each perfect cube is the cube of some integer n, where n is either a multiple of 3, 1 more than a multiple of 3, or 1 less than a multiple of 3. So these three cases are exhaustive: Case 1: If n = 3p, then n3 = 27p3, which is a multiple of 9. Case 2: If n = 3p + 1, then n3 = 27p3 + 27p2 + 9p + 1, which is 1 more than a multiple of 9. For instance, if n = 4 then n3 = 64 = 9×7 + 1. Case 3: If n = 3p − 1, then n3 = 27p3 − 27p2 + 9p − 1, which is 1 less than a multiple of 9. For instance, if n = 5 then n3 = 125 = 9×14 − 1. Q.E.D. == Elegance == Mathematicians prefer to avoid proofs by exhaustion with large numbers of cases, which are viewed as inelegant. An illustration as to how such proofs might be inelegant is to look at the following proofs that all modern Summer Olympic Games are held in years which are divisible by 4: Proof: The first modern Summer Olympics were held in 1896, and then every 4 years thereafter (neglecting exceptional situations such as when the games' schedule were disrupted by World War I, World War II and the COVID-19 pandemic.). Since 1896 = 474 × 4 is divisible by 4, the next Olympics would be in year 474 × 4 + 4 = (474 + 1) × 4, which is also divisible by four, and so on (this is a proof by mathematical induction). Therefore, the statement is proved. The statement can also be proved by exhaustion by listing out every year in which the Summer Olympics were held, and checking that every one of them can be divided by four. With 28 total Summer Olympics as of 2016, this is a proof by exhaustion with 28 cases. In addition to being less elegant, the proof by exhaustion will also require an extra case each time a new Summer Olympics is held. This is to be contrasted with the proof by mathematical induction, which proves the statement indefinitely into the future. == Number of cases == There is no upper limit to the number of cases allowed in a proof by exhaustion. Sometimes there are only two or three cases. Sometimes there may be thousands or even millions. 
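Whatever the case count, the method is well suited to machine checking. For the three-case cube proof above, the following throwaway Python check (ours, not part of the original proof) exhausts all residues of n modulo 9, which suffices because (n + 9)^3 ≡ n^3 (mod 9):

    # Cubes modulo 9 only ever take the values 0, 1, and 8 (i.e. -1 mod 9),
    # exactly as the three-case proof concludes.
    residues = {pow(n, 3, 9) for n in range(9)}
    print(sorted(residues))        # [0, 1, 8]
    assert residues == {0, 1, 8}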
For example, rigorously solving a chess endgame puzzle might involve considering a very large number of possible positions in the game tree of that problem. The first proof of the four colour theorem was a proof by exhaustion with 1834 cases. This proof was controversial because the majority of the cases were checked by a computer program, not by hand. The shortest known proof of the four colour theorem today still has over 600 cases. In general the probability of an error in the whole proof increases with the number of cases. A proof with a large number of cases leaves an impression that the theorem is only true by coincidence, and not because of some underlying principle or connection. Other types of proofs—such as proof by induction (mathematical induction)—are considered more elegant. However, there are some important theorems for which no other method of proof has been found, such as The proof that there is no finite projective plane of order 10. The classification of finite simple groups. The Kepler conjecture. The Boolean Pythagorean triples problem. == See also == British Museum algorithm Computer-assisted proof Enumerative induction Mathematical induction Proof by contradiction Disjunction elimination == Notes ==
Wikipedia/Brute_force_method
The term figurate number is used by different writers for members of different sets of numbers, generalizing from triangular numbers to different shapes (polygonal numbers) and different dimensions (polyhedral numbers). The ancient Greek mathematicians already considered triangular numbers, polygonal numbers, tetrahedral numbers, and pyramidal numbers, and subsequent mathematicians have included other classes of these numbers including numbers defined from other types of polyhedra and from their analogs in other dimensions. == Terminology == Some kinds of figurate number were discussed in the 16th and 17th centuries under the name "figural number". In historical works about Greek mathematics the preferred term used to be figured number. In a use going back to Jacob Bernoulli's Ars Conjectandi, the term figurate number is used for triangular numbers made up of successive integers, tetrahedral numbers made up of successive triangular numbers, etc. These turn out to be the binomial coefficients. In this usage the square numbers (4, 9, 16, 25, ...) would not be considered figurate numbers when viewed as arranged in a square. A number of other sources use the term figurate number as synonymous for the polygonal numbers, either just the usual kind or both those and the centered polygonal numbers. == History == The mathematical study of figurate numbers is said to have originated with Pythagoras, possibly based on Babylonian or Egyptian precursors. Generating whichever class of figurate numbers the Pythagoreans studied using gnomons is also attributed to Pythagoras. Unfortunately, there is no trustworthy source for these claims, because all surviving writings about the Pythagoreans are from centuries later. Speusippus is the earliest source to expose the view that ten, as the fourth triangular number, was in fact the tetractys, supposed to be of great importance for Pythagoreanism. Figurate numbers were a concern of the Pythagorean worldview. It was well understood that some numbers could have many figurations, e.g. 36 is both a square and a triangle and also various rectangles. The modern study of figurate numbers goes back to Pierre de Fermat, specifically the Fermat polygonal number theorem. Later, it became a significant topic for Euler, who gave an explicit formula for all triangular numbers that are also perfect squares, among many other discoveries relating to figurate numbers. Figurate numbers have played a significant role in modern recreational mathematics. In research mathematics, figurate numbers are studied by way of the Ehrhart polynomials, polynomials that count the number of integer points in a polygon or polyhedron when it is expanded by a given factor. == Triangular numbers and their analogs in higher dimensions == The triangular numbers for n = 1, 2, 3, ... are the result of the juxtaposition of the linear numbers (linear gnomons) for n = 1, 2, 3, ...: These are the binomial coefficients ( n + 1 2 ) {\displaystyle \textstyle {\binom {n+1}{2}}} . This is the case r = 2 of the fact that the rth diagonal of Pascal's triangle for r ≥ 0 consists of the figurate numbers for the r-dimensional analogs of triangles (r-dimensional simplices). The simplicial polytopic numbers for r = 1, 2, 3, 4, ...
are: P 1 ( n ) = n 1 = ( n + 0 1 ) = ( n 1 ) {\displaystyle P_{1}(n)={\frac {n}{1}}={\binom {n+0}{1}}={\binom {n}{1}}} (linear numbers), P 2 ( n ) = n ( n + 1 ) 2 = ( n + 1 2 ) {\displaystyle P_{2}(n)={\frac {n(n+1)}{2}}={\binom {n+1}{2}}} (triangular numbers), P 3 ( n ) = n ( n + 1 ) ( n + 2 ) 6 = ( n + 2 3 ) {\displaystyle P_{3}(n)={\frac {n(n+1)(n+2)}{6}}={\binom {n+2}{3}}} (tetrahedral numbers), P 4 ( n ) = n ( n + 1 ) ( n + 2 ) ( n + 3 ) 24 = ( n + 3 4 ) {\displaystyle P_{4}(n)={\frac {n(n+1)(n+2)(n+3)}{24}}={\binom {n+3}{4}}} (pentachoric numbers, pentatopic numbers, 4-simplex numbers), ⋮ {\displaystyle \qquad \vdots } P r ( n ) = n ( n + 1 ) ( n + 2 ) ⋯ ( n + r − 1 ) r ! = ( n + ( r − 1 ) r ) {\displaystyle P_{r}(n)={\frac {n(n+1)(n+2)\cdots (n+r-1)}{r!}}={\binom {n+(r-1)}{r}}} (r-topic numbers, r-simplex numbers). The terms square number and cubic number derive from their geometric representation as a square or cube. The difference of two positive triangular numbers is a trapezoidal number. == Gnomon == The gnomon is the piece added to a figurate number to transform it to the next larger one. For example, the gnomon of the square number is the odd number, of the general form 2n + 1, n = 0, 1, 2, 3, .... The square of size 8 composed of gnomons looks like this: 1 2 3 4 5 6 7 8 2 2 3 4 5 6 7 8 3 3 3 4 5 6 7 8 4 4 4 4 5 6 7 8 5 5 5 5 5 6 7 8 6 6 6 6 6 6 7 8 7 7 7 7 7 7 7 8 8 8 8 8 8 8 8 8 {\displaystyle {\begin{matrix}1&2&3&4&5&6&7&8\\2&2&3&4&5&6&7&8\\3&3&3&4&5&6&7&8\\4&4&4&4&5&6&7&8\\5&5&5&5&5&6&7&8\\6&6&6&6&6&6&7&8\\7&7&7&7&7&7&7&8\\8&8&8&8&8&8&8&8\end{matrix}}} To transform from the n-square (the square of size n) to the (n + 1)-square, one adjoins 2n + 1 elements: one to the end of each row (n elements), one to the end of each column (n elements), and a single one to the corner. For example, when transforming the 7-square to the 8-square, we add 15 elements; these adjunctions are the 8s in the above figure. This gnomonic technique also provides a mathematical proof that the sum of the first n odd numbers is n2; the figure illustrates 1 + 3 + 5 + 7 + 9 + 11 + 13 + 15 = 64 = 82. There is a similar gnomon with centered hexagonal numbers adding up to make cubes of each integer number. == Notes == == Further reading == Deza, Elena; Deza, Michel Marie (2012), Figurate Numbers, World Scientific, ISBN 978-981-4355-48-3
Wikipedia/Figurate_numbers
In mathematical order theory, an ideal is a special subset of a partially ordered set (poset). Although this term historically was derived from the notion of a ring ideal of abstract algebra, it has subsequently been generalized to a different notion. Ideals are of great importance for many constructions in order and lattice theory. == Definitions == A subset I of a partially ordered set ( P , ≤ ) {\displaystyle (P,\leq )} is an ideal, if the following conditions hold: I is non-empty, for every x in I and y in P, y ≤ x implies that y is in I (I is a lower set), for every x, y in I, there is some element z in I, such that x ≤ z and y ≤ z (I is a directed set). While this is the most general way to define an ideal for arbitrary posets, it was originally defined for lattices only. In this case, the following equivalent definition can be given: a subset I of a lattice ( P , ≤ ) {\displaystyle (P,\leq )} is an ideal if and only if it is a lower set that is closed under finite joins (suprema); that is, it is nonempty and for all x, y in I, the element x ∨ y {\displaystyle x\vee y} of P is also in I. A weaker notion of order ideal is defined to be a subset of a poset P that satisfies the above conditions 1 and 2. In other words, an order ideal is simply a lower set. Similarly, an ideal can also be defined as a "directed lower set". The dual notion of an ideal, i.e., the concept obtained by reversing all ≤ and exchanging ∨ {\displaystyle \vee } with ∧ , {\displaystyle \wedge ,} is a filter. Frink ideals, pseudoideals and Doyle pseudoideals are different generalizations of the notion of a lattice ideal. An ideal or filter is said to be proper if it is not equal to the whole set P. The smallest ideal that contains a given element p is a principal ideal and p is said to be a principal element of the ideal in this situation. The principal ideal ↓ p {\displaystyle \downarrow p} for a principal p is thus given by ↓ p = {x ∈ P | x ≤ p}. == Terminology confusion == The above definitions of "ideal" and "order ideal" are the standard ones, but there is some confusion in terminology. Sometimes the words and definitions such as "ideal", "order ideal", "Frink ideal", or "partial order ideal" mean one another. == Prime ideals == An important special case of an ideal is constituted by those ideals whose set-theoretic complements are filters, i.e. ideals in the inverse order. Such ideals are called prime ideals. Also note that, since we require ideals and filters to be non-empty, every prime ideal is necessarily proper. For lattices, prime ideals can be characterized as follows: A subset I of a lattice ( P , ≤ ) {\displaystyle (P,\leq )} is a prime ideal, if and only if I is a proper ideal of P, and for all elements x and y of P, x ∧ y {\displaystyle x\wedge y} in I implies that x ∈ I or y ∈ I. It is easily checked that this is indeed equivalent to stating that P ∖ I {\displaystyle P\setminus I} is a filter (which is then also prime, in the dual sense). For a complete lattice the further notion of a completely prime ideal is meaningful. It is defined to be a proper ideal I with the additional property that, whenever the meet (infimum) of some arbitrary set A is in I, some element of A is also in I. So this is just a specific prime ideal that extends the above conditions to infinite meets. The existence of prime ideals is in general not obvious, and often a satisfactory amount of prime ideals cannot be derived within ZF (Zermelo–Fraenkel set theory without the axiom of choice). 
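Setting the existence question aside for a moment: on a finite poset the three defining conditions given earlier can be tested by direct enumeration. The sketch below is ours (the helper names and the divisibility example are illustrative assumptions, not from the article); it checks the ideal axioms and builds a principal ideal:

    from itertools import combinations

    def is_ideal(subset, elements, leq):
        """Test the three ideal axioms on a finite poset whose order is
        given by a predicate leq(x, y) meaning x <= y."""
        I = set(subset)
        if not I:                                    # 1. non-empty
            return False
        for x in I:                                  # 2. lower set
            if any(leq(y, x) and y not in I for y in elements):
                return False
        for x, y in combinations(I, 2):              # 3. directed
            if not any(leq(x, z) and leq(y, z) for z in I):
                return False
        return True

    def principal_ideal(p, elements, leq):
        """The principal ideal of p: all x with x <= p."""
        return {x for x in elements if leq(x, p)}

    # The divisors of 12 ordered by divisibility form a lattice.
    divisors = [1, 2, 3, 4, 6, 12]
    divides = lambda a, b: b % a == 0
    down_6 = principal_ideal(6, divisors, divides)
    print(sorted(down_6), is_ideal(down_6, divisors, divides))  # [1, 2, 3, 6] True
    print(is_ideal({4, 6}, divisors, divides))                  # False: not a lower set

In this example the ideal {1, 2, 3, 6} is also closed under joins (least common multiples), matching the equivalent lattice definition given above.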
This issue is discussed in various prime ideal theorems, which are necessary for many applications that require prime ideals. == Maximal ideals == An ideal I is a maximal ideal if it is proper and there is no proper ideal J that is a strict superset of I. Likewise, a filter F is maximal if it is proper and there is no proper filter that is a strict superset. When a poset is a distributive lattice, maximal ideals and filters are necessarily prime, while the converse of this statement is false in general. Maximal filters are sometimes called ultrafilters, but this terminology is often reserved for Boolean algebras, where a maximal filter (ideal) is a filter (ideal) that contains exactly one of the elements {a, ¬a}, for each element a of the Boolean algebra. In Boolean algebras, the terms prime ideal and maximal ideal coincide, as do the terms prime filter and maximal filter. There is another interesting notion of maximality of ideals: Consider an ideal I and a filter F such that I is disjoint from F. We are interested in an ideal M that is maximal among all ideals that contain I and are disjoint from F. In the case of distributive lattices such an M is always a prime ideal. However, in general it is not clear whether there exists any ideal M that is maximal in this sense. Yet, if we assume the axiom of choice in our set theory, then the existence of M for every disjoint filter–ideal-pair can be shown. In the special case that the considered order is a Boolean algebra, this theorem is called the Boolean prime ideal theorem. It is strictly weaker than the axiom of choice and it turns out that nothing more is needed for many order-theoretic applications of ideals. == Applications == The construction of ideals and filters is an important tool in many applications of order theory. In Stone's representation theorem for Boolean algebras, the maximal ideals (or, equivalently via the negation map, ultrafilters) are used to obtain the set of points of a topological space, whose clopen sets are isomorphic to the original Boolean algebra. Order theory knows many completion procedures to turn posets into posets with additional completeness properties. For example, the ideal completion of a given partial order P is the set of all ideals of P ordered by subset inclusion. This construction yields the free dcpo generated by P. An ideal is principal if and only if it is compact in the ideal completion, so the original poset can be recovered as the sub-poset consisting of compact elements. Furthermore, every algebraic dcpo can be reconstructed as the ideal completion of its set of compact elements. == History == Ideals were introduced by Marshall H. Stone first for Boolean algebras, where the name was derived from the ring ideals of abstract algebra. He adopted this terminology because, using the isomorphism of the categories of Boolean algebras and of Boolean rings, the two notions do indeed coincide. Generalization to any posets was done by Frink. == See also == Filter (mathematics) – In mathematics, a special subset of a partially ordered set Ideal (ring theory) – Submodule of a mathematical ring Ideal (set theory) – Non-empty family of sets that is closed under finite unions and subsets Semigroup ideal Boolean prime ideal theorem – Ideals in a Boolean algebra can be extended to prime ideals == Notes == == References == Burris, Stanley N.; Sankappanavar, Hanamantagouda P. (1981). A Course in Universal Algebra. Springer-Verlag. ISBN 3-540-90578-2.
Davey, Brian A.; Priestley, Hilary Ann (2002). Introduction to Lattices and Order (2nd ed.). Cambridge University Press. ISBN 0-521-78451-4. Taylor, Paul (1999), Practical foundations of mathematics, Cambridge Studies in Advanced Mathematics, vol. 59, Cambridge University Press, Cambridge, ISBN 0-521-63107-6, MR 1694820 Frenchman, Zack; Hart, James (2020), An Introduction to Order Theory, AMS === About history === Stone, M. H. (1934), "Boolean Algebras and Their Application to Topology", Proc. Natl. Acad. Sci. U.S.A., 20 (3): 197–202, Bibcode:1934PNAS...20..197S, doi:10.1073/pnas.20.3.197, PMC 1076376, PMID 16587875 Stone, M. H. (1935), "Subsumption of the Theory of Boolean Algebras under the Theory of Rings", Proc. Natl. Acad. Sci. U.S.A., 21 (2): 103–105, Bibcode:1935PNAS...21..103S, doi:10.1073/pnas.21.2.103, PMC 1076539, PMID 16587931 Frink, Orrin (1954), "Ideals In Partially Ordered Sets", Am. Math. Mon., 61 (4): 223–234, doi:10.1080/00029890.1954.11988449
Wikipedia/Ideal_(order_theory)
In commutative and homological algebra, the grade of a finitely generated module M {\displaystyle M} over a Noetherian ring R {\displaystyle R} is a cohomological invariant defined by vanishing of Ext-modules grade M = grade R M = inf { i ∈ N 0 : Ext R i ( M , R ) ≠ 0 } . {\displaystyle {\textrm {grade}}\,M={\textrm {grade}}_{R}\,M=\inf \left\{i\in \mathbb {N} _{0}:{\textrm {Ext}}_{R}^{i}(M,R)\neq 0\right\}.} For an ideal I ◃ R {\displaystyle I\triangleleft R} the grade is defined via the quotient ring viewed as a module over R {\displaystyle R} grade I = grade R I = grade R R / I = inf { i ∈ N 0 : Ext R i ( R / I , R ) ≠ 0 } . {\displaystyle {\textrm {grade}}\,I={\textrm {grade}}_{R}\,I={\textrm {grade}}_{R}\,R/I=\inf \left\{i\in \mathbb {N} _{0}:{\textrm {Ext}}_{R}^{i}(R/I,R)\neq 0\right\}.} The grade is used to define perfect ideals. In general we have the inequality grade R I ≤ proj dim ⁡ ( R / I ) {\displaystyle {\textrm {grade}}_{R}\,I\leq {\textrm {proj}}\dim(R/I)} where the projective dimension is another cohomological invariant. The grade is tightly related to the depth, since grade R I = depth I ( R ) . {\displaystyle {\textrm {grade}}_{R}\,I={\textrm {depth}}_{I}(R).} Under the same conditions on R , I {\displaystyle R,I} and M {\displaystyle M} as above, one also defines the M {\displaystyle M} -grade of I {\displaystyle I} as grade M I = inf { i ∈ N 0 : Ext R i ( R / I , M ) ≠ 0 } . {\displaystyle {\textrm {grade}}_{M}\,I=\inf \left\{i\in \mathbb {N} _{0}:{\textrm {Ext}}_{R}^{i}(R/I,M)\neq 0\right\}.} This notion is tied to the existence of maximal M {\displaystyle M} -sequences contained in I {\displaystyle I} of length grade M I {\displaystyle {\textrm {grade}}_{M}\,I} . == References ==
Wikipedia/Grade_(ring_theory)
In mathematics, a finitely generated module is a module that has a finite generating set. A finitely generated module over a ring R may also be called a finite R-module, finite over R, or a module of finite type. Related concepts include finitely cogenerated modules, finitely presented modules, finitely related modules and coherent modules all of which are defined below. Over a Noetherian ring the concepts of finitely generated, finitely presented and coherent modules coincide. A finitely generated module over a field is simply a finite-dimensional vector space, and a finitely generated module over the integers is simply a finitely generated abelian group. == Definition == The left R-module M is finitely generated if there exist a1, a2, ..., an in M such that for any x in M, there exist r1, r2, ..., rn in R with x = r1a1 + r2a2 + ... + rnan. The set {a1, a2, ..., an} is referred to as a generating set of M in this case. A finite generating set need not be a basis, since it need not be linearly independent over R. What is true is: M is finitely generated if and only if there is a surjective R-linear map: R n → M {\displaystyle R^{n}\to M} for some n; in other words, M is a quotient of a free module of finite rank. If a set S generates a module that is finitely generated, then there is a finite generating set that is included in S, since only finitely many elements in S are needed to express the generators in any finite generating set, and these finitely many elements form a generating set. However, it may occur that S does not contain any finite generating set of minimal cardinality. For example the set of the prime numbers is a generating set of Z {\displaystyle \mathbb {Z} } viewed as Z {\displaystyle \mathbb {Z} } -module, and a generating set formed from prime numbers has at least two elements, while the singleton{1} is also a generating set. In the case where the module M is a vector space over a field R, and the generating set is linearly independent, n is well-defined and is referred to as the dimension of M (well-defined means that any linearly independent generating set has n elements: this is the dimension theorem for vector spaces). Any module is the union of the directed set of its finitely generated submodules. A module M is finitely generated if and only if any increasing chain Mi of submodules with union M stabilizes: i.e., there is some i such that Mi = M. This fact with Zorn's lemma implies that every nonzero finitely generated module admits maximal submodules. If any increasing chain of submodules stabilizes (i.e., any submodule is finitely generated), then the module M is called a Noetherian module. == Examples == If a module is generated by one element, it is called a cyclic module. Let R be an integral domain with K its field of fractions. Then every finitely generated R-submodule I of K is a fractional ideal: that is, there is some nonzero r in R such that rI is contained in R. Indeed, one can take r to be the product of the denominators of the generators of I. If R is Noetherian, then every fractional ideal arises in this way. Finitely generated modules over the ring of integers Z coincide with the finitely generated abelian groups. These are completely classified by the structure theorem, taking Z as the principal ideal domain. Finitely generated (say left) modules over a division ring are precisely finite dimensional vector spaces (over the division ring). == Some facts == Every homomorphic image of a finitely generated module is finitely generated. 
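For the prototypical case R = Z mentioned above, finite generation can be made completely explicit: a finitely generated Z-module presented as Z^n modulo the row span of an integer relation matrix can be decomposed by diagonalizing that matrix, in the spirit of the Smith normal form underlying the structure theorem. The sketch below is ours; it makes no attempt to enforce the divisibility chain of the true Smith form, which is not needed for a direct-sum decomposition:

    def diagonalize(A):
        """Diagonalize an integer matrix by row and column operations
        (Gaussian elimination over Z with Euclidean steps).  If M is
        presented as Z^n modulo the rows of A, the entries d1, ..., dr
        returned here give M = Z/d1 + ... + Z/dr + Z^(n - r)."""
        A = [row[:] for row in A]
        if not A or not A[0]:
            return []
        rows, cols = len(A), len(A[0])
        diag, t = [], 0
        while t < min(rows, cols):
            # choose a pivot of smallest nonzero absolute value
            pivots = [(abs(A[i][j]), i, j)
                      for i in range(t, rows) for j in range(t, cols) if A[i][j]]
            if not pivots:
                break
            _, pi, pj = min(pivots)
            A[t], A[pi] = A[pi], A[t]
            for row in A:
                row[t], row[pj] = row[pj], row[t]
            done = False
            while not done:
                done = True
                for i in range(t + 1, rows):        # clear the pivot column
                    q = A[i][t] // A[t][t]
                    A[i] = [a - q * b for a, b in zip(A[i], A[t])]
                    if A[i][t]:                     # nonzero remainder: smaller pivot
                        A[t], A[i] = A[i], A[t]
                        done = False
                for j in range(t + 1, cols):        # clear the pivot row
                    q = A[t][j] // A[t][t]
                    for row in A:
                        row[j] -= q * row[t]
                    if A[t][j]:
                        for row in A:
                            row[t], row[j] = row[j], row[t]
                        done = False
            diag.append(abs(A[t][t]))
            t += 1
        return diag

    print(diagonalize([[2, 4], [6, 8]]))   # [2, 4]

Here Z^2 modulo the subgroup generated by (2, 4) and (6, 8) comes out as Z/2 + Z/4. Diagonal entries equal to 1 contribute trivial summands, and n minus the number of diagonal entries counts the free summands, which is also the generic rank over Z in the terminology used later in this article.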
In general, submodules of finitely generated modules need not be finitely generated. As an example, consider the ring R = Z[X1, X2, ...] of all polynomials in countably many variables. R itself is a finitely generated R-module (with {1} as generating set). Consider the submodule K consisting of all those polynomials with zero constant term. Since every polynomial contains only finitely many terms whose coefficients are non-zero, the R-module K is not finitely generated. In general, a module is said to be Noetherian if every submodule is finitely generated. A finitely generated module over a Noetherian ring is a Noetherian module (and indeed this property characterizes Noetherian rings): A module over a Noetherian ring is finitely generated if and only if it is a Noetherian module. This resembles, but is not exactly Hilbert's basis theorem, which states that the polynomial ring R[X] over a Noetherian ring R is Noetherian. Both facts imply that a finitely generated commutative algebra over a Noetherian ring is again a Noetherian ring. More generally, an algebra (e.g., ring) that is a finitely generated module is a finitely generated algebra. Conversely, if a finitely generated algebra is integral (over the coefficient ring), then it is finitely generated module. (See integral element for more.) Let 0 → M′ → M → M′′ → 0 be an exact sequence of modules. Then M is finitely generated if M′, M′′ are finitely generated. There are some partial converses to this. If M is finitely generated and M′′ is finitely presented (which is stronger than finitely generated; see below), then M′ is finitely generated. Also, M is Noetherian (resp. Artinian) if and only if M′, M′′ are Noetherian (resp. Artinian). Let B be a ring and A its subring such that B is a faithfully flat right A-module. Then a left A-module F is finitely generated (resp. finitely presented) if and only if the B-module B ⊗A F is finitely generated (resp. finitely presented). == Finitely generated modules over a commutative ring == For finitely generated modules over a commutative ring R, Nakayama's lemma is fundamental. Sometimes, the lemma allows one to prove finite dimensional vector spaces phenomena for finitely generated modules. For example, if f : M → M is a surjective R-endomorphism of a finitely generated module M, then f is also injective, and hence is an automorphism of M. This says simply that M is a Hopfian module. Similarly, an Artinian module M is coHopfian: any injective endomorphism f is also a surjective endomorphism. The Forster–Swan theorem gives an upper bound for the minimal number of generators of a finitely generated module M over a commutative Noetherian ring. Any R-module is an inductive limit of finitely generated R-submodules. This is useful for weakening an assumption to the finite case (e.g., the characterization of flatness with the Tor functor). An example of a link between finite generation and integral elements can be found in commutative algebras. To say that a commutative algebra A is a finitely generated ring over R means that there exists a set of elements G = {x1, ..., xn} of A such that the smallest subring of A containing G and R is A itself. Because the ring product may be used to combine elements, more than just R-linear combinations of elements of G are generated. For example, a polynomial ring R[x] is finitely generated by {1, x} as a ring, but not as a module. If A is a commutative algebra (with unity) over R, then the following two statements are equivalent: A is a finitely generated R module. 
A is both a finitely generated ring over R and an integral extension of R. == Generic rank == Let M be a finitely generated module over an integral domain A with the field of fractions K. Then the dimension dim K ⁡ ( M ⊗ A K ) {\displaystyle \operatorname {dim} _{K}(M\otimes _{A}K)} is called the generic rank of M over A. This number is the same as the number of maximal A-linearly independent vectors in M or equivalently the rank of a maximal free submodule of M (cf. Rank of an abelian group). Since ( M / F ) ( 0 ) = M ( 0 ) / F ( 0 ) = 0 {\displaystyle (M/F)_{(0)}=M_{(0)}/F_{(0)}=0} , M / F {\displaystyle M/F} is a torsion module. When A is Noetherian, by generic freeness, there is an element f (depending on M) such that M [ f − 1 ] {\displaystyle M[f^{-1}]} is a free A [ f − 1 ] {\displaystyle A[f^{-1}]} -module. Then the rank of this free module is the generic rank of M. Now suppose the integral domain A is an N {\displaystyle \mathbb {N} } -graded algebra over a field k generated by finitely many homogeneous elements of degrees d i {\displaystyle d_{i}} . Suppose M is graded as well and let P M ( t ) = ∑ ( dim k ⁡ M n ) t n {\displaystyle P_{M}(t)=\sum (\operatorname {dim} _{k}M_{n})t^{n}} be the Poincaré series of M. By the Hilbert–Serre theorem, there is a polynomial F such that P M ( t ) = F ( t ) ∏ ( 1 − t d i ) − 1 {\displaystyle P_{M}(t)=F(t)\prod (1-t^{d_{i}})^{-1}} . Then F ( 1 ) {\displaystyle F(1)} is the generic rank of M. A finitely generated module over a principal ideal domain is torsion-free if and only if it is free. This is a consequence of the structure theorem for finitely generated modules over a principal ideal domain, the basic form of which says a finitely generated module over a PID is a direct sum of a torsion module and a free module. But it can also be shown directly as follows: let M be a torsion-free finitely generated module over a PID A and F a maximal free submodule. Let f be in A such that f M ⊂ F {\displaystyle fM\subset F} . Then f M {\displaystyle fM} is free since it is a submodule of a free module and A is a PID. But now f : M → f M {\displaystyle f:M\to fM} is an isomorphism since M is torsion-free. By the same argument as above, a finitely generated module over a Dedekind domain A (or more generally a semi-hereditary ring) is torsion-free if and only if it is projective; consequently, a finitely generated module over A is a direct sum of a torsion module and a projective module. A finitely generated projective module over a Noetherian integral domain has constant rank and so the generic rank of a finitely generated module over A is the rank of its projective part. == Equivalent definitions and finitely cogenerated modules == The following conditions are equivalent to M being finitely generated (f.g.): For any family of submodules {Ni | i ∈ I} in M, if ∑ i ∈ I N i = M {\displaystyle \sum _{i\in I}N_{i}=M\,} , then ∑ i ∈ F N i = M {\displaystyle \sum _{i\in F}N_{i}=M\,} for some finite subset F of I. For any chain of submodules {Ni | i ∈ I} in M, if ⋃ i ∈ I N i = M {\displaystyle \bigcup _{i\in I}N_{i}=M\,} , then Ni = M for some i in I. If ϕ : ⨁ i ∈ I R → M {\displaystyle \phi :\bigoplus _{i\in I}R\to M\,} is an epimorphism, then the restriction ϕ : ⨁ i ∈ F R → M {\displaystyle \phi :\bigoplus _{i\in F}R\to M\,} is an epimorphism for some finite subset F of I. From these conditions it is easy to see that being finitely generated is a property preserved by Morita equivalence. 
The conditions are also convenient to define a dual notion of a finitely cogenerated module M. The following conditions are equivalent to a module being finitely cogenerated (f.cog.): For any family of submodules {Ni | i ∈ I} in M, if $\bigcap_{i\in I} N_i = \{0\}$, then $\bigcap_{i\in F} N_i = \{0\}$ for some finite subset F of I. For any chain of submodules {Ni | i ∈ I} in M, if $\bigcap_{i\in I} N_i = \{0\}$, then Ni = {0} for some i in I. If $\phi \colon M \to \prod_{i\in I} N_i$ is a monomorphism, where each $N_i$ is an R-module, then $\phi \colon M \to \prod_{i\in F} N_i$ is a monomorphism for some finite subset F of I. Both f.g. modules and f.cog. modules have interesting relationships to Noetherian and Artinian modules, and to the Jacobson radical J(M) and socle soc(M) of a module. The following facts illustrate the duality between the two conditions. For a module M: M is Noetherian if and only if every submodule N of M is f.g. M is Artinian if and only if every quotient module M/N is f.cog. M is f.g. if and only if J(M) is a superfluous submodule of M, and M/J(M) is f.g. M is f.cog. if and only if soc(M) is an essential submodule of M, and soc(M) is f.g. If M is a semisimple module (such as soc(N) for any module N), it is f.g. if and only if it is f.cog. If M is f.g. and nonzero, then M has a maximal submodule and any quotient module M/N is f.g. If M is f.cog. and nonzero, then M has a minimal submodule, and any submodule N of M is f.cog. If N and M/N are f.g. then so is M. The same is true if "f.g." is replaced with "f.cog." Finitely cogenerated modules must have finite uniform dimension. This is easily seen by applying the characterization using the finitely generated essential socle. Somewhat asymmetrically, finitely generated modules do not necessarily have finite uniform dimension. For example, an infinite direct product of nonzero rings is a finitely generated (cyclic!) module over itself, however it clearly contains an infinite direct sum of nonzero submodules. Finitely generated modules do not necessarily have finite co-uniform dimension either: any ring R with unity such that R/J(R) is not a semisimple ring is a counterexample. == Finitely presented, finitely related, and coherent modules == Another formulation is this: a finitely generated module M is one for which there is an epimorphism mapping $R^k$ onto M: f : $R^k$ → M. Suppose now there is an epimorphism φ : F → M for a module M and free module F. If the kernel of φ is finitely generated, then M is called a finitely related module. Since M is isomorphic to F/ker(φ), this basically expresses that M is obtained by taking a free module and introducing finitely many relations within F (the generators of ker(φ)). If the kernel of φ is finitely generated and F has finite rank (i.e. F = $R^k$), then M is said to be a finitely presented module. Here, M is specified using finitely many generators (the images of the k generators of F = $R^k$) and finitely many relations (the generators of ker(φ)). See also: free presentation. Finitely presented modules can be characterized by an abstract property within the category of R-modules: they are precisely the compact objects in this category. A coherent module M is a finitely generated module whose finitely generated submodules are finitely presented.
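Before turning to coherence over general rings, a minimal example of a finite presentation may help (a standard illustration of ours, not from the original text): the $\mathbb{Z}$-module $\mathbb{Z}/n\mathbb{Z}$ is presented by one generator and one relation via the exact sequence $$0 \longrightarrow \mathbb{Z} \xrightarrow{\;\times n\;} \mathbb{Z} \longrightarrow \mathbb{Z}/n\mathbb{Z} \longrightarrow 0,$$ where the kernel of the surjection $\mathbb{Z} \to \mathbb{Z}/n\mathbb{Z}$ is the finitely generated submodule $n\mathbb{Z}$. Since $\mathbb{Z}$ is Noetherian, this is an instance of the general fact stated next: over a Noetherian ring, finitely generated, finitely presented, and coherent all coincide.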
Over any ring R, coherent modules are finitely presented, and finitely presented modules are both finitely generated and finitely related. For a Noetherian ring R, finitely generated, finitely presented, and coherent are equivalent conditions on a module. Some crossover occurs for projective or flat modules. A finitely generated projective module is finitely presented, and a finitely related flat module is projective. It is also true that the following conditions are equivalent for a ring R: R is a right coherent ring. The module $R_R$ is a coherent module. Every finitely presented right R-module is coherent. Although coherence seems like a more cumbersome condition than finite generation or finite presentation, it is better behaved than either, since the category of coherent modules is an abelian category, while, in general, neither finitely generated nor finitely presented modules form an abelian category. == See also == Integral element Artin–Rees lemma Countably generated module Finite algebra Coherent sheaf, a generalization used in algebraic geometry
Wikipedia/Finitely_generated_module
Zorn's lemma, also known as the Kuratowski–Zorn lemma, is a proposition of set theory. It states that a partially ordered set containing upper bounds for every chain (that is, every totally ordered subset) necessarily contains at least one maximal element. The lemma was proved (assuming the axiom of choice) by Kazimierz Kuratowski in 1922 and independently by Max Zorn in 1935. It occurs in the proofs of several theorems of crucial importance, for instance the Hahn–Banach theorem in functional analysis, the theorem that every vector space has a basis, Tychonoff's theorem in topology stating that every product of compact spaces is compact, and the theorems in abstract algebra that in a ring with identity every proper ideal is contained in a maximal ideal and that every field has an algebraic closure. Zorn's lemma is equivalent to the well-ordering theorem and also to the axiom of choice, in the sense that within ZF (Zermelo–Fraenkel set theory without the axiom of choice) any one of the three is sufficient to prove the other two. An earlier formulation of Zorn's lemma is the Hausdorff maximal principle which states that every totally ordered subset of a given partially ordered set is contained in a maximal totally ordered subset of that partially ordered set. == Motivation == To prove the existence of a mathematical object that can be viewed as a maximal element in some partially ordered set in some way, one can try proving the existence of such an object by assuming there is no maximal element and using transfinite induction and the assumptions of the situation to get a contradiction. Zorn's lemma tidies up the conditions a situation needs to satisfy in order for such an argument to work and enables mathematicians to not have to repeat the transfinite induction argument by hand each time, but just check the conditions of Zorn's lemma. If you are building a mathematical object in stages and find that (i) you have not finished even after infinitely many stages, and (ii) there seems to be nothing to stop you continuing to build, then Zorn’s lemma may well be able to help you. == Statement of the lemma == Preliminary notions: A set P equipped with a binary relation ≤ that is reflexive (x ≤ x for every x), antisymmetric (if both x ≤ y and y ≤ x hold, then x = y), and transitive (if x ≤ y and y ≤ z then x ≤ z) is said to be (partially) ordered by ≤. Given two elements x and y of P with x ≤ y, y is said to be greater than or equal to x. The word "partial" is meant to indicate that not every pair of elements of a partially ordered set is required to be comparable under the order relation, that is, in a partially ordered set P with order relation ≤ there may be elements x and y with neither x ≤ y nor y ≤ x. An ordered set in which every pair of elements is comparable is called totally ordered. Every subset S of a partially ordered set P can itself be seen as partially ordered by restricting the order relation inherited from P to S. A subset S of a partially ordered set P is called a chain (in P) if it is totally ordered in the inherited order. An element m of a partially ordered set P with order relation ≤ is maximal (with respect to ≤) if there is no other element of P greater than m, that is, there is no s in P with s ≠ m and m ≤ s. Depending on the order relation, a partially ordered set may have any number of maximal elements. However, a totally ordered set can have at most one maximal element. 
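As a small illustration of these notions (our example, using only the definitions above), order the set {2, 3, 4} by divisibility: $$2 \preceq 4 \;(\text{since } 2 \mid 4), \qquad 2 \not\preceq 3, \quad 3 \not\preceq 2, \quad 3 \not\preceq 4.$$ Here 3 and 4 are both maximal elements, since neither divides any other element of the set, while 2 is not maximal because 2 ≤ 4. In particular, a partially ordered set may have several maximal elements yet no greatest element.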
Given a subset S of a partially ordered set P, an element u of P is an upper bound of S if it is greater than or equal to every element of S. Here, S is not required to be a chain, and u is required to be comparable to every element of S but need not itself be an element of S. Zorn's lemma can then be stated as: if a partially ordered set P (1) is non-empty and (2) has the property that every chain in P has an upper bound in P, then P contains at least one maximal element. In fact, property (1) is redundant, since property (2) says, in particular, that the empty chain has an upper bound in P, implying P is nonempty. However, in practice, one often checks (1) and then verifies (2) only for nonempty chains, since the case of the empty chain is taken care of by (1). In the terminology of Bourbaki, a partially ordered set is called inductive if each chain has an upper bound in the set (in particular, the set is then nonempty). Then the lemma can be stated as: every inductive set has a maximal element. For some applications, the following variant may be useful: if P is a partially ordered set in which every chain has an upper bound and a is an element of P, then P has a maximal element greater than or equal to a. Indeed, let Q = {x ∈ P ∣ x ≥ a} with the partial ordering from P. Then, for a chain in Q, an upper bound in P is in Q, and so Q satisfies the hypothesis of Zorn's lemma, and a maximal element in Q is a maximal element in P as well. == Example applications == === Every vector space has a basis === Zorn's lemma can be used to show that every vector space V has a basis. If V = {0}, then the empty set is a basis for V. Now, suppose that V ≠ {0}. Let P be the set consisting of all linearly independent subsets of V. Since V is not the zero vector space, there exists a nonzero element v of V, so P contains the linearly independent subset {v}. Furthermore, P is partially ordered by set inclusion (see inclusion order). Finding a maximal linearly independent subset of V is the same as finding a maximal element in P. To apply Zorn's lemma, take a chain T in P (that is, T is a subset of P that is totally ordered). If T is the empty set, then {v} is an upper bound for T in P. Suppose then that T is non-empty. We need to show that T has an upper bound, that is, there exists a linearly independent subset B of V containing all the members of T. Take B to be the union of all the sets in T. We wish to show that B is an upper bound for T in P. To do this, it suffices to show that B is a linearly independent subset of V. Suppose otherwise, that B is not linearly independent. Then there exist vectors v1, v2, ..., vk ∈ B and scalars a1, a2, ..., ak, not all zero, such that $a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \cdots + a_k\mathbf{v}_k = \mathbf{0}$. Since B is the union of all the sets in T, there are some sets S1, S2, ..., Sk ∈ T such that vi ∈ Si for every i = 1, 2, ..., k. As T is totally ordered, one of the sets S1, S2, ..., Sk must contain the others, so there is some set Si that contains all of v1, v2, ..., vk. This tells us there is a linearly dependent set of vectors in Si, contradicting that Si is linearly independent (because it is a member of P). The hypothesis of Zorn's lemma has been checked, and thus there is a maximal element in P, in other words a maximal linearly independent subset B of V. Finally, we show that B is indeed a basis of V. It suffices to show that B is a spanning set of V. Suppose for the sake of contradiction that B is not spanning. Then there exists some v ∈ V not contained in the span of B.
This says that B ∪ {v} is a linearly independent subset of V that is larger than B, contradicting the maximality of B. Therefore, B is a spanning set of V, and thus, a basis of V. === Every nontrivial ring with unity contains a maximal ideal === Zorn's lemma can be used to show that every nontrivial ring R with unity contains a maximal ideal. Let P be the set consisting of all proper ideals in R (that is, all ideals in R except R itself). Since R is non-trivial, the set P contains the trivial ideal {0}. Furthermore, P is partially ordered by set inclusion. Finding a maximal ideal in R is the same as finding a maximal element in P. To apply Zorn's lemma, take a chain T in P. If T is empty, then the trivial ideal {0} is an upper bound for T in P. Assume then that T is non-empty. It is necessary to show that T has an upper bound, that is, there exists an ideal I ⊆ R containing all the members of T but still smaller than R (otherwise it would not be a proper ideal, so it would not be in P). Take I to be the union of all the ideals in T. We wish to show that I is an upper bound for T in P. We will first show that I is an ideal of R. For I to be an ideal, it must satisfy three conditions: I is a nonempty subset of R; for every x, y ∈ I, the sum x + y is in I; for every r ∈ R and every x ∈ I, the product rx is in I. #1 – I is a nonempty subset of R. Because T contains at least one element, and that element contains at least 0, the union I contains at least 0 and is not empty. Every element of T is a subset of R, so the union I only consists of elements in R. #2 – For every x, y ∈ I, the sum x + y is in I. Suppose x and y are elements of I. Then there exist two ideals J, K ∈ T such that x is an element of J and y is an element of K. Since T is totally ordered, we know that J ⊆ K or K ⊆ J. Without loss of generality, assume the first case. Both x and y are then members of the ideal K, therefore their sum x + y is a member of K, which shows that x + y is a member of I. #3 – For every r ∈ R and every x ∈ I, the product rx is in I. Suppose x is an element of I. Then there exists an ideal J ∈ T such that x is in J. If r ∈ R, then rx is an element of J and hence an element of I. Thus, I is an ideal in R. Now, we show that I is a proper ideal. An ideal is equal to R if and only if it contains 1. (It is clear that if it is R then it contains 1; on the other hand, if it contains 1 and r is an arbitrary element of R, then r·1 = r is an element of the ideal, and so the ideal is equal to R.) So, if I were equal to R, then it would contain 1, and that means one of the members of T would contain 1 and would thus be equal to R – but R is explicitly excluded from P. The hypothesis of Zorn's lemma has been checked, and thus there is a maximal element in P, in other words a maximal ideal in R. == Proof sketch == A sketch of the proof of Zorn's lemma follows, assuming the axiom of choice. Suppose the lemma is false. Then there exists a partially ordered set, or poset, P such that every totally ordered subset has an upper bound, and yet for every element in P there is another element bigger than it. For every totally ordered subset T we may then define a bigger element b(T), because T has an upper bound, and that upper bound has a bigger element. To actually define the function b, we need to employ the axiom of choice (explicitly: let $B(T) = \{b \in P : \forall t \in T,\, b \geq t\}$, that is, the set of upper bounds for T.
The axiom of choice furnishes such a function b, with b(T) ∈ B(T)). Using the function b, we are going to define elements a0 < a1 < a2 < a3 < ... < aω < aω+1 < ..., in P. This transfinite sequence is really long: the indices are not just the natural numbers, but all ordinals. In fact, the sequence is too long for the set P; there are too many ordinals (a proper class), more than there are elements in any set (in other words, given any set of ordinals, there exists a larger ordinal), so the set P will be exhausted before long and then we will run into the desired contradiction. The ai are defined by transfinite recursion: we pick a0 in P arbitrarily (this is possible, since P contains an upper bound for the empty set and is thus not empty) and for any other ordinal w we set aw = b({av : v < w}). Because the av are totally ordered, this is a well-founded definition. The above proof can be formulated without explicitly referring to ordinals by considering the initial segments {av : v < w} as subsets of P. Such sets can be easily characterized as well-ordered chains S ⊆ P where each x ∈ S satisfies x = b({y ∈ S : y < x}). Contradiction is reached by noting that we can always find a "next" initial segment either by taking the union of all such S (corresponding to the limit ordinal case) or by appending b(S) to the "last" S (corresponding to the successor ordinal case). This proof shows that actually a slightly stronger version of Zorn's lemma is true: it is enough to assume that every well-ordered chain of P has an upper bound, and the maximal element can moreover be taken greater than or equal to any given element. Alternatively, one can use the same proof for the Hausdorff maximal principle. This is the proof given for example in Halmos' Naive Set Theory or in § Proof below. Finally, the Bourbaki–Witt theorem can also be used to give a proof. == Proof == The basic idea of the proof is to reduce the proof to proving the following weak form of Zorn's lemma: if F is a collection of subsets of some fixed set such that (1) F is nonempty, (2) the union of every chain of elements of F (ordered by inclusion) belongs to F, and (3) every subset of an element of F belongs to F, then F has a maximal element with respect to inclusion. (Note that, strictly speaking, (1) is redundant since (2) implies the empty set is in F.) Note the above is a weak form of Zorn's lemma since Zorn's lemma says in particular that any set of subsets satisfying the above (1) and (2) has a maximal element ((3) is not needed). The point is that, conversely, Zorn's lemma follows from this weak form. Indeed, let F be the set of all chains in P. Then it satisfies all of the above properties (it is nonempty since the empty subset is a chain). Thus, by the above weak form, we find a maximal element C in F; i.e., a maximal chain in P. By the hypothesis of Zorn's lemma, C has an upper bound x in P. Then this x is a maximal element, since if y ≥ x, then C̃ = C ∪ {y} is a chain larger than or equal to C, and so C̃ = C; hence y ∈ C, so y ≤ x, and thus y = x. The proof of the weak form is given in Hausdorff maximal principle § Proof. Indeed, the existence of a maximal chain is exactly the assertion of the Hausdorff maximal principle. The same proof also shows the following equivalent variant of Zorn's lemma: a partially ordered set in which every chain has a least upper bound has a maximal element. Indeed, trivially, Zorn's lemma implies the above lemma. Conversely, the above lemma implies the aforementioned weak form of Zorn's lemma, since a union gives a least upper bound.
(The structure of the proof is exactly the same as the one for the Hahn–Banach theorem.) Given a set X of nonempty sets and its union U := ⋃X (which exists by the axiom of union), we want to show there is a function f : X → U such that f(S) ∈ S for each S ∈ X. To that end, consider the set P = {f : X′ → U ∣ X′ ⊂ X, f(S) ∈ S for all S ∈ X′}. It is partially ordered by extension; i.e., f ≤ g if and only if f is the restriction of g. If fi : Xi → U is a chain in P, then we can define the function f on the union X′ = ∪i Xi by setting f(x) = fi(x) when x ∈ Xi. This is well-defined, since if i < j, then fi is the restriction of fj. The function f is also an element of P and is a common extension of all the fi's. Thus, we have shown that each chain in P has an upper bound in P. Hence, by Zorn's lemma, there is a maximal element f in P that is defined on some X′ ⊂ X. We want to show X′ = X. Suppose otherwise; then there is a set S ∈ X − X′. As S is nonempty, it contains an element s. We can then extend f to a function g by setting g|X′ = f and g(S) = s. (Note this step does not need the axiom of choice.) The function g is in P and f < g, a contradiction to the maximality of f. ◻ Essentially the same proof also shows that Zorn's lemma implies the well-ordering theorem: take P to be the set of all well-ordered subsets of a given set X, and then show that a maximal element of P is X. == History == The Hausdorff maximal principle is an early statement similar to Zorn's lemma. Kazimierz Kuratowski proved in 1922 a version of the lemma close to its modern formulation (it applies to sets ordered by inclusion and closed under unions of well-ordered chains). Essentially the same formulation (weakened by using arbitrary chains, not just well-ordered ones) was independently given by Max Zorn in 1935, who proposed it as a new axiom of set theory replacing the well-ordering theorem, exhibited some of its applications in algebra, and promised to show its equivalence with the axiom of choice in another paper, which never appeared. The name "Zorn's lemma" appears to be due to John Tukey, who used it in his book Convergence and Uniformity in Topology in 1940. Bourbaki's Théorie des Ensembles of 1939 refers to a similar maximal principle as "le théorème de Zorn". The name "Kuratowski–Zorn lemma" prevails in Poland and Russia. == Equivalent forms of Zorn's lemma == Zorn's lemma is equivalent (in ZF) to three main results: Hausdorff maximal principle Axiom of choice Well-ordering theorem.
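The usual textbook route to this three-way equivalence (a sketch of the standard strategy, consistent with the proofs above) is a cycle of implications within ZF: $$\text{axiom of choice} \;\Longrightarrow\; \text{Zorn's lemma} \;\Longrightarrow\; \text{well-ordering theorem} \;\Longrightarrow\; \text{axiom of choice},$$ where the first arrow is the transfinite construction sketched earlier, the second applies Zorn's lemma to the partially ordered set of well-ordered subsets of X as above, and the third obtains a choice function by assigning to each nonempty set its least element under a well-ordering of the union.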
A well-known joke alluding to this equivalency (which may defy human intuition) is attributed to Jerry Bona: "The Axiom of Choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?" Zorn's lemma is also equivalent to the strong completeness theorem of first-order logic. Moreover, Zorn's lemma (or one of its equivalent forms) implies some major results in other mathematical areas. For example: Banach's extension theorem, which is used to prove one of the most fundamental results in functional analysis, the Hahn–Banach theorem. Every vector space has a basis, a result from linear algebra (to which it is equivalent); in particular, the real numbers, as a vector space over the rational numbers, possess a Hamel basis. Every commutative unital ring has a maximal ideal, a result from ring theory known as Krull's theorem, to which Zorn's lemma is equivalent. Tychonoff's theorem in topology (to which it is also equivalent). Every proper filter is contained in an ultrafilter, a result that yields the completeness theorem of first-order logic. In this sense, Zorn's lemma is a powerful tool, applicable to many areas of mathematics. === Analogs under weakenings of the axiom of choice === A weakened form of Zorn's lemma can be proven from ZF + DC (Zermelo–Fraenkel set theory with the axiom of choice replaced by the axiom of dependent choice). Zorn's lemma can be expressed straightforwardly by observing that the set having no maximal element would be equivalent to stating that the set's ordering relation would be entire, which would allow us to apply the axiom of dependent choice to construct a countable chain. As a result, any partially ordered set with exclusively finite chains must have a maximal element. More generally, strengthening the axiom of dependent choice to higher ordinals allows us to generalize the statement in the previous paragraph to higher cardinalities. In the limit where we allow arbitrarily large ordinals, we recover the proof of the full Zorn's lemma using the axiom of choice in the preceding section. == In popular culture == The 1970 film Zorns Lemma is named after the lemma. The lemma was referenced on The Simpsons in the episode "Bart's New Friend". == See also == Antichain – Subset of incomparable elements Chain-complete partial order – a partially ordered set in which every chain has a least upper bound Szpilrajn extension theorem – Mathematical result on order relations Tarski finiteness – Mathematical set containing a finite number of elements Teichmüller–Tukey lemma (sometimes named Tukey's lemma) Bourbaki–Witt theorem – a choiceless fixed-point theorem that, combined with the axiom of choice, can be used to prove Zorn's lemma == References == Bourbaki, N (1970). Théorie des Ensembles. Hermann. Campbell, Paul J. (February 1978). "The Origin of 'Zorn's Lemma'". Historia Mathematica. 5 (1): 77–89. doi:10.1016/0315-0860(78)90136-2. Halmos, Paul (1960). Naive Set Theory. Princeton, New Jersey: D. Van Nostrand Company. Ciesielski, Krzysztof (1997). Set Theory for the Working Mathematician. Cambridge University Press. ISBN 978-0-521-59465-3. Jech, Thomas (2008) [1973]. The Axiom of Choice. Mineola, New York: Dover Publications. ISBN 978-0-486-46624-8. Moore, Gregory H. (2013) [1982]. Zermelo's Axiom of Choice: Its Origins, Development & Influence. Dover Publications. ISBN 978-0-486-48841-7. == Further reading == The Zorn Identity at the n-category cafe.
== External links == "Zorn lemma", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Zorn's Lemma at ProvenMath contains a formal proof down to the finest detail of the equivalence of the axiom of choice and Zorn's Lemma. Zorn's Lemma at Metamath is another formal proof.
Wikipedia/Zorn's_lemma
Nuclear strategy involves the development of doctrines and strategies for the production and use of nuclear weapons. As a sub-branch of military strategy, nuclear strategy attempts to match nuclear weapons as means to political ends. In addition to the actual use of nuclear weapons, whether on the battlefield or strategically, a large part of nuclear strategy involves their use as a bargaining tool. Some of the issues considered within nuclear strategy include: the conditions under which it serves a nation's interest to develop nuclear weapons; the types of nuclear weapons to be developed; and how and when such weapons are to be used. Many strategists argue that nuclear strategy differs from other forms of military strategy. The immense and terrifying power of the weapons makes their use, in seeking victory in a traditional military sense, impossible. Perhaps counterintuitively, an important focus of nuclear strategy has been determining how to prevent and deter their use, a crucial part of mutually assured destruction. In the context of nuclear proliferation and maintaining the balance of power, states also seek to prevent other states from acquiring nuclear weapons as part of nuclear strategy. == Nuclear deterrent composition == The doctrine of mutual assured destruction (MAD) assumes that a nuclear deterrent force must be credible and survivable. That is, each deterrent force must survive a first strike with sufficient capability to effectively destroy the other country in a second strike. Therefore, a first strike would be suicidal for the launching country. In the late 1940s and 1950s, as the Cold War developed, the United States and Soviet Union pursued multiple delivery methods and platforms to deliver nuclear weapons. Three types of platforms proved most successful and are collectively called a "nuclear triad". These are air-delivered weapons (bombs or missiles), ballistic missile submarines (usually nuclear-powered and called SSBNs), and intercontinental ballistic missiles (ICBMs), usually deployed in land-based hardened missile silos or on vehicles. Although not considered part of the deterrent forces, all of the nuclear powers deployed large numbers of tactical nuclear weapons in the Cold War. These could be delivered by virtually all platforms capable of delivering large conventional weapons. During the 1970s there was growing concern that the combined conventional forces of the Soviet Union and the Warsaw Pact could overwhelm the forces of NATO. It seemed unthinkable to respond to a Soviet/Warsaw Pact incursion into Western Europe with strategic nuclear weapons, inviting a catastrophic exchange. Thus, technologies were developed to greatly reduce collateral damage while being effective against advancing conventional military forces. Some of these were low-yield neutron bombs, which were lethal to tank crews, especially with tanks massed in tight formation, while producing relatively little blast, thermal radiation, or radioactive fallout. Other technologies were so-called "suppressed radiation devices", which produced mostly blast with little radioactivity, making them much like conventional explosives, but with much more energy.
Wikipedia/Nuclear_strategy
Evolutionary game theory (EGT) is the application of game theory to evolving populations in biology. It defines a framework of contests, strategies, and analytics into which Darwinian competition can be modelled. It originated in 1973 with John Maynard Smith and George R. Price's formalisation of contests, analysed as strategies, and the mathematical criteria that can be used to predict the results of competing strategies. Evolutionary game theory differs from classical game theory in focusing more on the dynamics of strategy change. This is influenced by the frequency of the competing strategies in the population. Evolutionary game theory has helped to explain the basis of altruistic behaviours in Darwinian evolution. It has in turn become of interest to economists, sociologists, anthropologists, and philosophers. == History == === Classical game theory === Classical non-cooperative game theory was conceived by John von Neumann to determine optimal strategies in competitions between adversaries. A contest involves players, all of whom have a choice of moves. Games can be a single round or repetitive. The approach a player takes in making their moves constitutes their strategy. Rules govern the outcome for the moves taken by the players, and outcomes produce payoffs for the players; rules and resulting payoffs can be expressed as decision trees or in a payoff matrix. Classical theory requires the players to make rational choices. Each player must consider the strategic analysis that their opponents are making to make their own choice of moves. === The problem of ritualized behaviour === Evolutionary game theory started with the problem of how to explain ritualized animal behaviour in a conflict situation; "why are animals so 'gentlemanly or ladylike' in contests for resources?" The leading ethologists Niko Tinbergen and Konrad Lorenz proposed that such behaviour exists for the benefit of the species. John Maynard Smith considered that incompatible with Darwinian thought, where selection occurs at an individual level, so self-interest is rewarded while seeking the common good is not. Maynard Smith, a mathematical biologist, turned to game theory as suggested by George Price, though Richard Lewontin's attempts to use the theory had failed. === Adapting game theory to evolutionary games === Maynard Smith realised that an evolutionary version of game theory does not require players to act rationally—only that they have a strategy. The results of a game show how good that strategy was, just as evolution tests alternative strategies for the ability to survive and reproduce. In biology, strategies are genetically inherited traits that control an individual's action, analogous with computer programs. The success of a strategy is determined by how good the strategy is in the presence of competing strategies (including itself), and of the frequency with which those strategies are used. Maynard Smith described his work in his book Evolution and the Theory of Games. Participants aim to produce as many replicas of themselves as they can, and the payoff is in units of fitness (relative worth in being able to reproduce). It is always a multi-player game with many competitors. Rules include replicator dynamics, in other words how the fitter players will spawn more replicas of themselves into the population and how the less fit will be culled, in a replicator equation. The replicator dynamics models heredity but not mutation, and assumes asexual reproduction for the sake of simplicity. 
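The replicator equation just mentioned can be written explicitly. In its standard continuous form (a textbook formulation, with notation of our choosing) it reads $$\dot{x}_i = x_i\left(f_i(x) - \bar{f}(x)\right), \qquad \bar{f}(x) = \sum_j x_j f_j(x),$$ where $x_i$ is the fraction of the population playing strategy $i$, $f_i(x)$ is the expected payoff (fitness) of strategy $i$ in population state $x$, and $\bar{f}(x)$ is the population's average payoff. Strategies doing better than average increase in frequency; those doing worse are culled, which is exactly the spawning-and-culling rule described above.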
Games are run repetitively with no terminating conditions. Results include the dynamics of changes in the population, the success of strategies, and any equilibrium states reached. Unlike in classical game theory, players do not choose their strategy and cannot change it: they are born with a strategy and their offspring inherit that same strategy. == Evolutionary games == === Models === Evolutionary game theory encompasses Darwinian evolution, including competition (the game), natural selection (replicator dynamics), and heredity. Evolutionary game theory has contributed to the understanding of group selection, sexual selection, altruism, parental care, co-evolution, and ecological dynamics. Many counter-intuitive situations in these areas have been put on a firm mathematical footing by the use of these models. The common way to study the evolutionary dynamics in games is through replicator equations. These show the growth rate of the proportion of organisms using a certain strategy and that rate is equal to the difference between the average payoff of that strategy and the average payoff of the population as a whole. Continuous replicator equations assume infinite populations, continuous time, complete mixing and that strategies breed true. Some attractors (all global asymptotically stable fixed points) of the equations are evolutionarily stable states. A strategy which can survive all "mutant" strategies is considered evolutionarily stable. In the context of animal behavior, this usually means such strategies are programmed and heavily influenced by genetics, thus making any player or organism's strategy determined by these biological factors. Evolutionary games are mathematical objects with different rules, payoffs, and mathematical behaviours. Each "game" represents different problems that organisms have to deal with, and the strategies they might adopt to survive and reproduce. Evolutionary games are often given colourful names and cover stories which describe the general situation of a particular game. Representative games include hawk-dove, war of attrition, stag hunt, producer-scrounger, tragedy of the commons, and prisoner's dilemma. Strategies for these games include hawk, dove, bourgeois, prober, defector, assessor, and retaliator. The various strategies compete under the particular game's rules, and the mathematics are used to determine the results and behaviours. === Hawk dove === The first game that Maynard Smith analysed is the classic hawk dove game. It was conceived to analyse Lorenz and Tinbergen's problem, a contest over a shareable resource. The contestants can be either a hawk or a dove. These are two subtypes or morphs of one species with different strategies. The hawk first displays aggression, then escalates into a fight until it either wins or is injured (loses). The dove first displays aggression, but if faced with major escalation runs for safety. If not faced with such escalation, the dove attempts to share the resource. 
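The encounter rules of the hawk dove game, with the resource value V and injury cost C introduced in the next paragraph, are conventionally summarised in a payoff matrix (our tabulation of the payoffs described below, entries giving the payoff to the row strategy): $$\begin{array}{c|cc} & \text{hawk} & \text{dove} \\ \hline \text{hawk} & (V-C)/2 & V \\ \text{dove} & 0 & V/2 \end{array}$$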
Given that the resource is assigned the value V, and the damage from losing a fight is assigned the cost C: If a hawk meets a dove, the hawk gets the full resource V. If a hawk meets a hawk, half the time they win and half the time they lose, so the average outcome is then V/2 minus C/2. If a dove meets a hawk, the dove will back off and get nothing – 0. If a dove meets a dove, both share the resource and get V/2. The actual payoff, however, depends on the probability of meeting a hawk or dove, which in turn is a representation of the percentage of hawks and doves in the population when a particular contest takes place. That, in turn, is determined by the results of all of the previous contests. If the cost of losing C is greater than the value of winning V (the normal situation in the natural world), the mathematics ends in an evolutionarily stable strategy (ESS), a mix of the two strategies where the proportion of hawks in the population is V/C. The population regresses to this equilibrium point if any new hawks or doves make a temporary perturbation in the population. The solution of the hawk dove game explains why most animal contests involve only ritual fighting behaviours in contests rather than outright battles. The result does not at all depend on "good of the species" behaviours as suggested by Lorenz, but solely on the implication of actions of so-called selfish genes. === War of attrition === In the hawk dove game the resource is shareable, which gives payoffs to both doves meeting in a pairwise contest. Where the resource is not shareable, but an alternative resource might be available by backing off and trying elsewhere, pure hawk or dove strategies are less effective. If an unshareable resource is combined with a high cost of losing a contest (injury or possible death), both hawk and dove payoffs are further diminished. A safer strategy of lower-cost display, bluffing and waiting to win is then viable – a bluffer strategy. The game then becomes one of accumulating costs, either the costs of displaying or the costs of prolonged unresolved engagement. It is effectively an auction; the winner is the contestant who will swallow the greater cost, while the loser gets the same cost as the winner but no resource. The resulting evolutionary game theory mathematics lead to an optimal strategy of timed bluffing. This is because in the war of attrition any strategy that is unwavering and predictable is unstable, because it will ultimately be displaced by a mutant strategy which relies on the fact that it can best the existing predictable strategy by investing an extra small delta of waiting resource to ensure that it wins. Therefore, only a random unpredictable strategy can maintain itself in a population of bluffers. The contestants in effect choose an acceptable cost to be incurred related to the value of the resource being sought, effectively making a random bid as part of a mixed strategy (a strategy where a contestant has several, or even many, possible actions in their strategy). This implements a distribution of bids for a resource of specific value V, where the bid for any specific contest is chosen at random from that distribution. The distribution (an ESS) can be computed using the Bishop–Cannings theorem, which holds true for any mixed-strategy ESS. The distribution function in these contests was determined by Parker and Thompson to be:
$$p(x) = \frac{e^{-x/V}}{V}.$$ The result is that the cumulative population of quitters for any particular cost m in this "mixed strategy" solution is the corresponding cumulative distribution: $$p(m) = \int_0^m \frac{e^{-x/V}}{V}\,dx = 1 - e^{-m/V}.$$ The intuitive sense that greater values of the resource sought lead to greater waiting times is borne out. This is observed in nature, as in male dung flies contesting for mating sites, where the timing of disengagement in contests is as predicted by evolutionary theory mathematics. === Asymmetries that allow new strategies === In the war of attrition there must be nothing that signals the size of a bid to an opponent, otherwise the opponent can use the cue in an effective counter-strategy. There is, however, a mutant strategy which can better a bluffer in the war of attrition game if a suitable asymmetry exists: the bourgeois strategy. Bourgeois uses an asymmetry of some sort to break the deadlock. In nature one such asymmetry is possession of a resource. The strategy is to play a hawk if in possession of the resource, but to display then retreat if not in possession. This requires greater cognitive capability than hawk, but bourgeois is common in many animal contests, such as in contests among mantis shrimps and among speckled wood butterflies. === Social behaviour === Games like hawk dove and war of attrition represent pure competition between individuals and have no attendant social elements. Where social influences apply, competitors have four possible alternatives for strategic interaction (conventionally diagrammed with a plus sign for a benefit received and a minus sign for a cost incurred). In a cooperative or mutualistic relationship both "donor" and "recipient" are almost indistinguishable, as both gain a benefit in the game by co-operating, i.e. the pair are in a game-wise situation where both can gain by executing a certain strategy, or alternatively both must act in concert because of some encompassing constraints that effectively put them "in the same boat". In an altruistic relationship the donor, at a cost to itself, provides a benefit to the recipient. In the general case the recipient will have a kin relationship to the donor, and the donation is one-way. Behaviours where benefits are donated alternately (in both directions) at a cost are often called "altruistic", but on analysis such "altruism" can be seen to arise from optimised "selfish" strategies. Spite is essentially a "reversed" form of altruism, where an ally is helped by damaging the ally's competitors. The general case is that the ally is kin-related and the benefit is an easier competitive environment for the ally. Note: George Price, one of the early mathematical modellers of both altruism and spite, found this equivalence particularly disturbing at an emotional level. Selfishness is the base criterion of all strategic choice from a game theory perspective – strategies not aimed at self-survival and self-replication are not long for any game. Critically, however, this situation is impacted by the fact that competition is taking place on multiple levels – i.e. at a genetic, an individual and a group level.
So it is ultimately genes that play out a full contest – selfish genes of strategy. The contesting genes are present in an individual and to a degree in all of the individual's kin. This can sometimes profoundly affect which strategies survive, especially with issues of cooperation and defection. William Hamilton, known for his theory of kin selection, explored many of these cases using game-theoretic models. Kin-related treatment of game contests helps to explain many aspects of the behaviour of social insects, the altruistic behaviour in parent-offspring interactions, mutual protection behaviours, and co-operative care of offspring. For such games, Hamilton defined an extended form of fitness – inclusive fitness, which includes an individual's offspring as well as any offspring equivalents found in kin. Hamilton went beyond kin relatedness to work with Robert Axelrod, analysing games of co-operation under conditions not involving kin where reciprocal altruism came into play. === Eusociality and kin selection === Eusocial insect workers forfeit reproductive rights to their queen. It has been suggested that kin selection, based on the genetic makeup of these workers, may predispose them to altruistic behaviours. Most eusocial insect societies have haplodiploid sexual determination, which means that workers are unusually closely related. This explanation of insect eusociality has, however, been challenged by a few highly noted evolutionary game theorists (Nowak and Wilson), who have published a controversial alternative game-theoretic explanation based on a sequential development and group selection effects proposed for these insect species. === Prisoner's dilemma === A difficulty of the theory of evolution, recognised by Darwin himself, was the problem of altruism. If the basis for selection is at an individual level, altruism makes no sense at all. But universal selection at the group level (for the good of the species, not the individual) fails to pass the test of the mathematics of game theory and is certainly not the general case in nature. Yet in many social animals, altruistic behaviour exists. The solution to this problem can be found in the application of evolutionary game theory to the prisoner's dilemma game – a game which tests the payoffs of cooperating or defecting from cooperation. It is the most studied game in all of game theory. The analysis of the prisoner's dilemma is as a repetitive game. This affords competitors the possibility of retaliating for defection in previous rounds of the game. Many strategies have been tested; the best competitive strategies are general cooperation, with a reserved retaliatory response if necessary. The most famous and one of the most successful of these is tit-for-tat, with a simple algorithm. The pay-off for any single round of the game is defined by the pay-off matrix for a single round game (see Example 1 below). In multi-round games the different choices – co-operate or defect – can be made in any particular round, resulting in a certain round payoff. It is, however, the possible accumulated pay-offs over the multiple rounds that count in shaping the overall pay-offs for differing multi-round strategies such as tit-for-tat. Example 1: The straightforward single round prisoner's dilemma game. The classic prisoner's dilemma game payoffs give a player a maximum payoff if they defect and their partner co-operates (this choice is known as temptation).
If, however, the player co-operates and their partner defects, they get the worst possible result (the sucker's payoff). In these payoff conditions the best choice (a Nash equilibrium) is to defect. Example 2: Prisoner's dilemma played repeatedly. The strategy employed is tit-for-tat, which alters behaviours based on the action taken by a partner in the previous round – i.e. reward co-operation and punish defection. The effect of this strategy in accumulated payoff over many rounds is to produce a higher payoff for both players' co-operation and a lower payoff for defection. This removes the temptation to defect. The sucker's payoff also becomes smaller, although "invasion" by a pure defection strategy is not entirely eliminated. === Routes to altruism === Altruism takes place when one individual, at a cost (C) to itself, exercises a strategy that provides a benefit (B) to another individual. The cost may consist of a loss of capability or resource which helps in the battle for survival and reproduction, or an added risk to its own survival. Altruism strategies can arise through: kin selection, where altruism toward relatives propagates the genes shared with them; direct reciprocity, where repeated encounters let co-operators reward one another; indirect reciprocity, where reputation lets co-operation extend to strangers; network (spatial) reciprocity, where clusters of co-operators reinforce one another; and group selection, where groups containing co-operators out-compete other groups. === The evolutionarily stable strategy === The evolutionarily stable strategy (ESS) is akin to the Nash equilibrium in classical game theory, but with mathematically extended criteria. Nash equilibrium is a game equilibrium where it is not rational for any player to deviate from their present strategy, provided that the others adhere to their strategies. An ESS is a state of game dynamics where, in a very large population of competitors, another mutant strategy cannot successfully enter the population to disturb the existing dynamic (which itself depends on the population mix). Therefore, a successful strategy (with an ESS) must be both effective against competitors when it is rare – to enter the previous competing population – and successful when later in high proportion in the population – to defend itself. This in turn means that the strategy must be successful when it contends with others exactly like itself. An ESS is not: An optimal strategy: that would maximize fitness, and many ESS states are far below the maximum fitness achievable in a fitness landscape. (See hawk dove graph above as an example of this.) A singular solution: often several ESS conditions can exist in a competitive situation. A particular contest might stabilize into any one of these possibilities, but later a major perturbation in conditions can move the solution into one of the alternative ESS states. Always present: it is possible for there to be no ESS. An evolutionary game with no ESS is "rock-scissors-paper", as found in species such as the side-blotched lizard (Uta stansburiana). An unbeatable strategy: the ESS is only an uninvadeable strategy. The ESS state can be solved for by exploring either the dynamics of population change to determine an ESS, or by solving equations for the stable stationary point conditions which define an ESS. For example, in the hawk dove game we can look for whether there is a static population mix condition where the fitness of doves will be exactly the same as the fitness of hawks (therefore both having equivalent growth rates – a static point). Let the chance of meeting a hawk be p, so the chance of meeting a dove is (1 − p). Let W(hawk) equal the expected payoff for a hawk: the payoff against a dove times the chance of meeting a dove, plus the payoff against a hawk times the chance of meeting a hawk. Taking the payoff matrix results and plugging them into the above equation: $$W_{\text{hawk}} = V\cdot(1-p) + \frac{V-C}{2}\cdot p.$$ Similarly for a dove:
$$W_{\text{dove}} = \frac{V}{2}\cdot(1-p).$$ Equating the two fitnesses, hawk and dove: $$V\cdot(1-p) + \frac{V-C}{2}\cdot p = \frac{V}{2}\cdot(1-p),$$ and solving for p gives $$p = \frac{V}{C}.$$ So for this "static point", where the population percentage is an ESS, the solution is ESS(percent hawk) = V/C. Similarly, using inequalities, it can be shown that an additional hawk or dove mutant entering this ESS state eventually results in less fitness for their kind – both a true Nash and an ESS equilibrium. This example shows that when the risks of contest injury or death (the cost C) are significantly greater than the potential reward (the benefit value V), the stable population will be mixed between aggressors and doves, and the proportion of doves will exceed that of the aggressors. This explains behaviours observed in nature. == Unstable games, cyclic patterns == === Rock paper scissors === Rock paper scissors incorporated into an evolutionary game has been used for modelling natural processes in the study of ecology. Using experimental economics methods, scientists have used RPS games to test human social evolutionary dynamical behaviours in laboratories. The social cyclic behaviours predicted by evolutionary game theory have been observed in various laboratory experiments. === Side-blotched lizard plays the RPS, and other cyclical games === The first example of RPS in nature was seen in the behaviours and throat colours of a small lizard of western North America. The side-blotched lizard (Uta stansburiana) is polymorphic with three throat-colour morphs that each pursue a different mating strategy: The orange throat is very aggressive and operates over a large territory – attempting to mate with numerous females. The unaggressive yellow throat mimics the markings and behavior of female lizards, and "sneakily" slips into the orange throat's territory to mate with the females there (thereby taking over the population). The blue throat mates with, and carefully guards, one female – making it impossible for the sneakers to succeed, and therefore overtaking their place in the population. However, the blue throats cannot overcome the more aggressive orange throats. Later work showed that the blue males are altruistic to other blue males, with three key traits: they signal with blue color, they recognize and settle next to other (unrelated) blue males, and they will even defend their partner against orange, to the death. This is the hallmark of another game of cooperation that involves a green-beard effect. The females in the same population have the same throat colours, and this affects how many offspring they produce and the size of the progeny, which generates cycles in density – yet another game, the r-K game. Here, r is the Malthusian parameter governing exponential growth, and K is the carrying capacity of the environment. Orange females have larger clutches and smaller offspring, which do well at low density. Yellow and blue females have smaller clutches and larger offspring, which do well at high density. This generates perpetual cycles tightly tied to population density. The idea of cycles due to density regulation of two strategies originated with rodent researcher Dennis Chitty; ergo, these kinds of games lead to "Chitty cycles". There are games within games within games embedded in natural populations. These drive RPS cycles in the males with a periodicity of four years and r-K cycles in females with a two-year period. The overall situation corresponds to the rock, scissors, paper game, creating a four-year population cycle.
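The cycling discussed in this section can be seen in the simplest zero-sum payoff matrix for rock paper scissors (a standard illustration of ours, not specific to the lizard data): $$A = \begin{pmatrix} 0 & -1 & 1 \\ 1 & 0 & -1 \\ -1 & 1 & 0 \end{pmatrix},$$ where each strategy beats one rival and loses to the other, so no pure strategy can be stable. Under replicator dynamics the population instead orbits around the interior Nash equilibrium $(\tfrac{1}{3}, \tfrac{1}{3}, \tfrac{1}{3})$, which is not an ESS – matching the endless orbits around the NE described next for the male lizard game.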
The RPS game in male side-blotched lizards does not have an ESS, but it has a Nash equilibrium (NE) with endless orbits around the NE attractor. Following this side-blotched lizard research, many other three-strategy polymorphisms have been discovered in lizards, and some of these have RPS dynamics merging the male game and density regulation game in a single sex (males). More recently, mammals have been shown to harbour the same RPS game in males and r-K game in females, with coat-colour polymorphisms and behaviours that drive cycles. This game is also linked to the evolution of male care in rodents, and monogamy, and drives speciation rates. There are r-K strategy games linked to rodent population cycles (and lizard cycles). When he read that these lizards were essentially engaged in a game with a rock-paper-scissors structure, John Maynard Smith is said to have exclaimed "They have read my book!". == Signalling, sexual selection and the handicap principle == Aside from the difficulty of explaining how altruism exists in many evolved organisms, Darwin was also bothered by a second conundrum – why a significant number of species have phenotypical attributes that are patently disadvantageous to them with respect to their survival – and should, by the process of natural selection, be selected against – e.g. the massive inconvenient feather structure found in a peacock's tail. Regarding this issue Darwin wrote to a colleague "The sight of a feather in a peacock's tail, whenever I gaze at it, makes me sick." It is the mathematics of evolutionary game theory that has not only explained the existence of altruism, but also explains the totally counterintuitive existence of the peacock's tail and other such biological encumbrances. On analysis, problems of biological life are not at all unlike the problems that define economics – eating (akin to resource acquisition and management), survival (competitive strategy) and reproduction (investment, risk and return). Game theory was originally conceived as a mathematical analysis of economic processes, and indeed this is why it has proven so useful in explaining so many biological behaviours. One important further refinement of the evolutionary game theory model that has economic overtones rests on the analysis of costs. A simple model of cost assumes that all competitors suffer the same penalty imposed by the game costs, but this is not the case. More successful players will be endowed with or will have accumulated a higher "wealth reserve" or "affordability" than less-successful players. This wealth effect in evolutionary game theory is represented mathematically by "resource holding potential (RHP)" and shows that the effective cost to a competitor with a higher RHP is not as great as for a competitor with a lower RHP. As a higher-RHP individual is a more desirable mate in producing potentially successful offspring, it is only logical that with sexual selection RHP should have evolved to be signalled in some way by the competing rivals, and for this to work this signalling must be done honestly. Amotz Zahavi has developed this thinking in what is known as the "handicap principle", where superior competitors signal their superiority by a costly display. As higher-RHP individuals can properly afford such a costly display, this signalling is inherently honest, and can be taken as such by the signal receiver. In nature this is well illustrated in the costly plumage of the peacock.
The mathematical proof of the handicap principle was developed by Alan Grafen using evolutionary game-theoretic modelling.

== Coevolution ==

Two types of dynamics have been discussed so far:

- Evolutionary games which lead to a stable situation or point of stasis for contending strategies which result in an evolutionarily stable strategy
- Evolutionary games which exhibit a cyclic behaviour (as with the RPS game) where the proportions of contending strategies continuously cycle over time within the overall population

A third, coevolutionary, dynamic combines intra-specific and inter-specific competition. Examples include predator-prey competition and host-parasite co-evolution, as well as mutualism. Evolutionary game models have been created for pairwise and multi-species coevolutionary systems. The general dynamic differs between competitive systems and mutualistic systems.

In competitive (non-mutualistic) inter-species coevolutionary systems the species are involved in an arms race – where adaptations that are better at competing against the other species tend to be preserved. Both game payoffs and replicator dynamics reflect this. This leads to a Red Queen dynamic where the protagonists must "run as fast as they can to just stay in one place".

A number of evolutionary game theory models have been produced to encompass coevolutionary situations. A key factor applicable in these coevolutionary systems is the continuous adaptation of strategy in such arms races. Coevolutionary modelling therefore often includes genetic algorithms to reflect mutational effects, while computers simulate the dynamics of the overall coevolutionary game. The resulting dynamics are studied as various parameters are modified. Because several variables are simultaneously at play, solutions become the province of multi-variable optimisation. The mathematical criteria for determining stable points are Pareto efficiency and Pareto dominance, a measure of solution optimality peaks in multivariable systems.

Carl Bergstrom and Michael Lachmann apply evolutionary game theory to the division of benefits in mutualistic interactions between organisms. Darwinian assumptions about fitness are modeled using replicator dynamics to show that the organism evolving at a slower rate in a mutualistic relationship gains a disproportionately high share of the benefits or payoffs.

== Extending the model ==

A mathematical model analysing the behaviour of a system needs initially to be as simple as possible, to aid in developing a base understanding of the fundamentals, or “first order effects”, pertaining to what is being studied. With this understanding in place, it is then appropriate to see if other, more subtle, parameters (second order effects) further impact the primary behaviours or shape additional behaviours in the system. Following Maynard Smith's seminal work in evolutionary game theory, the subject has had a number of very significant extensions which have shed more light on understanding evolutionary dynamics, particularly in the area of altruistic behaviours. Some of these key extensions to evolutionary game theory are:

=== Spatial games ===

Geographic factors in evolution include gene flow and horizontal gene transfer. Spatial game models represent geometry by putting contestants in a lattice of cells: contests take place only with immediate neighbours. Winning strategies take over these immediate neighbourhoods and then interact with adjacent neighbourhoods, as in the sketch below.
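A minimal lattice sketch in the spirit of such spatial models (the Nowak–May spatial prisoner's dilemma); the temptation payoff b = 1.65, the grid size, and the deterministic imitate-the-best update are illustrative assumptions rather than parameters from any study cited here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Spatial game: cooperators (1) and defectors (0) on a lattice.
b = 1.65          # defector's temptation payoff (illustrative value)
N = 50
grid = rng.integers(0, 2, size=(N, N))

def payoffs(grid):
    """Each cell plays a one-shot game with its four lattice neighbours:
    a cooperating neighbour yields 1 to a cooperator and b to a defector."""
    total = np.zeros_like(grid, dtype=float)
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        nbr = np.roll(grid, shift, axis=axis)
        total += np.where(grid == 1, nbr * 1.0, nbr * b)
    return total

def step(grid):
    """Every cell imitates the strategy of its best-scoring neighbour (or keeps
    its own strategy if it scored at least as well)."""
    score = payoffs(grid)
    best_strat, best_score = grid.copy(), score.copy()
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        nbr_score = np.roll(score, shift, axis=axis)
        nbr_strat = np.roll(grid, shift, axis=axis)
        better = nbr_score > best_score
        best_score = np.where(better, nbr_score, best_score)
        best_strat = np.where(better, nbr_strat, best_strat)
    return best_strat

for _ in range(100):
    grid = step(grid)

# For b in roughly (1.6, 1.8), clusters of co-operators typically persist.
print("fraction of co-operators:", grid.mean())
```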
This model is useful in showing how pockets of co-operators can invade and introduce altruism in the Prisoner's Dilemma game, where Tit for Tat (TFT) is a Nash equilibrium but NOT also an ESS. Spatial structure is sometimes abstracted into a general network of interactions. This is the foundation of evolutionary graph theory.

=== Effects of having information ===

In evolutionary game theory, as in conventional game theory, the effect of signalling (the acquisition of information) is of critical importance, as in indirect reciprocity in the Prisoner's Dilemma (where contests between the SAME paired individuals are NOT repetitive). This models the reality of most normal social interactions, which are non-kin related. Unless a probability measure of reputation is available in the Prisoner's Dilemma, only direct reciprocity can be achieved. With this information, indirect reciprocity is also supported.

Alternatively, agents might have access to an arbitrary signal that is initially uncorrelated with strategy but becomes correlated due to evolutionary dynamics. This is the green-beard effect (see side-blotched lizards, above) or the evolution of ethnocentrism in humans. Depending on the game, it can allow the evolution of either cooperation or irrational hostility.

From the molecular to the multicellular level, a signalling game model with information asymmetry between sender and receiver might be appropriate, such as in mate attraction or the evolution of translation machinery from RNA strings.

=== Finite populations ===

Many evolutionary games have been modelled in finite populations to see the effect this may have, for example in the success of mixed strategies.

== See also ==

== Notes ==

== References ==

== Further reading ==

Davis, Morton; "Game Theory – A Nontechnical Introduction", Dover Books, ISBN 0-486-29672-5
Dawkins, Richard (2006). The Selfish Gene (30th anniversary ed.). Oxford: Oxford University Press. ISBN 978-0-19-929115-1.
Dugatkin and Reeve; "Game Theory and Animal Behavior", Oxford University Press, ISBN 0-19-513790-6
Hofbauer and Sigmund; "Evolutionary Games and Population Dynamics", Cambridge University Press, ISBN 0-521-62570-X
Kohn, Marek; "A Reason for Everything", Faber and Faber, ISBN 0-571-22393-1
Li Richter and Lehtonen (eds.); "Half a century of evolutionary games: a synthesis of theory, application and future directions", Philosophical Transactions of the Royal Society B, Volume 378, Issue 1876
Sandholm, William H.; "Population Games and Evolutionary Dynamics", The MIT Press, ISBN 0262195879
Segerstrale, Ullica; "Nature's Oracle – The life and work of W.D. Hamilton", Oxford University Press, 2013, ISBN 978-0-19-860727-4
Sigmund, Karl; "Games of Life", Penguin Books, also Oxford University Press, 1993, ISBN 0198547838
Vincent and Brown; "Evolutionary Game Theory, Natural Selection and Darwinian Dynamics", Cambridge University Press, ISBN 0-521-84170-4

== External links ==

Theme issue 'Half a century of evolutionary games: a synthesis of theory, application and future directions' (2023)
Evolutionary game theory at the Stanford Encyclopedia of Philosophy
Evolving Artificial Moral Ecologies at The Centre for Applied Ethics, University of British Columbia
"Life and work of John Maynard Smith, interviewed by Richard Dawkins". Web of Stories. 1997 – via YouTube.
Wikipedia/Evolutionary_Game_Theory
Price controls are restrictions, set in place and enforced by governments, on the prices that can be charged for goods and services in a market. The intent behind implementing such controls can stem from the desire to maintain affordability of goods even during shortages and to slow inflation, or, alternatively, to ensure a minimum income for providers of certain goods or to try to achieve a living wage. There are two primary forms of price control: a price ceiling, the maximum price that can be charged; and a price floor, the minimum price that can be charged. A well-known example of a price ceiling is rent control, which limits the increases that a landlord is permitted by government to charge for rent. A widely used price floor is the minimum wage (wages are the price of labor). Historically, price controls have often been imposed as part of a larger incomes policy package also employing wage controls and other regulatory elements.

Although price controls are routinely used by governments, Western economists generally agree that consumer price controls do not accomplish what they intend to in market economies, and many economists instead recommend that such controls be avoided; however, since the credibility revolution started in the 1990s, minimum wages have found strong support among some economists.

== History ==

The Roman Emperor Diocletian tried to set maximum prices for all commodities in the late 3rd century AD, but with little success. In the early 14th century, the Delhi Sultanate ruler Alauddin Khalji instituted several market reforms, which included price-fixing for a wide range of goods, including grains, cloth, slaves and animals. However, a few months after his death, these measures were revoked by his son Qutbuddin Mubarak Shah. During the French Revolution, the Law of the Maximum set price limits on the sale of food and other staples. In Spain in the 16th and 17th centuries, after the price revolution, a permanent regulation on the price of wheat (called the tasa del trigo) was established. This intervention was discussed by the theologians and jurists of the time.

Governments in planned economies typically control prices on most or all goods, but such economies have not sustained high economic performance and have been almost entirely replaced by mixed economies. Price controls have also been used in modern times in less-planned economies, such as rent control. During World War I, the United States Food Administration enforced price controls on food. Price controls were also imposed in the US and Nazi Germany during World War II.

=== Postwar ===

Wage controls have been tried in many countries to reduce inflation, seldom with success. Since inflation can be driven by either aggregate supply or aggregate demand, wage controls can fail as a result of supply shocks or of excessive stimulus (increases to the monetary aggregate M2) during times of high sovereign debt.

==== United Kingdom ====

The National Board for Prices and Incomes was created by the government of Harold Wilson in 1965 in an attempt to solve the problem of inflation in the British economy by managing wages and prices. The Prices and Incomes Act 1966 (c. 33) affected UK labour law regarding wage levels and price policies. It allowed the government to begin a process to scrutinise rising levels of wages (then around 8% per year) by initiating reports and inquiries, and ultimately giving orders for a standstill. The objective was to control inflation. It proved unpopular after the 1960s.
==== United States ====

In the United States, price controls have been enacted several times. The first time price controls were enacted nationally was in 1906, as a part of the Hepburn Act. In World War I, the War Industries Board was established to set priorities, fix prices, and standardize products to support the war efforts of the United States. During the 1930s, the National Industrial Recovery Act (NIRA) created the National Recovery Administration, which set prices and created codes of "fair practices". In May 1935, the Supreme Court held that the mandatory codes section of NIRA was unconstitutional, in the court case of Schechter Poultry Corp. v. United States. During World War II, the Office of Price Administration handled price controls. During the Korean War, the Economic Stabilization Agency instituted price controls. In 1971, President Richard Nixon issued Executive Order 11615 (pursuant to the Economic Stabilization Act of 1970), imposing a 90-day freeze on wages and prices. The constitutionality of this action was challenged and upheld in the case of Amalgamated Meat Cutters v. Connally.

The individual states have sometimes chosen to implement their own control policies. In the 1860s, several midwestern states of the United States, namely Minnesota, Iowa, Wisconsin, and Illinois, enacted a series of laws called the Granger Laws, primarily to regulate the rising fare prices of railroad and grain elevator companies. The state of Hawaii briefly introduced a cap on the wholesale price of gasoline (the Gas Cap Law) in an effort to fight "price gouging" in that state in 2005. Because it was widely seen as too soft and ineffective, it was repealed shortly thereafter.

==== Venezuela ====

According to Girish Gupta from The Guardian, price controls have created a scarcity of basic goods and made black markets flourish under President Nicolás Maduro.

==== India ====

In India, the government first enacted price controls in 2013 through the Drug Price Control Order (DPCO). This order gave the local regulatory body and the Pharmaceutical Pricing Authority the power to set ceiling prices on the National List of Essential Medicines.

==== Sri Lanka ====

In Sri Lanka, the Consumer Affairs Authority has the power to set the Maximum Retail Price (MRP) for goods specified by the government as essential commodities. In 2021, the Sri Lankan government enacted price controls on several essential items, resulting in shortages.

== Price floor ==

A price floor is a government- or group-imposed price control or limit on how low a price can be charged for a product, good, commodity, or service. A price floor must be higher than the equilibrium price in order to be effective. The equilibrium price, commonly called the "market price", is the price where economic forces such as supply and demand are balanced; in the absence of external influences, the (equilibrium) values of economic variables will not change. It is often described as the point at which quantity demanded and quantity supplied are equal (in a perfectly competitive market). Governments use price floors to keep certain prices from going too low. Two common price floors are minimum wage laws and supply management in Canadian agriculture. Other price floors include regulated US airfares prior to 1978 and minimum price-per-drink laws for alcohol. Since the credibility revolution starting in the 1990s, minimum wages have often found strong support among economists.

Advantages of a price floor are:

- May motivate producers to produce more.
- May prevent the fluctuation of prices of agricultural products.
- May reduce the over-exploitation of producers.
- May reduce poverty and increase productivity among employees (in the case of minimum wages).

Disadvantages of a price floor are:

- Supply may exceed demand.
- Resources may be wasted.
- The government may be forced to buy the excess supply, or it may be discarded (e.g., in an agricultural context).
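The binding conditions above can be made concrete with a toy linear market; the demand and supply parameters below are arbitrary illustrative assumptions, and the quantity actually traded at a controlled price is the short side of the market (the smaller of quantity demanded and quantity supplied):

```python
# Linear demand and supply: Qd = a - b*P, Qs = c + d*P (illustrative parameters).
a, b = 100.0, 2.0   # demand intercept and slope
c, d = 10.0, 1.0    # supply intercept and slope

p_star = (a - c) / (b + d)   # equilibrium price: Qd == Qs
q_star = a - b * p_star      # equilibrium quantity
print(f"equilibrium: P* = {p_star:.1f}, Q* = {q_star:.1f}")   # P* = 30.0, Q* = 40.0

def traded(price):
    """Quantity actually traded at a controlled price: the short side of the market."""
    return min(a - b * price, c + d * price)

floor = 40.0    # binding price floor above P*: suppliers offer more than buyers want
print("surplus at floor:", (c + d * floor) - (a - b * floor))          # 30.0 units

ceiling = 20.0  # binding price ceiling below P*: buyers want more than is supplied
print("shortage at ceiling:", (a - b * ceiling) - (c + d * ceiling))   # 30.0 units
```

A floor set below the equilibrium price, or a ceiling set above it, would not bind: the market would simply trade at P*. This is why a price floor must sit above, and a price ceiling below, the equilibrium price to have any effect.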
== Price ceiling ==

A related government intervention, which is also a price control, is the price ceiling; it sets the maximum price that can legally be charged for a good or service, with a common example being rent control. A price ceiling is a price control, or limit, on how high a price can be charged for a product, commodity, or service. Governments use price ceilings to protect consumers from conditions that could make commodities prohibitively expensive. Such conditions can occur during periods of high inflation, in the event of an investment bubble, or in the event of monopoly ownership of a product; all of these can cause problems if a ceiling is imposed for a long period without controlled rationing, leading to shortages. Further problems can occur if a government sets unrealistic price ceilings, causing business failures, stock crashes, or even economic crises. In fully unregulated market economies, price ceilings do not exist.

While price ceilings are often imposed by governments, there are also price ceilings implemented by non-governmental organizations such as companies, as in the practice of resale price maintenance. With resale price maintenance, a manufacturer and its distributors agree that the distributors will sell the manufacturer's product at certain prices (resale price maintenance), at or below a price ceiling (maximum resale price maintenance) or at or above a price floor.

== Criticism ==

The primary criticism leveled against the price ceiling type of price control is that by keeping prices artificially low, demand is increased to the point where supply cannot keep up, leading to shortages in the price-controlled product. For example, Lactantius wrote that Diocletian "by various taxes, he had made all things exceedingly expensive, attempted by a law to limit their prices. Then much blood [of merchants] was shed for trifles, men were afraid to offer anything for sale, and the scarcity became more excessive and grievous than ever. Until, in the end, the [price limit] law, after having proved destructive to many people, was from mere necessity abolished." As with Diocletian's Edict on Maximum Prices, shortages lead to black markets where prices for the same good exceed those of an uncontrolled market. Furthermore, once controls are removed, prices will immediately increase, which can temporarily shock the economic system.

Black markets flourish in most countries during wartime. States that are engaged in total war or other large-scale, extended wars often impose restrictions on home use of critical resources that are needed for the war effort, such as food, gasoline, rubber, metal, etc., typically through rationing. In most cases, a black market develops to supply rationed goods at exorbitant prices. The rationing and price controls enforced in many countries during World War II encouraged widespread black market activity. One source of black-market meat under wartime rationing was farmers declaring fewer domestic animal births to the Ministry of Food than actually happened.
Another source in Britain was supplies from the US, intended only for use in US army bases on British land, which leaked into the local British black market.

A classic example of how price controls cause shortages was during the Arab oil embargo between October 19, 1973, and March 17, 1974. Long lines of cars and trucks quickly appeared at retail gas stations in the U.S., and some stations closed because of a shortage of fuel at the low price set by the U.S. Cost of Living Council. The fixed price was below what the market would otherwise bear and, as a result, the inventory disappeared. It made no difference whether prices were voluntarily or involuntarily posted below the market-clearing price. Scarcity resulted in either case.

Some opponents have criticised price controls as failing to achieve even their proximate aim – reducing the prices paid by retail consumers – because in certain circumstances such controls cause side effects which reduce supply. Nobel Memorial Prize winner Milton Friedman said, "We economists don't know much, but we do know how to create a shortage. If you want to create a shortage of tomatoes, for example, just pass a law that retailers can't sell tomatoes for more than two cents per pound. Instantly you'll have a tomato shortage. It's the same with oil or gas."

U.S. President Richard Nixon's Secretary of the Treasury, George Shultz, enacting Nixon's "New Economic Policy", lifted the price controls that had begun in 1971 (part of the "Nixon Shock"). This lifting of price controls resulted in a rapid increase in prices. Price freezes were re-established five months later. Stagflation was eventually ended in the United States when the Federal Reserve, under chairman Paul Volcker, raised interest rates to unusually high levels. This successfully ended high inflation, but caused a recession that ended in the early 1980s.

== See also ==

Administered prices
Capital control
Council on Wage and Price Stability
Global energy crisis (2021–2023)
Maximum retail price
Monopsony
Price ceiling

== References ==

== Further reading ==

Boudreaux, Donald (2008). "Price Controls". In Hamowy, Ronald (ed.). The Encyclopedia of Libertarianism. Thousand Oaks, CA: Sage; Cato Institute. pp. 389–390. doi:10.4135/9781412965811.n241. ISBN 978-1412965804. LCCN 2008009151. OCLC 750831024.
Dworczak, Piotr; Kominers, Scott Duke; Akbarpour, Mohammad (2021). "Redistribution Through Markets". Econometrica. 89 (4): 1665–1698.
Wikipedia/Price_controls
In game theory, the graphical form or graphical game is an alternate compact representation of strategic interactions that efficiently models situations where players' outcomes depend only on a subset of other players. First formalized by Michael Kearns, Michael Littman, and Satinder Singh in 2001, this approach complements traditional representations such as the normal form and extensive form by leveraging concepts from graph theory to achieve more concise game descriptions.

In a graphical game representation, players are depicted as nodes in a graph, with edges connecting players whose decisions directly affect each other. Each player's utility function depends only on their own strategy and the strategies of their immediate neighbors in the graph, rather than on all players' actions. This framework is particularly valuable for modeling social network interactions, economic networks, and localized competitive scenarios where players primarily respond to those in their immediate vicinity.

The graphical approach offers significant advantages when representing large games with limited interaction patterns, as it can exponentially reduce the amount of information needed to fully describe the game. This compact representation facilitates more efficient computational analysis for complex multi-agent systems across fields such as artificial intelligence, economics, and network science.

== Formal definition ==

A graphical game is represented by a graph G {\displaystyle G} , in which each player is represented by a node, and there is an edge between two nodes i {\displaystyle i} and j {\displaystyle j} iff their utility functions are dependent on the strategy which the other player will choose. Each node i {\displaystyle i} in G {\displaystyle G} has a function u i : { 1 … m } d i + 1 → R {\displaystyle u_{i}:\{1\ldots m\}^{d_{i}+1}\rightarrow \mathbb {R} } , where d i {\displaystyle d_{i}} is the degree of vertex i {\displaystyle i} . u i {\displaystyle u_{i}} specifies the utility of player i {\displaystyle i} as a function of his strategy as well as those of his neighbors.

== The size of the game's representation ==

For a general game with n {\displaystyle n} players, in which each player has m {\displaystyle m} possible strategies, the size of a normal form representation would be O ( m n ) {\displaystyle O(m^{n})} . The size of the graphical representation is O ( n m d + 1 ) {\displaystyle O(nm^{d+1})} , where d {\displaystyle d} is the maximal node degree in the graph, since each player's table has m d i + 1 {\displaystyle m^{d_{i}+1}} entries. If d ≪ n {\displaystyle d\ll n} , then the graphical game representation is much smaller.

== An example ==

In a case where each player's utility function depends only on one other player, the maximal degree of the graph is 1, and the game can be described as n {\displaystyle n} functions (tables) of size m 2 {\displaystyle m^{2}} . So, the total size of the input will be n m 2 {\displaystyle nm^{2}} .

== Nash equilibrium ==

Finding a Nash equilibrium in a general game takes time exponential in the size of the representation. If the graphical representation of the game is a tree, we can find an (approximate) equilibrium in polynomial time. In the general case, where the maximal degree of a node is 3 or more, the problem is NP-complete.

== References ==

Michael Kearns (2007). "Graphical Games". In Vazirani, Vijay V.; Nisan, Noam; Roughgarden, Tim; Tardos, Éva (2007). Algorithmic Game Theory (PDF). Cambridge, UK: Cambridge University Press. ISBN 0-521-87282-0.
Michael Kearns, Michael L. Littman and Satinder Singh (2001). "Graphical Models for Game Theory".
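To make the size comparison concrete, the following sketch builds a toy graphical game; the ring topology, the random payoff tables, and all helper names are assumptions made for illustration. Each player stores only a local table over its own and its neighbours' actions, and even a brute-force search for a pure Nash equilibrium needs to consult nothing else:

```python
import itertools
import numpy as np

# Toy graphical game: n players on a ring, each caring only about its two
# neighbours; m actions per player. All values here are illustrative.
n, m = 8, 3
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
rng = np.random.default_rng(1)

# One local payoff table per player, indexed by (own action, neighbour actions).
# Each table has m**(d_i + 1) = 27 entries, versus m**n per player in normal form.
tables = {i: rng.random((m,) * (len(neighbours[i]) + 1)) for i in range(n)}

def utility(i, profile):
    """Player i's utility depends only on i's action and its neighbours' actions."""
    idx = (profile[i],) + tuple(profile[j] for j in neighbours[i])
    return tables[i][idx]

def is_pure_nash(profile):
    """Check all unilateral deviations; only the local tables are consulted."""
    return all(
        utility(i, profile) >= max(
            utility(i, profile[:i] + (a,) + profile[i + 1:]) for a in range(m))
        for i in range(n))

graphical_size = sum(t.size for t in tables.values())   # 8 * 27 = 216 numbers
normal_form_size = n * m ** n                            # 8 * 6561 = 52488 numbers
print(graphical_size, normal_form_size)

# Exhaustive search over all m**n profiles (fine at this toy size).
eq = next((p for p in itertools.product(range(m), repeat=n) if is_pure_nash(p)), None)
print("pure NE found:", eq)
```

With random payoffs a pure equilibrium need not exist (the search then yields None); mixed equilibria always exist but require other algorithms, such as the tree algorithm of Kearns, Littman and Singh noted above.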
Wikipedia/Graphical_game_theory
In mechanism design, monotonicity is a property of a social choice function. It is a necessary condition for being able to implement such a function using a strategyproof mechanism. Its verbal description is:

If changing one agent's type (while keeping the types of other agents fixed) changes the outcome under the social choice function, then the resulting difference in utilities of the new and original outcomes evaluated at the new type of this agent must be at least as much as this difference in utilities evaluated at the original type of this agent.

In other words (p. 227):

If the social choice changes when a single player changes his valuation, then it must be because the player increased his value of the new choice relative to his value of the old choice.

== Notation ==

There is a set X {\displaystyle X} of possible outcomes. There are n {\displaystyle n} agents which have different valuations for each outcome. The valuation of agent i {\displaystyle i} is represented as a function v i : X ⟶ R + {\displaystyle v_{i}:X\longrightarrow R_{+}} which expresses the value it assigns to each alternative. The vector of all value-functions is denoted by v {\displaystyle v} . For every agent i {\displaystyle i} , the vector of all value-functions of the other agents is denoted by v − i {\displaystyle v_{-i}} . So v ≡ ( v i , v − i ) {\displaystyle v\equiv (v_{i},v_{-i})} .

A social choice function is a function that takes as input the value-vector v {\displaystyle v} and returns an outcome x ∈ X {\displaystyle x\in X} . It is denoted by Outcome ( v ) {\displaystyle {\text{Outcome}}(v)} or Outcome ( v i , v − i ) {\displaystyle {\text{Outcome}}(v_{i},v_{-i})} .

== In mechanisms without money ==

A social choice function satisfies the strong monotonicity property (SMON) if for every agent i {\displaystyle i} and every v i , v i ′ , v − i {\displaystyle v_{i},v_{i}',v_{-i}} , if:

x = Outcome ( v i , v − i ) {\displaystyle x={\text{Outcome}}(v_{i},v_{-i})}
x ′ = Outcome ( v i ′ , v − i ) {\displaystyle x'={\text{Outcome}}(v'_{i},v_{-i})}

then:

x ⪰ i x ′ {\displaystyle x\succeq _{i}x'} (by the initial preferences, the agent prefers the initial outcome);
x ⪯ i ′ x ′ {\displaystyle x\preceq _{i'}x'} (by the final preferences, the agent prefers the final outcome).

Or equivalently:

v i ( x ) − v i ( x ′ ) ≥ 0 {\displaystyle v_{i}(x)-v_{i}(x')\geq 0}
v i ′ ( x ) − v i ′ ( x ′ ) ≤ 0 {\displaystyle v_{i}'(x)-v_{i}'(x')\leq 0}

=== Necessity ===

If there exists a strategyproof mechanism without money, with an outcome function Outcome {\displaystyle {\text{Outcome}}} , then this function must be SMON.

PROOF: Fix some agent i {\displaystyle i} and some valuation vector v − i {\displaystyle v_{-i}} . Strategyproofness means that an agent with real valuation v i {\displaystyle v_{i}} weakly prefers to declare v i {\displaystyle v_{i}} rather than to lie and declare v i ′ {\displaystyle v_{i}'} ; hence: v i ( x ) ≥ v i ( x ′ ) {\displaystyle v_{i}(x)\geq v_{i}(x')} . Similarly, an agent with real valuation v i ′ {\displaystyle v_{i}'} weakly prefers to declare v i ′ {\displaystyle v_{i}'} rather than to lie and declare v i {\displaystyle v_{i}} ; hence: v i ′ ( x ′ ) ≥ v i ′ ( x ) {\displaystyle v_{i}'(x')\geq v_{i}'(x)} .

== In mechanisms with money ==

When the mechanism is allowed to use money, the SMON property is no longer necessary for implementability, since the mechanism can switch to an alternative which is less preferable for an agent and compensate that agent with money.
A social choice function satisfies the weak monotonicity property (WMON) if for every agent i {\displaystyle i} and every v i , v i ′ , v − i {\displaystyle v_{i},v_{i}',v_{-i}} , if:

x = Outcome ( v i , v − i ) {\displaystyle x={\text{Outcome}}(v_{i},v_{-i})}
x ′ = Outcome ( v i ′ , v − i ) {\displaystyle x'={\text{Outcome}}(v'_{i},v_{-i})}

then:

v i ( x ) − v i ( x ′ ) ≥ v i ′ ( x ) − v i ′ ( x ′ ) {\displaystyle v_{i}(x)-v_{i}(x')\geq v_{i}'(x)-v_{i}'(x')}

=== Necessity ===

If there exists a strategyproof mechanism with an outcome function Outcome {\displaystyle {\text{Outcome}}} , then this function must be WMON.

PROOF (p. 227): Fix some agent i {\displaystyle i} and some valuation vector v − i {\displaystyle v_{-i}} . A strategyproof mechanism has a price function Price i ( x , v − i ) {\displaystyle {\text{Price}}_{i}(x,v_{-i})} that determines how much payment agent i {\displaystyle i} receives when the outcome of the mechanism is x {\displaystyle x} ; this price depends on the outcome but must not depend directly on v i {\displaystyle v_{i}} . Strategyproofness means that a player with valuation v i {\displaystyle v_{i}} weakly prefers to declare v i {\displaystyle v_{i}} over declaring v i ′ {\displaystyle v_{i}'} ; hence:

v i ( x ) + Price i ( x , v − i ) ≥ v i ( x ′ ) + Price i ( x ′ , v − i ) {\displaystyle v_{i}(x)+{\text{Price}}_{i}(x,v_{-i})\geq v_{i}(x')+{\text{Price}}_{i}(x',v_{-i})}

Similarly, a player with valuation v i ′ {\displaystyle v_{i}'} weakly prefers to declare v i ′ {\displaystyle v_{i}'} over declaring v i {\displaystyle v_{i}} ; hence:

v i ′ ( x ) + Price i ( x , v − i ) ≤ v i ′ ( x ′ ) + Price i ( x ′ , v − i ) {\displaystyle v_{i}'(x)+{\text{Price}}_{i}(x,v_{-i})\leq v_{i}'(x')+{\text{Price}}_{i}(x',v_{-i})}

Subtracting the second inequality from the first gives the WMON property.

=== Sufficiency ===

Monotonicity is not always a sufficient condition for implementability, but there are some important cases in which it is sufficient (i.e., every WMON social-choice function can be implemented):

- When the agents have single-parameter utility functions.
- In many convex domains, most notably when the range of each value-function is R + {\displaystyle \mathbb {R} ^{+}} .
- When the range of each value-function is R {\displaystyle \mathbb {R} } , or a cube (Gui, Müller, and Vohra (2004)).
- In any convex domain (Saks and Yu (2005)).
- In any domain with a convex closure.
- In any "monotonicity domain".

== Examples ==

When agents have single-peaked preferences, the median social-choice function (selecting the median among the outcomes that are best for the agents) is strongly monotonic. Indeed, the mechanism selecting the median vote is a truthful mechanism without money; see median voting rule.

When agents have general preferences represented by cardinal utility functions, the utilitarian social-choice function (selecting the outcome that maximizes the sum of the agents' valuations) is not strongly monotonic, but it is weakly monotonic. Indeed, it can be implemented by the VCG mechanism, which is a truthful mechanism with money.

In job scheduling, the makespan-minimization social-choice function is neither strongly nor weakly monotonic. Indeed, it cannot be implemented by a truthful mechanism; see truthful job scheduling.

== See also ==

The monotonicity criterion in voting systems.
Maskin monotonicity
Other meanings of monotonicity in different fields.

== References ==
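As a concrete illustration of the WMON inequality, consider the social choice function that allocates a single item to the higher-value of two agents; this rule is implementable with money (via a second-price payment), consistent with the necessity result above. The sketch below is a toy, with the setting and names assumed for illustration; it samples changes in agent 1's valuation and checks the WMON condition numerically:

```python
import random

random.seed(0)

# Toy setting (an assumption for this sketch): one item, two agents; the
# social choice allocates the item to the higher-value agent.
def outcome(v1, v2):
    return 1 if v1 >= v2 else 2

def u(value, out, agent):
    """An agent values an outcome at `value` if it wins the item, else 0."""
    return value if out == agent else 0.0

# WMON for agent 1: when v1 changes to v1p (v2 fixed), with x = Outcome(v1, v2)
# and x' = Outcome(v1p, v2), we need  u_v1(x) - u_v1(x') >= u_v1p(x) - u_v1p(x').
for _ in range(100_000):
    v1, v1p, v2 = (random.uniform(0, 1) for _ in range(3))
    x, xp = outcome(v1, v2), outcome(v1p, v2)
    lhs = u(v1, x, 1) - u(v1, xp, 1)
    rhs = u(v1p, x, 1) - u(v1p, xp, 1)
    assert lhs >= rhs - 1e-12   # small tolerance for float comparison

print("WMON held on all sampled valuation profiles")
```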
Wikipedia/Monotonicity_(mechanism_design)
The theory of the firm consists of a number of economic theories that explain and predict the nature of the firm, company, or corporation, including its existence, behaviour, structure, and relationship to the market. Firms are key drivers in economics, providing goods and services in return for monetary payments and rewards. Organisational structure, incentives, employee productivity, and information all influence the successful operation of a firm in the economy and within itself. As such, major economic theories such as transaction cost theory, managerial economics and the behavioural theory of the firm allow for an in-depth analysis of various firm and management types.

== Overview ==

In simplified terms, the theory of the firm aims to answer these questions:

- Existence. Why do firms emerge? Why are not all transactions in the economy mediated over the market?
- Boundaries. Why is the boundary between firms and the market located exactly there in relation to size and output variety? Which transactions are performed internally and which are negotiated on the market?
- Organization. Why are firms structured in such a specific way, for example as to hierarchy or decentralization? What is the interplay of formal and informal relationships?
- Heterogeneity of firm actions/performances. What drives different actions and performances of firms?
- Evidence. What tests are there for the respective theories of the firm?

Firms exist as an alternative system to the market-price mechanism when it is more efficient to produce in a non-market environment. For example, in a labour market, it might be very difficult or costly for firms or organizations to engage in production when they have to hire and fire their workers depending on demand/supply conditions. It might also be costly for employees to shift companies every day looking for better alternatives. Similarly, it may be costly for companies to find new suppliers daily. Thus, firms engage in long-term contracts with their employees or long-term contracts with suppliers to minimize the cost or maximize the value of property rights.

== Background ==

The First World War period saw a change of emphasis in economic theory away from industry-level analysis, which mainly involved analyzing markets, to analysis at the level of the firm, as it became increasingly clear that perfect competition was no longer an adequate model of how firms behaved. Economic theory until then had focused on trying to understand markets alone, and there had been little study of why firms or organisations exist. Markets are guided by prices and quality, as illustrated by vegetable markets where a buyer is free to switch sellers in an exchange. The need for a revised theory of the firm was emphasized by empirical studies by Adolf Berle and Gardiner Means, who made it clear that ownership of a typical American corporation is spread over a wide number of shareholders, leaving control in the hands of managers who own very little equity themselves. R. L. Hall and Charles J. Hitch found that executives made decisions by rule of thumb rather than in the marginalist way.

== Transaction cost theory ==

According to Ronald Coase's essay "The Nature of the Firm", people begin to organise their production in firms when the transaction cost of coordinating production through the market exchange, given imperfect information, is greater than within the firm.
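Coase's comparison can be phrased as a one-line decision rule. The sketch below is a toy under assumed cost schedules (a constant per-transaction market cost, and an internal organising cost that rises with firm size, anticipating the diminishing returns to management discussed later); neither schedule comes from Coase:

```python
# Toy Coasean margin: internalise one more transaction only while organising it
# inside the firm is cheaper than buying it over the market.
MARKET_COST = 5.0   # assumed constant per-transaction cost of the price mechanism

def organising_cost(k):
    """Assumed marginal cost of organising the k-th transaction internally;
    it rises with firm size (diminishing returns to management)."""
    return 1.0 + 0.1 * k

k = 0
while organising_cost(k + 1) < MARKET_COST:
    k += 1

print("transactions internalised:", k)   # the firm stops growing at the margin
```

The firm's boundary sits where the two marginal costs meet: anything that lowers market transaction costs shrinks the firm, while anything that raises them (such as the sales taxes Coase mentions) enlarges it.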
Ronald Coase set out his transaction cost theory of the firm in 1937, making it one of the first (neo-classical) attempts to define the firm theoretically in relation to the market. One aspect of its 'neoclassicism' lies in presenting an explanation of the firm consistent with constant returns to scale, rather than relying on increasing returns to scale. Another is in defining a firm in a manner which is both realistic and compatible with the idea of substitution at the margin, so instruments of conventional economic analysis apply. He notes that a firm's interactions with the market may not be under its control (for instance because of sales taxes), but its internal allocation of resources is: "Within a firm, … market transactions are eliminated and in place of the complicated market structure with exchange transactions is substituted the entrepreneur … who directs production." He asks why alternative methods of production (such as the price mechanism and economic planning) do not each achieve all production, so that either firms use internal prices for all their production, or one big firm runs the entire economy.

Coase begins from the standpoint that markets could in theory carry out all production, and that what needs to be explained is the existence of the firm, with its "distinguishing mark … [of] the supersession of the price mechanism." Coase identifies some reasons why firms might arise, and dismisses each as unimportant:

- if some people prefer to work under direction and are prepared to pay for the privilege (but this is unlikely);
- if some people prefer to direct others and are prepared to pay for this (but generally people are paid more to direct others);
- if purchasers prefer goods produced by firms.

Instead, for Coase the main reason to establish a firm is to avoid some of the transaction costs of using the price mechanism. These include discovering relevant prices (which can be reduced but not eliminated by purchasing this information through specialists), as well as the costs of negotiating and writing enforceable contracts for each transaction (which can be large if there is uncertainty). Moreover, contracts in an uncertain world will necessarily be incomplete and have to be frequently re-negotiated. The costs of haggling about the division of surplus, particularly if there is asymmetric information and asset specificity, may be considerable.

If a firm operated internally under the market system, many contracts would be required (for instance, even for procuring a pen or delivering a presentation). In contrast, a real firm has very few (though much more complex) contracts, such as defining a manager's power of direction over employees, in exchange for which the employee is paid. These kinds of contracts are drawn up in situations of uncertainty, in particular for relationships that last over long periods of time. Such a situation runs counter to neo-classical economic theory. The neo-classical market is instantaneous, forbidding the development of extended agent-principal (employee-manager) relationships, of planning, and of trust. Coase concludes that "a firm is likely therefore to emerge in those cases where a very short-term contract would be unsatisfactory", and that "it seems improbable that a firm would emerge without the existence of uncertainty". He notes that government measures relating to the market (sales taxes, rationing, price controls) tend to increase the size of firms, since firms internally would not be subject to such transaction costs.
Thus, Coase defines the firm as "the system of relationships which comes into existence when the direction of resources is dependent on the entrepreneur." We can therefore think of a firm as getting larger or smaller based on whether the entrepreneur organises more or fewer transactions.

The question then arises of what determines the size of the firm: why does the entrepreneur organise the transactions he does, and why no more or fewer? Since the reason for the firm's being is to have lower costs than the market, the upper limit on the firm's size is set by costs rising to the point where internalising an additional transaction equals the cost of making that transaction in the market. (At the lower limit, the firm's costs exceed the market's costs, and it does not come into existence.) In practice, diminishing returns to management contribute most to raising the costs of organising a large firm, particularly in large firms with many different plants and differing internal transactions (such as a conglomerate), or if the relevant prices change frequently. Coase concludes by saying that the size of the firm is dependent on the costs of using the price mechanism, and on the costs of organisation of other entrepreneurs. These two factors together determine how many products a firm produces and how much of each.

=== Reconsiderations of transaction cost theory ===

According to Louis Putterman, most economists accept the distinction between intra-firm and inter-firm transactions, but also recognise that the two shade into each other; the extent of a firm is not simply defined by its capital stock. George Barclay Richardson, for example, notes that a rigid distinction fails because of the existence of intermediate forms between firm and market, such as inter-firm co-operation. Klein (1983) asserts that "Economists now recognise that such a sharp distinction does not exist and that it is useful to consider also transactions occurring within the firm as representing market (contractual) relationships." The costs involved in such transactions, whether within a firm or between firms, are the transaction costs.

Ultimately, whether the firm constitutes a domain of bureaucratic direction that is shielded from market forces or simply "a legal fiction", "a nexus for a set of contracting relationships among individuals" (as Jensen and Meckling put it), is "a function of the completeness of markets and the ability of market forces to penetrate intra-firm relationships".

== Managerial and behavioural theories ==

It was only in the 1960s that the neo-classical theory of the firm was seriously challenged by alternatives such as managerial and behavioural theories. Managerial theories of the firm, as developed by William Baumol (1959 and 1962), Robin Marris (1964) and Oliver E. Williamson (1966), suggest that managers would seek to maximise their own utility, and consider the implications of this for firm behaviour in contrast to the profit-maximising case. (Baumol suggested that managers' interests are best served by maximising sales after achieving a minimum level of profit which satisfies shareholders.) More recently this has developed into 'principal–agent' analysis (e.g., Spence and Zeckhauser, and Ross (1973), on problems of contracting with asymmetric information), which models a widely applicable case where a principal (a shareholder or firm, for example) cannot costlessly infer how an agent (a manager or supplier, say) is behaving.
This may arise either because the agent has greater expertise or knowledge than the principal, or because the principal cannot directly observe the agent's actions; it is asymmetric information that leads to a problem of moral hazard. This means that to an extent managers can pursue their own interests. Traditional managerial models typically assume that managers, instead of maximising profit, maximise a simple objective utility function (this may include salary, perks, security, power, prestige), subject to an arbitrarily given profit constraint (profit satisficing).

=== Behavioural approach ===

The behavioural approach, as developed in particular by Richard Cyert and James G. March of the Carnegie School, places emphasis on explaining how decisions are taken within the firm, and goes well beyond neoclassical economics. Much of this depended on Herbert A. Simon's work in the 1950s concerning behaviour in situations of uncertainty, which argued that "people possess limited cognitive ability and so can exercise only 'bounded rationality' when making decisions in complex, uncertain situations". Thus individuals and groups tend to "satisfice", that is, to attempt to attain realistic goals, rather than maximize a utility or profit function. Cyert and March argued that the firm cannot be regarded as a monolith, because different individuals and groups within it have their own aspirations and conflicting interests, and that firm behaviour is the weighted outcome of these conflicts. Organisational mechanisms (such as "satisficing" and sequential decision-taking) exist to maintain conflict at levels that are not unacceptably detrimental. Compared to the ideal state of productive efficiency, there is organisational slack (Leibenstein's X-inefficiency).

=== Team production ===

Armen Alchian and Harold Demsetz's analysis of team production extends and clarifies earlier work by Coase. According to them, the firm emerges because extra output is provided by team production, but the success of this depends on being able to manage the team so that metering problems (it is costly to measure the marginal outputs of the co-operating inputs for reward purposes) and the attendant shirking (the moral hazard problem) can be overcome, by estimating marginal productivity through observing or specifying input behaviour. Such monitoring as is therefore necessary, however, can only be encouraged effectively if the monitor is the recipient of the activity's residual income (otherwise the monitor herself would have to be monitored, ad infinitum). For Alchian and Demsetz, the firm, therefore, is an entity that brings together a team that is more productive working together than at arm's length through the market, because of informational problems associated with the monitoring of effort. In effect, therefore, this is a "principal-agent" theory, since it is asymmetric information within the firm which Alchian and Demsetz emphasise must be overcome. In Barzel (1982)'s theory of the firm, drawing on Jensen and Meckling (1976), the firm emerges as a means of centralising monitoring and thereby avoiding costly redundancy in that function (since in a firm the responsibility for monitoring can be centralised in a way that it cannot if production is organised as a group of workers each acting as a firm).

The weakness in Alchian and Demsetz's argument, according to Williamson, is that their concept of team production has quite a narrow range of applications, as it assumes outputs cannot be related to individual inputs.
In practice, this may have limited applicability (small work group activities, the largest perhaps a symphony orchestra), since most outputs within a firm (such as manufacturing and secretarial work) are separable, so that individual inputs can be rewarded on the basis of outputs. Hence team production cannot offer the explanation of why firms (in particular, large multi-plant and multi-product firms) exist.

== Asset specificity ==

For Oliver E. Williamson, the existence of firms derives from 'asset specificity' in production, where assets are specific to each other such that their value is much less in a second-best use. This causes problems if the assets are owned by different firms (such as purchaser and supplier), because it will lead to protracted bargaining concerning the gains from trade: both agents are likely to become locked into a position where they are no longer competing with a (possibly large) number of agents in the entire market, and the incentives are no longer there to represent their positions honestly; large-numbers bargaining is transformed into small-numbers bargaining. If the transaction is a recurring or lengthy one, re-negotiation may be necessary as a continual power struggle takes place concerning the gains from trade, further increasing the transaction costs. Moreover, there are likely to be situations where a purchaser may require a particular, firm-specific investment of a supplier which would be profitable for both; but after the investment has been made it becomes a sunk cost, and the purchaser can attempt to re-negotiate the contract such that the supplier may make a loss on the investment (this is the hold-up problem, which occurs when either party asymmetrically incurs substantial costs or benefits before being paid for or paying for them). In this kind of situation, the most efficient way to overcome the continual conflict of interest between the two agents (or coalitions of agents) may be the removal of one of them from the equation by takeover or merger.

Asset specificity can also apply to some extent to both physical and human capital, so that the hold-up problem can also occur with labour (e.g. labour can threaten a strike, because of the lack of good alternative human capital; but equally the firm can threaten to fire). Probably the best constraint on such opportunism is reputation (rather than the law, because of the difficulty of negotiation, composition, and enforcement of contracts). If a reputation for opportunism significantly damages an agent's dealings in the future, this alters the incentives to be opportunistic.

Williamson sees the limit on the size of the firm as being given partly by the costs of delegation (as a firm's size increases, its hierarchical bureaucracy does too), and partly by the large firm's increasing inability to replicate the high-powered incentives of the residual income of an owner-entrepreneur. This is partly because it is in the nature of a large firm that its existence is more secure and less dependent on the actions of any one individual (increasing the incentives to shirk), and because the intervention rights from the centre that are characteristic of a firm tend to be accompanied by some form of income insurance to compensate for the lesser responsibility, thereby diluting incentives.
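The underinvestment at the heart of the hold-up problem can be seen in a stylised numeric example. The functional form V(I) = 2√I and the 50/50 renegotiation split below are illustrative assumptions, not Williamson's own formulation:

```python
import numpy as np

# Hold-up sketch: a supplier sinks investment I, which creates
# relationship-specific value V(I) = 2*sqrt(I) for the trade.
I = np.linspace(1e-6, 2.0, 200_001)
V = 2 * np.sqrt(I)

# Efficient benchmark: choose I to maximise total surplus V(I) - I.
I_efficient = I[np.argmax(V - I)]

# Hold-up: once I is sunk, the gains are split 50/50 in renegotiation,
# so the supplier keeps only V(I)/2 while bearing the full cost I.
I_holdup = I[np.argmax(V / 2 - I)]

print(f"efficient investment:       {I_efficient:.2f}")   # ~1.00
print(f"investment under hold-up:   {I_holdup:.2f}")      # ~0.25
```

Anticipating that renegotiation will leave it only half of the value it creates, the supplier invests a quarter of the efficient amount; integration, which lets the investing party keep the whole of V, restores the efficient incentive – the takeover-or-merger remedy described above.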
Milgrom and Roberts (1990) explain the increased cost of management as due to the incentives of employees to provide false information beneficial to themselves, resulting in costs to managers of filtering information, and often in the making of decisions without full information. This grows worse with firm size and with more layers in the hierarchy. Empirical analyses have attempted to measure and operationalize transaction costs; research that attempts to measure them is the most critical limit to efforts at falsification and validation of transaction cost economics.

== Boundaries of the firm ==

The boundaries of the firm concern the restrictions on the size and output variety of firms, and how and why these restrictions affect production and enterprise success. There are two kinds of boundary: horizontal and vertical. As part of their corporate strategy, firms must choose between being horizontally broad, vertically deep, or both. Firms with horizontal breadth have numerous product lines or types, whereas firms with vertical depth are integrated into various stages of the value chain. Generally, a firm's capabilities are specific to a particular scope direction: for example, marketing skills lead to horizontal breadth, and production expertise leads to vertical depth.

A firm is horizontally broad when it utilises excess indivisible resources to expand into various products, and so obtains scope economies. Horizontally broad firms leverage capabilities such as marketing skills, product knowledge, customer service, and reputation for their expansions. Scope economies, or economies of scope, describe the aspect of production wherein cost savings result from the scope of an enterprise, as opposed to its scale (see economies of scale). That is, there are economies of scope where it is less expensive for firms to combine two or more product lines into one than it is to produce each product separately. Scope economies, wherein resources are used synergistically, have been found to improve firm performance. However, the coordination, adjustment and execution costs related to producing products synergistically are limiting factors.

A firm is vertically deep if it possesses stronger capabilities than external producers, and thus can produce and distribute its goods or services more efficiently internally – either upstream or downstream on the manufacturing chain. Vertically deep firms leverage capabilities such as production and process expertise, including technology selection, asset utilisation, and supply chain management. Vertical depth often improves a firm's governance of activities, and contributes to a beneficial exploitation of internal capabilities, but is limited by the costs of hierarchical management, such as monitoring and coordination.

The concept of boundaries can be linked to Coase's understanding of The Nature of the Firm, as it recognises that transaction costs are a significant factor in a firm's decision to outsource or produce internally, but it also considers other influences specific to firms, such as their relevant capabilities and governance decisions.

=== Importance of boundaries ===

A study of firms in France illustrated how distortions to the number of employees and the size of a firm directly impact levels of productivity, wages and welfare within the organisation. Firms with at least 50 workers are subject to a number of additional regulations, which leads some firms to stay below the 50-worker threshold.
The distortion acts like an additional tax on hiring workers, thereby preventing the reallocation of labour from less productive to more productive firms, and reducing overall welfare.

== Economic theory of outsourcing ==

In economic theory, the pros and cons of outsourcing have been discussed since Ronald Coase (1937) asked the famous question: Why is not all production carried on by one big firm? An informal answer has been provided by Oliver Williamson (1979), who has emphasized the importance of different transaction costs within and between firms. The boundaries of the firm (i.e., the distinction between transactions taking place within a firm and transactions between different firms) have been formally studied by Oliver Hart (1995) and his coauthors. According to the property rights approach to the theory of the firm based on incomplete contracting, the ownership structure (i.e., integration or non-integration) determines how the returns to non-contractible investments will be divided in future negotiations. Hence, whether or not outsourcing an activity to a different firm is optimal depends on the relative importance of the investments that the trading partners have to make. For instance, if only one party has to make an important non-contractible investment decision, then this party should be the owner. However, the conclusions of incomplete contracting theory crucially rely on the specification of the negotiation protocol and on whether or not there is asymmetric information.

== Firm as a Sociotechnical System ==

The concept of viewing firms as sociotechnical systems finds its roots in the studies conducted by researchers at The Tavistock Institute of Human Relations, particularly the seminal works of Trist and Bamforth and of Emery and Trist. These pioneering scholars observed, through extensive field observations employing a systemic perspective, that firms could be comprehended as structured sociotechnical systems. These systems were recognized as being open to the environment, possessing the capacity for self-regulation to achieve their objectives, and adapting by creating alternative pathways when necessary.

=== Sociotechnical Approach ===

The sociotechnical approach delineates firms not merely as economic entities but as systems that amalgamate social and technical facets. It delves into the interplay between the human and technological elements within organizations, emphasizing the interconnectedness and interdependence between the social structure – comprising people, relationships, and interactions – and the technical system – encompassing tools, processes, and resources. This approach acknowledges that the effectiveness and functionality of a firm arise not solely from its technical prowess but also from the way its social system interacts and interfaces with the technical framework. The dynamic between these systems, as articulated by Trist, Bamforth, Emery, and Trist, illustrates the need for an integrated understanding of human behavior, organizational culture, and technological systems within the framework of a firm.

== Evolutionary and Complexity Theory-Based Approaches ==

Evolutionary approaches to understanding firms arose as a parallel branch to classical theories, stemming from the pioneering work of Joseph A. Schumpeter. Schumpeter diverged from the abstract concept of the firm, introducing the notion that each firm possesses a distinct structural identity.
He unified the creation and management of a firm into a single economic theory, emphasizing the dynamic nature of firms as evolving entities that learn and innovate within their fundamental routines. He also differentiated between firm development and growth, previously considered interlinked concepts. === Symbiotic Perspective === This structural description paved the way for Terra and Passador to propose a dynamic perspective on firms that goes beyond profit-centric views. The authors utilize sociotechnical concepts, describing firms where the social system meets the self-regulation and self-preservation requirements proposed by Luhmann, imparting an autoreferential dynamic to this subsystem, while technical structures exhibit a goal-oriented dynamic. These two systems symbiotically form the firm's supersystem, also manifesting an autoreferential dynamic, where social systems act as the mind animating the organization's physical body. From this standpoint, firms represent a system traversed by a continuous flow of information and resources, enclosed within themselves, ensuring their unity. Therefore, they lack inputs or outputs in the same sense as in finalistic views of firms. Due to their structural determinism, once the system emerges, its development inherently involves a history of recurrent interactions within the environment that both emerges with it and contains it. Both the system's structure and the environment spontaneously change congruently and complementarily as the firm strives to maintain its organization and operational coherence. Its ultimate product refers not to its outputs per se but to its own organization and realization of identity and autonomy. As an organization is a self-referential entity, enclosed within operational closure, its function focuses on its own constitution. In this context, the exchanges it conducts with its supra-systems merely represent disturbances and residues allowing it to capture from the environment the necessary order for its survival and sustenance of its identity. This contrasts with finalistic conceptions of firms, where the scope is to meet external demands. Under this perspective, the firm's purpose is to ensure its own existence. ==== Boundaries ==== Under the perspective of the firm as a symbiotic entity, boundaries are defined through its operational closure. These boundaries encompass not only hierarchical relationships among agents but also various classes of relations linking social agents to a particular technical and social system. This occurs through the values and bonds of trust established by agents, ensuring the self-production of the organization's values and their relative stability over time. ==== Viable Contour ==== The viability of the firm, as a self-referential entity enclosed within operational closure, is linked to the rate of regeneration of its sociotechnical systems and the flow of resources and information traversing it. If the rate of disintegration exceeds the pace at which the firm can repair itself, the structure of this network of interactions unravels. This makes disintegration a powerful constraint on the maximum size for a viable contour structure. The flow of resources and information also places the firm in a situation of constant threat since such structures rely on relationships with the environment to sustain their dynamics. 
This underscores the necessity for an adjustment field that compensates for environmental disturbances—a crucial factor in preventing the system from reaching thermodynamic equilibrium, which ultimately signifies the demise of the structure. ==== Social Attractors ==== Experiments conducted by Terra and Passador demonstrated the significant role of attraction basins in governing firm dynamics. In this context, technical systems emerged as the central element of organizational dynamics, around which social attractors orbit. These social attractors create secondary attraction basins and are surrounded by their own social "satellites" in a structure analogous to a planetary system. Here, the star can be understood as the technical system, the planets as leaders, and other agents as satellites or free bodies not confined to a single social attraction basin but related to the technical system. Although the experiments highlighted technical systems as primary attractors, the authors' model also demonstrates a recursion in this system, where agents contribute to what attracts them in the technical system, just as the technical system shapes social structures by attracting agents. Hence, an intimate and symbiotic relationship exists between the social and technical systems, wherein each shapes the other. This grants leaders a crucial role in the growth and regeneration of structures, since their control capacity directly impacts the organization's viable boundary. The model also reveals that relocating or including an agent or subsystem in an organization can affect its dynamics by altering the attraction basins governing it. This may lead to undesired qualitative leaps or even rupture of the organization's self-referential network, potentially resulting in the collapse of one of its subsystems. Simultaneously, such restructuring of relationships and social attraction basins can also promote innovation, akin to DNA mutations, creating new dynamics and altering the variety and redundancy within organizations. ==== Essential conditions for a firm's emergence ==== Regarding the essential conditions for a firm's emergence and sustenance, Terra and Passador identified four crucial elements: (1) the ability to integrate external agents into its formal network of relations; (2) being pervaded by a resource flow sustaining its self-referential network; (3) offering advantages for agents to associate with it; and (4) the capability to regenerate its formal network of relations when an agent is lost, especially at the supervisory level. While regeneration of the formal network of relations appeared possible without specialized structures, organizations lacking such systems tend to be structurally unstable. Establishing routines specialized in replacing and reconstituting the social network enhances stability and significantly extends the organization's lifespan. This suggests that mechanisms specialized in reconstructing the organization's social network topology, even in simplified forms, are vital to ensure the longevity of such structures. ==== Relationships with the environment and sustainability ==== The theory of Symbiotic Dynamics is based on the intimate association between organizations and the systems that surround them, such that their survival is interdependent. 
Thus, it is important for the organization's survival that the deterioration and transformation of supersystems, such as markets, society, and the environment, occur at a pace that allows them to regenerate and so maintain their identity and organization, or that enables the firm itself to adapt to the new realities imposed by qualitative leaps that may occur in the dynamics of supersystems. If this need is neglected, the environment may deteriorate at a rate greater than the compensatory fields of organizations can support, causing them to disintegrate. Organizations therefore need to be guided by a hybrid logic, blending proactivity and reactivity: recognizing their impact on the environment as a whole and acting in an organized manner to reduce its degeneration, while adapting to the demands that arise from these interactions. On this view, organizations need to include in their decisions all the other systems with which they are coupled, making it possible to envision the construction of complex socio-economic systems into which they integrate in a stable and sustainable manner. == Other models == Efficiency wage models like that of Shapiro and Stiglitz (1984) suggest wage rents as an addition to monitoring, since this gives employees an incentive not to shirk, given a certain probability of detection and the consequence of being fired. Williamson, Wachter and Harris (1975) suggest promotion incentives within the firm as an alternative to morale-damaging monitoring, where promotion is based on objectively measurable performance. (The difference between these two approaches may be that the former is applicable to a blue-collar environment, the latter to a white-collar one.) Leibenstein (1966) sees a firm's norms or conventions, dependent on its history of management initiatives, labour relations and other factors, as determining the firm's "culture" of effort, thus affecting the firm's productivity and hence size. George Akerlof (1982) develops a gift exchange model of reciprocity, in which employers offer wages unrelated to variations in output and above the market level, and workers have developed a concern for each other's welfare, such that all put in effort above the minimum required, but the more able workers are not rewarded for their extra productivity; again, size here depends not on rationality or efficiency but on social factors. In sum, the limit to the firm's size is given where costs rise to the point where the market can undertake some transactions more efficiently than the firm. Recently, Yochai Benkler further questioned the rigid distinction between firms and markets based on the increasing salience of “commons-based peer production” systems such as open source software (e.g., Linux), Wikipedia, Creative Commons, etc. He put forth this argument in The Wealth of Networks: How Social Production Transforms Markets and Freedom, which was released in 2006 under a Creative Commons share-alike license. == Grossman–Hart–Moore theory == In modern contract theory, the “theory of the firm” is often identified with the “property rights approach” that was developed by Sanford J. Grossman, Oliver D. Hart, and John H. Moore. The property rights approach to the theory of the firm is also known as the “Grossman–Hart–Moore theory”. In their seminal works, Grossman and Hart (1986), Hart and Moore (1990) and Hart (1995) developed the incomplete contracting paradigm. 
They argue that if contracts cannot specify what is to be done given every possible contingency, then property rights (and hence firm boundaries) matter. Specifically, consider a seller of an intermediate good and a buyer. Should the seller own the physical assets that are necessary to produce the good (non-integration) or should the buyer be the owner (integration)? After relationship-specific investments have been made, the seller and the buyer bargain. When they are symmetrically informed, they will always agree to collaborate. Yet, the division of the ex post surplus depends on the parties’ disagreement payoffs (the payoffs they would get if no ex post agreement were reached), which in turn depend on the ownership structure. Thus, the ownership structure has an influence on the incentives to invest. A central insight of the theory is that the party with the more important investment decision should be the owner. Another prominent conclusion is that joint asset ownership is suboptimal if investments are in human capital. The Grossman–Hart–Moore model has been successfully applied in many contexts, e.g. with regard to privatization. Chiu (1998) and de Meza and Lockwood (1998) have extended the model by considering different bargaining games that the parties may play ex post (which can explain ownership by the less important investor). Oliver Williamson (2002) has criticized the Grossman–Hart–Moore model because it is focused on ex ante investment incentives, while it neglects ex post inefficiencies. Schmitz (2006) has studied a variant of the Grossman–Hart–Moore model in which a party may have or acquire private information about its disagreement payoff, which can explain ex post inefficiencies and ownership by the less important investor. Several variants of the Grossman–Hart–Moore model, such as the one with private information, can also explain joint ownership. == See also == == Notes == == References == Crew, Michael A. (1975). Theory of the Firm. New York: Longman. p. 182. ISBN 978-0-582-44042-5. Clarke, Roger; McGuinness, Tony (1987). The Economics of the Firm. Cambridge: Blackwell. ISBN 978-0-631-14075-7. Foss, Nicolai J., ed. (2000). The Theory of the Firm: Critical Perspectives on Business and Management. Taylor and Francis. v. I–IV. Chapter preview links, including Bengt Holmström and Jean Tirole, "The Theory of the Firm," v. I, pp. 148–222. Holmstrom, Bengt R.; Tirole, Jean (1989). "Chapter 2: The theory of the firm". Handbook of Industrial Organization. Vol. 1. pp. 61–133. doi:10.1016/S1573-448X(89)01005-8. ISBN 9780444704344. Robé, Jean-Philippe (31 January 2011). "The Legal Structure of the Firm". Accounting, Economics, and Law. 1 (1). doi:10.2202/2152-2820.1001. S2CID 167919558. Garicano, Luis; Lelarge, Claire; Van Reenen, John (1 November 2016). "Firm Size Distortions and the Productivity Distribution: Evidence from France". American Economic Review. 106 (11): 3439–3479. doi:10.1257/aer.20130232. hdl:10419/71683. S2CID 4929270. == Further reading == Kroszner, Randall S.; Putterman, Louis, eds. (2009). The Economic Nature of the Firm: A Reader (3rd ed.). Cambridge University Press. Aghion, Philippe; Holden, Richard (1 May 2011). "Incomplete Contracts and the Theory of the Firm: What Have We Learned over the Past 25 Years?". Journal of Economic Perspectives. 25 (2): 181–197. doi:10.1257/jep.25.2.181. JSTOR 23049459. S2CID 55679839.
Wikipedia/Theory_of_the_firm
A strategy game or strategic game is a game in which the players' uncoerced, and often autonomous, decision-making skills have a high significance in determining the outcome. Almost all strategy games require internal decision tree-style thinking, and typically very high situational awareness. Strategy games are also seen as descendants of war games, and they often define strategy in terms of the context of war, but this characterization is only partial. A strategy game is a game that relies primarily on strategy, and when it comes to defining what strategy is, two factors need to be taken into account: its complexity and the scale of in-game actions, such as each troop placement in the Total War video game series. The definition of a strategy game in its cultural context should be any game that belongs to a tradition that goes back to war games, contains more strategy than the average video game, contains certain gameplay conventions, and is represented by a particular community. Although war is dominant in strategy games, it is not the whole story. == History == The history of turn-based strategy games goes back to the times of ancient civilizations found in places such as Rome, Greece, Egypt, the Levant, and India. Many were played widely throughout their regions of origin, but only some are still played today. According to Thierry Depaulis, the oldest strategy games would be the "Greek game of polis (πόλις), which appears in the literature around 450 BCE, and the more or less contemporary Chinese game of weiqi (‘go’), which, under the name of yi (弈), is mentioned in Confucius’s Analects (Lunyu) compiled between ca 470/50 and 280 BCE." The Royal Game of Ur, from c. 2500 BCE, which has often been called one of the oldest board games, likely had some strategy elements as well, although it is generally seen as a luck-based race game. One of the earliest strategy games still played is mancala. Due to claims that some artifacts from c. 5000 BCE might be old mancala boards, it has been suggested that mancala may be the oldest known strategy game, but this claim has been disputed. Another game that has stood the test of time is chess, believed to have originated in India around the sixth century CE. The game spread to the West through trade, but chess gained social status and permanence more strongly than many other games. Chess became a game of skill and tactics, often forcing the players to think two or three moves ahead of their opponent just to keep up. == Types == === Abstract strategy === In abstract strategy games, the game is only loosely tied to a thematic concept, if at all. The rules do not attempt to simulate reality, but rather serve the internal logic of the game. A purist's definition of an abstract strategy game requires that it cannot have random elements or hidden information. This definition includes such games as chess and Go. However, many games which do not meet these criteria are commonly classed as abstract strategy games: games such as backgammon, Octiles, Can't Stop, Sequence and Mentalis have all been described as "abstract strategy" games despite having a chance element. A smaller category of non-perfect abstract strategy games incorporates hidden information without using any random elements; for example, Stratego. === Team strategy === One of the most focused team strategy games is contract bridge. This card game consists of two teams of two players, whose offensive and defensive skills are continually in flux as the game's dynamic progresses. 
Some argue that the benefits of playing this team strategy card game extend to the skills and strategies used in business, and that playing such games helps to make strategic awareness automatic. === Eurogames === Eurogames, or German-style boardgames, are a relatively new genre that sits between abstract strategy games and simulation games. They generally have simple rules, short to medium playing times, indirect player interaction and abstract physical components. The games emphasize strategy, play down chance and conflict, lean towards economic rather than military themes, and usually keep all the players in the game until it ends. === Simulation === This type of game is an attempt to simulate the decisions and processes inherent to some real-world situation. Most of the rules are chosen to reflect what the real-world consequences would be of each player's actions and decisions. Abstract games cannot be completely divided from simulations, and so games can be thought of as existing on a continuum from almost pure abstraction (like Abalone) to almost pure simulation (like Diceball! or Strat-o-Matic Baseball). === Wargame === Wargames are simulations of military battles, campaigns, or entire wars. Players have to consider situations analogous to those faced by the leaders of historical battles. As such, wargames are usually heavy on simulation elements, and while they are all "strategy games", they can also be "strategic" or "tactical" in the military jargon sense. H. G. Wells, whose Little Wars (1913) helped found recreational wargaming, stated how "much better is this amiable miniature [war] than the real thing". Traditionally, wargames have been played either with miniatures, using physical models of detailed terrain and miniature representations of people and equipment to depict the game state; or on a board, which commonly uses cardboard counters on a hex map. Popular miniature wargames include Warhammer 40,000 and its fantasy counterpart Warhammer Fantasy. Popular strategic board wargames include Risk, Axis and Allies, Diplomacy, and Paths of Glory. Advanced Squad Leader is a successful tactical scale wargame. It is instructive to compare the Total War series to the Civilization series: in Civilization, moving troops to a specific tile is a tactic, because no shorter-range decisions exist. But in Empire: Total War (2009), every encounter between two armies activates a real-time mode in which they must fight, and the same movement of troops is treated as a strategy. Throughout the game, the movement of each army is at a macro scale, because the player can control each battle at a micro scale. However, as an experience, the two types of military operations are quite similar and involve similar skills and thought processes. The concepts of micro scale and macro scale can describe the gameplay of a game well; however, even very similar games can be difficult to integrate into a common vocabulary. In this definition, strategy does not explicitly describe the player's experience; it is better suited to describing different formal game components. The similarity of the actions taken in two different games does not affect their classification as strategy or tactics: the classification relies only on their scale within their respective games. === Strategy video games === Strategy video games are categorized based on whether they offer the continuous gameplay of real-time strategy (RTS), or the discrete phases of turn-based strategy (TBS). 
Often the computer is expected to emulate a strategically thinking "side" similar to that of a human player (such as directing armies and constructing buildings), or emulate the "instinctive" actions of individual units that would be too tedious for a player to administer (such as for a peasant to run away when attacked, as opposed to standing still until otherwise ordered by the player); hence there is an emphasis on artificial intelligence. == See also == Game of chance Game of skill Mind sport == References ==
Wikipedia/Strategy_game
The managerial grid model or managerial grid theory (1964) is a model, developed by Robert R. Blake and Jane Mouton, of leadership styles. This model originally identified five different leadership styles based on the concern for people and the concern for production. The optimal leadership style in this model is based on Theory Y. The grid theory has continued to evolve and develop. The theory was updated with two additional leadership styles and with a new element, resilience. In 1999, the grid managerial seminar began using a new text, The Power to Change. The model is represented as a grid with concern for production as the x-axis and concern for people as the y-axis; each axis ranges from 1 (Low) to 9 (High). The resulting leadership styles are as follows: The indifferent (previously called impoverished) style (1,1): evade and elude. In this style, managers have low concern for both people and production. Managers use this style to preserve their jobs and job seniority, protecting themselves by avoiding getting into trouble. The main concern for the manager is not to be held responsible for any mistakes, which results in less innovative decisions. The accommodating (previously, country club) style (1,9): yield and comply. This style has a high concern for people and a low concern for production. Managers using this style pay much attention to the security and comfort of the employees, in hopes that this will increase performance. The resulting atmosphere is usually friendly, but not necessarily very productive. The dictatorial (previously, produce or perish) style (9,1): control and dominate. Managers using this style pressure their employees through rules and punishments to achieve the company goals. This dictatorial style is based on Theory X of Douglas McGregor, and is commonly applied in companies on the edge of real or perceived failure. This style is often used in cases of crisis management. The status quo (previously, middle-of-the-road) style (5,5): balance and compromise. Managers using this style try to balance between company goals and workers' needs. By giving some concern to both people and production, managers who use this style hope to achieve suitable performance, but splitting the difference means that neither the production needs nor the people needs are fully met. The sound (previously, team) style (9,9): contribute and commit. In this style, high concern is paid both to people and production. As suggested by the propositions of Theory Y, managers choosing to use this style encourage teamwork and commitment among employees. This method relies heavily on making employees feel themselves to be constructive parts of the company. The opportunistic style: exploit and manipulate. Individuals using this style, which was added to the grid theory before 1999, do not have a fixed location on the grid. They adopt whichever behaviour offers the greatest personal benefit. The paternalistic style: prescribe and guide. This style was added to the grid theory before 1999. In The Power to Change, it was redefined to alternate between the (1,9) and (9,1) locations on the grid. Managers using this style praise and support, but discourage challenges to their thinking. == Behavioral elements == Grid theory breaks behavior down into seven key elements: initiative, inquiry, advocacy, decision making, conflict resolution, resilience, and critique. == See also == Behavior modification Leadership Three levels of leadership model == References == Blake, R.; Mouton, J. (1964). The Managerial Grid: The Key to Leadership Excellence. Houston: Gulf Publishing Co. Blake, R.; Mouton, J. (1985). 
The Managerial Grid III: The Key to Leadership Excellence. Houston: Gulf Publishing Co. McKee, R.; Carlson, B. (1999). The Power to Change. Austin, Texas: Grid International Inc.
Wikipedia/Managerial_grid_model
In game theory, farsightedness refers to players’ ability to consider the long-term consequences of their strategies, beyond immediate payoffs, often formalized as farsighted stability, where players anticipate future moves and stable outcomes. In static games, players optimize payoffs based on current information, as in the Nash equilibrium, but farsightedness involves anticipating dynamic or repeated interactions, such as in coalition games like hedonic games, where preferences shape long-term alliances. For example, in a repeated Prisoner's Dilemma, a farsighted player might cooperate to encourage future cooperation, unlike the one-shot case where defection prevails. Similarly, a player might refuse a small immediate payoff to build a more valuable alliance later. Farsightedness assumes significant foresight and computational ability, which may be unrealistic in complex scenarios. In evolutionary settings, myopic strategies might dominate if immediate survival outweighs long-term planning. == Applications == In evolutionary game theory, farsightedness contrasts with myopic adaptation, where strategies adjust based on immediate fitness. A farsighted strategy might aim for an evolutionarily stable strategy (ESS) that withstands long-term mutant challenges. In coalition settings, farsighted players assess how current choices affect future stability, rejecting short-term gains for long-term benefits, as seen in hedonic games. Farsighted stability captures this by modeling chains of responses predicting stable configurations. == See also == Evolutionary game theory Cooperative game theory Hedonic games Subgame perfect equilibrium Repeated game Evolutionarily stable strategy == References ==
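The repeated Prisoner's Dilemma example above can be made concrete with a short simulation. The following is a minimal sketch, not drawn from any source cited here: the stage-game payoffs, the discount factor, the horizon, and the grim-trigger opponent are all illustrative assumptions.

```python
# Compares a myopic defector with a farsighted cooperator, each playing
# a repeated Prisoner's Dilemma against a grim-trigger opponent.

# Stage-game payoffs to the row player: (my_move, opponent_move) -> payoff
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # mutual defection
}

def discounted_payoff(my_strategy, rounds=50, delta=0.9):
    """Total discounted payoff against grim trigger: the opponent
    cooperates until it first sees a defection, then defects forever."""
    triggered = False
    total = 0.0
    for t in range(rounds):
        opponent = "D" if triggered else "C"
        me = my_strategy(t)
        total += (delta ** t) * PAYOFF[(me, opponent)]
        if me == "D":
            triggered = True
    return total

myopic = lambda t: "D"      # takes the one-shot best reply every round
farsighted = lambda t: "C"  # forgoes the immediate gain to sustain cooperation

print("myopic defector:      ", round(discounted_payoff(myopic), 2))
print("farsighted cooperator:", round(discounted_payoff(farsighted), 2))
```

With these numbers the cooperator's stream of 3 per round outweighs the defector's single payoff of 5 followed by 1 per round (roughly 29.8 versus 13.9); with a low enough discount factor the comparison reverses, illustrating that farsighted play pays only when the future carries sufficient weight.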
Wikipedia/Farsightedness_(game_theory)
In game theory, a Markov strategy is a strategy that depends only on the current state of the game, rather than the full history of past actions. The state summarizes all relevant past information needed for decision-making. For example, in a repeated game, the state could be the outcome of the most recent round or any summary statistic that captures the strategic situation or recent sequence of play. A profile of Markov strategies forms a Markov perfect equilibrium if it constitutes a Nash equilibrium in every possible state of the game. Markov strategies are widely used in dynamic and stochastic games, where the state evolves over time according to probabilistic rules. Although the concept is named after Andrey Markov due to its reliance on the Markov property—the idea that only the current state matters—the strategy concept itself was developed much later in the context of dynamic game theory. == References ==
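A minimal sketch of how a Markov strategy can be represented follows; the state encoding (the most recent action profile) and the specific strategies shown are illustrative assumptions, not part of the definition above.

```python
# A Markov strategy maps the current state to an action. Here the state
# is the most recent action profile, or None in the first round.
# Tit-for-tat is Markov: it needs only the opponent's last move.

def tit_for_tat(state):
    """state is None in round 1, otherwise (my_last, opponent_last)."""
    if state is None:
        return "C"
    _, opponent_last = state
    return opponent_last  # copy whatever the opponent just did

always_defect = lambda state: "D"  # also Markov: it ignores the state entirely

def play(strategy_a, strategy_b, rounds=5):
    state_a = state_b = None
    outcomes = []
    for _ in range(rounds):
        a, b = strategy_a(state_a), strategy_b(state_b)
        state_a, state_b = (a, b), (b, a)  # each side's view of the state
        outcomes.append((a, b))
    return outcomes

print(play(tit_for_tat, always_defect))
# [('C', 'D'), ('D', 'D'), ('D', 'D'), ('D', 'D'), ('D', 'D')]
```

A strategy that conditioned on, say, the total number of past defections would not be Markov under this state encoding, since that count cannot be recovered from the most recent action profile alone.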
Wikipedia/Markov_strategy
In microeconomics, the Bertrand–Edgeworth model of price-setting oligopoly explores what happens when firms compete to sell a homogeneous product (a good for which consumers buy only from the cheapest available seller) but face limits on how much they can supply. Unlike in the standard Bertrand competition model, where firms are assumed to meet all demand at their chosen price, the Bertrand–Edgeworth model assumes each firm has a capacity constraint: a fixed maximum output it can sell, regardless of price. This constraint may be physical (as in Edgeworth’s formulation) or may depend on price or other conditions. A key result of the model is that pure-strategy price equilibria may fail to exist, even with just two firms, because firms have an incentive to undercut competitors' prices until they hit their capacity constraints. As a result, the model can lead to price cycles or the emergence of mixed-strategy equilibria, where firms randomize over prices. == History == Joseph Louis François Bertrand (1822–1900) developed the model of Bertrand competition in oligopoly. This approach was based on the assumption that there are at least two firms producing a homogeneous product with constant marginal cost (this could be constant at some positive value, or zero, as in Cournot's model). Consumers buy from the cheapest seller. The Bertrand–Nash equilibrium of this model is to have all (or at least two) firms setting the price equal to marginal cost. The argument is simple: if one firm sets a price above marginal cost, then another firm can undercut it by a small amount (often called epsilon undercutting, where epsilon represents an arbitrarily small amount); thus price is driven down to marginal cost and equilibrium profits are zero (this is sometimes called the Bertrand paradox). The Bertrand approach assumes that firms are willing and able to supply all demand: there is no limit to the amount that they can produce or sell. Francis Ysidro Edgeworth considered the case where there is a limit to what firms can sell (a capacity constraint): he showed that if there is a fixed limit to what firms can sell, then there may exist no pure-strategy Nash equilibrium (this is sometimes called the Edgeworth paradox). Martin Shubik developed the Bertrand–Edgeworth model to allow for the firm to be willing to supply only up to its profit-maximizing output at the price which it set (under profit maximization this occurs when marginal cost equals price). He considered the case of strictly convex costs, where marginal cost is increasing in output. Shubik showed that if a Nash equilibrium exists, it must be the perfectly competitive price (where demand equals supply, and all firms set price equal to marginal cost). However, this can only happen if market demand is infinitely elastic (horizontal) at the competitive price. In general, as in the Edgeworth paradox, no pure-strategy Nash equilibrium will exist. Huw Dixon showed that in general a mixed-strategy Nash equilibrium will exist when there are convex costs. Dixon’s proof used the Existence Theorem of Partha Dasgupta and Eric Maskin. Under Dixon's assumption of (weakly) convex costs, marginal cost will be non-decreasing. This is consistent with a cost function where marginal cost is flat for a range of outputs, marginal cost is smoothly increasing, or indeed where there is a kink in total cost so that marginal cost makes a discontinuous jump upwards. 
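The price cycles mentioned above can be illustrated numerically. The following is a minimal sketch under stated assumptions, none of them from the original papers: linear demand D(p) = 1 − p, zero marginal cost, a symmetric capacity for each firm, efficient rationing of residual demand, and a discrete price grid, with the two firms alternating best responses.

```python
# Edgeworth cycle simulation: two capacity-constrained firms alternately
# best-respond on a discrete price grid. All parameter values are
# illustrative choices for this sketch.

K = 0.45                              # capacity of each firm
GRID = [i / 200 for i in range(201)]  # prices 0.000, 0.005, ..., 1.000

def demand(p):
    return max(0.0, 1.0 - p)

def profit(p, q):
    """Profit of a firm pricing at p when its rival prices at q."""
    if p < q:                       # low-priced firm serves demand first
        return p * min(K, demand(p))
    if p == q:                      # tie: the firms split the market
        return p * min(K, demand(p) / 2)
    rival_sales = min(K, demand(q))
    residual = max(0.0, demand(p) - rival_sales)
    return p * min(K, residual)     # high-priced firm gets the residual

def best_response(q):
    return max(GRID, key=lambda p: profit(p, q))

p, trace = 0.30, []
for _ in range(60):
    p = best_response(p)
    trace.append(p)
print(trace)
# Prices fall one step at a time (0.295, 0.29, ...) until undercutting
# pays less than serving residual demand at a higher price; the firm
# then jumps back up (here from 0.17 to 0.275) and the cycle repeats.
```

No price on the grid is a best response to itself in this region, which is the discrete counterpart of the non-existence of a pure-strategy equilibrium that Edgeworth identified.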
== Later developments and related models == There have been several responses to the non-existence of pure-strategy equilibrium identified by Francis Ysidro Edgeworth and Martin Shubik. Whilst the existence of mixed-strategy equilibrium was demonstrated by Huw Dixon, it has not proven easy to characterize what the equilibrium actually looks like. However, Allen and Hellwig were able to show that in a large market with many firms, the average price set would tend to the competitive price. It has been argued that non-pure strategies are not plausible in the context of the Bertrand–Edgeworth model. Alternative approaches have included: Firms choose the quantity they are willing to sell up to at each price. This is a game in which price and quantity are chosen: as shown by Allen and Hellwig, and in a more general case by Huw Dixon, the perfectly competitive price is the unique pure-strategy equilibrium. Firms have to meet all demand at the price they set, as proposed by Krishnendu Ghosh Dastidar, or pay some cost for turning away customers. Whilst this can ensure the existence of a pure-strategy Nash equilibrium, it comes at the cost of generating multiple equilibria. However, as shown by Huw Dixon, if the cost of turning customers away is sufficiently small, then any pure-strategy equilibria that exist will be close to the competitive equilibrium. Introducing product differentiation, as proposed by Jean-Pascal Benassy. This is more of a synthesis of monopolistic competition with the Bertrand–Edgeworth model, but Benassy showed that if the elasticity of demand for the firm's output is sufficiently high, then any pure-strategy equilibrium that existed would be close to the competitive outcome. "Integer pricing" as explored by Huw Dixon. Rather than treating price as a continuous variable, it is treated as a discrete variable. This means that firms cannot undercut each other by an arbitrarily small amount, one of the necessary ingredients giving rise to the non-existence of a pure-strategy equilibrium. This can give rise to multiple pure-strategy equilibria, some of which may be distant from the competitive equilibrium price. More recently, Prabal Roy Chowdhury has combined the notion of discrete pricing with the idea that firms choose prices and the quantities they want to sell at that price, as in Allen–Hellwig. Epsilon equilibrium in the pure-strategy game. In an epsilon equilibrium, each firm is within epsilon of its optimal price. If the epsilon is small, this might be seen as a plausible equilibrium, due perhaps to menu costs or bounded rationality. For a given ε > 0 {\displaystyle \varepsilon >0} , if there are enough firms, then an epsilon-equilibrium exists (this result depends on how one models the residual demand – the demand faced by higher-priced firms given the sales of the lower-priced firms). == References == == Resources == Edgeworth and Modern Oligopoly Theory, Xavier Vives. The Pure Theory of Monopoly, Francis Edgeworth.
Wikipedia/Bertrand–Edgeworth_model
In mechanism design, a strategyproof (SP) mechanism is a game form in which each player has a weakly-dominant strategy, so that no player can gain by "spying" over the other players to know what they are going to play. When the players have private information (e.g. their type or their value to some item), and the strategy space of each player consists of the possible information values (e.g. possible types or values), a truthful mechanism is a game in which revealing the true information is a weakly-dominant strategy for each player.: 244  An SP mechanism is also called dominant-strategy-incentive-compatible (DSIC),: 415  to distinguish it from other kinds of incentive compatibility. A SP mechanism is immune to manipulations by individual players (but not by coalitions). In contrast, in a group strategyproof mechanism, no group of people can collude to misreport their preferences in a way that makes every member better off. In a strong group strategyproof mechanism, no group of people can collude to misreport their preferences in a way that makes at least one member of the group better off without making any of the remaining members worse off. == Examples == Typical examples of SP mechanisms are: a majority vote between two alternatives; a second-price auction when participants have quasilinear utility; a VCG mechanism when participants have quasilinear utility Typical examples of mechanisms that are not SP are: any deterministic non-dictatorial election between three or more alternatives; a first-price auction === SP in network routing === SP is also applicable in network routing. Consider a network as a graph where each edge (i.e. link) has an associated cost of transmission, privately known to the owner of the link. The owner of a link wishes to be compensated for relaying messages. As the sender of a message on the network, one wants to find the least cost path. There are efficient methods for doing so, even in large networks. However, there is one problem: the costs for each link are unknown. A naive approach would be to ask the owner of each link the cost, use these declared costs to find the least cost path, and pay all links on the path their declared costs. However, it can be shown that this payment scheme is not SP, that is, the owners of some links can benefit by lying about the cost. We may end up paying far more than the actual cost. It can be shown that given certain assumptions about the network and the players (owners of links), a variant of the VCG mechanism is SP. == Formal definitions == There is a set X {\displaystyle X} of possible outcomes. There are n {\displaystyle n} agents which have different valuations for each outcome. The valuation of agent i {\displaystyle i} is represented as a function: v i : X ⟶ R + {\displaystyle v_{i}:X\longrightarrow R_{+}} which expresses the value it has for each alternative, in monetary terms. It is assumed that the agents have Quasilinear utility functions; this means that, if the outcome is x {\displaystyle x} and in addition the agent receives a payment p i {\displaystyle p_{i}} (positive or negative), then the total utility of agent i {\displaystyle i} is: u i := v i ( x ) + p i {\displaystyle u_{i}:=v_{i}(x)+p_{i}} The vector of all value-functions is denoted by v {\displaystyle v} . For every agent i {\displaystyle i} , the vector of all value-functions of the other agents is denoted by v − i {\displaystyle v_{-i}} . So v ≡ ( v i , v − i ) {\displaystyle v\equiv (v_{i},v_{-i})} . 
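Before stating the formal definition of a mechanism, the second-price auction listed above can be used to make these notions concrete. The following is a minimal sketch under illustrative assumptions (three fixed valuations, a finite grid of misreports, and ties broken in favour of the lowest index); it checks by brute force that, holding the other bids truthful, no bidder in this particular profile gains by misreporting. This is a sanity check for one instance, not a proof of strategyproofness.

```python
# Brute-force check that truthful bidding is (weakly) optimal in a
# single-item second-price auction, for one illustrative profile.

def second_price_auction(bids):
    """Outcome: highest bidder wins; payment: the second-highest bid.
    Ties are broken in favour of the lowest index."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    price = max(b for i, b in enumerate(bids) if i != winner)
    return winner, price

def utility(i, value, bids):
    """Quasilinear utility: value minus payment if i wins, else 0."""
    winner, price = second_price_auction(bids)
    return value - price if winner == i else 0.0

values = [3.0, 7.0, 5.0]           # true private valuations (illustrative)
grid = [x / 2 for x in range(21)]  # candidate misreports 0.0, 0.5, ..., 10.0

for i, v in enumerate(values):
    truthful = utility(i, v, values)
    for misreport in grid:
        bids = list(values)        # the other bidders stay truthful
        bids[i] = misreport
        assert utility(i, v, bids) <= truthful + 1e-9, (i, misreport)
print("no profitable misreport found in this instance")
```
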
A mechanism is a pair of functions: An O u t c o m e {\displaystyle Outcome} function, that takes as input the value-vector v {\displaystyle v} and returns an outcome x ∈ X {\displaystyle x\in X} (it is also called a social choice function); A P a y m e n t {\displaystyle Payment} function, that takes as input the value-vector v {\displaystyle v} and returns a vector of payments, ( p 1 , … , p n ) {\displaystyle (p_{1},\dots ,p_{n})} , determining how much each player should receive (a negative payment means that the player should pay a positive amount). A mechanism is called strategyproof if, for every player i {\displaystyle i} and for every value-vector of the other players v − i {\displaystyle v_{-i}} : v i ( O u t c o m e ( v i , v − i ) ) + P a y m e n t i ( v i , v − i ) ≥ v i ( O u t c o m e ( v i ′ , v − i ) ) + P a y m e n t i ( v i ′ , v − i ) {\displaystyle v_{i}(Outcome(v_{i},v_{-i}))+Payment_{i}(v_{i},v_{-i})\geq v_{i}(Outcome(v_{i}',v_{-i}))+Payment_{i}(v_{i}',v_{-i})} == Characterization == It is helpful to have simple conditions for checking whether a given mechanism is SP or not. This subsection shows two simple conditions that are both necessary and sufficient. If a mechanism with monetary transfers is SP, then it must satisfy the following two conditions, for every agent i {\displaystyle i} :: 226  1. The payment to agent i {\displaystyle i} is a function of the chosen outcome and of the valuations of the other agents v − i {\displaystyle v_{-i}} - but not a direct function of the agent's own valuation v i {\displaystyle v_{i}} . Formally, there exists a price function P r i c e i {\displaystyle Price_{i}} , that takes as input an outcome x ∈ X {\displaystyle x\in X} and a valuation vector for the other agents v − i {\displaystyle v_{-i}} , and returns the payment for agent i {\displaystyle i} , such that for every v i , v i ′ , v − i {\displaystyle v_{i},v_{i}',v_{-i}} , if: O u t c o m e ( v i , v − i ) = O u t c o m e ( v i ′ , v − i ) {\displaystyle Outcome(v_{i},v_{-i})=Outcome(v_{i}',v_{-i})} then: P a y m e n t i ( v i , v − i ) = P a y m e n t i ( v i ′ , v − i ) {\displaystyle Payment_{i}(v_{i},v_{-i})=Payment_{i}(v_{i}',v_{-i})} PROOF: If P a y m e n t i ( v i , v − i ) > P a y m e n t i ( v i ′ , v − i ) {\displaystyle Payment_{i}(v_{i},v_{-i})>Payment_{i}(v_{i}',v_{-i})} then an agent with valuation v i ′ {\displaystyle v_{i}'} prefers to report v i {\displaystyle v_{i}} , since it gives him the same outcome and a larger payment; similarly, if P a y m e n t i ( v i , v − i ) < P a y m e n t i ( v i ′ , v − i ) {\displaystyle Payment_{i}(v_{i},v_{-i})<Payment_{i}(v_{i}',v_{-i})} then an agent with valuation v i {\displaystyle v_{i}} prefers to report v i ′ {\displaystyle v_{i}'} . As a corollary, there exists a "price-tag" function, P r i c e i {\displaystyle Price_{i}} , that takes as input an outcome x ∈ X {\displaystyle x\in X} and a valuation vector for the other agents v − i {\displaystyle v_{-i}} , and returns the payment for agent i {\displaystyle i} For every v i , v − i {\displaystyle v_{i},v_{-i}} , if: O u t c o m e ( v i , v − i ) = x {\displaystyle Outcome(v_{i},v_{-i})=x} then: P a y m e n t i ( v i , v − i ) = P r i c e i ( x , v − i ) {\displaystyle Payment_{i}(v_{i},v_{-i})=Price_{i}(x,v_{-i})} 2. The selected outcome is optimal for agent i {\displaystyle i} , given the other agents' valuations. 
Formally: O u t c o m e ( v i , v − i ) ∈ arg ⁡ max x [ v i ( x ) + P r i c e i ( x , v − i ) ] {\displaystyle Outcome(v_{i},v_{-i})\in \arg \max _{x}[v_{i}(x)+Price_{i}(x,v_{-i})]} where the maximization is over all outcomes in the range of O u t c o m e ( ⋅ , v − i ) {\displaystyle Outcome(\cdot ,v_{-i})} . PROOF: If there is another outcome x ′ = O u t c o m e ( v i ′ , v − i ) {\displaystyle x'=Outcome(v_{i}',v_{-i})} such that v i ( x ′ ) + P r i c e i ( x ′ , v − i ) > v i ( x ) + P r i c e i ( x , v − i ) {\displaystyle v_{i}(x')+Price_{i}(x',v_{-i})>v_{i}(x)+Price_{i}(x,v_{-i})} , then an agent with valuation v i {\displaystyle v_{i}} prefers to report v i ′ {\displaystyle v_{i}'} , since it gives him a larger total utility. Conditions 1 and 2 are not only necessary but also sufficient: any mechanism that satisfies conditions 1 and 2 is SP. PROOF: Fix an agent i {\displaystyle i} and valuations v i , v i ′ , v − i {\displaystyle v_{i},v_{i}',v_{-i}} . Denote: x := O u t c o m e ( v i , v − i ) {\displaystyle x:=Outcome(v_{i},v_{-i})} - the outcome when the agent acts truthfully. x ′ := O u t c o m e ( v i ′ , v − i ) {\displaystyle x':=Outcome(v_{i}',v_{-i})} - the outcome when the agent acts untruthfully. By property 1, the utility of the agent when playing truthfully is: u i ( v i ) = v i ( x ) + P r i c e i ( x , v − i ) {\displaystyle u_{i}(v_{i})=v_{i}(x)+Price_{i}(x,v_{-i})} and the utility of the agent when playing untruthfully is: u i ( v i ′ ) = v i ( x ′ ) + P r i c e i ( x ′ , v − i ) {\displaystyle u_{i}(v_{i}')=v_{i}(x')+Price_{i}(x',v_{-i})} By property 2: u i ( v i ) ≥ u i ( v i ′ ) {\displaystyle u_{i}(v_{i})\geq u_{i}(v_{i}')} so it is a dominant strategy for the agent to act truthfully. === Outcome-function characterization === The actual goal of a mechanism is its O u t c o m e {\displaystyle Outcome} function; the payment function is just a tool to induce the players to be truthful. Hence, it is useful to know, given a certain outcome function, whether it can be implemented using a SP mechanism or not (this property is also called implementability). The monotonicity property is necessary for strategyproofness. == Truthful mechanisms in single-parameter domains == A single-parameter domain is a game in which each player i {\displaystyle i} gets a certain positive value v i {\displaystyle v_{i}} for "winning" and a value 0 for "losing". A simple example is a single-item auction, in which v i {\displaystyle v_{i}} is the value that player i {\displaystyle i} assigns to the item. For this setting, it is easy to characterize truthful mechanisms. Begin with some definitions. A mechanism is called normalized if every losing bid pays 0. A mechanism is called monotone if, when a player raises his bid, his chances of winning (weakly) increase. For a monotone mechanism, for every player i and every combination of bids of the other players, there is a critical value in which the player switches from losing to winning. A normalized mechanism on a single-parameter domain is truthful if the following two conditions hold:: 229–230  The assignment function is monotone in each of the bids, and: Every winning bid pays the critical value. == Truthfulness of randomized mechanisms == There are various ways to extend the notion of truthfulness to randomized mechanisms. They are, from strongest to weakest:: 6–8  Universal truthfulness: for each randomization of the algorithm, the resulting mechanism is truthful. 
In other words: a universally-truthful mechanism is a randomization over deterministic truthful mechanisms, where the weights may be input-dependent. Strong stochastic-dominance truthfulness (strong-SD-truthfulness): The vector of probabilities that an agent receives by being truthful has first-order stochastic dominance over the vector of probabilities he gets by misreporting. That is: the probability of getting the top priority is at least as high AND the probability of getting one of the two top priorities is at least as high AND ... the probability of getting one of the m top priorities is at least as high. Lexicographic truthfulness (lex-truthfulness): The vector of probabilities that an agent receives by being truthful has lexicographic dominance over the vector of probabilities he gets by misreporting. That is: the probability of getting the top priority is higher OR (the probability of getting the top priority is equal and the probability of getting one of the two top priorities is higher) OR ... (the probabilities of getting the first m-1 priorities are equal and the probability of getting one of the m top priorities is higher) OR (all probabilities are equal). Weak stochastic-dominance truthfulness (weak-SD-truthfulness): The vector of probabilities that an agent receives by being truthful is not first-order-stochastically-dominated by the vector of probabilities he gets by misreporting. Universal implies strong-SD implies Lex implies weak-SD, and all implications are strict.: Thm.3.4  === Truthfulness with high probability === For every constant ϵ > 0 {\displaystyle \epsilon >0} , a randomized mechanism is called truthful with probability 1 − ϵ {\displaystyle 1-\epsilon } if for every agent and for every vector of bids, the probability that the agent benefits by bidding non-truthfully is at most ϵ {\displaystyle \epsilon } , where the probability is taken over the randomness of the mechanism.: 349  If the constant ϵ {\displaystyle \epsilon } goes to 0 when the number of bidders grows, then the mechanism is called truthful with high probability. This notion is weaker than full truthfulness, but it is still useful in some cases; see e.g. consensus estimate. == False-name-proofness == A new type of fraud that has become common with the abundance of internet-based auctions is false-name bids – bids submitted by a single bidder using multiple identifiers such as multiple e-mail addresses. False-name-proofness means that there is no incentive for any of the players to issue false-name bids. This is a stronger notion than strategyproofness. In particular, the Vickrey–Clarke–Groves (VCG) auction is not false-name-proof. False-name-proofness is importantly different from group strategyproofness because it assumes that an individual alone can simulate certain behaviors that normally require the collusive coordination of multiple individuals. == See also == Incentive compatibility Individual rationality Participation criterion – a player cannot lose by playing the game (i.e. a player has no incentive to avoid playing the game) == Further reading == Parkes, David C. (2004), On Learnable Mechanism Design, in: Tumer, Kagan and David Wolpert (Eds.): Collectives and the Design of Complex Systems, New York, pp. 107–133. On Asymptotic Strategy-Proofness of Classical Social Choice Rules, an article by Arkadii Slinko about strategy-proofness in voting systems. == References ==
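The stochastic-dominance comparisons above reduce to prefix-sum checks on the two probability vectors, as in the following minimal sketch (the example vectors and the numerical tolerance are illustrative assumptions).

```python
# Dominance checks between two probability vectors over an agent's
# 1st, 2nd, ..., m-th ranked outcomes (best first).

def prefix_sums(v):
    total, out = 0.0, []
    for x in v:
        total += x
        out.append(total)
    return out

def strong_sd_dominates(p, q, eps=1e-12):
    """First-order stochastic dominance: every prefix sum of p is at
    least the corresponding prefix sum of q."""
    return all(a >= b - eps for a, b in zip(prefix_sums(p), prefix_sums(q)))

def lex_dominates(p, q, eps=1e-12):
    """Lexicographic dominance: the first prefix sum that differs is
    larger for p; full equality also counts."""
    for a, b in zip(prefix_sums(p), prefix_sums(q)):
        if abs(a - b) > eps:
            return a > b
    return True

truthful = [0.5, 0.3, 0.2]   # chances of 1st/2nd/3rd priority when truthful
misreport = [0.4, 0.5, 0.1]  # chances after some misreport

print(strong_sd_dominates(truthful, misreport))  # False: 0.8 < 0.9 at prefix 2
print(lex_dominates(truthful, misreport))        # True: 0.5 > 0.4 at prefix 1
```

Here the truthful vector lexicographically dominates the misreported one but does not first-order stochastically dominate it, mirroring the strictness of the implication chain stated above.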
Wikipedia/Strategyproofness
Economic methodology is the study of methods, especially the scientific method, in relation to economics, including principles underlying economic reasoning. In contemporary English, 'methodology' may reference theoretical or systematic aspects of a method (or several methods). Philosophy and economics also takes up methodology at the intersection of the two subjects. == Scope == General methodological issues include similarities and contrasts to the natural sciences and to other social sciences and, in particular, to: the definition of economics the scope of economics as defined by its methods fundamental principles and operational significance of economic theory methodological individualism versus holism in economics the role of simplifying assumptions such as rational choice and profit maximizing in explaining or predicting phenomena descriptive/positive, prescriptive/normative, and applied uses of theory the scientific status and expanding domain of economics issues critical to the practice and progress of econometrics the balance of empirical and philosophical approaches the role of experiments in economics the role of mathematics and mathematical economics in economics the writing and rhetoric of economics the relation between theory, observation, application, and methodology in contemporary economics. Economic methodology has gone from periodic reflections of economists on method to a distinct research field in economics since the 1970s. In one direction, it has expanded to the boundaries of philosophy, including the relation of economics to the philosophy of science and the theory of knowledge. In another direction of philosophy and economics, additional subjects are treated including decision theory and ethics. == See also == Economic systems Methodology of econometrics Model (economics) == Notes == == References == John Bryan Davis, D. Wade Hands, Uskali Mäki (1998). Handbook of Economic Methodology, E. Elgar Hands, D. Wade, ed. (1993). The Philosophy And Methodology Of Economics, Duke University Hausman, Daniel M. (1984). The Philosophy of Economics: An Anthology. New York: Cambridge University Press, ISBN 052145929X Boland, L. (1982) The Foundations of Economic Method, London: Geo. Allen & Unwin. Boland, L. (1989) The Methodology of Economic Model Building: Methodology after Samuelson, London: Routledge. Boland, L. (1997) Critical Economic Methodology: A Personal Odyssey, London: Routledge Boland, L. (2003) The Foundations of Economic Method: A Popperian Perspective, London: Routledge D.N. McCloskey (1983). The Rhetoric of Economics, Univ of Wisconsin Press, 1998 Daniel M. Hausman (1992). Essays on Philosophy and Economic Methodology, Cambridge University Press, 1992 Nell, E.J. and Errouaki, K. (2011) Rational Econometric Man. Edward Elgar. Düppe, T. (2011). How Economic Methodology Became a Separate Science, Journal of Economic Methodology, 18 (2): 163-176. == External links == Journal of Economic Methodology - page @ EconPapers Daniel M. Hausman, Philosophy of Economics (with focus on methodology), Stanford Encyclopedia of Philosophy Milton Friedman, "The Methodology of Positive Economics" (excerpts)
Wikipedia/Economic_methodology
Philosophy of science is the branch of philosophy concerned with the foundations, methods, and implications of science. Amongst its central questions are the difference between science and non-science, the reliability of scientific theories, and the ultimate purpose and meaning of science as a human endeavour. Philosophy of science focuses on metaphysical, epistemic and semantic aspects of scientific practice, and overlaps with metaphysics, ontology, logic, and epistemology, for example, when it explores the relationship between science and the concept of truth. Philosophy of science is both a theoretical and empirical discipline, relying on philosophical theorising as well as meta-studies of scientific practice. Ethical issues such as bioethics and scientific misconduct are often considered ethics or science studies rather than the philosophy of science. Many of the central problems in the philosophy of science lack contemporary consensus, including whether science can infer truth about unobservable entities and whether inductive reasoning can be justified as yielding definite scientific knowledge. Philosophers of science also consider philosophical problems within particular sciences (such as biology, physics and social sciences such as economics and psychology). Some philosophers of science also use contemporary results in science to reach conclusions about philosophy itself. While philosophical thought pertaining to science dates back at least to the time of Aristotle, the general philosophy of science emerged as a distinct discipline only in the 20th century following the logical positivist movement, which aimed to formulate criteria for ensuring all philosophical statements' meaningfulness and objectively assessing them. Karl Popper criticized logical positivism and helped establish a modern set of standards for scientific methodology. Thomas Kuhn's 1962 book The Structure of Scientific Revolutions was also formative, challenging the view of scientific progress as the steady, cumulative acquisition of knowledge based on a fixed method of systematic experimentation and instead arguing that any progress is relative to a "paradigm", the set of questions, concepts, and practices that define a scientific discipline in a particular historical period. Subsequently, the coherentist approach to science, in which a theory is validated if it makes sense of observations as part of a coherent whole, became prominent due to W. V. Quine and others. Some thinkers, such as Stephen Jay Gould, seek to ground science in axiomatic assumptions, such as the uniformity of nature. A vocal minority of philosophers, and Paul Feyerabend in particular, argue against the existence of the "scientific method" and hold that all approaches to science should be allowed, including explicitly supernatural ones. Another approach to thinking about science involves studying how knowledge is created from a sociological perspective, an approach represented by scholars like David Bloor and Barry Barnes. Finally, a tradition in continental philosophy approaches science from the perspective of a rigorous analysis of human experience. Philosophies of the particular sciences range from questions about the nature of time raised by Einstein's general relativity, to the implications of economics for public policy. A central theme is whether the terms of one scientific theory can be intra- or intertheoretically reduced to the terms of another. Can chemistry be reduced to physics, or can sociology be reduced to individual psychology? 
The general questions of philosophy of science also arise with greater specificity in some particular sciences. For instance, the question of the validity of scientific reasoning is seen in a different guise in the foundations of statistics. The question of what counts as science and what should be excluded arises as a life-or-death matter in the philosophy of medicine. Additionally, the philosophies of biology, psychology, and the social sciences explore whether the scientific studies of human nature can achieve objectivity or are inevitably shaped by values and by social relations. == Introduction == === Defining science === Distinguishing between science and non-science is referred to as the demarcation problem. For example, should psychoanalysis, creation science, and historical materialism be considered pseudosciences? Karl Popper called this the central question in the philosophy of science. However, no unified account of the problem has won acceptance among philosophers, and some regard the problem as unsolvable or uninteresting. Martin Gardner has argued for the use of a Potter Stewart standard ("I know it when I see it") for recognizing pseudoscience. Early attempts by the logical positivists grounded science in observation while non-science was non-observational and hence meaningless. Popper argued that the central property of science is falsifiability. That is, every genuinely scientific claim is capable of being proven false, at least in principle. An area of study or speculation that masquerades as science in an attempt to claim a legitimacy that it would not otherwise be able to achieve is referred to as pseudoscience, fringe science, or junk science. Physicist Richard Feynman coined the term "cargo cult science" for cases in which researchers believe they are doing science because their activities have the outward appearance of it but actually lack the "kind of utter honesty" that allows their results to be rigorously evaluated. === Scientific explanation === A closely related question is what counts as a good scientific explanation. In addition to providing predictions about future events, society often takes scientific theories to provide explanations for events that occur regularly or have already occurred. Philosophers have investigated the criteria by which a scientific theory can be said to have successfully explained a phenomenon, as well as what it means to say a scientific theory has explanatory power. One early and influential account of scientific explanation is the deductive-nomological model. It says that a successful scientific explanation must deduce the occurrence of the phenomena in question from a scientific law. This view has been subjected to substantial criticism, resulting in several widely acknowledged counterexamples to the theory. It is especially challenging to characterize what is meant by an explanation when the thing to be explained cannot be deduced from any law because it is a matter of chance, or otherwise cannot be perfectly predicted from what is known. Wesley Salmon developed a model in which a good scientific explanation must be statistically relevant to the outcome to be explained. Others have argued that the key to a good explanation is unifying disparate phenomena or providing a causal mechanism. 
=== Justifying science === Although it is often taken for granted, it is not at all clear how one can infer the validity of a general statement from a number of specific instances or infer the truth of a theory from a series of successful tests. For example, a chicken observes that each morning the farmer comes and gives it food, for hundreds of days in a row. The chicken may therefore use inductive reasoning to infer that the farmer will bring food every morning. However, one morning, the farmer comes and kills the chicken. How is scientific reasoning more trustworthy than the chicken's reasoning? One approach is to acknowledge that induction cannot achieve certainty, but observing more instances of a general statement can at least make the general statement more probable. So the chicken would be right to conclude from all those mornings that it is likely the farmer will come with food again the next morning, even if it cannot be certain. However, there remain difficult questions about the process of interpreting any given evidence into a probability that the general statement is true. One way out of these particular difficulties is to declare that all beliefs about scientific theories are subjective, or personal, and correct reasoning is merely about how evidence should change one's subjective beliefs over time. Some argue that what scientists do is not inductive reasoning at all but rather abductive reasoning, or inference to the best explanation. In this account, science is not about generalizing specific instances but rather about hypothesizing explanations for what is observed. As discussed in the previous section, it is not always clear what is meant by the "best explanation". Ockham's razor, which counsels choosing the simplest available explanation, thus plays an important role in some versions of this approach. To return to the example of the chicken, would it be simpler to suppose that the farmer cares about it and will continue taking care of it indefinitely or that the farmer is fattening it up for slaughter? Philosophers have tried to make this heuristic principle more precise regarding theoretical parsimony or other measures. Yet, although various measures of simplicity have been brought forward as potential candidates, it is generally accepted that there is no such thing as a theory-independent measure of simplicity. In other words, there appear to be as many different measures of simplicity as there are theories themselves, and the task of choosing between measures of simplicity appears to be every bit as problematic as the job of choosing between theories. Nicholas Maxwell has argued for some decades that unity rather than simplicity is the key non-empirical factor in influencing the choice of theory in science, persistent preference for unified theories in effect committing science to the acceptance of a metaphysical thesis concerning unity in nature. In order to improve this problematic thesis, it needs to be represented in the form of a hierarchy of theses, each thesis becoming more insubstantial as one goes up the hierarchy. === Observation inseparable from theory === When making observations, scientists look through telescopes, study images on electronic screens, record meter readings, and so on. Generally, on a basic level, they can agree on what they see, e.g., the thermometer shows 37.9 degrees C. 
But, if these scientists have different ideas about the theories that have been developed to explain these basic observations, they may disagree about what they are observing. For example, before Albert Einstein's general theory of relativity, observers would have likely interpreted an image of the Einstein cross as five different objects in space. In light of that theory, however, astronomers will tell you that there are actually only two objects, one in the center and four different images of a second object around the sides. Alternatively, if other scientists suspect that something is wrong with the telescope and only one object is actually being observed, they are operating under yet another theory. Observations that cannot be separated from theoretical interpretation are said to be theory-laden. All observation involves both perception and cognition. That is, one does not make an observation passively, but rather is actively engaged in distinguishing the phenomenon being observed from surrounding sensory data. Therefore, observations are affected by one's underlying understanding of the way in which the world functions, and that understanding may influence what is perceived, noticed, or deemed worthy of consideration. In this sense, it can be argued that all observation is theory-laden. === The purpose of science === Should science aim to determine ultimate truth, or are there questions that science cannot answer? Scientific realists claim that science aims at truth and that one ought to regard scientific theories as true, approximately true, or likely true. Conversely, scientific anti-realists argue that science does not aim (or at least does not succeed) at truth, especially truth about unobservables like electrons or other universes. Instrumentalists argue that scientific theories should only be evaluated on whether they are useful. In their view, whether theories are true or not is beside the point, because the purpose of science is to make predictions and enable effective technology. Realists often point to the success of recent scientific theories as evidence for the truth (or near truth) of current theories. Antirealists point to either the many false theories in the history of science, epistemic morals, the success of false modeling assumptions, or widely termed postmodern criticisms of objectivity as evidence against scientific realism. Antirealists attempt to explain the success of scientific theories without reference to truth. Some antirealists claim that scientific theories aim at being accurate only about observable objects and argue that their success is primarily judged by that criterion. ==== Real patterns ==== The notion of real patterns has been propounded, notably by philosopher Daniel C. Dennett, as an intermediate position between strong realism and eliminative materialism. This concept delves into the investigation of patterns observed in scientific phenomena to ascertain whether they signify underlying truths or are mere constructs of human interpretation. Dennett provides a unique ontological account concerning real patterns, examining the extent to which these recognized patterns have predictive utility and allow for efficient compression of information. The discourse on real patterns extends beyond philosophical circles, finding relevance in various scientific domains. 
For example, in biology, inquiries into real patterns seek to elucidate the nature of biological explanations, exploring how recognized patterns contribute to a comprehensive understanding of biological phenomena. Similarly, in chemistry, debates around the reality of chemical bonds as real patterns continue. Evaluation of real patterns also holds significance in broader scientific inquiries. Researchers, like Tyler Millhouse, propose criteria for evaluating the realness of a pattern, particularly in the context of universal patterns and the human propensity to perceive patterns, even where there might be none. This evaluation is pivotal in advancing research in diverse fields, from climate change to machine learning, where recognition and validation of real patterns in scientific models play a crucial role. === Values and science === Values intersect with science in different ways. There are epistemic values that mainly guide scientific research. The scientific enterprise is embedded in a particular culture and its values through individual practitioners. Values emerge from science, both as product and process, and can be distributed among several cultures in society. When single practitioners are called on to justify science to the general public, science plays the role of a mediator between the standards and policies of society and its participating individuals; in this role, science can fall victim to vandalism and sabotage that adapt the means to the end. If it is unclear what counts as science, how the process of confirming theories works, and what the purpose of science is, there is considerable scope for values and other social influences to shape science. Indeed, values can play a role ranging from determining which research gets funded to influencing which theories achieve scientific consensus. For example, in the 19th century, cultural values held by scientists about race shaped research on evolution, and values concerning social class influenced debates on phrenology (considered scientific at the time). Feminist philosophers of science, sociologists of science, and others explore how social values affect science. == History == === Pre-modern === The origins of philosophy of science trace back to Plato and Aristotle, who distinguished the forms of approximate and exact reasoning, set out the threefold scheme of abductive, deductive, and inductive inference, and also analyzed reasoning by analogy. The eleventh century Arab polymath Ibn al-Haytham (known in Latin as Alhazen) conducted his research in optics by way of controlled experimental testing and applied geometry, especially in his investigations into the images resulting from the reflection and refraction of light. Roger Bacon (1214–1294), an English thinker and experimenter heavily influenced by al-Haytham, is recognized by many to be the father of modern scientific method. His view that mathematics was essential to a correct understanding of natural philosophy is considered to have been 400 years ahead of its time. === Modern === Francis Bacon (no direct relation to Roger Bacon, who lived 300 years earlier) was a seminal figure in philosophy of science at the time of the Scientific Revolution. In his work Novum Organum (1620)—an allusion to Aristotle's Organon—Bacon outlined a new system of logic to improve upon the old philosophical process of syllogism. Bacon's method relied on experimental histories to eliminate alternative theories.
In 1637, René Descartes established a new framework for grounding scientific knowledge in his treatise, Discourse on Method, advocating the central role of reason as opposed to sensory experience. By contrast, in 1713, the 2nd edition of Isaac Newton's Philosophiae Naturalis Principia Mathematica argued that "... hypotheses ... have no place in experimental philosophy. In this philosophy[,] propositions are deduced from the phenomena and rendered general by induction." This passage influenced a "later generation of philosophically-inclined readers to pronounce a ban on causal hypotheses in natural philosophy". In particular, later in the 18th century, David Hume would famously articulate skepticism about the ability of science to determine causality and gave a definitive formulation of the problem of induction, though both theses would be contested by the end of the 18th century by Immanuel Kant in his Critique of Pure Reason and Metaphysical Foundations of Natural Science. In the 19th century, Auguste Comte made a major contribution to the theory of science. The 19th-century writings of John Stuart Mill are also considered important in the formation of current conceptions of the scientific method, as well as anticipating later accounts of scientific explanation. === Logical positivism === Instrumentalism became popular among physicists around the turn of the 20th century, after which logical positivism defined the field for several decades. Logical positivism accepts only testable statements as meaningful, rejects metaphysical interpretations, and embraces verificationism (a set of theories of knowledge that combines logicism, empiricism, and linguistics to ground philosophy on a basis consistent with examples from the empirical sciences). Seeking to overhaul all of philosophy and convert it to a new scientific philosophy, the Berlin Circle and the Vienna Circle propounded logical positivism in the late 1920s. Interpreting Ludwig Wittgenstein's early philosophy of language, logical positivists identified a verifiability principle or criterion of cognitive meaningfulness. From Bertrand Russell's logicism they sought reduction of mathematics to logic. They also embraced Russell's logical atomism, Ernst Mach's phenomenalism—whereby the mind knows only actual or potential sensory experience, which is the content of all sciences, whether physics or psychology—and Percy Bridgman's operationalism. Thereby, only the verifiable was scientific and cognitively meaningful, whereas the unverifiable was unscientific, cognitively meaningless "pseudostatements"—metaphysical, emotive, or such—not worthy of further review by philosophers, who were newly tasked to organize knowledge rather than develop new knowledge. Logical positivism is commonly portrayed as taking the extreme position that scientific language should never refer to anything unobservable—even the seemingly core notions of causality, mechanism, and principles—but that is an exaggeration. Talk of such unobservables could be allowed as metaphorical—direct observations viewed in the abstract—or at worst metaphysical or emotional. Theoretical laws would be reduced to empirical laws, while theoretical terms would garner meaning from observational terms via correspondence rules. Mathematics in physics would reduce to symbolic logic via logicism, while rational reconstruction would convert ordinary language into standardized equivalents, all networked and united by a logical syntax.
A scientific theory would be stated with its method of verification, whereby a logical calculus or empirical operation could verify its falsity or truth. In the late 1930s, logical positivists fled Germany and Austria for Britain and America. By then, many had replaced Mach's phenomenalism with Otto Neurath's physicalism, and Rudolf Carnap had sought to replace verification with simply confirmation. With World War II's close in 1945, logical positivism softened into the milder logical empiricism, led largely in America by Carl Hempel, who expounded the covering law model of scientific explanation as a way of identifying the logical form of explanations without any reference to the suspect notion of "causation". The logical positivist movement became a major underpinning of analytic philosophy, and dominated Anglosphere philosophy, including philosophy of science, while influencing sciences, into the 1960s. Yet the movement failed to resolve its central problems, and its doctrines were increasingly assaulted. Nevertheless, it brought about the establishment of philosophy of science as a distinct subdiscipline of philosophy, with Carl Hempel playing a key role. === Thomas Kuhn === In the 1962 book The Structure of Scientific Revolutions, Thomas Kuhn argued that the process of observation and evaluation takes place within a "paradigm", which he describes as "universally recognized achievements that for a time provide model problems and solutions to a community of practitioners." A paradigm implicitly identifies the objects and relations under study and suggests what experiments, observations or theoretical improvements need to be carried out to produce a useful result. He characterized normal science as the process of observation and "puzzle solving" which takes place within a paradigm, whereas revolutionary science occurs when one paradigm overtakes another in a paradigm shift. Kuhn was a historian of science, and his ideas were inspired by the study of older paradigms that have been discarded, such as Aristotelian mechanics or aether theory. These had often been portrayed by historians as using "unscientific" methods or beliefs. But careful examination showed that they were no less "scientific" than modern paradigms: both were based on valid evidence, and both failed to answer every possible question. A paradigm shift occurred when a significant number of observational anomalies arose in the old paradigm and efforts to resolve them within the paradigm were unsuccessful. A new paradigm was available that handled the anomalies with less difficulty and yet still covered (most of) the previous results. Over a period of time, often as long as a generation, more practitioners began working within the new paradigm and eventually the old paradigm was abandoned. For Kuhn, acceptance or rejection of a paradigm is a social process as much as a logical process. Kuhn's position, however, is not one of relativism; he wrote "terms like 'subjective' and 'intuitive' cannot be applied to [paradigms]." Paradigms are grounded in objective, observable evidence, but our use of them is psychological and our acceptance of them is social. == Current approaches == === Naturalism's axiomatic assumptions === According to Robert Priddy, all scientific study inescapably builds on at least some essential assumptions that cannot be tested by scientific processes; that is, science must start with some assumptions as to the ultimate analysis of the facts with which it deals.
These assumptions would then be justified partly by their adherence to the types of occurrence of which we are directly conscious, and partly by their success in representing the observed facts with a certain generality, devoid of ad hoc suppositions. Kuhn also claims that all science is based on assumptions about the character of the universe, rather than merely on empirical facts. These assumptions – a paradigm – comprise a collection of beliefs, values and techniques that are held by a given scientific community, which legitimize their systems and set the limitations to their investigation. For naturalists, nature is the only reality, the "correct" paradigm, and there is no such thing as supernatural, i.e. anything above, beyond, or outside of nature. The scientific method is to be used to investigate all reality, including the human spirit. Some claim that naturalism is the implicit philosophy of working scientists, and that the following basic assumptions are needed to justify the scientific method:
- That there is an objective reality shared by all rational observers. "The basis for rationality is acceptance of an external objective reality." "Objective reality is clearly an essential thing if we are to develop a meaningful perspective of the world. Nevertheless, its very existence is assumed." "Our belief that objective reality exists is an assumption that it arises from a real world outside of ourselves. As infants we made this assumption unconsciously. People are happier making this assumption, which adds meaning to our sensations and feelings, than living with solipsism." "Without this assumption, there would be only the thoughts and images in our own mind (which would be the only existing mind) and there would be no need of science, or anything else."
- That this objective reality is governed by natural laws. "Science, at least today, assumes that the universe obeys knowable principles that don't depend on time or place, nor on subjective parameters such as what we think, know or how we behave." Hugh Gauch argues that science presupposes that "the physical world is orderly and comprehensible."
- That reality can be discovered by means of systematic observation and experimentation. Stanley Sobottka said: "The assumption of external reality is necessary for science to function and to flourish. For the most part, science is the discovering and explaining of the external world." "Science attempts to produce knowledge that is as universal and objective as possible within the realm of human understanding."
- That Nature has uniformity of laws and most if not all things in nature must have at least a natural cause. Biologist Stephen Jay Gould referred to these two closely related propositions as the constancy of nature's laws and the operation of known processes. Simpson agrees that the axiom of uniformity of law, an unprovable postulate, is necessary in order for scientists to extrapolate inductive inference into the unobservable past in order to meaningfully study it. "The assumption of spatial and temporal invariance of natural laws is by no means unique to geology since it amounts to a warrant for inductive inference which, as Bacon showed nearly four hundred years ago, is the basic mode of reasoning in empirical science. Without assuming this spatial and temporal invariance, we have no basis for extrapolating from the known to the unknown and, therefore, no way of reaching general conclusions from a finite number of observations.
(Since the assumption is itself vindicated by induction, it can in no way "prove" the validity of induction — an endeavor virtually abandoned after Hume demonstrated its futility two centuries ago)." Gould also notes that natural processes such as Lyell's "uniformity of process" are an assumption: "As such, it is another a priori assumption shared by all scientists and not a statement about the empirical world." According to R. Hooykaas: "The principle of uniformity is not a law, not a rule established after comparison of facts, but a principle, preceding the observation of facts ... It is the logical principle of parsimony of causes and of economy of scientific notions. By explaining past changes by analogy with present phenomena, a limit is set to conjecture, for there is only one way in which two things are equal, but there are an infinity of ways in which they could be supposed different."
- That experimental procedures will be done satisfactorily without any deliberate or unintentional mistakes that will influence the results.
- That experimenters won't be significantly biased by their presumptions.
- That random sampling is representative of the entire population. A simple random sample (SRS) is the most basic probabilistic option used for creating a sample from a population. The benefit of SRS is that the investigator is guaranteed to choose a sample that represents the population, which ensures statistically valid conclusions.
=== Coherentism === In contrast to the view that science rests on foundational assumptions, coherentism asserts that statements are justified by being a part of a coherent system. Or, rather, individual statements cannot be validated on their own: only coherent systems can be justified. A prediction of a transit of Venus is justified by its being coherent with broader beliefs about celestial mechanics and earlier observations. As explained above, observation is a cognitive act. That is, it relies on a pre-existing understanding, a systematic set of beliefs. An observation of a transit of Venus requires a huge range of auxiliary beliefs, such as those that describe the optics of telescopes, the mechanics of the telescope mount, and an understanding of celestial mechanics. If the prediction fails and a transit is not observed, that is likely to occasion an adjustment in the system, a change in some auxiliary assumption, rather than a rejection of the theoretical system. According to the Duhem–Quine thesis, after Pierre Duhem and W.V. Quine, it is impossible to test a theory in isolation. One must always add auxiliary hypotheses in order to make testable predictions. For example, to test Newton's Law of Gravitation in the solar system, one needs information about the masses and positions of the Sun and all the planets. Famously, the failure to predict the orbit of Uranus in the 19th century led not to the rejection of Newton's Law but rather to the rejection of the hypothesis that the Solar System comprises only seven planets. The investigations that followed led to the discovery of an eighth planet, Neptune. If a test fails, something is wrong. But there is a problem in figuring out what that something is: a missing planet, badly calibrated test equipment, an unsuspected curvature of space, or something else. One consequence of the Duhem–Quine thesis is that one can make any theory compatible with any empirical observation by the addition of a sufficient number of suitable ad hoc hypotheses. Karl Popper accepted this thesis, leading him to reject naïve falsification.
Instead, he favored a "survival of the fittest" view in which the most falsifiable scientific theories are to be preferred. === Anything goes methodology === Paul Feyerabend (1924–1994) argued that no description of scientific method could possibly be broad enough to include all the approaches and methods used by scientists, and that there are no useful and exception-free methodological rules governing the progress of science. He argued that "the only principle that does not inhibit progress is: anything goes". Feyerabend said that science started as a liberating movement, but that over time it had become increasingly dogmatic and rigid and had some oppressive features, and thus had become increasingly an ideology. Because of this, he said it was impossible to come up with an unambiguous way to distinguish science from religion, magic, or mythology. He saw the exclusive dominance of science as a means of directing society as authoritarian and ungrounded. Promulgation of this epistemological anarchism earned Feyerabend the title of "the worst enemy of science" from his detractors. === Sociology of scientific knowledge methodology === According to Kuhn, science is an inherently communal activity which can only be done as part of a community. For him, the fundamental difference between science and other disciplines is the way in which the communities function. Others, especially Feyerabend and some post-modernist thinkers, have argued that there is insufficient difference between social practices in science and other disciplines to maintain this distinction. For them, social factors play an important and direct role in scientific method, but they do not serve to differentiate science from other disciplines. On this account, science is socially constructed, though this does not necessarily imply the more radical notion that reality itself is a social construct. Michel Foucault sought to analyze and uncover how disciplines within the social sciences developed and adopted the methodologies used by their practitioners. In works like The Archaeology of Knowledge, he used the term human sciences. The human sciences do not comprise mainstream academic disciplines; rather, they are an interdisciplinary space for reflection on man, who is the subject of more mainstream scientific knowledge, taken now as an object, sitting between these more conventional areas and associating with disciplines such as anthropology, psychology, sociology, and even history. Rejecting the realist view of scientific inquiry, Foucault argued throughout his work that scientific discourse is not simply an objective study of phenomena, as both natural and social scientists like to believe, but is rather the product of systems of power relations struggling to construct scientific disciplines and knowledge within given societies. With the advances of scientific disciplines, such as psychology and anthropology, the need to separate, categorize, normalize and institutionalize populations into constructed social identities became a staple of the sciences. Constructions of what were considered "normal" and "abnormal" stigmatized and ostracized groups of people, like the mentally ill and sexual and gender minorities. However, some (such as Quine) do maintain that scientific reality is a social construct: Physical objects are conceptually imported into the situation as convenient intermediaries not by definition in terms of experience, but simply as irreducible posits comparable, epistemologically, to the gods of Homer ...
For my part I do, qua lay physicist, believe in physical objects and not in Homer's gods; and I consider it a scientific error to believe otherwise. But in point of epistemological footing, the physical objects and the gods differ only in degree and not in kind. Both sorts of entities enter our conceptions only as cultural posits. The public backlash of scientists against such views, particularly in the 1990s, became known as the science wars. A major development in recent decades has been the study of the formation, structure, and evolution of scientific communities by sociologists and anthropologists – including David Bloor, Harry Collins, Bruno Latour, Ian Hacking and Anselm Strauss. Concepts and methods (such as rational choice, social choice or game theory) from economics have also been applied for understanding the efficiency of scientific communities in the production of knowledge. This interdisciplinary field has come to be known as science and technology studies. Here the approach to the philosophy of science is to study how scientific communities actually operate. === Continental philosophy === Philosophers in the continental philosophical tradition are not traditionally categorized as philosophers of science. However, they have much to say about science, some of which has anticipated themes in the analytical tradition. For example, in The Genealogy of Morals (1887) Friedrich Nietzsche advanced the thesis that the motive for the search for truth in sciences is a kind of ascetic ideal. In general, continental philosophy views science from a world-historical perspective. Philosophers such as Pierre Duhem (1861–1916) and Gaston Bachelard (1884–1962) wrote their works with this world-historical approach to science, predating Kuhn's 1962 work by a generation or more. All of these approaches involve a historical and sociological turn to science, with a priority on lived experience (a kind of Husserlian "life-world"), rather than a progress-based or anti-historical approach as emphasised in the analytic tradition. One can trace this continental strand of thought through the phenomenology of Edmund Husserl (1859–1938), the late works of Merleau-Ponty (Nature: Course Notes from the Collège de France, 1956–1960), and the hermeneutics of Martin Heidegger (1889–1976). The largest effect on the continental tradition with respect to science came from Martin Heidegger's critique of the theoretical attitude in general, which of course includes the scientific attitude. For this reason, the continental tradition has remained much more skeptical of the importance of science in human life and in philosophical inquiry. Nonetheless, there have been a number of important works: especially those of a Kuhnian precursor, Alexandre Koyré (1892–1964). Another important development was that of Michel Foucault's analysis of historical and scientific thought in The Order of Things (1966) and his study of power and corruption within the "science" of madness. Post-Heideggerian authors contributing to continental philosophy of science in the second half of the 20th century include Jürgen Habermas (e.g., Truth and Justification, 1998), Carl Friedrich von Weizsäcker (The Unity of Nature, 1980; German: Die Einheit der Natur (1971)), and Wolfgang Stegmüller (Probleme und Resultate der Wissenschaftstheorie und Analytischen Philosophie, 1973–1986). == Other topics == === Reductionism === Analysis involves breaking an observation or theory down into simpler concepts in order to understand it. 
Reductionism can refer to one of several philosophical positions related to this approach. One type of reductionism suggests that phenomena are amenable to scientific explanation at lower levels of analysis and inquiry. Perhaps a historical event might be explained in sociological and psychological terms, which in turn might be described in terms of human physiology, which in turn might be described in terms of chemistry and physics. Daniel Dennett distinguishes legitimate reductionism from what he calls greedy reductionism, which denies real complexities and leaps too quickly to sweeping generalizations. === Social accountability === A broad issue affecting the neutrality of science concerns the areas which science chooses to explore—that is, what part of the world and of humankind are studied by science. Philip Kitcher in his Science, Truth, and Democracy argues that scientific studies that attempt to show one segment of the population as being less intelligent, less successful, or emotionally backward compared to others have a political feedback effect which further excludes such groups from access to science. Thus such studies undermine the broad consensus required for good science by excluding certain people, and so proving themselves in the end to be unscientific. == Philosophy of particular sciences == In the words of Daniel Dennett, "There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination." In addition to addressing the general questions regarding science and induction, many philosophers of science are occupied by investigating foundational problems in particular sciences. They also examine the implications of particular sciences for broader philosophical questions. The late 20th and early 21st centuries have seen a rise in the number of practitioners of philosophy of a particular science. === Philosophy of statistics === The problem of induction discussed above is seen in another form in debates over the foundations of statistics. The standard approach to statistical hypothesis testing avoids claims about whether evidence supports a hypothesis or makes it more probable. Instead, the typical test yields a p-value, which is the probability of the evidence being such as it is, under the assumption that the null hypothesis is true. If the p-value is too low, the null hypothesis is rejected, in a way analogous to falsification. In contrast, Bayesian inference seeks to assign probabilities to hypotheses; a numerical sketch contrasting the two approaches appears at the end of this section. Related topics in philosophy of statistics include probability interpretations, overfitting, and the difference between correlation and causation. === Philosophy of mathematics === Philosophy of mathematics is concerned with the philosophical foundations and implications of mathematics. The central questions are whether numbers, triangles, and other mathematical entities exist independently of the human mind and what is the nature of mathematical propositions. Is asking whether "1 + 1 = 2" is true fundamentally different from asking whether a ball is red? Was calculus invented or discovered? A related question is whether learning mathematics requires experience or reason alone. What does it mean to prove a mathematical theorem and how does one know whether a mathematical proof is correct? (A machine-checked illustration follows below.) Philosophers of mathematics also aim to clarify the relationships between mathematics and logic, human capabilities such as intuition, and the material universe.
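To make the contrast drawn in the Philosophy of statistics subsection concrete, here is a minimal numerical sketch of the two approaches for a toy coin-flip experiment; the data (60 heads in 100 tosses) and the uniform prior are illustrative assumptions, not examples from the article.

```python
from math import comb

# Toy data (an assumption for illustration): 60 heads in 100 tosses.
heads, n = 60, 100
tails = n - heads

# Frequentist hypothesis test: the p-value is the probability of evidence
# at least this extreme, assuming the null hypothesis (a fair coin) is true.
one_sided = sum(comb(n, k) for k in range(heads, n + 1)) / 2**n
p_value = 2 * one_sided  # two-sided, by symmetry under the null
print(f"p-value under the fair-coin null: {p_value:.3f}")  # ~0.057

# Bayesian inference instead assigns a probability to the hypothesis itself.
# With a uniform prior on the bias theta, approximate the posterior
# probability that the coin favours heads on a coarse grid.
grid = [i / 1000 for i in range(1, 1000)]
likelihood = [t**heads * (1 - t)**tails for t in grid]
posterior_biased = sum(l for t, l in zip(grid, likelihood) if t > 0.5) / sum(likelihood)
print(f"P(theta > 0.5 | data): {posterior_biased:.3f}")  # ~0.98
```

The same data thus yield two different kinds of statement: a frequentist verdict about the evidence given the null hypothesis, and a Bayesian degree of belief in the hypothesis given the evidence, which is precisely the philosophical divide the subsection describes.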
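On the question of what it means to prove a mathematical theorem and how one knows a proof is correct, proof assistants offer one concrete modern answer: every inference is checked mechanically by a small trusted kernel. A minimal illustration in Lean 4, offered as an aside rather than as the article's own example:

```lean
-- The section's arithmetic example, stated as a theorem and checked by Lean.
-- `rfl` (reflexivity) succeeds because both sides compute to the same
-- natural number, so the proof's correctness is a mechanical type check.
theorem one_plus_one : 1 + 1 = 2 := rfl
```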
=== Philosophy of physics === Philosophy of physics is the study of the fundamental, philosophical questions underlying modern physics, the study of matter and energy and how they interact. The main questions concern the nature of space and time, atoms and atomism. Also included are the predictions of cosmology, the interpretation of quantum mechanics, the foundations of statistical mechanics, causality, determinism, and the nature of physical laws. Classically, several of these questions were studied as part of metaphysics (for example, those about causality, determinism, and space and time). === Philosophy of chemistry === Philosophy of chemistry is the philosophical study of the methodology and content of the science of chemistry. It is explored by philosophers, chemists, and philosopher-chemist teams. It includes research on general philosophy of science issues as applied to chemistry. For example, can all chemical phenomena be explained by quantum mechanics or is it not possible to reduce chemistry to physics? For another example, chemists have discussed the philosophy of how theories are confirmed in the context of confirming reaction mechanisms. Determining reaction mechanisms is difficult because they cannot be observed directly. Chemists can use a number of indirect measures as evidence to rule out certain mechanisms, but they are often unsure if the remaining mechanism is correct because there are many other possible mechanisms that they have not tested or even thought of. Philosophers have also sought to clarify the meaning of chemical concepts which do not refer to specific physical entities, such as chemical bonds. === Philosophy of astronomy === The philosophy of astronomy seeks to understand and analyze the methodologies and technologies used by experts in the discipline, focusing on how observations made about space and astrophysical phenomena can be studied. Because astronomers rely on theories and formulas from other scientific disciplines, such as chemistry and physics, a main point of inquiry is how knowledge about the cosmos can be obtained, how facts about space can be analyzed scientifically and integrated with other established knowledge, and what place the Earth and the Solar System occupy in humanity's view of itself in the universe. === Philosophy of Earth sciences === The philosophy of Earth science is concerned with how humans obtain and verify knowledge of the workings of the Earth system, including the atmosphere, hydrosphere, and geosphere (solid earth). Earth scientists' ways of knowing and habits of mind share important commonalities with other sciences, but also have distinctive attributes that emerge from the complex, heterogeneous, unique, long-lived, and non-manipulatable nature of the Earth system. === Philosophy of biology === Philosophy of biology deals with epistemological, metaphysical, and ethical issues in the biological and biomedical sciences. Although philosophers of science and philosophers generally have long been interested in biology (e.g., Aristotle, Descartes, Leibniz and even Kant), philosophy of biology only emerged as an independent field of philosophy in the 1960s and 1970s. Philosophers of science began to pay increasing attention to developments in biology, from the rise of the modern synthesis in the 1930s and 1940s to the discovery of the structure of deoxyribonucleic acid (DNA) in 1953 to more recent advances in genetic engineering.
Other key ideas such as the reduction of all life processes to biochemical reactions as well as the incorporation of psychology into a broader neuroscience are also addressed. Research in current philosophy of biology includes investigation of the foundations of evolutionary theory (such as Peter Godfrey-Smith's work), and the role of viruses as persistent symbionts in host genomes. As a consequence, the evolution of genetic content order is seen as the result of competent genome editors in contrast to former narratives in which error replication events (mutations) dominated. === Philosophy of medicine === Beyond medical ethics and bioethics, the philosophy of medicine is a branch of philosophy that includes the epistemology and ontology/metaphysics of medicine. Within the epistemology of medicine, evidence-based medicine (EBM) (or evidence-based practice (EBP)) has attracted attention, most notably the roles of randomisation, blinding and placebo controls. Related to these areas of investigation, ontologies of specific interest to the philosophy of medicine include Cartesian dualism, the monogenetic conception of disease and the conceptualization of 'placebos' and 'placebo effects'. There is also a growing interest in the metaphysics of medicine, particularly the idea of causation. Philosophers of medicine might not only be interested in how medical knowledge is generated, but also in the nature of such phenomena. Causation is of interest because the purpose of much medical research is to establish causal relationships, e.g. what causes disease, or what causes people to get better. === Philosophy of psychiatry === Philosophy of psychiatry explores philosophical questions relating to psychiatry and mental illness. The philosopher of science and medicine Dominic Murphy identifies three areas of exploration in the philosophy of psychiatry. The first concerns the examination of psychiatry as a science, using the tools of the philosophy of science more broadly. The second entails the examination of the concepts employed in discussion of mental illness, including the experience of mental illness, and the normative questions it raises. The third area concerns the links and discontinuities between the philosophy of mind and psychopathology. === Philosophy of psychology === Philosophy of psychology refers to issues at the theoretical foundations of modern psychology. Some of these issues are epistemological concerns about the methodology of psychological investigation. For example, is the best method for studying psychology to focus only on the response of behavior to external stimuli or should psychologists focus on mental perception and thought processes? If the latter, an important question is how the internal experiences of others can be measured. Self-reports of feelings and beliefs may not be reliable because, even in cases in which there is no apparent incentive for subjects to intentionally deceive in their answers, self-deception or selective memory may affect their responses. Then even in the case of accurate self-reports, how can responses be compared across individuals? Even if two individuals respond with the same answer on a Likert scale, they may be experiencing very different things. Other issues in philosophy of psychology are philosophical questions about the nature of mind, brain, and cognition, and are perhaps more commonly thought of as part of cognitive science, or philosophy of mind. For example, are humans rational creatures? 
Is there any sense in which they have free will, and how does that relate to the experience of making choices? Philosophy of psychology also closely monitors contemporary work conducted in cognitive neuroscience, psycholinguistics, and artificial intelligence, questioning what they can and cannot explain in psychology. Philosophy of psychology is a relatively young field, because psychology only became a discipline of its own in the late 1800s. In particular, neurophilosophy has just recently become its own field with the works of Paul Churchland and Patricia Churchland. Philosophy of mind, by contrast, has been a well-established discipline since before psychology was a field of study at all. It is concerned with questions about the very nature of mind, the qualities of experience, and particular issues like the debate between dualism and monism. === Philosophy of social science === The philosophy of social science is the study of the logic and method of the social sciences, such as sociology and cultural anthropology. Philosophers of social science are concerned with the differences and similarities between the social and the natural sciences, causal relationships between social phenomena, the possible existence of social laws, and the ontological significance of structure and agency. The French philosopher Auguste Comte (1798–1857) established the epistemological perspective of positivism in The Course in Positive Philosophy, a series of texts published between 1830 and 1842. The first three volumes of the Course dealt chiefly with the natural sciences already in existence (geoscience, astronomy, physics, chemistry, biology), whereas the latter two emphasised the inevitable coming of social science: "sociologie". For Comte, the natural sciences had to necessarily arrive first, before humanity could adequately channel its efforts into the most challenging and complex "Queen science" of human society itself. Comte offers an evolutionary system proposing that society undergoes three phases in its quest for the truth according to a general 'law of three stages'. These are (1) the theological, (2) the metaphysical, and (3) the positive. Comte's positivism established the initial philosophical foundations for formal sociology and social research. Durkheim, Marx, and Weber are more typically cited as the fathers of contemporary social science. In psychology, a positivistic approach has historically been favoured in behaviourism. Positivism has also been espoused by 'technocrats' who believe in the inevitability of social progress through science and technology. The positivist perspective has been associated with 'scientism'; the view that the methods of the natural sciences may be applied to all areas of investigation, be it philosophical, social scientific, or otherwise. Among most social scientists and historians, orthodox positivism has long since lost popular support. Today, practitioners of both social and physical sciences instead take into account the distorting effect of observer bias and structural limitations. This scepticism has been facilitated by a general weakening of deductivist accounts of science by philosophers such as Thomas Kuhn, and new philosophical movements such as critical realism and neopragmatism. The philosopher-sociologist Jürgen Habermas has critiqued pure instrumental rationality as meaning that scientific thinking becomes something akin to ideology itself.
=== Philosophy of technology === The philosophy of technology is a sub-field of philosophy that studies the nature of technology. Specific research topics include study of the role of tacit and explicit knowledge in creating and using technology, the nature of functions in technological artifacts, the role of values in design, and ethics related to technology. Technology and engineering can both involve the application of scientific knowledge. The philosophy of engineering is an emerging sub-field of the broader philosophy of technology.
Wikipedia/Philosophy_of_science
In game theory, a strategy A dominates another strategy B if A will always produce a better result than B, regardless of how any other player plays. Some very simple games (called straightforward games) can be solved using dominance. == Terminology == A player can compare two strategies, A and B, to determine which one is better. The result of the comparison is one of:
- B strictly dominates (>) A: choosing B always gives a better outcome than choosing A, no matter what the other players do.
- B weakly dominates (≥) A: choosing B always gives at least as good an outcome as choosing A, no matter what the other players do, and there is at least one set of opponents' actions for which B gives a better outcome than A. (Notice that if B strictly dominates A, then B weakly dominates A. Therefore, we can say "B dominates A" to mean "B weakly dominates A".)
- B is weakly dominated by A: there is at least one set of opponents' actions for which B gives a worse outcome than A, while all other sets of opponents' actions give B the same payoff as A. (Strategy A weakly dominates B.)
- B is strictly dominated by A: choosing B always gives a worse outcome than choosing A, no matter what the other player(s) do. (Strategy A strictly dominates B.)
- Neither A nor B dominates the other: B and A are not equivalent, and B neither dominates, nor is dominated by, A. Choosing A is better in some cases, while choosing B is better in other cases, depending on exactly how the opponent chooses to play. For example, B is "throw rock" while A is "throw scissors" in Rock, Paper, Scissors.
This notion can be generalized beyond the comparison of two strategies:
- Strategy B is strictly dominant if strategy B strictly dominates every other possible strategy.
- Strategy B is weakly dominant if strategy B weakly dominates every other possible strategy.
- Strategy B is strictly dominated if some other strategy exists that strictly dominates B.
- Strategy B is weakly dominated if some other strategy exists that weakly dominates B.
These comparisons can be checked mechanically; see the sketch at the end of this section. Three background notions recur throughout:
- Strategy: A complete contingent plan for a player in the game. A complete contingent plan is a full specification of a player's behavior, describing each action a player would take at every possible decision point. Because information sets represent points in a game where a player must make a decision, a player's strategy describes what that player will do at each information set.
- Rationality: The assumption that each player acts in a way that is designed to bring about what he or she most prefers given probabilities of various outcomes; von Neumann and Morgenstern showed that if these preferences satisfy certain conditions, this is mathematically equivalent to maximizing a payoff. A straightforward example of maximizing payoff is that of monetary gain, but for the purpose of a game theory analysis, this payoff can take any desired outcome—cash reward, minimization of exertion or discomfort, or promoting justice can all be modeled as amassing an overall "utility" for the player. The assumption of rationality states that players will always act in the way that best satisfies their ordering from best to worst of various possible outcomes.
- Common Knowledge: The assumption that each player has knowledge of the game, knows the rules and payoffs associated with each course of action, and realizes that every other player has this same level of understanding. This is the premise that allows a player to form a judgment about the actions of another player, backed by the assumption of rationality, and to take it into consideration when selecting an action.
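As a compact restatement of the comparisons defined above, the sketch below checks strict and weak dominance of one strategy over another from a single player's point of view; each column is one combination of the other players' actions, and the payoff numbers are purely illustrative assumptions.

```python
import numpy as np

# Rows: this player's strategies; columns: opponents' action profiles;
# entries: this player's payoff (illustrative numbers, not from the article).
payoffs = np.array([
    [3, 1, 4],   # strategy A
    [2, 0, 3],   # strategy B
])

def strictly_dominates(p: np.ndarray, a: int, b: int) -> bool:
    """a always gives a strictly better outcome than b."""
    return bool(np.all(p[a] > p[b]))

def weakly_dominates(p: np.ndarray, a: int, b: int) -> bool:
    """a is never worse than b, and strictly better at least once."""
    return bool(np.all(p[a] >= p[b])) and bool(np.any(p[a] > p[b]))

print(strictly_dominates(payoffs, 0, 1))  # True: A beats B in every column
print(weakly_dominates(payoffs, 0, 1))    # True: strict implies weak
```

Note that weak dominance needs both clauses: the "never worse" condition alone would also hold between two strategies with identical payoffs, which the definition above excludes.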
== Dominance and Nash equilibria == If a strictly dominant strategy exists for one player in a game, that player will play that strategy in each of the game's Nash equilibria. If both players have a strictly dominant strategy, the game has a unique Nash equilibrium, referred to as a "dominant strategy equilibrium". However, that Nash equilibrium is not necessarily "efficient", meaning that there may be non-equilibrium outcomes of the game that would be better for both players. The classic game used to illustrate this is the Prisoner's Dilemma. Strictly dominated strategies cannot be a part of a Nash equilibrium, and as such, it is irrational for any player to play them. On the other hand, weakly dominated strategies may be part of Nash equilibria. For instance, consider a symmetric game in which each player chooses C or D, where the outcome (C, C) gives each player a payoff of 1 and every other outcome gives both players a payoff of 0. Strategy C weakly dominates strategy D. Consider playing C: if one's opponent plays C, one gets 1; if one's opponent plays D, one gets 0. Compare this to D, where one gets 0 regardless. Since in one case, one does better by playing C instead of D and never does worse, C weakly dominates D. Despite this, (D, D) is a Nash equilibrium. Suppose both players choose D. Neither player will do any better by unilaterally deviating—if a player switches to playing C, they will still get 0. This satisfies the requirements of a Nash equilibrium. Suppose both players choose C. Neither player will do better by unilaterally deviating—if a player switches to playing D, they will get 0. This also satisfies the requirements of a Nash equilibrium. == Iterated elimination of strictly dominated strategies == The iterated elimination (or deletion, or removal) of dominated strategies (also known as IESDS, IDSDS, or IRSDS) is one common technique for solving games that involves iteratively removing dominated strategies. In the first step, all strictly dominated strategies are removed from the strategy space of each of the players, since no rational player would ever play these strategies. This results in a new, smaller game. Some strategies—that were not dominated before—may be dominated in the smaller game. The first step is repeated, creating a new even smaller game, and so on. This process is valid since it is assumed that rationality among players is common knowledge, that is, each player knows that the rest of the players are rational, and each player knows that the rest of the players know that he knows that the rest of the players are rational, and so on ad infinitum (see Aumann, 1976). A minimal sketch of this procedure in code follows.
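The elimination loop described above is easy to state in code. The following is a minimal sketch for a two-player game in pure strategies; the payoff matrices are illustrative assumptions (not from the article), chosen so that elimination leaves a single outcome.

```python
import numpy as np

# Illustrative payoffs (not from the article): rows U, D for the row player;
# columns L, M, R for the column player.
row_pay = np.array([[1, 1, 0],    # row player's payoffs
                    [0, 0, 2]])
col_pay = np.array([[0, 2, 1],    # column player's payoffs
                    [3, 1, 0]])

def iesds(row_pay, col_pay):
    """Iterated elimination of strictly dominated pure strategies."""
    rows = list(range(row_pay.shape[0]))
    cols = list(range(row_pay.shape[1]))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:  # drop rows strictly dominated by a surviving row
            if any(np.all(row_pay[q, cols] > row_pay[r, cols])
                   for q in rows if q != r):
                rows.remove(r)
                changed = True
        for c in cols[:]:  # drop columns strictly dominated likewise
            if any(np.all(col_pay[rows, q] > col_pay[rows, c])
                   for q in cols if q != c):
                cols.remove(c)
                changed = True
    return rows, cols

# R is dominated by M, then D by U, then L by M: only (U, M) survives.
print(iesds(row_pay, col_pay))  # ([0], [1])
```

Because only strictly dominated strategies are removed, the surviving set is independent of the order in which eliminations are performed; with weak dominance, the order of elimination can change the result.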
== See also ==
- Max-dominated strategy
- Risk dominance
- Winning strategy
Wikipedia/Strategic_dominance
Family therapy (also referred to as family counseling, family systems therapy, marriage and family therapy, couple and family therapy) is a branch of psychotherapy focused on families and couples in intimate relationships to nurture change and development. It tends to view change in terms of the systems of interaction between family members. The different schools of family therapy have in common a belief that, regardless of the origin of the problem, and regardless of whether the clients consider it an "individual" or "family" issue, involving families in solutions often benefits clients. This involvement of families is commonly accomplished by their direct participation in the therapy session. The skills of the family therapist thus include the ability to influence conversations in a way that catalyses the strengths, wisdom, and support of the wider system. In the field's early years, many clinicians defined the family in a narrow, traditional manner usually including parents and children. As the field has evolved, the concept of the family is more commonly defined in terms of strongly supportive, long-term roles and relationships between people who may or may not be related by blood or marriage. The conceptual frameworks developed by family therapists, especially those of family systems theorists, have been applied to a wide range of human behavior, including organisational dynamics and the study of greatness. == History and theoretical frameworks == Formal interventions with families to help individuals and families experiencing various kinds of problems have been a part of many cultures, probably throughout history. These interventions have sometimes involved formal procedures or rituals, and often included the extended family as well as non-kin members of the community (see for example Ho'oponopono). Following the emergence of specialization in various societies, these interventions were often conducted by particular members of a community – for example, a chief, priest, physician, and so on – usually as an ancillary function. Family therapy as a distinct professional practice within Western cultures can be argued to have had its origins in the social work movements of the 19th century in the United Kingdom and the United States. As a branch of psychotherapy, its roots can be traced somewhat later to the early 20th century with the emergence of the child guidance movement and marriage counseling. The formal development of family therapy dates from the 1940s and early 1950s with the founding in 1942 of the American Association of Marriage Counselors (the precursor of the AAMFT), and through the work of various independent clinicians and groups – in the United Kingdom (John Bowlby at the Tavistock Clinic), the United States (Donald deAvila Jackson, John Elderkin Bell, Nathan Ackerman, Christian Midelfort, Theodore Lidz, Lyman Wynne, Murray Bowen, Carl Whitaker, Virginia Satir, Ivan Boszormenyi-Nagy), and in Hungary, D.L.P. Liebermann – who began seeing family members together for observation or therapy sessions. There was initially a strong influence from psychoanalysis (most of the early founders of the field had psychoanalytic backgrounds) and social psychiatry, and later from learning theory and behavior therapy – and significantly, these clinicians began to articulate various theories about the nature and functioning of the family as an entity that was more than a mere aggregation of individuals. 
The movement received an important boost starting in the early 1950s through the work of anthropologist Gregory Bateson and colleagues – Jay Haley, Donald D. Jackson, John Weakland, William Fry, and later, Virginia Satir, Ivan Boszormenyi-Nagy, Paul Watzlawick and others – at Palo Alto in the United States, who introduced ideas from cybernetics and general systems theory into social psychology and psychotherapy, focusing in particular on the role of communication (see Bateson Project). This approach eschewed the traditional focus on individual psychology and historical factors – that involve so-called linear causation and content – and emphasized instead feedback and homeostatic mechanisms and "rules" in here-and-now interactions – so-called circular causation and process – that were thought to maintain or exacerbate problems, whatever the original cause(s). (See also systems psychology and systemic therapy.) This group was also influenced significantly by the work of US psychiatrist, hypnotherapist, and brief therapist Milton H. Erickson – especially his innovative use of strategies for change, such as paradoxical directives. The members of the Bateson Project (like the founders of a number of other schools of family therapy, including Carl Whitaker, Murray Bowen, and Ivan Boszormenyi-Nagy) had a particular interest in the possible psychosocial causes and treatment of schizophrenia, especially in terms of the putative "meaning" and "function" of signs and symptoms within the family system. The research of psychiatrists and psychoanalysts Lyman Wynne and Theodore Lidz on communication deviance and roles (e.g., pseudo-mutuality, pseudo-hostility, schism and skew) in families of people with schizophrenia also became influential with systems-communications-oriented theorists and therapists. A related theme, applying to dysfunction and psychopathology more generally, was that of the "identified patient" or "presenting problem" as a manifestation of or surrogate for the family's, or even society's, problems. (See also double bind; family nexus.) By the mid-1960s, a number of distinct schools of family therapy had emerged. From those groups that were most strongly influenced by cybernetics and systems theory, there came MRI Brief Therapy, and slightly later, strategic therapy, Salvador Minuchin's structural family therapy and the Milan systems model. Partly in reaction to some aspects of these systemic models, came the experiential approaches of Virginia Satir and Carl Whitaker, which downplayed theoretical constructs, and emphasized subjective experience and unexpressed feelings (including the subconscious), authentic communication, spontaneity, creativity, total therapist engagement, and often included the extended family. Concurrently and somewhat independently, there emerged the various intergenerational therapies of Murray Bowen, Ivan Boszormenyi-Nagy, James Framo, and Norman Paul, which present different theories about the intergenerational transmission of health and dysfunction, but which all deal usually with at least three generations of a family (in person or conceptually), either directly in therapy sessions, or via "homework", "journeys home", etc.
Psychodynamic family therapy – which, more than any other school of family therapy, deals directly with individual psychology and the unconscious in the context of current relationships – continued to develop through a number of groups that were influenced by the ideas and methods of Nathan Ackerman, and also by the British School of Object Relations and John Bowlby's work on attachment. Multiple-family group therapy, a precursor of psychoeducational family intervention, emerged, in part, as a pragmatic alternative form of intervention – especially as an adjunct to the treatment of serious mental disorders with a significant biological basis, such as schizophrenia – and represented something of a conceptual challenge to some of the systemic (and thus potentially "family-blaming") paradigms of pathogenesis that were implicit in many of the dominant models of family therapy. The late 1960s and early 1970s saw the development of network therapy (which bears some resemblance to traditional practices such as Ho'oponopono) by Ross Speck and Carolyn Attneave, and the emergence of behavioral marital therapy (renamed behavioral couples therapy in the 1990s) and behavioral family therapy as models in their own right. By the late 1970s, the weight of clinical experience – especially in relation to the treatment of serious mental disorders – had led to some revision of a number of the original models and a moderation of some of the earlier stridency and theoretical purism. There were the beginnings of a general softening of the strict demarcations between schools, with moves toward rapprochement, integration, and eclecticism – although there was, nevertheless, some hardening of positions within some schools. These trends were reflected in and influenced by lively debates within the field and critiques from various sources, including feminism and post-modernism, that reflected in part the cultural and political tenor of the times, and which foreshadowed the emergence (in the 1980s and 1990s) of the various post-systems constructivist and social constructionist approaches. While there was still debate within the field about whether, or to what degree, the systemic-constructivist and medical-biological paradigms were necessarily antithetical to each other (see also Anti-psychiatry; Biopsychosocial model), there was a growing willingness and tendency on the part of family therapists to work in multi-modal clinical partnerships with other members of the helping and medical professions. From the mid-1980s to the present, the field has been marked by a diversity of approaches that partly reflect the original schools, but which also draw on other theories and methods from individual psychotherapy and elsewhere – these approaches and sources include: brief therapy, structural therapy, constructivist approaches (e.g., Milan systems, post-Milan/collaborative/conversational, reflective), Bring forthism approach (e.g. Dr. Karl Tomm's IPscope model and Interventive interviewing), solution-focused therapy, narrative therapy, a range of cognitive and behavioral approaches, psychodynamic and object relations approaches, attachment and emotionally focused therapy, intergenerational approaches, network therapy, and multisystemic therapy (MST). Multicultural, intercultural, and integrative approaches are being developed, with Vincenzo Di Nicola weaving a synthesis of family therapy and transcultural psychiatry in his model of cultural family therapy, A Stranger in the Family: Culture, Families, and Therapy. 
Many practitioners claim to be eclectic, using techniques from several areas, depending upon their own inclinations and/or the needs of the client(s), and there is a growing movement toward a single "generic" family therapy that seeks to incorporate the best of the accumulated knowledge in the field and that can be adapted to many different contexts; however, there are still a significant number of therapists who adhere more or less strictly to a particular approach, or to a limited number of approaches. The Liberation Based Healing framework for family therapy offers a complete paradigm shift for working with families while addressing the intersections of race, class, gender identity, sexual orientation and other socio-political identity markers. This theoretical approach and praxis is informed by critical pedagogy, feminism, critical race theory, and decolonizing theory. This framework necessitates an understanding of the ways colonization, cis-heteronormativity, patriarchy, white supremacy and other systems of domination impact individuals, families and communities, and it centers the need to disrupt the status quo in how power operates. Traditional Western models of family therapy have historically ignored these dimensions, and when white, male privilege has been critiqued – largely by feminist theory practitioners – it has often been to the benefit of middle-class, white women's experiences. While an understanding of intersectionality is of particular significance in working with families with violence, a liberatory framework examines how power, privilege and oppression operate within and across all relationships. Liberatory practices are based on the principles of critical consciousness, accountability, and empowerment. These principles guide not only the content of the therapeutic work with clients but also the supervisory and training process of therapists. Dr. Rhea Almeida developed the cultural context model as a way to operationalize these concepts into practice through the integration of culture circles, sponsors, and a socio-educational process within the therapeutic work. Ideas and methods from family therapy have been influential in psychotherapy generally: a survey of over 2,500 US therapists in 2006 revealed that of the 10 most influential therapists of the previous quarter-century, three were prominent family therapists, and that the marital and family systems model was the second most utilized model after cognitive behavioral therapy. == Techniques == Family therapy uses a range of counseling and other techniques, including:
Structural therapy – identifies and re-orders the organisation of the family system
Strategic therapy – looks at patterns of interactions between family members
Systemic/Milan therapy – focuses on belief systems
Narrative therapy – restorying of the dominant problem-saturated narrative, with an emphasis on context and on separating the problem from the person
Transgenerational therapy – addresses the transgenerational transmission of unhelpful patterns of belief and behaviour
The IPscope model and interventive interviewing
Communication theory
Psychoeducation
Psychotherapy
Relationship counseling
Relationship education
Systemic coaching
Systems theory
Reality therapy
The genogram
The number of sessions depends on the situation, but a typical course of treatment runs from about 5 to 20 sessions. A family therapist usually meets several members of the family at the same time.
This has the advantage of making apparent – both to the therapist and to the family – differences in the ways family members perceive their mutual relations, as well as the interaction patterns that arise in the session. These patterns frequently mirror habitual interaction patterns at home, even though the therapist is now incorporated into the family system. Therapy interventions usually focus on relationship patterns rather than on analyzing impulses of the unconscious mind or early childhood trauma of individuals as a Freudian therapist would do – although some schools of family therapy, for example psychodynamic and intergenerational, do consider such individual and historical factors (thus embracing both linear and circular causation) and they may use instruments such as the genogram to help to elucidate the patterns of relationship across generations. The distinctive feature of family therapy is its perspective and analytical framework rather than the number of people present at a therapy session. Specifically, family therapists are relational therapists: they are generally more interested in what goes on between individuals than within one or more individuals, although some family therapists – in particular those who identify as psychodynamic, object relations, intergenerational, or experiential family therapists (EFTs) – tend to be as interested in individuals as in the systems those individuals and their relationships constitute. Depending on the conflicts at issue and the progress of therapy to date, a therapist may focus on analyzing specific previous instances of conflict, as by reviewing a past incident and suggesting alternative ways family members might have responded to one another during it, or instead proceed directly to addressing the sources of conflict at a more abstract level, as by pointing out patterns of interaction that the family might not have noticed. Family therapists tend to be more interested in the maintenance and/or solving of problems than in trying to identify a single cause. Some families may perceive cause-effect analyses as attempts to allocate blame to one or more individuals, with the effect that for many families a focus on causation is of little or no clinical utility. Family therapy thus uses a circular rather than a linear model of problem evaluation: families are helped by identifying patterns of behaviour, what sustains them, and what can be done to better their situation. == Evidence base == Family therapy has an evolving evidence base. A summary of current evidence is available via the UK's Association of Family Therapy. Evaluation and outcome studies can also be found on the Family Therapy and Systemic Research Centre website. The website also includes quantitative and qualitative research studies of many aspects of family therapy. According to a 2004 French government study conducted by the French Institute of Health and Medical Research (INSERM), family and couples therapy was the second most effective therapy after cognitive behavioral therapy. The study used meta-analysis of over a hundred secondary studies, rating each therapy's effectiveness as either "proven" or "presumed". Of the treatments studied, family therapy was presumed or proven effective at treating schizophrenia, bipolar disorder, anorexia, and alcohol dependency.
== Concerns and criticism == In a 1999 address to the Coalition of Marriage, Family and Couples Education conference in Washington, D.C., University of Minnesota Professor William Doherty said: I take no joy in being a whistle blower, but it's time. I am a committed marriage and family therapist, having practiced this form of therapy since 1977. I train marriage and family therapists. I believe that marriage therapy can be very helpful in the hands of therapists who are committed to the profession and the practice. But there are a lot of problems out there with the practice of therapy – a lot of problems. Doherty suggested questions prospective clients should ask a therapist before beginning treatment: "Can you describe your background and training in marital therapy?" "What is your attitude toward salvaging a troubled marriage versus helping couples break up?" "What is your approach when one partner is seriously considering ending the marriage and the other wants to save it?" "What percentage of your practice is marital therapy?" "Of the couples you treat, what percentage would you say work out enough of their problems to stay married with a reasonable amount of satisfaction with the relationship?" "What percentage break up while they are seeing you?" "What percentage do not improve?" "What do you think makes the differences in these results?" == Licensing and degrees == Family therapy practitioners come from a range of professional backgrounds, and some are specifically qualified or licensed/registered in family therapy (licensing is not required in some jurisdictions, and requirements vary from place to place). In the United Kingdom, family therapists will usually have prior professional training in one of the helping professions – for example as psychologists, psychotherapists, or counselors – followed by further training in family therapy, either a diploma or an M.Sc. In the United States there is a specific degree and license as a marriage and family therapist; however, psychologists, nurses, psychotherapists, social workers, counselors, and other licensed mental health professionals may also practice family therapy. In the UK, family therapists who have completed a four-year qualifying programme of study (MSc) are eligible to register with the professional body, the Association of Family Therapy (AFT), and with the UK Council for Psychotherapy (UKCP). A master's degree is required to work as a Marriage and Family Therapist (MFT) in some American states. Most commonly, MFTs will first earn an M.S. or M.A. degree in marriage and family therapy, counseling, psychology, family studies, or social work. After graduation, prospective MFTs work as interns under the supervision of a licensed professional and are referred to as MFT interns (MFTi). Prior to 1999 in California, counselors who specialized in this area were called Marriage, Family and Child Counselors. Today, they are known as Marriage and Family Therapists (MFT), and work variously in private practice or in clinical settings such as hospitals, institutions, or counseling organizations. Marriage and family therapists in the United States and Canada often seek degrees from accredited master's or doctoral programs recognized by the Commission on Accreditation for Marriage and Family Therapy Education (COAMFTE), a division of the American Association for Marriage and Family Therapy. Requirements vary, but in most states about 3000 hours of supervised work as an intern are needed to sit for a licensing exam. MFTs must be licensed by the state to practice.
Only after completing their education and internship and passing the state licensing exam can a person call themselves a Marital and Family Therapist and work unsupervised. License restrictions can vary considerably from state to state. Contact information for licensing boards in the United States is provided by the Association of Marital and Family Regulatory Boards. There have been concerns raised within the profession about the fact that specialist training in couples therapy – as distinct from family therapy in general – is not required to gain a license as an MFT or membership of the main professional body, the AAMFT. === Values and ethics === Since issues of interpersonal conflict, power, control, values, and ethics are often more pronounced in relationship therapy than in individual therapy, there has been debate within the profession about the different values that are implicit in the various theoretical models of therapy and the role of the therapist's own values in the therapeutic process, and about how prospective clients should best go about finding a therapist whose values and objectives are most consistent with their own. An early paper on ethics in family therapy, written by Vincenzo Di Nicola in consultation with a bioethicist, asked basic questions about whether strategic interventions "mean what they say" and whether it is ethical to invent opinions offered to families about the treatment process, such as statements saying that half of the treatment team believes one thing and half believes another. Specific issues that have emerged have included an increasing questioning of the longstanding notion of therapeutic neutrality; a concern with questions of justice and self-determination, connectedness and independence, and functioning versus authenticity; and questions about the degree of the therapist's pro-marriage/family versus pro-individual commitment. The American Association for Marriage and Family Therapy requires members to adhere to a code of ethics, including a commitment to "continue therapeutic relationships only so long as it is reasonably clear that clients are benefiting from the relationship." == Founders and key influences == == Summary of theories and techniques == == Journals == Australian and New Zealand Journal of Family Therapy Contemporary Family Therapy Family Process Family Relations: Interdisciplinary Journal of Applied Family Studies (ISSN 0197-6664) Journal of Family Therapy Marriage Fitness Murmurations: Journal of Transformative Systemic Practice Sexual and Relationship Therapy Journal of Marital & Family Therapy Families, Systems and Health == See also == == Footnotes == == Further reading == Deborah Weinstein, The Pathological Family: Postwar America and the Rise of Family Therapy. Ithaca, NY: Cornell University Press, 2013. Satir, V., Banmen, J., Gerber, J., & Gomori, M. (1991). The Satir Model: Family Therapy and Beyond. Palo Alto, CA: Science and Behavior Books. The Systemic Thinking and Practice Series. Routledge. Gehring, T. M., Debry, M., & Smith, P. K. (Eds.). (2016). The Family System Test FAST: Theory and Application. Hove: Brunner-Routledge.
Wikipedia/Family_therapy
Evolution and the Theory of Games is a book by the British evolutionary biologist John Maynard Smith on evolutionary game theory. The book was initially published in December 1982 by Cambridge University Press. == Overview == In the book, John Maynard Smith summarises work on evolutionary game theory that had developed in the 1970s, to which he made several important contributions. The main contribution of the book is in introducing the concept of the evolutionarily stable strategy (ESS). A set of behaviours is an ESS if, when common in the population, it is the most beneficial course of action available, so that no alternative behaviour can invade. Suppose, for instance, that in a population of frogs, males fight to the death over breeding ponds. Fighting to the death would be an ESS if any single "cowardly" frog that declined to do so always fared worse (in terms of evolutionary fitness). A more likely scenario is one in which fighting to the death is not an ESS, because a frog might arise that stops fighting once it realises that it is going to lose. This frog would thus reap the benefits of fighting, but not the ultimate cost. Hence, fighting to the death would easily be invaded by a mutation that causes this "informed fighting." Much complexity can develop from this. == Reception == == See also == Evolutionary biology == References == == External links == Cambridge University Press
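The ESS condition can be checked mechanically for a finite payoff matrix. The following is a minimal illustrative sketch (not taken from the book) that applies Maynard Smith's two-part stability test to a hypothetical Hawk-Dove game; the payoff values, derived from an assumed resource value V = 2 and injury cost C = 4, are chosen only for illustration.

```python
# Minimal sketch of Maynard Smith's ESS test (illustrative, not from the book).
# payoff[s][t] is the payoff to a player using strategy s against an opponent using t.
# Strategy s is an ESS if, for every mutant t != s, either
#   (1) payoff[s][s] > payoff[t][s], or
#   (2) payoff[s][s] == payoff[t][s] and payoff[s][t] > payoff[t][t].

def is_ess(payoff, s):
    """Return True if strategy s is evolutionarily stable under `payoff`."""
    for t in payoff:
        if t == s:
            continue
        if payoff[s][s] > payoff[t][s]:
            continue  # condition (1): s does strictly better against itself
        if payoff[s][s] == payoff[t][s] and payoff[s][t] > payoff[t][t]:
            continue  # condition (2): tie against s, but s beats t in a t population
        return False
    return True

# Hypothetical Hawk-Dove payoffs with resource V = 2 and injury cost C = 4:
# Hawk vs Hawk = (V - C) / 2, Hawk vs Dove = V, Dove vs Hawk = 0, Dove vs Dove = V / 2.
payoffs = {
    "hawk": {"hawk": -1.0, "dove": 2.0},
    "dove": {"hawk": 0.0, "dove": 1.0},
}

print(is_ess(payoffs, "hawk"))  # False: doves can invade a pure-hawk population
print(is_ess(payoffs, "dove"))  # False: hawks can invade a pure-dove population
```

With these assumed values neither pure strategy is stable, mirroring the frog example above, in which a uniform "fight to the death" population can be invaded.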
Wikipedia/Evolution_and_the_Theory_of_Games
In game theory, a cooperative game (or coalitional game) is a game with groups of players who form binding "coalitions" with external enforcement of cooperative behavior (e.g. through contract law). This is different from non-cooperative games in which there is either no possibility to forge alliances or all agreements need to be self-enforcing (e.g. through credible threats). Cooperative games are analysed by focusing on the coalitions that can be formed, the joint actions that groups can take, and the resulting collective payoffs. == Mathematical definition == A cooperative game is given by specifying a value for every coalition. Formally, the coalitional game consists of a finite set of players N {\displaystyle N} , called the grand coalition, and a characteristic function v : 2 N → R {\displaystyle v:2^{N}\to \mathbb {R} } from the set of all possible coalitions of players to a set of payments that satisfies v ( ∅ ) = 0 {\displaystyle v(\emptyset )=0} . The function describes how much collective payoff a set of players can gain by forming a coalition. == Key attributes == Cooperative game theory is a branch of game theory that deals with the study of games where players can form coalitions, cooperate with one another, and make binding agreements. The theory offers mathematical methods for analysing scenarios in which two or more players are required to make choices that will affect other players' wellbeing. Common interests: In cooperative games, players share a common interest in achieving a specific goal or outcome. The players must identify and agree on a common interest to establish the foundation and reasoning for cooperation. Once the players have a clear understanding of their shared interest, they can work together to achieve it. Necessary information exchange: Cooperation requires communication and information exchange among the players. Players must share information about their preferences, resources, and constraints to identify opportunities for mutual gain. By sharing information, players can better understand each other's goals and work towards achieving them together. Voluntariness, equality, and mutual benefit: In cooperative games, players voluntarily come together to form coalitions and make agreements. The players must be equal partners in the coalition, and any agreements must be mutually beneficial. Cooperation is only sustainable if all parties feel they are receiving a fair share of the benefits. Compulsory contract: In cooperative games, agreements between players are binding and mandatory. Once the players have agreed to a particular course of action, they have an obligation to follow through. The players must trust each other to keep their commitments, and there must be mechanisms in place to enforce the agreements. By making agreements binding and mandatory, players can ensure that they will achieve their shared goal. == Subgames == Let S ⊊ N {\displaystyle S\subsetneq N} be a non-empty coalition of players. The subgame v S : 2 S → R {\displaystyle v_{S}:2^{S}\to \mathbb {R} } on S {\displaystyle S} is naturally defined as v S ( T ) = v ( T ) , ∀ T ⊆ S . {\displaystyle v_{S}(T)=v(T),\forall ~T\subseteq S.} In other words, we simply restrict our attention to coalitions contained in S {\displaystyle S} . Subgames are useful because they allow us to apply solution concepts defined for the grand coalition to smaller coalitions. == Mathematical properties == === Superadditivity === Characteristic functions are often assumed to be superadditive (Owen 1995, p. 213).
This means that the value of a union of disjoint coalitions is no less than the sum of the coalitions' separate values: v ( S ∪ T ) ≥ v ( S ) + v ( T ) {\displaystyle v(S\cup T)\geq v(S)+v(T)} whenever S , T ⊆ N {\displaystyle S,T\subseteq N} satisfy S ∩ T = ∅ {\displaystyle S\cap T=\emptyset } . === Monotonicity === Larger coalitions gain more: S ⊆ T ⇒ v ( S ) ≤ v ( T ) {\displaystyle S\subseteq T\Rightarrow v(S)\leq v(T)} . This follows from superadditivity if payoffs are normalized so that singleton coalitions have zero value. === Properties for simple games === A coalitional game v is considered simple if payoffs are either 1 or 0, i.e. coalitions are either "winning" or "losing". Equivalently, a simple game can be defined as a collection W of coalitions, where the members of W are called winning coalitions, and the others losing coalitions. It is sometimes assumed that a simple game is nonempty or that it does not contain the empty set. However, in other areas of mathematics, simple games are also called hypergraphs or Boolean functions (logic functions). A simple game W is monotonic if any coalition containing a winning coalition is also winning, that is, if S ∈ W {\displaystyle S\in W} and S ⊆ T {\displaystyle S\subseteq T} imply T ∈ W {\displaystyle T\in W} . A simple game W is proper if the complement (opposition) of any winning coalition is losing, that is, if S ∈ W {\displaystyle S\in W} implies N ∖ S ∉ W {\displaystyle N\setminus S\notin W} . A simple game W is strong if the complement of any losing coalition is winning, that is, if S ∉ W {\displaystyle S\notin W} implies N ∖ S ∈ W {\displaystyle N\setminus S\in W} . If a simple game W is proper and strong, then a coalition is winning if and only if its complement is losing, that is, S ∈ W {\displaystyle S\in W} iff N ∖ S ∉ W {\displaystyle N\setminus S\notin W} . (If v is a coalitional simple game that is proper and strong, v ( S ) = 1 − v ( N ∖ S ) {\displaystyle v(S)=1-v(N\setminus S)} for any S.) A veto player (vetoer) in a simple game is a player that belongs to all winning coalitions. If there is a veto player, any coalition not containing a veto player is losing. A simple game W is weak (collegial) if it has a veto player, that is, if the intersection ⋂ W := ⋂ S ∈ W S {\displaystyle \bigcap W:=\bigcap _{S\in W}S} of all winning coalitions is nonempty. A dictator in a simple game is a veto player such that any coalition containing this player is winning. The dictator does not belong to any losing coalition. (Dictator games in experimental economics are unrelated to this.) A carrier of a simple game W is a set T ⊆ N {\displaystyle T\subseteq N} such that for any coalition S, we have S ∈ W {\displaystyle S\in W} iff S ∩ T ∈ W {\displaystyle S\cap T\in W} . When a simple game has a carrier, any player not belonging to it is ignored. A simple game is sometimes called finite if it has a finite carrier (even if N is infinite). The Nakamura number of a simple game is the minimal number of winning coalitions with empty intersection. According to Nakamura's theorem, the number measures the degree of rationality; it is an indicator of the extent to which an aggregation rule can yield well-defined choices. A few relations among the above axioms have been widely recognized, such as the following (e.g., Peleg, 2002, Section 2.1): If a simple game is weak, it is proper. A simple game is dictatorial if and only if it is strong and weak.
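For small player sets, the definitions above are straightforward to operationalize. Below is a minimal sketch using a hypothetical three-player majority game; it represents a characteristic function explicitly and tests superadditivity and the simple-game properties (proper, strong, veto players). All names and values are illustrative assumptions, not part of the formal theory.

```python
from itertools import chain, combinations

def subsets(players):
    """All coalitions (subsets) of the player set, as frozensets."""
    return [frozenset(c) for c in
            chain.from_iterable(combinations(sorted(players), r)
                                for r in range(len(players) + 1))]

N = frozenset({1, 2, 3})

# Hypothetical simple majority game: a coalition "wins" (value 1) iff it
# contains at least two of the three players.
v = {S: (1 if len(S) >= 2 else 0) for S in subsets(N)}

def is_superadditive(v, N):
    """Check v(S u T) >= v(S) + v(T) for all disjoint coalitions S, T."""
    cs = subsets(N)
    return all(v[S | T] >= v[S] + v[T] for S in cs for T in cs if not S & T)

def winning(v, N):
    return {S for S in subsets(N) if v[S] == 1}

def is_proper(v, N):
    """The complement of every winning coalition is losing."""
    return all(N - S not in winning(v, N) for S in winning(v, N))

def is_strong(v, N):
    """The complement of every losing coalition is winning."""
    return all(N - S in winning(v, N) for S in subsets(N) if v[S] == 0)

def veto_players(v, N):
    """Players that belong to every winning coalition."""
    W = winning(v, N)
    return frozenset.intersection(*W) if W else frozenset(N)

print(is_superadditive(v, N))            # True
print(is_proper(v, N), is_strong(v, N))  # True True
print(veto_players(v, N))                # frozenset(): no veto player
```

Consistent with the relations listed above, this majority game is proper and strong but not weak: the three two-player winning coalitions have empty intersection, so it has no veto player (and its Nakamura number is 3).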
More generally, a complete investigation of the relation among the four conventional axioms (monotonicity, properness, strongness, and non-weakness), finiteness, and algorithmic computability has been made (Kumabe and Mihara, 2011). The restrictions that various axioms for simple games impose on their Nakamura number were also studied extensively. In particular, a computable simple game without a veto player has a Nakamura number greater than 3 only if it is a proper and non-strong game. == Relation with non-cooperative theory == Let G be a strategic (non-cooperative) game. Then, assuming that coalitions have the ability to enforce coordinated behaviour, there are several cooperative games associated with G. These games are often referred to as representations of G. The two standard representations are: The α-effective game associates with each coalition the sum of gains its members can 'guarantee' by joining forces. By 'guaranteeing', it is meant that the value is the max-min, i.e. the maximal value of the minimum taken over the opposition's strategies. The β-effective game associates with each coalition the sum of gains its members can 'strategically guarantee' by joining forces. By 'strategically guaranteeing', it is meant that the value is the min-max, i.e. the minimal value of the maximum taken over the opposition's strategies. == Solution concepts == The main assumption in cooperative game theory is that the grand coalition N {\displaystyle N} will form. The challenge is then to allocate the payoff v ( N ) {\displaystyle v(N)} among the players in some way. (This assumption is not restrictive, because even if players split off and form smaller coalitions, we can apply solution concepts to the subgames defined by whatever coalitions actually form.) A solution concept is a vector x ∈ R N {\displaystyle x\in \mathbb {R} ^{N}} (or a set of vectors) that represents the allocation to each player. Researchers have proposed different solution concepts based on different notions of fairness. Some properties to look for in a solution concept include: Efficiency: The payoff vector exactly splits the total value: ∑ i ∈ N x i = v ( N ) {\displaystyle \sum _{i\in N}x_{i}=v(N)} . Individual rationality: No player receives less than what he could get on his own: x i ≥ v ( { i } ) , ∀ i ∈ N {\displaystyle x_{i}\geq v(\{i\}),\forall ~i\in N} . Existence: The solution concept exists for any game v {\displaystyle v} . Uniqueness: The solution concept is unique for any game v {\displaystyle v} . Marginality: The payoff of a player depends only on the marginal contribution of this player, i.e., if these marginal contributions are the same in two different games, then the payoff is the same: v ( S ∪ { i } ) = w ( S ∪ { i } ) , ∀ S ⊆ N ∖ { i } {\displaystyle v(S\cup \{i\})=w(S\cup \{i\}),\forall ~S\subseteq N\setminus \{i\}} implies that x i {\displaystyle x_{i}} is the same in v {\displaystyle v} and in w {\displaystyle w} . Monotonicity: The payoff of a player increases if the marginal contribution of this player increases: v ( S ∪ { i } ) ≤ w ( S ∪ { i } ) , ∀ S ⊆ N ∖ { i } {\displaystyle v(S\cup \{i\})\leq w(S\cup \{i\}),\forall ~S\subseteq N\setminus \{i\}} implies that x i {\displaystyle x_{i}} is weakly greater in w {\displaystyle w} than in v {\displaystyle v} . Computational ease: The solution concept can be calculated efficiently (i.e. in polynomial time with respect to the number of players | N | {\displaystyle |N|} .)
Symmetry: The solution concept x {\displaystyle x} allocates equal payments x i = x j {\displaystyle x_{i}=x_{j}} to symmetric players i {\displaystyle i} , j {\displaystyle j} . Two players i {\displaystyle i} , j {\displaystyle j} are symmetric if v ( S ∪ { i } ) = v ( S ∪ { j } ) , ∀ S ⊆ N ∖ { i , j } {\displaystyle v(S\cup \{i\})=v(S\cup \{j\}),\forall ~S\subseteq N\setminus \{i,j\}} ; that is, we can exchange one player for the other in any coalition that contains only one of the players and not change the payoff. Additivity: The allocation to a player in a sum of two games is the sum of the allocations to the player in each individual game. Mathematically, if v {\displaystyle v} and ω {\displaystyle \omega } are games, the game ( v + ω ) {\displaystyle (v+\omega )} simply assigns to any coalition the sum of the payoffs the coalition would get in the two individual games. An additive solution concept assigns to every player in ( v + ω ) {\displaystyle (v+\omega )} the sum of what he would receive in v {\displaystyle v} and ω {\displaystyle \omega } . Zero Allocation to Null Players: The allocation to a null player is zero. A null player i {\displaystyle i} satisfies v ( S ∪ { i } ) = v ( S ) , ∀ S ⊆ N ∖ { i } {\displaystyle v(S\cup \{i\})=v(S),\forall ~S\subseteq N\setminus \{i\}} . In economic terms, a null player's marginal value to any coalition that does not contain him is zero. An efficient payoff vector is called a pre-imputation, and an individually rational pre-imputation is called an imputation. Most solution concepts are imputations. === The stable set === The stable set of a game (also known as the von Neumann-Morgenstern solution (von Neumann & Morgenstern 1944)) was the first solution proposed for games with more than 2 players. Let v {\displaystyle v} be a game and let x {\displaystyle x} , y {\displaystyle y} be two imputations of v {\displaystyle v} . Then x {\displaystyle x} dominates y {\displaystyle y} if some coalition S ≠ ∅ {\displaystyle S\neq \emptyset } satisfies x i > y i , ∀ i ∈ S {\displaystyle x_{i}>y_{i},\forall ~i\in S} and ∑ i ∈ S x i ≤ v ( S ) {\displaystyle \sum _{i\in S}x_{i}\leq v(S)} . In other words, players in S {\displaystyle S} prefer the payoffs from x {\displaystyle x} to those from y {\displaystyle y} , and they can threaten to leave the grand coalition if y {\displaystyle y} is used because the payoff they obtain on their own is at least as large as the allocation they receive under x {\displaystyle x} . A stable set is a set of imputations that satisfies two properties: Internal stability: No payoff vector in the stable set is dominated by another vector in the set. External stability: All payoff vectors outside the set are dominated by at least one vector in the set. Von Neumann and Morgenstern saw the stable set as the collection of acceptable behaviours in a society: None is clearly preferred to any other, but for each unacceptable behaviour there is a preferred alternative. The definition is very general allowing the concept to be used in a wide variety of game formats. ==== Properties ==== A stable set may or may not exist (Lucas 1969), and if it exists it is typically not unique (Lucas 1992). Stable sets are usually difficult to find. This and other difficulties have led to the development of many other solution concepts. A positive fraction of cooperative games have unique stable sets consisting of the core (Owen 1995, p. 240). 
A positive fraction of cooperative games have stable sets which discriminate n − 2 {\displaystyle n-2} players. In such sets at least n − 3 {\displaystyle n-3} of the discriminated players are excluded (Owen 1995, p. 240). === The core === Let v {\displaystyle v} be a game. The core of v {\displaystyle v} is the set of payoff vectors C ( v ) = { x ∈ R N : ∑ i ∈ N x i = v ( N ) ; ∑ i ∈ S x i ≥ v ( S ) , ∀ S ⊆ N } . {\displaystyle C(v)=\left\{x\in \mathbb {R} ^{N}:\sum _{i\in N}x_{i}=v(N);\quad \sum _{i\in S}x_{i}\geq v(S),\forall ~S\subseteq N\right\}.} In words, the core is the set of imputations under which no coalition has a value greater than the sum of its members' payoffs. Therefore, no coalition has incentive to leave the grand coalition and receive a larger payoff. ==== Properties ==== The core of a game may be empty (see the Bondareva–Shapley theorem). Games with non-empty cores are called balanced. If it is non-empty, the core does not necessarily contain a unique vector. The core is contained in any stable set, and if the core is stable it is the unique stable set; see (Driessen 1988) for a proof. === The core of a simple game with respect to preferences === For simple games, there is another notion of the core, when each player is assumed to have preferences on a set X {\displaystyle X} of alternatives. A profile is a list p = ( ≻ i p ) i ∈ N {\displaystyle p=(\succ _{i}^{p})_{i\in N}} of individual preferences ≻ i p {\displaystyle \succ _{i}^{p}} on X {\displaystyle X} . Here x ≻ i p y {\displaystyle x\succ _{i}^{p}y} means that individual i {\displaystyle i} prefers alternative x {\displaystyle x} to y {\displaystyle y} at profile p {\displaystyle p} . Given a simple game v {\displaystyle v} and a profile p {\displaystyle p} , a dominance relation ≻ v p {\displaystyle \succ _{v}^{p}} is defined on X {\displaystyle X} by x ≻ v p y {\displaystyle x\succ _{v}^{p}y} if and only if there is a winning coalition S {\displaystyle S} (i.e., v ( S ) = 1 {\displaystyle v(S)=1} ) satisfying x ≻ i p y {\displaystyle x\succ _{i}^{p}y} for all i ∈ S {\displaystyle i\in S} . The core C ( v , p ) {\displaystyle C(v,p)} of the simple game v {\displaystyle v} with respect to the profile p {\displaystyle p} of preferences is the set of alternatives undominated by ≻ v p {\displaystyle \succ _{v}^{p}} (the set of maximal elements of X {\displaystyle X} with respect to ≻ v p {\displaystyle \succ _{v}^{p}} ): x ∈ C ( v , p ) {\displaystyle x\in C(v,p)} if and only if there is no y ∈ X {\displaystyle y\in X} such that y ≻ v p x {\displaystyle y\succ _{v}^{p}x} . The Nakamura number of a simple game is the minimal number of winning coalitions with empty intersection. Nakamura's theorem states that the core C ( v , p ) {\displaystyle C(v,p)} is nonempty for all profiles p {\displaystyle p} of acyclic (alternatively, transitive) preferences if and only if X {\displaystyle X} is finite and the cardinal number (the number of elements) of X {\displaystyle X} is less than the Nakamura number of v {\displaystyle v} . A variant by Kumabe and Mihara states that the core C ( v , p ) {\displaystyle C(v,p)} is nonempty for all profiles p {\displaystyle p} of preferences that have a maximal element if and only if the cardinal number of X {\displaystyle X} is less than the Nakamura number of v {\displaystyle v} . (See Nakamura number for details.) === The strong epsilon-core === Because the core may be empty, a generalization was introduced in (Shapley & Shubik 1966). 
The strong ε {\displaystyle \varepsilon } -core for some number ε ∈ R {\displaystyle \varepsilon \in \mathbb {R} } is the set of payoff vectors C ε ( v ) = { x ∈ R N : ∑ i ∈ N x i = v ( N ) ; ∑ i ∈ S x i ≥ v ( S ) − ε , ∀ S ⊆ N } . {\displaystyle C_{\varepsilon }(v)=\left\{x\in \mathbb {R} ^{N}:\sum _{i\in N}x_{i}=v(N);\quad \sum _{i\in S}x_{i}\geq v(S)-\varepsilon ,\forall ~S\subseteq N\right\}.} In economic terms, the strong ε {\displaystyle \varepsilon } -core is the set of pre-imputations where no coalition can improve its payoff by leaving the grand coalition, if it must pay a penalty of ε {\displaystyle \varepsilon } for leaving. ε {\displaystyle \varepsilon } may be negative, in which case it represents a bonus for leaving the grand coalition. Clearly, regardless of whether the core is empty, the strong ε {\displaystyle \varepsilon } -core will be non-empty for a large enough value of ε {\displaystyle \varepsilon } and empty for a small enough (possibly negative) value of ε {\displaystyle \varepsilon } . Following this line of reasoning, the least-core, introduced in (Maschler, Peleg & Shapley 1979), is the intersection of all non-empty strong ε {\displaystyle \varepsilon } -cores. It can also be viewed as the strong ε {\displaystyle \varepsilon } -core for the smallest value of ε {\displaystyle \varepsilon } that makes the set non-empty (Bilbao 2000). === The Shapley value === The Shapley value is the unique payoff vector that is efficient, symmetric, and satisfies monotonicity. It was introduced by Lloyd Shapley (Shapley 1953) who showed that it is the unique payoff vector that is efficient, symmetric, additive, and assigns zero payoffs to dummy players. The Shapley value of a superadditive game is individually rational, but this is not true in general. (Driessen 1988) === The kernel === Let v : 2 N → R {\displaystyle v:2^{N}\to \mathbb {R} } be a game, and let x ∈ R N {\displaystyle x\in \mathbb {R} ^{N}} be an efficient payoff vector. The maximum surplus of player i over player j with respect to x is s i j v ( x ) = max { v ( S ) − ∑ k ∈ S x k : S ⊆ N ∖ { j } , i ∈ S } , {\displaystyle s_{ij}^{v}(x)=\max \left\{v(S)-\sum _{k\in S}x_{k}:S\subseteq N\setminus \{j\},i\in S\right\},} the maximal amount player i can gain without the cooperation of player j by withdrawing from the grand coalition N under payoff vector x, assuming that the other players in i's withdrawing coalition are satisfied with their payoffs under x. The maximum surplus is a way to measure one player's bargaining power over another. The kernel of v {\displaystyle v} is the set of imputations x that satisfy ( s i j v ( x ) − s j i v ( x ) ) × ( x j − v ( j ) ) ≤ 0 {\displaystyle (s_{ij}^{v}(x)-s_{ji}^{v}(x))\times (x_{j}-v(j))\leq 0} , and ( s j i v ( x ) − s i j v ( x ) ) × ( x i − v ( i ) ) ≤ 0 {\displaystyle (s_{ji}^{v}(x)-s_{ij}^{v}(x))\times (x_{i}-v(i))\leq 0} for every pair of players i and j. Intuitively, player i has more bargaining power than player j with respect to imputation x if s i j v ( x ) > s j i v ( x ) {\displaystyle s_{ij}^{v}(x)>s_{ji}^{v}(x)} , but player j is immune to player i's threats if x j = v ( j ) {\displaystyle x_{j}=v(j)} , because he can obtain this payoff on his own. The kernel contains all imputations where no player has this bargaining power over another. This solution concept was first introduced in (Davis & Maschler 1965). 
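The Shapley value discussed above can be computed directly from its definition as the average marginal contribution over all orderings of the players. Below is a minimal sketch with a hypothetical three-player game; the enumeration is factorial-time, so it is only practical for small player sets.

```python
from itertools import permutations

def shapley_value(v, players):
    """Average each player's marginal contribution v(S u {i}) - v(S)
    over all orderings of the players (exhaustive; fine for small games)."""
    players = list(players)
    phi = {i: 0.0 for i in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for i in order:
            phi[i] += v[coalition | {i}] - v[coalition]
            coalition = coalition | {i}
    return {i: phi[i] / len(orderings) for i in players}

# Hypothetical 3-player game: any pair creates a surplus of 60, the grand
# coalition creates 72, and single players create nothing on their own.
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 60, frozenset({1, 3}): 60, frozenset({2, 3}): 60,
     frozenset({1, 2, 3}): 72}

print(shapley_value(v, [1, 2, 3]))  # {1: 24.0, 2: 24.0, 3: 24.0}
```

By symmetry each player receives v(N)/3 = 24 here; efficiency holds because the marginal contributions along any single ordering already sum to v(N).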
=== Harsanyi dividend === The Harsanyi dividend (named after John Harsanyi, who used it to generalize the Shapley value in 1963) identifies the surplus that is created by a coalition of players in a cooperative game. To specify this surplus, the worth of this coalition is corrected by subtracting the surplus that was already created by subcoalitions. To this end, the dividend d v ( S ) {\displaystyle d_{v}(S)} of coalition S {\displaystyle S} in game v {\displaystyle v} is recursively determined by d v ( { i } ) = v ( { i } ) d v ( { i , j } ) = v ( { i , j } ) − d v ( { i } ) − d v ( { j } ) d v ( { i , j , k } ) = v ( { i , j , k } ) − d v ( { i , j } ) − d v ( { i , k } ) − d v ( { j , k } ) − d v ( { i } ) − d v ( { j } ) − d v ( { k } ) ⋮ d v ( S ) = v ( S ) − ∑ T ⊊ S d v ( T ) {\displaystyle {\begin{aligned}d_{v}(\{i\})&=v(\{i\})\\d_{v}(\{i,j\})&=v(\{i,j\})-d_{v}(\{i\})-d_{v}(\{j\})\\d_{v}(\{i,j,k\})&=v(\{i,j,k\})-d_{v}(\{i,j\})-d_{v}(\{i,k\})-d_{v}(\{j,k\})-d_{v}(\{i\})-d_{v}(\{j\})-d_{v}(\{k\})\\&\vdots \\d_{v}(S)&=v(S)-\sum _{T\subsetneq S}d_{v}(T)\end{aligned}}} An explicit formula for the dividend is given by d v ( S ) = ∑ T ⊆ S ( − 1 ) | S ∖ T | v ( T ) {\textstyle d_{v}(S)=\sum _{T\subseteq S}(-1)^{|S\setminus T|}v(T)} . The function d v : 2 N → R {\displaystyle d_{v}:2^{N}\to \mathbb {R} } is also known as the Möbius inverse of v : 2 N → R {\displaystyle v:2^{N}\to \mathbb {R} } . Indeed, we can recover v {\displaystyle v} from d v {\displaystyle d_{v}} by help of the formula v ( S ) = d v ( S ) + ∑ T ⊊ S d v ( T ) {\textstyle v(S)=d_{v}(S)+\sum _{T\subsetneq S}d_{v}(T)} . Harsanyi dividends are useful for analyzing both games and solution concepts, e.g. the Shapley value is obtained by distributing the dividend of each coalition among its members, i.e., the Shapley value ϕ i ( v ) {\displaystyle \phi _{i}(v)} of player i {\displaystyle i} in game v {\displaystyle v} is given by summing up a player's share of the dividends of all coalitions that she belongs to, ϕ i ( v ) = ∑ S ⊂ N : i ∈ S d v ( S ) / | S | {\textstyle \phi _{i}(v)=\sum _{S\subset N:i\in S}{d_{v}(S)}/{|S|}} . === The nucleolus === Let v : 2 N → R {\displaystyle v:2^{N}\to \mathbb {R} } be a game, and let x ∈ R N {\displaystyle x\in \mathbb {R} ^{N}} be a payoff vector. The excess of x {\displaystyle x} for a coalition S ⊆ N {\displaystyle S\subseteq N} is the quantity v ( S ) − ∑ i ∈ S x i {\displaystyle v(S)-\sum _{i\in S}x_{i}} ; that is, the gain that players in coalition S {\displaystyle S} can obtain if they withdraw from the grand coalition N {\displaystyle N} under payoff x {\displaystyle x} and instead take the payoff v ( S ) {\displaystyle v(S)} . The nucleolus of v {\displaystyle v} is the imputation for which the vector of excesses of all coalitions (a vector in R 2 N {\displaystyle \mathbb {R} ^{2^{N}}} ) is smallest in the leximin order. The nucleolus was introduced in (Schmeidler 1969). (Maschler, Peleg & Shapley 1979) gave a more intuitive description: Starting with the least-core, record the coalitions for which the right-hand side of the inequality in the definition of C ε ( v ) {\displaystyle C_{\varepsilon }(v)} cannot be further reduced without making the set empty. Continue decreasing the right-hand side for the remaining coalitions, until it cannot be reduced without making the set empty. 
Record the new set of coalitions for which the inequalities hold at equality; continue decreasing the right-hand side of remaining coalitions and repeat this process as many times as necessary until all coalitions have been recorded. The resulting payoff vector is the nucleolus. ==== Properties ==== Although the definition does not explicitly state it, the nucleolus is always unique. (See Section II.7 of (Driessen 1988) for a proof.) If the core is non-empty, the nucleolus is in the core. The nucleolus is always in the kernel, and since the kernel is contained in the bargaining set, it is always in the bargaining set (see (Driessen 1988) for details.) == Convex cooperative games == Introduced by Shapley in (Shapley 1971), convex cooperative games capture the intuitive property some games have of "snowballing". Specifically, a game is convex if its characteristic function v {\displaystyle v} is supermodular: v ( S ∪ T ) + v ( S ∩ T ) ≥ v ( S ) + v ( T ) , ∀ S , T ⊆ N . {\displaystyle v(S\cup T)+v(S\cap T)\geq v(S)+v(T),\forall ~S,T\subseteq N.} It can be shown (see, e.g., Section V.1 of (Driessen 1988)) that the supermodularity of v {\displaystyle v} is equivalent to v ( S ∪ { i } ) − v ( S ) ≤ v ( T ∪ { i } ) − v ( T ) , ∀ S ⊆ T ⊆ N ∖ { i } , ∀ i ∈ N ; {\displaystyle v(S\cup \{i\})-v(S)\leq v(T\cup \{i\})-v(T),\forall ~S\subseteq T\subseteq N\setminus \{i\},\forall ~i\in N;} that is, "the incentives for joining a coalition increase as the coalition grows" (Shapley 1971), leading to the aforementioned snowball effect. For cost games, the inequalities are reversed, so that we say the cost game is convex if the characteristic function is submodular. === Properties === Convex cooperative games have many nice properties: Supermodularity trivially implies superadditivity. Convex games are totally balanced: The core of a convex game is non-empty, and since any subgame of a convex game is convex, the core of any subgame is also non-empty. A convex game has a unique stable set that coincides with its core. The Shapley value of a convex game is the center of gravity of its core. An extreme point (vertex) of the core can be found in polynomial time using the greedy algorithm: Let π : N → N {\displaystyle \pi :N\to N} be a permutation of the players, and let S i = { j ∈ N : π ( j ) ≤ i } {\displaystyle S_{i}=\{j\in N:\pi (j)\leq i\}} be the set of players ordered 1 {\displaystyle 1} through i {\displaystyle i} in π {\displaystyle \pi } , for any i = 0 , … , n {\displaystyle i=0,\ldots ,n} , with S 0 = ∅ {\displaystyle S_{0}=\emptyset } . Then the payoff x ∈ R N {\displaystyle x\in \mathbb {R} ^{N}} defined by x i = v ( S π ( i ) ) − v ( S π ( i ) − 1 ) , ∀ i ∈ N {\displaystyle x_{i}=v(S_{\pi (i)})-v(S_{\pi (i)-1}),\forall ~i\in N} is a vertex of the core of v {\displaystyle v} . Any vertex of the core can be constructed in this way by choosing an appropriate permutation π {\displaystyle \pi } . === Similarities and differences with combinatorial optimization === Submodular and supermodular set functions are also studied in combinatorial optimization. Many of the results in (Shapley 1971) have analogues in (Edmonds 1970), where submodular functions were first presented as generalizations of matroids. In this context, the core of a convex cost game is called the base polyhedron, because its elements generalize base properties of matroids. 
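The greedy construction for convex games described above is short enough to state in code. Below is a minimal sketch using the hypothetical supermodular game v(S) = |S|²; different player orderings yield different marginal-contribution vectors, each a vertex of the core.

```python
from itertools import chain, combinations

def greedy_core_vertex(v, order):
    """Marginal-contribution vector along a player ordering; for a convex
    (supermodular) game this is a vertex of the core (Shapley 1971)."""
    x, coalition = {}, frozenset()
    for i in order:
        x[i] = v[coalition | {i}] - v[coalition]
        coalition = coalition | {i}
    return x

# Hypothetical supermodular game: v(S) = |S|**2 ("snowballing" returns).
players = [1, 2, 3]
v = {frozenset(S): len(S) ** 2
     for S in chain.from_iterable(combinations(players, r) for r in range(4))}

print(greedy_core_vertex(v, [1, 2, 3]))  # {1: 1, 2: 3, 3: 5}
print(greedy_core_vertex(v, [3, 1, 2]))  # {3: 1, 1: 3, 2: 5}
```

Each output vector sums to v(N) = 9 and gives every coalition at least its stand-alone value, so both are core allocations; averaging such vectors over all orderings recovers the Shapley value, which for convex games lies at the center of gravity of the core, as noted above.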
However, the optimization community generally considers submodular functions to be the discrete analogues of convex functions (Lovász 1983), because the minimization of both types of functions is computationally tractable. Unfortunately, this conflicts directly with Shapley's original definition of supermodular functions as "convex". == The relationship between cooperative game theory and the firm == Corporate strategic decisions can develop and create value through cooperative game theory. This means that cooperative game theory can become the strategic theory of the firm, and different CGT solutions can simulate different institutions. == See also == Consensus decision-making Coordination game Intra-household bargaining Hedonic game Linear production game Minimum-cost spanning tree game – a class of cooperative games. == References == === Further reading === Bilbao, Jesús Mario (2000), Cooperative Games on Combinatorial Structures, Kluwer Academic Publishers, ISBN 9781461543930 Davis, M.; Maschler, M. (1965), "The kernel of a cooperative game", Naval Research Logistics Quarterly, 12 (3): 223–259, doi:10.1002/nav.3800120303 Driessen, Theo (1988), Cooperative Games, Solutions and Applications, Kluwer Academic Publishers, ISBN 9789401577878 Edmonds, Jack (1970), "Submodular functions, matroids and certain polyhedra", in Guy, R.; Hanani, H.; Sauer, N.; Schönheim, J. (eds.), Combinatorial Structures and Their Applications, New York: Gordon and Breach, pp. 69–87 Lovász, László (1983), "Submodular functions and convexity", in Bachem, A.; Grötschel, M.; Korte, B. (eds.), Mathematical Programming—The State of the Art, Berlin: Springer, pp. 235–257 Leyton-Brown, Kevin; Shoham, Yoav (2008), Essentials of Game Theory: A Concise, Multidisciplinary Introduction, San Rafael, CA: Morgan & Claypool Publishers, ISBN 978-1-59829-593-1. An 88-page mathematical introduction; see Chapter 8. Free online at many universities. Lucas, William F. (1969), "The Proof That a Game May Not Have a Solution", Transactions of the American Mathematical Society, 136: 219–229, doi:10.2307/1994798, JSTOR 1994798. Lucas, William F. (1992), "Von Neumann-Morgenstern Stable Sets", in Aumann, Robert J.; Hart, Sergiu (eds.), Handbook of Game Theory, Volume I, Amsterdam: Elsevier, pp. 543–590 Luce, R.D. and Raiffa, H. (1957) Games and Decisions: An Introduction and Critical Survey, Wiley & Sons. (see Chapter 8). Maschler, M.; Peleg, B.; Shapley, Lloyd S. (1979), "Geometric properties of the kernel, nucleolus, and related solution concepts", Mathematics of Operations Research, 4 (4): 303–338, doi:10.1287/moor.4.4.303 Osborne, M.J. and Rubinstein, A. (1994) A Course in Game Theory, MIT Press (see Chapters 13,14,15) Moulin, Herve (1988), Axioms of Cooperative Decision Making (1st ed.), Cambridge: Cambridge University Press, ISBN 978-0-521-42458-5 Owen, Guillermo (1995), Game Theory (3rd ed.), San Diego: Academic Press, ISBN 978-0-12-531151-9 Schmeidler, D. (1969), "The nucleolus of a characteristic function game", SIAM Journal on Applied Mathematics, 17 (6): 1163–1170, doi:10.1137/0117107. Shapley, Lloyd S. (1953), "A value for n {\displaystyle n} -person games", in Kuhn, H.; Tucker, A.W. (eds.), Contributions to the Theory of Games II, Princeton, New Jersey: Princeton University Press, pp. 307–317 Shapley, Lloyd S. (18 March 1952), A Value for N-Person Games, Santa Monica, California: The RAND Corporation Shapley, Lloyd S.
(1971), "Cores of convex games", International Journal of Game Theory, 1 (1): 11–26, doi:10.1007/BF01753431, S2CID 123385556 Shapley, Lloyd S.; Shubik, M. (1966), "Quasi-cores in a monetary economy with non-convex preferences", Econometrica, 34 (4): 805–827, doi:10.2307/1910101, JSTOR 1910101 Shoham, Yoav; Leyton-Brown, Kevin (2009), Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, New York: Cambridge University Press, ISBN 978-0-521-89943-7. A comprehensive reference from a computational perspective; see Chapter 12. Downloadable free online. von Neumann, John; Morgenstern, Oskar (1944), Theory of Games and Economic Behavior, Princeton: Princeton University Press Yeung, David W.K. and Leon A. Petrosyan. Cooperative Stochastic Differential Games (Springer Series in Operations Research and Financial Engineering), Springer, 2006. Softcover ISBN 978-1441920942. Yeung, David W.K. and Leon A. Petrosyan. Subgame Consistent Economic Optimization: An Advanced Cooperative Dynamic Game Analysis (Static & Dynamic Game Theory: Foundations & Applications), Birkhäuser Boston; 2012. ISBN 978-0817682613 == External links == "Cooperative game", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Cooperative_game_theory
The optional prisoner's dilemma (OPD) game models a situation of conflict involving two players in game theory. It can be seen as an extension of the standard prisoner's dilemma game, where players have the option to "reject the deal", that is, to abstain from playing the game. This type of game can be used as a model for a number of real-world situations in which agents are afforded the third option of abstaining from a game interaction, such as an election. == Payoff matrix == The structure of the optional prisoner's dilemma can be generalized from the standard prisoner's dilemma game setting. In this way, suppose that the two players are represented by the colors red and blue, and that each player chooses to "Cooperate", "Defect" or "Abstain". The payoff matrix for the game is shown below, with Blue choosing a row, Red choosing a column, and each cell listing the payoffs as (Blue, Red):

            Cooperate  Defect  Abstain
Cooperate   (R, R)     (S, T)  (L, L)
Defect      (T, S)     (P, P)  (L, L)
Abstain     (L, L)     (L, L)  (L, L)

If both players cooperate, they both receive the reward R for mutual cooperation. If both players defect, they both receive the punishment payoff P. If Blue defects while Red cooperates, then Blue receives the temptation payoff T, while Red receives the "sucker's" payoff, S. Similarly, if Blue cooperates while Red defects, then Blue receives the sucker's payoff S, while Red receives the temptation payoff T. If one or both players abstain, both receive the loner's payoff L. The following condition must hold for the payoffs: T > R > L > P > S == References ==
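As a small illustration, the payoff structure can be encoded directly. The following is a minimal sketch with hypothetical numeric values chosen only to satisfy the required ordering T > R > L > P > S.

```python
# Hypothetical payoff values satisfying the required ordering T > R > L > P > S.
T, R, L, P, S = 5, 3, 2, 1, 0
assert T > R > L > P > S

def payoffs(blue, red):
    """Return (blue_payoff, red_payoff) for moves in {'C', 'D', 'A'}
    (cooperate, defect, abstain)."""
    if blue == "A" or red == "A":
        return (L, L)  # loner's payoff whenever either player abstains
    table = {("C", "C"): (R, R), ("C", "D"): (S, T),
             ("D", "C"): (T, S), ("D", "D"): (P, P)}
    return table[(blue, red)]

print(payoffs("C", "C"))  # (3, 3): mutual cooperation
print(payoffs("D", "C"))  # (5, 0): temptation vs. sucker's payoff
print(payoffs("A", "D"))  # (2, 2): abstention yields the loner's payoff
```

Note how the ordering L > P makes abstention strictly better than mutual defection, which is what distinguishes the OPD from the standard prisoner's dilemma.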
Wikipedia/Optional_prisoner's_dilemma
In game theory, the battle of the sexes is a two-player coordination game that also involves elements of conflict. The game was introduced in 1957 by R. Duncan Luce and Howard Raiffa in their classic book, Games and Decisions. Some authors prefer to avoid assigning sexes to the players and instead use Players 1 and 2, and some refer to the game as Bach or Stravinsky, using two concerts as the two events. The game description here follows Luce and Raiffa's original story. Imagine that a man and a woman hope to meet this evening, but have a choice between two events to attend: a prize fight and a ballet. The man would prefer to go to the prize fight. The woman would prefer the ballet. Both would prefer to go to the same event rather than different ones. If they cannot communicate, where should they go? The payoff matrix labeled "Battle of the Sexes (1)" shows the payoffs when the man chooses a row and the woman chooses a column; in each cell, the first number represents the man's payoff and the second number the woman's:

Battle of the Sexes (1)
              Prize fight  Ballet
Prize fight   (3, 2)       (0, 0)
Ballet        (0, 0)       (2, 3)

This standard representation does not account for the additional harm that might come from not only going to different locations, but going to the wrong one as well (e.g. the man goes to the ballet while the woman goes to the prize fight, satisfying neither). To account for this, the game would be represented as in "Battle of the Sexes (2)", where in the top right box the players each have a payoff of 1, because they at least get to attend their favored events:

Battle of the Sexes (2)
              Prize fight  Ballet
Prize fight   (3, 2)       (1, 1)
Ballet        (0, 0)       (2, 3)

== Equilibrium analysis == This game has two pure strategy Nash equilibria, one where both players go to the prize fight, and another where both go to the ballet. There is also a mixed strategy Nash equilibrium, in which the players randomize using specific probabilities. For the payoffs listed in Battle of the Sexes (1), in the mixed strategy equilibrium the man goes to the prize fight with probability 3/5 and the woman to the ballet with probability 3/5, so they end up together at the prize fight with probability 6/25 = (3/5)(2/5) and together at the ballet with probability 6/25 = (2/5)(3/5). This presents an interesting case for game theory since each of the Nash equilibria is deficient in some way. The two pure strategy Nash equilibria are unfair; one player consistently does better than the other. The mixed strategy Nash equilibrium is inefficient: the players will miscoordinate with probability 13/25, leaving each player with an expected return of 6/5 (less than the payoff of 2 from each's less favored pure strategy equilibrium). It remains unclear how expectations would form that would result in a particular equilibrium being played out. One possible resolution of the difficulty involves the use of a correlated equilibrium. In its simplest form, if the players of the game have access to a commonly observed randomizing device, then they might decide to correlate their strategies in the game based on the outcome of the device. For example, if the players could flip a coin before choosing their strategies, they might agree to correlate their strategies based on the coin flip by, say, choosing ballet in the event of heads and prize fight in the event of tails. Notice that once the results of the coin flip are revealed neither player has any incentive to alter their proposed actions if they believe the other will not. The result is that perfect coordination is always achieved and, prior to the coin flip, the expected payoffs for the players are exactly equal.
It remains true, however, that even if there is a correlating device, the Nash equilibria in which the players ignore it will remain; correlated equilibria require both the existence of a correlating device and the expectation that both players will use it to make their decision. == Notes == == References == Fudenberg, D. and Tirole, J. (1991) Game theory, MIT Press. (see Chapter 1, section 2.4) Kelsey, D.; le Roux, S. (2015). "An Experimental Study on the Effect of Ambiguity in a Coordination Game". Theory and Decision. 79: 667–688. doi:10.1007/s11238-015-9483-2. hdl:10871/16743. == External links == GameTheory.net Cooperative Solution with Nash Function by Elmer G. Wiens
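The mixed-equilibrium figures quoted in the equilibrium analysis can be verified numerically. The following is a minimal sketch, assuming the Battle of the Sexes (1) payoffs shown above: (3, 2) at the prize fight, (2, 3) at the ballet, and (0, 0) on miscoordination.

```python
from fractions import Fraction

# Battle of the Sexes (1): rows = man, columns = woman, entries = (man, woman).
FIGHT, BALLET = 0, 1
payoff = {(FIGHT, FIGHT): (3, 2), (FIGHT, BALLET): (0, 0),
          (BALLET, FIGHT): (0, 0), (BALLET, BALLET): (2, 3)}

p = Fraction(3, 5)  # probability the man goes to the prize fight
q = Fraction(2, 5)  # probability the woman goes to the prize fight

# In a mixed equilibrium each player is indifferent between their pure strategies.
man_fight = q * payoff[(FIGHT, FIGHT)][0] + (1 - q) * payoff[(FIGHT, BALLET)][0]
man_ballet = q * payoff[(BALLET, FIGHT)][0] + (1 - q) * payoff[(BALLET, BALLET)][0]
assert man_fight == man_ballet == Fraction(6, 5)  # expected return 6/5

woman_fight = p * payoff[(FIGHT, FIGHT)][1] + (1 - p) * payoff[(BALLET, FIGHT)][1]
woman_ballet = p * payoff[(FIGHT, BALLET)][1] + (1 - p) * payoff[(BALLET, BALLET)][1]
assert woman_fight == woman_ballet == Fraction(6, 5)

# Outcome probabilities under independent randomization.
together_fight = p * q               # 6/25
together_ballet = (1 - p) * (1 - q)  # 6/25
assert together_fight == together_ballet == Fraction(6, 25)
assert 1 - together_fight - together_ballet == Fraction(13, 25)  # miscoordination
```

The assertions confirm the probabilities 6/25, 6/25, and 13/25 and the expected return of 6/5 stated in the analysis above.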
Wikipedia/Battle_of_the_sexes_(game_theory)
The Great Transformation is a book by Karl Polanyi, a Hungarian political economist. First published in 1944 by Farrar & Rinehart, it deals with the social and political upheavals that took place in England during the rise of the market economy. Polanyi contends that the modern market economy and the modern nation-state should be understood not as discrete elements but as a single human invention, which he calls the "Market Society". A distinguishing characteristic of the "Market Society" is that humanity's economic mentalities have been changed. Prior to this, people based their economies on reciprocity and redistribution across personal and communal relationships. As a consequence of industrialization and increasing state influence, competitive markets were created that undermined these previous social tendencies, replacing them with formal institutions that aimed to promote a self-regulating market economy. The expansion of capitalist institutions with an economically liberal mindset not only changed laws but also fundamentally altered humankind's economic relations; prior to this, markets played a very minor role in human affairs and were not even capable of setting prices because of their diminutive size. It was only after industrialization and the onset of greater state control over newly created market institutions that the myth of human nature's propensity toward rational free trade became widespread. However, Polanyi asserts instead that "man's economy, as a rule, is submerged in his social relationships," and he therefore proposes an alternative ethnographic economic approach called "substantivism", in opposition to "formalism", both terms coined by Polanyi in later work. On a broader theoretical level, The Great Transformation argues that markets cannot be understood through economic theory alone. Rather, markets are embedded in social and political logics, which makes it necessary for economic analysts to take politics into account when trying to understand the economy. For this reason, The Great Transformation is a key work in the fields of political economy and international political economy. == History == Polanyi began writing The Great Transformation in England in the late 1930s. He completed the book in the United States during World War II. He set out to explain the economic and social collapse of the 19th century, as well as the transformations that he had witnessed during the 20th century. == General argument == Polanyi argued that the development of the modern state went hand in hand with the development of modern market economies and that these two changes were inextricably linked in history. Essential to the change from a premodern economy to a market economy was the altering of human economic mentalities away from their grounding in local social relationships and institutions, and into transactions idealized as "rational" and set apart from their previous social context. Prior to the Market Society, markets had a very limited role in society and were confined almost entirely to long-distance trade. As Polanyi wrote, "the same bias which made Adam Smith's generation view primeval man as bent on barter and truck induced their successors to disavow all interest in early man, as he was now known not to have indulged in those laudable passions."
In Polanyi's account, the modern market economy was brought into being by the powerful modern state, which was needed to push through changes both in social structure and in which aspects of human nature were amplified and encouraged, so that a competitive capitalist economy could emerge. For Polanyi, these changes implied the destruction of the basic social order that had reigned throughout pre-modern history. Central to the change was that factors of production, such as land and labor, would now be sold on the market at market-determined prices instead of allocated according to tradition, redistribution, or reciprocity. This was a change both of human institutions and of human nature. His empirical case in large part relied upon analysis of the Speenhamland laws, which he saw not only as the last attempt of the squirearchy to preserve the traditional system of production and social order but also as a self-defensive measure on the part of society that mitigated the disruption of the most violent period of economic change. Polanyi also remarks that the pre-modern economies of China, the Incan Empire, the Indian Empires, Babylon, Greece, and the various kingdoms of Africa operated on principles of reciprocity and redistribution with a very limited role for markets, especially in settling prices or allocating the factors of production. The book also presented his belief that market society is unsustainable because it is fatally destructive to human nature and the natural contexts it inhabits. Polanyi attempted to turn the tables on the orthodox liberal account of the rise of capitalism by arguing that "laissez-faire was planned", whereas social protectionism was a spontaneous reaction to the social dislocation imposed by an unrestrained free market. He argues that the construction of a "self-regulating" market necessitates the separation of society into economic and political realms. Polanyi does not deny that the self-regulating market has brought "unheard of material wealth", but he suggests that this is too narrow a focus. Once the market treats land, labor, and money as fictitious commodities, including them "means to subordinate the substance of society itself to the laws of the market." This, he argues, results in massive social dislocation and spontaneous moves by society to protect itself. In effect, Polanyi argues that once the free market attempts to separate itself from the fabric of society, social protectionism is society's natural response, which he calls the "double movement." Polanyi did not see economics as a subject closed off from other fields of enquiry; indeed, he saw economic and social problems as inherently linked. He ended his work with a prediction of a socialist society, noting, "after a century of blind 'improvement', man is restoring his 'habitation.'" === Gold standard === According to James Ashley Morrison, Polanyi offers a prominent argument in the field of political economy for Britain's decision to depart from the gold standard. Polanyi argues that Britain went off the gold standard due to both deteriorating international economic conditions and pressures from labor, which had grown stronger over time. In 1931, the Labour Party found itself faced with a dire dilemma: either reduce social services or let currency exchange rates collapse. Since it could not decide on one alternative or the other, there was a government crisis, and the "traditional parties" decided both to cut social services and to abolish the gold standard.
Labor opposed the gold standard because maintaining it meant that the British government had to implement austerity. James Ashley Morrison found that many later explanations for the collapse of the gold standard very much resemble the Polanyian argument, which he summarized as follows: Developments in the global economy, particularly after World War I, made maintaining the gold standard increasingly painful. Diminished international cooperation combined with Britain's relative economic decline to exacerbate its difficulties. At the same time, a newly empowered working class harnessed evolving "social purpose" to resist the austerity necessary to defend gold. == Before the market society == Based on Bronislaw Malinowski's ethnological work on the Kula ring exchange in the Trobriand Islands, Polanyi distinguishes between markets as an auxiliary tool for the ease of exchange of goods and market societies. Market societies are those where markets are the paramount institution for the exchange of goods through price mechanisms. Polanyi argues that there are three general types of economic systems that existed before the rise of a market society: reciprocity, redistribution, and householding. Reciprocity: the exchange of goods is based on reciprocal exchanges between social entities. On a macro level, this would include the production of goods to gift to other groups. Redistribution: trade and production are channeled to a central entity such as a tribal leader or feudal lord and then redistributed to members of their society. Householding: economies where production is centered on individual households. Family units produce food, textile goods, and tools for their own use and consumption. These three forms were not mutually exclusive, nor were they mutually exclusive of markets for the exchange of goods. The main distinction is that these three forms of economic organization were based around the social aspects of the society they operated in and were explicitly tied to those social relationships. Polanyi argued that these economic forms depended on the social principles of symmetry, centricity, and autarchy (self-sufficiency). Markets existed as an auxiliary avenue for the exchange of goods that were otherwise not obtainable. == Reception == The book has influenced scholars such as Marshall Sahlins, Immanuel Wallerstein, James C. Scott, E.P. Thompson, and Douglass North. John Ruggie, who called The Great Transformation a "magisterial work", was influenced by the work in coining the term embedded liberalism for the Bretton Woods system of the post-World War II period. The sociologists Fred L. Block and Margaret Somers argue that Polanyi's analysis could help explain why the resurgence of free market ideas has resulted in "such manifest failures as persistent unemployment, widening inequality, and the severe financial crises that have stressed Western economies over the past forty years." They suggest that "the ideology that free markets can replace government is just as utopian and dangerous" as the idea that Communism will result in the withering away of the state. In Toward an Anthropological Theory of Value: The False Coin of Our Own Dreams, anthropologist David Graeber praises Polanyi's text and theories.
Graeber attacks formalists and substantivists alike: "those who start by looking at society as a whole are left, like the Substantivists, trying to explain how people are motivated to reproduce society; those who start by looking at individual desires, like the formalists, unable to explain why people chose to maximize some things and not others (or otherwise to account for questions of meaning)." While appreciative of Polanyi's attack on formalism, Graeber attempts to move beyond ethnography and towards understanding how individuals find meaning in their actions, synthesizing insights of Marcel Mauss, Karl Marx, and others. In parallel with Polanyi's account of markets being made internal to society as a result of state intervention, Graeber argues that the transition to credit-based markets from societies with separated "spheres of exchange" in gift giving was likely the accidental byproduct of state or temple bureaucracy (temple in the case of Sumer).: 39–40  Graeber also notes that the criminalization of debt supplemented the enclosure movements in the destruction of English communities, since credit between community members had originally reinforced communal ties prior to state intervention: The criminalization of debt, then, was the criminalization of the very basis of human society. It cannot be overemphasized that in a small community, everyone normally was both lender and borrower. One can only imagine the tensions and temptations that must have existed in a community—and communities, much though they are based on love, in fact, because they are based on love, will always also be full of hatred, rivalry and passion—when it became clear that with sufficiently clever scheming, manipulation, and perhaps a bit of strategic bribery, they could arrange to have almost anyone they hated imprisoned or even hanged. Economist Joseph Stiglitz favors Polanyi's account of market liberalization, arguing that the failures of "Shock Therapy" in Russia and the failures of IMF reform packages echo Polanyi's arguments. Stiglitz also summarizes the difficulties of "market liberalization" in that it requires unrealistic "flexibility" amongst the poor. Charles Kindleberger praised the book, saying it "is a useful corrective to the economic interpretation of the world, and should be read more and more by economists, particularly those of the Chicago school." He added, however, that he would "continue to quote Polanyi. Though not necessarily to believe everything he says." Polanyi's argument is often cited as the "Polanyian moment", "Polanyi Moment" or "Polanyi's moment", which indicates the time when social protectionism starts to overtake marketization, reversing the direction of the double movement. This term has been used to describe the situation after the Great Recession of 2008 and during the COVID-19 pandemic. Gemici compared the Polanyi Moment to the Minsky moment, the moment of a sudden collapse in the market. === Criticism === Rutger Bregman, writing for Jacobin, criticized Polanyi's account of the Speenhamland system as reliant on several myths (increased poverty, increased population growth and increased unrest, as well as "the pauperization of the masses," who "almost lost their human shape," and the claim that "basic income did not introduce a floor, he contended, but a ceiling") and on the flawed Royal Commission into the Operation of the Poor Laws 1832.
Both Bregman and Corey Robin credited Polanyi's view with turning Richard Nixon away from a proposed basic income system, because Polanyi was heavily quoted in a report by Nixon's aide Martin Anderson; in their telling, it thus ultimately provided arguments for the various reductions in the welfare state introduced by Ronald Reagan, Bill Clinton, and George W. Bush. Economic historians (e.g. Douglass North) have criticized Polanyi's account of the origins of capitalism. They argue that Polanyi's account of reciprocity and redistributive systems is inherently changeless and thus cannot explain the emergence of the more specific form of modern capitalism in the 19th century. Deirdre McCloskey has criticized several aspects of the Great Transformation. She notes that Polanyi's account of "pre-market" societies is inconsistent with anthropological evidence which suggests these societies were not as equitable, socially stable, and successful as Polanyi makes them appear to be. McCloskey notes that market-based societies are not a recent invention, as Polanyi claims, but extend much further back in time. She also criticizes Polanyi's conceptualization of self-regulating markets, whereby any and all government intervention in the markets means the markets are no longer markets. The Great Transformation has been criticized for underplaying power and class relations in its analysis. Polanyi argued, "class interests offer only a limited explanation of long-run movements in society." He argued that while humans are "naturally conditioned by economic factors", human motives are only rarely determined by "material want-satisfaction"; rather, human motives are more social (e.g. the desire for security and status) than material. == Contents == Part One The International System Chapter 1. The Hundred Years' Peace Chapter 2. Conservative Twenties, Revolutionary Thirties Part Two Rise and Fall of Market Economy I. Satanic Mill Chapter 3. "Habitation versus Improvement" Chapter 4. Societies and Economic Systems Chapter 5. Evolution of the Market Pattern Chapter 6. The Self-regulating Market and the Fictitious Commodities: Labor, Land, and Money Chapter 7. Speenhamland, 1795 Chapter 8. Antecedents and Consequences Chapter 9. Pauperism and Utopia Chapter 10. Political Economy and the Discovery of Society II. Self-Protection of Society Chapter 11. Man, Nature, and Productive Organization Chapter 12. Birth of the Liberal Creed Chapter 13. Birth of the Liberal Creed (Continued): Class Interest and Social Change Chapter 14. Market and Man Chapter 15. Market and Nature Chapter 16. Market and Productive Organization Chapter 17. Self-Regulation Impaired Chapter 18. Disruptive Strains Part Three Transformation in Progress Chapter 19. Popular Government and Market Economy Chapter 20. History in the Gear of Social Change Chapter 21. Freedom in a Complex Society Notes on Sources Balance of Power Hundred Years' Peace The Snapping of the Golden Thread Swings of the Pendulum after World War I Finance and Peace Selected References to "Societies and Economic Systems" Selected References to "Evolution of the Market Pattern" The Literature of Speenhamland Speenhamland and Vienna Why Not Whitbread's Bill? Disraeli's "Two Nations" and the Problem of Colored Races Additional Note: Poor Law and the Organization of Labor == Editions == The book was originally published in the United States in 1944 and then in England in 1945 as The Origins of Our Time.
It was reissued by Beacon Press as a paperback in 1957 and as a 2nd edition with a foreword by Nobel Prize-winning economist Joseph Stiglitz in 2001. Polanyi, K. (1944). The Great Transformation. Foreword by Robert M. MacIver. New York: Farrar & Rinehart. Polanyi, K. (1957). The Great Transformation. Foreword by Robert M. MacIver. Boston: Beacon Press. ISBN 9780807056790. Polanyi, K. (2001). The Great Transformation: The Political and Economic Origins of Our Time, 2nd ed. Foreword by Joseph E. Stiglitz; introduction by Fred Block. Boston: Beacon Press. ISBN 9780807056431. Polanyi, K. (2024). The Great Transformation: The Political and Economic Origins of Our Time, with introduction by Gareth Dale. Penguin Books. ISBN 9780241685556. == See also == Capitalism Capitalist mode of production Economic anthropology Economic sociology Political economy Formalist–substantivist debate == Notes == == References == Books Block, F., & Somers, M. R. (2014). The Power of Market Fundamentalism: Karl Polanyi's Critique. Harvard University Press. ISBN 0674050711. Polanyi, K. (1977). The Livelihood of Man: Studies in Social Discontinuity. New York: Academic Press. David Graeber, Toward an Anthropological Theory of Value: The False Coin of Our Own Dreams, Palgrave, New York, 2001. David Graeber, Debt: The First 5000 Years, Brooklyn, NY: Melville House Publishing, 2011. ISBN 9781933633862. Articles Block, F. (2003). Karl Polanyi and the Writing of "The Great Transformation". Theory and Society, 32, June, 3, 275–306. Clough, S. B., & Polanyi, K. (1944). Review of The Great Transformation. The Journal of Modern History, 16, December, 4, 313–314. Review of The Great Transformation from Economic History Services. Markets and Other Allocation Systems in History: The Challenge of Karl Polanyi. Karl Polanyi's Battle with Economic History. Libertarianism.org. The free market is an impossible utopia (18 July 2014), The Washington Post. Something That Changed My Perspective: Karl Polanyi's The Great Transformation (2 January 2015), Naked Capitalism. == External links == Excerpt from Chapter 4, Societies and Economic Systems, of The Great Transformation The Karl Polanyi Archive – Concordia University, Montreal Karl Polanyi, The Great Transformation (1957 edition), (2001 edition), at the Internet Archive
Wikipedia/The_Great_Transformation_(book)
In game theory, differential games are dynamic games that unfold in continuous time, meaning players' actions and outcomes evolve smoothly rather than in discrete steps, and for which the rate of change of each state variable (such as position, speed, or resource level) is governed by a differential equation. This distinguishes them from turn-based games (sequential games) like chess, focusing instead on real-time strategic conflicts. Differential games are sometimes referred to as continuous-time games, though the latter is a broader term that includes them. While the two overlap significantly, continuous-time games also encompass models not governed by differential equations, such as those with stochastic jump processes, where abrupt, unpredictable events introduce discontinuities. Early differential games, often inspired by military scenarios, modeled situations like a pursuer chasing an evader, such as a missile targeting an aircraft. Today, they also apply to fields like economics and engineering, analyzing competition over resources or the control of moving systems. == Connection to optimal control == Differential games are closely related to optimal control problems. In an optimal control problem there is a single control u ( t ) {\displaystyle u(t)} and a single criterion to be optimized; differential game theory generalizes this to two controls u 1 ( t ) , u 2 ( t ) {\displaystyle u_{1}(t),u_{2}(t)} and two criteria, one for each player (a standard formulation is sketched at the end of this entry). Each player attempts to control the state of the system so as to achieve its goal; the system responds to the inputs of all players. == History == In the study of competition, differential games have been employed since a 1925 article by Charles F. Roos. The first to study the formal theory of differential games was Rufus Isaacs, who published a textbook treatment in 1965. One of the first games analyzed was the 'homicidal chauffeur game'. == Random time horizon == Games with a random time horizon are a particular case of differential games. In such games, the terminal time is a random variable with a given probability distribution function. Therefore, the players maximize the mathematical expectation of the cost function. It was shown that the modified optimization problem can be reformulated as a discounted differential game over an infinite time interval. == Applications == Differential games have been applied to economics. Recent developments include adding stochasticity to differential games and the derivation of the stochastic feedback Nash equilibrium (SFNE). A recent example is the stochastic differential game of capitalism by Leong and Huang (2010). In 2016 Yuliy Sannikov received the John Bates Clark Medal from the American Economic Association for his contributions to the analysis of continuous-time dynamic games using stochastic calculus methods. Additionally, differential games have applications in missile guidance and autonomous systems. For a survey of pursuit–evasion differential games see Pachter. == See also == Lotka–Volterra equations Mean-field game theory == Notes == == Further reading == Dockner, Engelbert; Jorgensen, Steffen; Long, Ngo Van; Sorger, Gerhard (2001), Differential Games in Economics and Management Science, Cambridge University Press, ISBN 978-0-521-63732-9 Petrosyan, Leon (1993), Differential Games of Pursuit, Series on Optimization, vol. 2, World Scientific Publishers, ISBN 978-981-02-0979-7 == External links == Bressan, Alberto (December 8, 2010). "Noncooperative Differential Games: A Tutorial" (PDF).
Department of Mathematics, Penn State University.
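For concreteness, a standard textbook formulation of the two-player setting described in the connection-to-optimal-control section runs as follows; the notation here is illustrative rather than drawn from a particular source. The state evolves according to x ˙ ( t ) = f ( x ( t ) , u 1 ( t ) , u 2 ( t ) , t ) , x ( 0 ) = x 0 {\displaystyle {\dot {x}}(t)=f(x(t),u_{1}(t),u_{2}(t),t),\quad x(0)=x_{0},} and player i chooses its control to maximize J i = ∫ 0 T g i ( x ( t ) , u 1 ( t ) , u 2 ( t ) , t ) d t + S i ( x ( T ) ) {\displaystyle J_{i}=\int _{0}^{T}g_{i}(x(t),u_{1}(t),u_{2}(t),t)\,dt+S_{i}(x(T))} . Taking g 2 = − g 1 {\displaystyle g_{2}=-g_{1}} and S 2 = − S 1 {\displaystyle S_{2}=-S_{1}} gives a zero-sum game such as pursuit–evasion. For the random-horizon case, if the terminal time T is a random variable with distribution function F and the running payoff is integrable, Fubini's theorem gives E [ ∫ 0 T g d t ] = ∫ 0 ∞ ( 1 − F ( t ) ) g ( t ) d t {\displaystyle \operatorname {E} \left[\int _{0}^{T}g\,dt\right]=\int _{0}^{\infty }(1-F(t))\,g(t)\,dt} , and when F is exponential with rate λ the right-hand side is the discounted payoff ∫ 0 ∞ e − λ t g ( t ) d t {\displaystyle \int _{0}^{\infty }e^{-\lambda t}g(t)\,dt} , which is one way to see why such games reduce to discounted games over an infinite time interval.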
Wikipedia/Differential_game
In economics, general equilibrium theory attempts to explain the behavior of supply, demand, and prices in a whole economy with several or many interacting markets, by seeking to prove that the interaction of demand and supply will result in an overall general equilibrium. General equilibrium theory contrasts with the theory of partial equilibrium, which analyzes a specific part of an economy while its other factors are held constant. General equilibrium theory both studies economies using the model of equilibrium pricing and seeks to determine in which circumstances the assumptions of general equilibrium will hold. The theory dates to the 1870s, particularly the work of French economist Léon Walras in his pioneering 1874 work Elements of Pure Economics. The theory reached its modern form with the work of Lionel W. McKenzie (Walrasian theory), Kenneth Arrow and Gérard Debreu (Hicksian theory) in the 1950s. == Overview == Broadly speaking, general equilibrium tries to give an understanding of the whole economy using a "bottom-up" approach, starting with individual markets and agents. Therefore, general equilibrium theory has traditionally been classified as part of microeconomics. The difference is not as clear as it used to be, since much of modern macroeconomics has emphasized microeconomic foundations, and has constructed general equilibrium models of macroeconomic fluctuations. General equilibrium macroeconomic models usually have a simplified structure that only incorporates a few markets, like a "goods market" and a "financial market". In contrast, general equilibrium models in the microeconomic tradition typically involve a multitude of different goods markets. They are usually complex and require computers to calculate numerical solutions. In a market system the prices and production of all goods, including the price of money and interest, are interrelated. A change in the price of one good, say bread, may affect another price, such as bakers' wages. If bakers don't differ in tastes from others, the demand for bread might be affected by a change in bakers' wages, with a consequent effect on the price of bread. Calculating the equilibrium price of just one good, in theory, requires an analysis that accounts for all of the millions of different goods that are available. It is often assumed that agents are price takers, and under that assumption two common notions of equilibrium exist: Walrasian, or competitive equilibrium, and its generalization: a price equilibrium with transfers. === Walrasian equilibrium === The first attempt in neoclassical economics to model prices for a whole economy was made by Léon Walras. Walras' Elements of Pure Economics provides a succession of models, each taking into account more aspects of a real economy (two commodities, many commodities, production, growth, money). Some think Walras was unsuccessful and that the later models in this series are inconsistent. In particular, Walras's model was a long-run model in which prices of capital goods are the same whether they appear as inputs or outputs and in which the same rate of profits is earned in all lines of industry. This is inconsistent with the quantities of capital goods being taken as data. But when Walras introduced capital goods in his later models, he took their quantities as given, in arbitrary ratios. 
(In contrast, Kenneth Arrow and Gérard Debreu continued to take the initial quantities of capital goods as given, but adopted a short run model in which the prices of capital goods vary with time and the own rate of interest varies across capital goods.) Walras was the first to lay down a research program widely followed by 20th-century economists. In particular, the Walrasian agenda included the investigation of when equilibria are unique and stable: Walras' Lesson 7 shows that neither uniqueness, nor stability, nor even existence of an equilibrium is guaranteed. Walras also proposed a dynamic process by which general equilibrium might be reached, that of the tâtonnement or groping process. The tâtonnement process is a model for investigating stability of equilibria. Prices are announced (perhaps by an "auctioneer"), and agents state how much of each good they would like to offer (supply) or purchase (demand). No transactions and no production take place at disequilibrium prices. Instead, prices are lowered for goods with positive prices and excess supply, and prices are raised for goods with excess demand. The question for the mathematician is under what conditions such a process will terminate in equilibrium, where demand equates to supply for goods with positive prices and demand does not exceed supply for goods with a price of zero. Walras was not able to provide a definitive answer to this question (see Unresolved Problems in General Equilibrium below); a small numerical sketch of the adjustment rule is given below. === Marshall and Sraffa === In partial equilibrium analysis, the determination of the price of a good is simplified by just looking at the price of one good, and assuming that the prices of all other goods remain constant. The Marshallian theory of supply and demand is an example of partial equilibrium analysis. Partial equilibrium analysis is adequate when the first-order effects of a shift in the demand curve do not shift the supply curve. Anglo-American economists became more interested in general equilibrium in the late 1920s and 1930s after Piero Sraffa's demonstration that Marshallian economists cannot account for the forces thought to explain the upward slope of the supply curve for a consumer good. If an industry uses little of a factor of production, a small increase in the output of that industry will not bid the price of that factor up. To a first-order approximation, firms in the industry will experience constant costs, and the industry supply curves will not slope up. If an industry uses an appreciable amount of that factor of production, an increase in the output of that industry will exhibit increasing costs. But such a factor is likely to be used in substitutes for the industry's product, and an increased price of that factor will have effects on the supply of those substitutes. Consequently, Sraffa argued, the first-order effects of a shift in the demand curve of the original industry under these assumptions include a shift in the supply curve of substitutes for that industry's product, and consequent shifts in the original industry's supply curve. General equilibrium is designed to investigate such interactions between markets. Continental European economists made important advances in the 1930s. Walras' arguments for the existence of general equilibrium were often based on the counting of equations and variables. Such arguments are inadequate for non-linear systems of equations and do not imply that equilibrium prices and quantities cannot be negative, a meaningless solution for his models.
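To make the tâtonnement rule above concrete, here is a minimal numerical sketch for a pure-exchange economy with two goods and two Cobb–Douglas consumers. All preference weights, endowments, the starting price, and the step size are illustrative assumptions; good 2 serves as numéraire.

```python
# Tatonnement sketch: two goods, two Cobb-Douglas consumers.
# A consumer with utility x1^a * x2^(1-a) and endowment (e1, e2) demands
#   x1 = a * wealth / p1 and x2 = (1 - a) * wealth / p2,
# where wealth = p1*e1 + p2*e2.

def excess_demand(p1, p2, consumers):
    z1 = z2 = 0.0
    for a, e1, e2 in consumers:
        wealth = p1 * e1 + p2 * e2
        z1 += a * wealth / p1 - e1          # demand minus endowment, good 1
        z2 += (1 - a) * wealth / p2 - e2    # demand minus endowment, good 2
    return z1, z2

consumers = [(0.3, 1.0, 1.0), (0.6, 2.0, 1.0)]  # (a, e1, e2), assumed values
p1, p2 = 2.0, 1.0                               # good 2 is the numeraire
for _ in range(1000):
    z1, _ = excess_demand(p1, p2, consumers)
    # Raise the price under excess demand, lower it under excess supply.
    p1 = max(p1 + 0.1 * z1, 1e-9)

z1, z2 = excess_demand(p1, p2, consumers)
print(f"p1/p2 = {p1:.4f}, excess demands = ({z1:.2e}, {z2:.2e})")  # -> 0.6000
```

By Walras's law the value of excess demands sums to zero, so clearing the market for good 1 clears the market for good 2 as well; that is why only the relative price is adjusted. Nothing in the sketch guarantees convergence in general, which is precisely the stability question discussed later in this article.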
The replacement of certain equations by inequalities and the use of more rigorous mathematics improved general equilibrium modeling. == Modern concept of general equilibrium in economics == The modern conception of general equilibrium is provided by the Arrow–Debreu–McKenzie model, developed jointly by Kenneth Arrow, Gérard Debreu, and Lionel W. McKenzie in the 1950s. Debreu presents this model in Theory of Value (1959) as an axiomatic model, following the style of mathematics promoted by Nicolas Bourbaki. In such an approach, the interpretation of the terms in the theory (e.g., goods, prices) is not fixed by the axioms. Three important interpretations of the terms of the theory have often been cited. First, suppose commodities are distinguished by the location where they are delivered. Then the Arrow-Debreu model is a spatial model of, for example, international trade. Second, suppose commodities are distinguished by when they are delivered. That is, suppose all markets equilibrate at some initial instant of time. Agents in the model purchase and sell contracts, where a contract specifies, for example, a good to be delivered and the date at which it is to be delivered. The Arrow–Debreu model of intertemporal equilibrium contains forward markets for all goods at all dates. No markets exist at any future dates. Third, suppose contracts specify states of nature which affect whether a commodity is to be delivered: "A contract for the transfer of a commodity now specifies, in addition to its physical properties, its location and its date, an event on the occurrence of which the transfer is conditional. This new definition of a commodity allows one to obtain a theory of [risk] free from any probability concept..." These interpretations can be combined. So the complete Arrow–Debreu model can be said to apply when goods are identified by when they are to be delivered, where they are to be delivered and under what circumstances they are to be delivered, as well as their intrinsic nature. So there would be a complete set of prices for contracts such as "1 ton of Winter red wheat, delivered on 3rd of January in Minneapolis, if there is a hurricane in Florida during December" (this indexing is sketched below). A general equilibrium model with complete markets of this sort seems to be a long way from describing the workings of real economies; however, its proponents argue that it is still useful as a simplified guide as to how real economies function. Some of the recent work in general equilibrium has in fact explored the implications of incomplete markets, which is to say an intertemporal economy with uncertainty, where there do not exist sufficiently detailed contracts that would allow agents to fully allocate their consumption and resources through time. While it has been shown that such economies will generally still have an equilibrium, the outcome may no longer be Pareto optimal. The basic intuition for this result is that if consumers lack adequate means to transfer their wealth from one time period to another and the future is risky, there is nothing to necessarily tie any price ratio down to the relevant marginal rate of substitution, which is the standard requirement for Pareto optimality. Under some conditions the economy may still be constrained Pareto optimal, meaning that a central authority limited to the same type and number of contracts as the individual agents may not be able to improve upon the outcome; what is needed is the introduction of a full set of possible contracts.
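The date-location-state indexing just described can be made concrete with a small sketch; the field names and the price are illustrative assumptions echoing the wheat example above, not part of any formal treatment.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Commodity:
    """An Arrow-Debreu commodity: a physical good together with its
    delivery location, delivery date, and the event conditioning delivery."""
    good: str
    location: str
    date: str
    event: str

# Complete markets mean a time-zero price for every such contract;
# the number below is made up for illustration.
prices = {
    Commodity("1 ton of Winter red wheat", "Minneapolis", "January 3",
              "hurricane in Florida during December"): 212.50,
}

contract = next(iter(prices))
print(f"{contract.good}, {contract.location}, {contract.date}, "
      f"if {contract.event}: {prices[contract]}")
```

Incomplete markets, in the sense discussed above, correspond to some of these state-contingent entries simply having no traded contract at all.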
Hence, one implication of the theory of incomplete markets is that inefficiency may be a result of underdeveloped financial institutions or credit constraints faced by some members of the public. Research still continues in this area. == Properties and characterization of general equilibrium == Basic questions in general equilibrium analysis are concerned with the conditions under which an equilibrium will be efficient, which efficient equilibria can be achieved, when an equilibrium is guaranteed to exist and when the equilibrium will be unique and stable. === First Fundamental Theorem of Welfare Economics === The First Fundamental Welfare Theorem asserts that market equilibria are Pareto efficient. In other words, the allocation of goods in the equilibria is such that there is no reallocation which would leave a consumer better off without leaving another consumer worse off. In a pure exchange economy, a sufficient condition for the first welfare theorem to hold is that preferences be locally nonsatiated. The first welfare theorem also holds for economies with production regardless of the properties of the production function. Implicitly, the theorem assumes complete markets and perfect information. In an economy with externalities, for example, it is possible for equilibria to arise that are not efficient. The first welfare theorem is informative in the sense that it points to the sources of inefficiency in markets. Under the assumptions above, any market equilibrium is tautologically efficient. Therefore, when equilibria arise that are not efficient, the market system itself is not to blame, but rather some sort of market failure. === Second Fundamental Theorem of Welfare Economics === Even if every equilibrium is efficient, it may not be that every efficient allocation of resources can be part of an equilibrium. However, the second theorem states that every Pareto efficient allocation can be supported as an equilibrium by some set of prices. In other words, all that is required to reach a particular Pareto efficient outcome is a redistribution of initial endowments of the agents, after which the market can be left alone to do its work (both theorems are restated compactly below). This suggests that the issues of efficiency and equity can be separated and need not involve a trade-off. The conditions for the second theorem are stronger than those for the first, as consumers' preferences and production sets now need to be convex (convexity roughly corresponds to the idea of diminishing marginal rates of substitution, i.e. "the average of two equally good bundles is better than either of the two bundles"). === Existence === Even though every equilibrium is efficient, neither of the above two theorems says anything about the equilibrium existing in the first place. To guarantee that an equilibrium exists, it suffices that consumer preferences be strictly convex. With enough consumers, the convexity assumption can be relaxed both for existence and the second welfare theorem. Similarly, but less plausibly, convex feasible production sets suffice for existence; convexity excludes economies of scale. Proofs of the existence of equilibrium traditionally rely on fixed-point theorems such as the Brouwer fixed-point theorem for functions (or, more generally, the Kakutani fixed-point theorem for set-valued functions). See Competitive equilibrium#Existence of a competitive equilibrium. The proof was first due to Lionel McKenzie, and Kenneth Arrow and Gérard Debreu.
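Stated compactly for a pure exchange economy (a standard formulation, with notation assumed here rather than taken from the text): a Walrasian equilibrium is a price vector p∗ and an allocation ( x i ∗ ) {\displaystyle (x_{i}^{*})} such that each consumer's bundle is optimal in their budget set and markets clear, x i ∗ ∈ arg ⁡ max { u i ( x ) : p ∗ ⋅ x ≤ p ∗ ⋅ e i } , ∑ i x i ∗ = ∑ i e i , {\displaystyle x_{i}^{*}\in \arg \max\{u_{i}(x):p^{*}\cdot x\leq p^{*}\cdot e_{i}\},\qquad \sum _{i}x_{i}^{*}=\sum _{i}e_{i},} where e i {\displaystyle e_{i}} is consumer i's endowment. The first theorem says that any such allocation is Pareto efficient (given local nonsatiation); the second says that any Pareto efficient allocation can be supported as an equilibrium after suitable lump-sum redistribution of the endowments, given convexity.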
In fact, the converse also holds, according to Uzawa's derivation of Brouwer's fixed point theorem from Walras's law. Following Uzawa's theorem, many mathematical economists consider proving existence a deeper result than proving the two Fundamental Theorems. Another method of proof of existence, global analysis, uses Sard's lemma and the Baire category theorem; this method was pioneered by Gérard Debreu and Stephen Smale. ==== Nonconvexities in large economies ==== Starr (1969) applied the Shapley–Folkman–Starr theorem to prove that even without convex preferences there exists an approximate equilibrium. The Shapley–Folkman–Starr results bound the distance from an "approximate" economic equilibrium to an equilibrium of a "convexified" economy, when the number of agents exceeds the dimension of the goods. Following Starr's paper, the Shapley–Folkman–Starr results were "much exploited in the theoretical literature", according to Guesnerie,: 112  who wrote the following: some key results obtained under the convexity assumption remain (approximately) relevant in circumstances where convexity fails. For example, in economies with a large consumption side, nonconvexities in preferences do not destroy the standard results of, say Debreu's theory of value. In the same way, if indivisibilities in the production sector are small with respect to the size of the economy, [ . . . ] then standard results are affected in only a minor way.: 99  To this text, Guesnerie appended the following footnote: The derivation of these results in general form has been one of the major achievements of postwar economic theory.: 138  In particular, the Shapley-Folkman-Starr results were incorporated in the theory of general economic equilibria and in the theory of market failures and of public economics. === Uniqueness === Although generally (assuming convexity) an equilibrium will exist and will be efficient, the conditions under which it will be unique are much stronger. The Sonnenschein–Mantel–Debreu theorem, proven in the 1970s, states that the aggregate excess demand function inherits only certain properties of individual's demand functions, and that these (continuity, homogeneity of degree zero, Walras' law and boundary behavior when prices are near zero) are the only real restriction one can expect from an aggregate excess demand function. Any such function can represent the excess demand of an economy populated with rational utility-maximizing individuals. There has been much research on conditions when the equilibrium will be unique, or which at least will limit the number of equilibria. One result states that under mild assumptions the number of equilibria will be finite (see regular economy) and odd (see index theorem). Furthermore, if an economy as a whole, as characterized by an aggregate excess demand function, has the revealed preference property (which is a much stronger condition than revealed preferences for a single individual) or the gross substitute property then likewise the equilibrium will be unique. All methods of establishing uniqueness can be thought of as establishing that each equilibrium has the same positive local index, in which case by the index theorem there can be but one such equilibrium. === Determinacy === Given that equilibria may not be unique, it is of some interest to ask whether any particular equilibrium is at least locally unique. If so, then comparative statics can be applied as long as the shocks to the system are not too large. 
As stated above, in a regular economy equilibria will be finite, hence locally unique. One reassuring result, due to Debreu, is that "most" economies are regular. Work by Michael Mandler (1999) has challenged this claim. The Arrow–Debreu–McKenzie model is neutral between models of production functions as continuously differentiable and as formed from (linear combinations of) fixed coefficient processes. Mandler accepts that, under either model of production, the initial endowments will not be consistent with a continuum of equilibria, except for a set of Lebesgue measure zero. However, endowments change with time in the model and this evolution of endowments is determined by the decisions of agents (e.g., firms) in the model. Agents in the model have an interest in equilibria being indeterminate: Indeterminacy, moreover, is not just a technical nuisance; it undermines the price-taking assumption of competitive models. Since arbitrary small manipulations of factor supplies can dramatically increase a factor's price, factor owners will not take prices to be parametric.: 17  When technology is modeled by (linear combinations) of fixed coefficient processes, optimizing agents will drive endowments to be such that a continuum of equilibria exist: The endowments where indeterminacy occurs systematically arise through time and therefore cannot be dismissed; the Arrow-Debreu-McKenzie model is thus fully subject to the dilemmas of factor price theory.: 19  Some have questioned the practical applicability of the general equilibrium approach based on the possibility of non-uniqueness of equilibria. === Stability === In a typical general equilibrium model the prices that prevail "when the dust settles" are simply those that coordinate the demands of various consumers for various goods. But this raises the question of how these prices and allocations have been arrived at, and whether any (temporary) shock to the economy will cause it to converge back to the same outcome that prevailed before the shock. This is the question of stability of the equilibrium, and it can be readily seen that it is related to the question of uniqueness. If there are multiple equilibria, then some of them will be unstable. Then, if an equilibrium is unstable and there is a shock, the economy will wind up at a different set of allocations and prices once the convergence process terminates. However, stability depends not only on the number of equilibria but also on the type of the process that guides price changes (for a specific type of price adjustment process see Walrasian auction). Consequently, some researchers have focused on plausible adjustment processes that guarantee system stability, i.e., that guarantee convergence of prices and allocations to some equilibrium. When more than one stable equilibrium exists, where one ends up will depend on where one begins. The theorems that have been most conclusive about the stability of a typical general equilibrium model concern local, rather than global, stability. == Unresolved problems in general equilibrium == Research building on the Arrow–Debreu–McKenzie model has revealed some problems with the model. The Sonnenschein–Mantel–Debreu results show that, essentially, aggregate excess demand functions are constrained only by continuity, homogeneity, and Walras's law, so any restriction on their shape strong enough to guarantee uniqueness or stability is a stringent assumption. Some think this implies that the Arrow–Debreu model lacks empirical content. Therefore, an unsolved problem remains: are Arrow–Debreu–McKenzie equilibria stable and unique?
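For reference, the properties named above can be written out; this is the standard statement, with z denoting aggregate excess demand as a function of the price vector p. Besides continuity and the boundary behavior as some price approaches zero, z satisfies homogeneity of degree zero and Walras's law: z ( λ p ) = z ( p ) for all λ > 0 , p ⋅ z ( p ) = ∑ k p k z k ( p ) = 0 , {\displaystyle z(\lambda p)=z(p)\ {\text{for all}}\ \lambda >0,\qquad p\cdot z(p)=\sum _{k}p_{k}z_{k}(p)=0,} and an equilibrium is a price vector p∗ with z k ( p ∗ ) ≤ 0 {\displaystyle z_{k}(p^{*})\leq 0} for every good, with equality whenever p k ∗ > 0 {\displaystyle p_{k}^{*}>0} . The Sonnenschein–Mantel–Debreu theorem says that any function with these properties is the excess demand of some economy of rational consumers, which is why so little about uniqueness or stability can be inferred in general.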
A model organized around the tâtonnement process has been said to be a model of a centrally planned economy, not a decentralized market economy. Some research has tried to develop general equilibrium models with other processes. In particular, some economists have developed models in which agents can trade at out-of-equilibrium prices and such trades can affect the equilibria to which the economy tends. Particularly noteworthy are the Hahn process, the Edgeworth process and the Fisher process. The data determining Arrow-Debreu equilibria include initial endowments of capital goods. If production and trade occur out of equilibrium, these endowments will be changed, further complicating the picture. In a real economy, however, trading, as well as production and consumption, goes on out of equilibrium. It follows that, in the course of convergence to equilibrium (assuming that occurs), endowments change. In turn this changes the set of equilibria. Put more succinctly, the set of equilibria is path dependent... [This path dependence] makes the calculation of equilibria corresponding to the initial state of the system essentially irrelevant. What matters is the equilibrium that the economy will reach from given initial endowments, not the equilibrium that it would have been in, given initial endowments, had prices happened to be just right. – (Franklin Fisher). The Arrow–Debreu model in which all trade occurs in futures contracts at time zero requires a very large number of markets to exist. It is equivalent under complete markets to a sequential equilibrium concept in which spot markets for goods and assets open at each date-state event (they are not equivalent under incomplete markets); market clearing then requires that the entire sequence of prices clears all markets at all times. A generalization of the sequential market arrangement is the temporary equilibrium structure, where market clearing at a point in time is conditional on expectations of future prices which need not be market clearing ones. Although the Arrow–Debreu–McKenzie model is set out in terms of some arbitrary numéraire, the model does not encompass money. Frank Hahn, for example, has investigated whether general equilibrium models can be developed in which money enters in some essential way. One of the essential questions he introduces, often referred to as Hahn's problem, is: "Can one construct an equilibrium where money has value?" The goal is to find models in which existence of money can alter the equilibrium solutions, perhaps because the initial position of agents depends on monetary prices. Some critics of general equilibrium modeling contend that much research in these models constitutes exercises in pure mathematics with no connection to actual economies. In a 1979 article, Nicholas Georgescu-Roegen complains: "There are endeavors that now pass for the most desirable kind of economic contributions although they are just plain mathematical exercises, not only without any economic substance but also without any mathematical value." He cites as an example a paper that assumes more traders in existence than there are points in the set of real numbers. Although modern models in general equilibrium theory demonstrate that under certain circumstances prices will indeed converge to equilibria, critics hold that the assumptions necessary for these results are extremely strong.
As well as stringent restrictions on excess demand functions, the necessary assumptions include perfect rationality of individuals; complete information about all prices both now and in the future; and the conditions necessary for perfect competition. However, some results from experimental economics suggest that even in circumstances where there are few, imperfectly informed agents, the resulting prices and allocations may wind up resembling those of a perfectly competitive market (although certainly not a stable general equilibrium in all markets). Frank Hahn defends general equilibrium modeling on the grounds that it provides a negative function. General equilibrium models show what the economy would have to be like for an unregulated economy to be Pareto efficient. == Computing general equilibrium == Until the 1970s general equilibrium analysis remained theoretical. With advances in computing power and the development of input–output tables, it became possible to model national economies, or even the world economy, and attempts were made to solve for general equilibrium prices and quantities empirically. Applied general equilibrium (AGE) models were pioneered by Herbert Scarf in 1967, and offered a method for solving the Arrow–Debreu general equilibrium system in a numerical fashion. This was first implemented by John Shoven and John Whalley (students of Scarf at Yale) in 1972 and 1973, and the approach remained popular up through the 1970s. In the 1980s, however, AGE models faded from popularity due to their inability to provide a precise solution and their high cost of computation. Computable general equilibrium (CGE) models surpassed and replaced AGE models in the mid-1980s, as the CGE model was able to provide relatively quick and large computable models for a whole economy, and was the preferred method of governments and the World Bank. CGE models are heavily used today, and while 'AGE' and 'CGE' are used interchangeably in the literature, Scarf-type AGE models have not been constructed since the mid-1980s, and the current CGE literature is not based on Arrow–Debreu general equilibrium theory as discussed in this article. CGE models, and what are today referred to as AGE models, are based on static, simultaneously solved, macro balancing equations (from the standard Keynesian macro model), giving a precise and explicitly computable result. == Other schools == General equilibrium theory is a central point of contention and influence between the neoclassical school and other schools of economic thought, and different schools have varied views on general equilibrium theory. Some, such as the Keynesian and Post-Keynesian schools, strongly reject general equilibrium theory as "misleading" and "useless". Disequilibrium macroeconomics and different non-equilibrium approaches were developed as alternatives. Other schools, such as new classical macroeconomics, developed from general equilibrium theory. === Keynesian and Post-Keynesian === Keynesian and Post-Keynesian economists, and their underconsumptionist predecessors, criticize general equilibrium theory specifically, and as part of criticisms of neoclassical economics generally. Specifically, they argue that general equilibrium theory is neither accurate nor useful, that economies are not in equilibrium, that equilibrium may be slow and painful to achieve, and that modeling by equilibrium is "misleading", so that the resulting theory is not a useful guide, particularly for understanding economic crises.
Let us beware of this dangerous theory of equilibrium which is supposed to be automatically established. A certain kind of equilibrium, it is true, is reestablished in the long run, but it is after a frightful amount of suffering. The long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is past the ocean is flat again. It is as absurd to assume that, for any long period of time, the variables in the economic organization, or any part of them, will "stay put," in perfect equilibrium, as to assume that the Atlantic Ocean can ever be without a wave. Robert Clower and others have argued for a reformulation of theory toward disequilibrium analysis, to incorporate how monetary exchange fundamentally alters the representation of an economy as if it were a barter system. === New classical macroeconomics === While general equilibrium theory and neoclassical economics generally were originally microeconomic theories, new classical macroeconomics builds a macroeconomic theory on these bases. In new classical models, the macroeconomy is assumed to be at its unique equilibrium, with full employment and potential output, and this equilibrium is assumed always to have been achieved via price and wage adjustment (market clearing). The best-known such model is real business-cycle theory, in which business cycles are considered to be largely due to changes in the real economy; unemployment is not due to the failure of the market to achieve potential output, but to equilibrium potential output having fallen and equilibrium unemployment having risen. === Socialist economics === Within socialist economics, a sustained critique of general equilibrium theory (and neoclassical economics generally) is given in Anti-Equilibrium, based on the experiences of János Kornai with the failures of Communist central planning, although Michael Albert and Robin Hahnel later based their Parecon model on the same theory. === New structural economics === The structural equilibrium model is a matrix-form computable general equilibrium model in new structural economics. This model is an extension of John von Neumann's general equilibrium model (see Computable general equilibrium for details). Its computation can be performed using the R package GE. The structural equilibrium model can be used for intertemporal equilibrium analysis, where time is treated as a label that differentiates between types of commodities and firms, meaning commodities are distinguished by when they are delivered and firms are distinguished by when they produce. The model can include factors such as taxes, money, endogenous production functions, and endogenous institutions. The structural equilibrium model can include excess tax burdens, meaning that the equilibrium in the model may not be Pareto optimal. When production functions and/or economic institutions are treated as endogenous variables, the general equilibrium is referred to as structural equilibrium. == See also == General equilibrium theorists (category) Cobweb model Decision theory Game theory Mechanism design == Notes == == Further reading == Black, Fischer (1995). Exploring General Equilibrium. Cambridge, Massachusetts: MIT Press. ISBN 978-0-262-02382-5. Dixon, Peter B.; Parmenter, Brian R.; Powell, Alan A.; Wilcoxen, Peter J.; Pearson, Ken R. (1992). Notes and Problems in Applied General Equilibrium Economics. North-Holland.
ISBN 978-0-444-88449-7. ——, with Rimmer, Maureen T. (2002). Dynamic General Equilibrium Modelling for Forecasting and Policy: A Practical Guide and Documentation of MONASH. Contributions to economic analysis (256). Amsterdam: Elsevier. ISBN 0444512608. ——, with Jorgenson, Dale W., eds. (2012). Handbook of Computable General Equilibrium Modelling (2 volumes). North-Holland. ——, with Jerie, Michael; Rimmer, Maureen T. (2018). Trade Theory in Computable General Equilibrium Models. Advances in Applied General Equilibrium Modeling. Singapore: Springer Nature. ISBN 978-981-10-8323-5. Eaton, B. Curtis; Eaton, Diane F.; Allen, Douglas W. (2009). "Competitive General Equilibrium". Microeconomics: Theory with Applications (Seventh ed.). Toronto: Pearson Prentice Hall. ISBN 978-0-13-206424-8. Geanakoplos, John (1987). "Arrow-Debreu model of general equilibrium". The New Palgrave: A Dictionary of Economics. Vol. 1. pp. 116–124. Grandmont, J. M. (1977). "Temporary General Equilibrium Theory". Econometrica. 45 (3): 535–572. doi:10.2307/1911674. JSTOR 1911674. Hicks, John R. (1946) [1939]. Value and Capital (Second ed.). Oxford: Clarendon Press. Kubler, Felix (2008). "Computation of general equilibria (new developments)". The New Palgrave Dictionary of Economics (Second ed.). Mas-Colell, A.; Whinston, M.; Green, J. (1995). Microeconomic Theory. New York: Oxford University Press. ISBN 978-0-19-507340-9. McKenzie, Lionel W. (1981). "The Classical Theorem on Existence of Competitive Equilibrium" (PDF). Econometrica. 49 (4): 819–841. doi:10.2307/1912505. JSTOR 1912505. Archived (PDF) from the original on 2012-03-15. _____ (1983). "Turnpike Theory, Discounted Utility, and the von Neumann Facet". Journal of Economic Theory. 30 (2): 330–352. doi:10.1016/0022-0531(83)90111-4. _____ (1987). "Turnpike theory". The New Palgrave: A Dictionary of Economics. Vol. 4. pp. 712–720. _____ (1999). "Equilibrium, Trade, and Capital Accumulation". The Japanese Economic Review. 50 (4): 371–397. doi:10.1111/1468-5876.00128. Samuelson, Paul A. (1941). "The Stability of Equilibrium: Comparative Statics and Dynamics". Econometrica. 9 (2): 97–120. doi:10.2307/1906872. JSTOR 1906872. _____ (1983) [1947]. Foundations of Economic Analysis (Enlarged ed.). Harvard University Press. ISBN 978-0-674-31301-9. Scarf, Herbert E. (2008). "Computation of general equilibria". The New Palgrave Dictionary of Economics (2nd ed.). Schumpeter, Joseph A. (1954). History of Economic Analysis. Oxford University Press.
Wikipedia/General_equilibrium_theory
Behavioural science is the branch of science concerned with human behaviour. While the term can technically be applied to the study of behaviour amongst all living organisms, it is nearly always used with reference to humans as the primary target of investigation (though animals may be studied in some instances, e.g. in research requiring invasive techniques). The behavioural sciences sit between the conventional natural sciences and social studies in terms of scientific rigor. They encompass fields such as psychology, neuroscience, linguistics, and economics. == Scope == The behavioural sciences encompass both natural and social scientific disciplines, including various branches of psychology, neuroscience and biobehavioural sciences, behavioural economics and certain branches of criminology, sociology and political science. This interdisciplinary nature allows behavioural scientists to coordinate findings from psychological experiments, genetics and neuroimaging, self-report studies, interspecies and cross-cultural comparisons, and correlational and longitudinal designs to understand the nature, frequency, mechanisms, causes and consequences of given behaviours. With respect to applied behavioural science and behavioural insights, the focus is usually narrower, tending to encompass cognitive psychology, social psychology and behavioural economics generally, and invoking other more specific fields (e.g. health psychology) where needed. In applied settings, behavioural scientists use their knowledge of cognitive biases, heuristics, and the many factors that affect decision-making to develop behaviour change interventions or to design policies which 'nudge' people toward acting more beneficially (see Applications below). === Future and emerging techniques === Robila suggests that using modern technology, such as artificial intelligence, machine learning, and big data, to study and understand behavioural patterns at greater scale is a promising direction for behavioural science research. Creating cutting-edge therapies and interventions with immersive technologies such as virtual reality and AI could likewise benefit the field. These concepts hint at only a few of the many paths behavioural science may take in the future. == Applications == Insights from several pure disciplines across the behavioural sciences are explored by various applied disciplines and practiced in the context of everyday life and business. Consumer behaviour, for instance, is the study of the decision-making processes consumers go through when purchasing goods or services. It studies the way consumers recognise problems and discover solutions. Behavioural science is applied in this study by examining the patterns in consumers' purchases, the factors that influence those decisions, and how to take advantage of these patterns. Organisational behaviour is the application of behavioural science in a business setting. It studies what motivates employees, how to make them work more effectively, what influences this behaviour, and how to use these patterns in order to achieve the company's goals. Managers often use organisational behaviour to better lead their employees. Using insights from psychology and economics, behavioural science can be leveraged to understand how individuals make decisions regarding their health and ultimately reduce disease burden through interventions built on loss aversion, framing, defaults, nudges, and more.
Other applied disciplines of behavioural science include operations research and media psychology. == Differentiation from social sciences == Behavioural sciences and social sciences are interconnected fields that both study systematic processes of behaviour, but they differ in their level of scientific analysis for various dimensions of behaviour. Behavioural sciences abstract empirical data to investigate the decision process and communication strategies within and between organisms in a social system. This characteristically involves fields like psychology, social neuroscience, ethology, and cognitive science. In contrast, social sciences provide a perceptive framework to study the processes of a social system through impacts of a social organisation on the structural adjustment of the individual and of groups. They typically include fields like sociology, economics, public health, anthropology, demography, and political science. Many subfields of these disciplines test the boundaries between behavioural and social sciences. For example, political psychology and behavioural economics use behavioural approaches, despite the predominant focus on systemic and institutional factors in the broader fields of political science and economics. == See also == Behaviour Human behaviour Loss aversion List of academic disciplines Science Fields of science Natural sciences Social sciences History of science History of technology == References == == Selected bibliography == George Devereux: From Anxiety to Method in the Behavioral Sciences. The Hague/Paris: Mouton & Co, 1967. Fred N. Kerlinger (1979). Behavioral Research: A Conceptual Approach. New York: Holt, Rinehart & Winston. ISBN 0-03-013331-9. E.D. Klemke, R. Hollinger & A.D. Kline (eds.) (1980). Introductory Readings in the Philosophy of Science. Prometheus Books, New York. Neil J. Smelser & Paul B. Baltes, eds. (2001). International Encyclopedia of the Social & Behavioral Sciences, 26 v. Oxford: Elsevier. ISBN 978-0-08-043076-8. Mills, J. A. (1998). Control: A History of Behavioral Psychology. New York University Press. == External links == Media related to Behavioral sciences at Wikimedia Commons
Wikipedia/Behavioral_science
Forced displacement (also forced migration or forced relocation) is an involuntary or coerced movement of a person or people away from their home or home region. The UNHCR defines 'forced displacement' as displacement occurring "as a result of persecution, conflict, generalized violence or human rights violations". A forcibly displaced person may also be referred to as a "forced migrant", a "displaced person" (DP), or, if displaced within the home country, an "internally displaced person" (IDP). While some displaced persons may be considered refugees, the latter term specifically refers to such displaced persons who are receiving legally-defined protection and are recognized as such by their country of residence and/or international organizations. Forced displacement has gained attention in international discussions and policy making since the European migrant crisis. This has since resulted in a greater consideration of the impacts of forced migration on affected regions outside Europe. Various international, regional, and local organizations are developing and implementing approaches to both prevent and mitigate the impact of forced migration in the home regions as well as the receiving or destination regions. Additionally, some collaboration efforts are made to gather evidence in order to seek prosecution of those involved in causing events of human-made forced migration. An estimated 100 million people around the world were forcibly displaced by the end of 2022, with the majority coming from the Global South. == Definitions == Governments, NGOs, other international organizations and social scientists have defined forced displacement in a variety of ways. They have generally agreed that it is the forced removal or relocation of a person from their environment and associated connections. It can involve different types of movements, such as flight (from fleeing), evacuation, and population transfer. The International Organization for Migration defines a forced migrant as any person migrating to "escape persecution, conflict, repression, natural and human-made disasters, ecological degradation, or other situations that endanger their lives, freedom or livelihood". According to UNESCO, forced displacement is "the forced movement of people from their locality or environment and occupational activities," with its leading cause being armed conflict. According to researcher Alden Speare, even movement under immediate threat to life contains a voluntary element as long as an option exists, such as going into hiding or attempting to avoid persecution. According to him, "migration can be considered to be involuntary only when a person is physically transported from a country and has no opportunity to escape from those transporting him [or her]." This viewpoint has come under scrutiny when considering direct and indirect factors which may leave migrants with little to no choice in their decisions, such as imminent threats to life and livelihood. === Distinctions between the different concepts === A migrant who fled their home because of economic hardship is an economic migrant, and strictly speaking, not a displaced person. If the displaced person was forced out of their home because of economically driven projects, such as the Three Gorges Dam in China, the situation is referred to as development-induced displacement. 
A displaced person who left their home region because of political persecution or violence, but did not cross an international border, commonly falls into the looser category of internally displaced person (IDP), subject to more tenuous international protection. In 1998, the UN Commission on Human Rights published the Guiding Principles on Internal Displacement, defining internally displaced people as: "persons or groups of persons who have been forced or obliged to flee or leave their homes or places of habitual residence in particular as a result of or in order to avoid the effects of armed conflict, situations of generalized violence, violations of human rights, or natural or human-made disasters and who have not crossed an internationally recognized State border." If the displaced person has crossed an international border and falls under one of the relevant international legal instruments, they may be able to apply for asylum and can become a refugee if the application is successful. Although often incorrectly used as a synonym for displaced person, the term refugee refers specifically to a legally-recognized status that has access to specific legal protections. Loose application of the term refugee may cause confusion between the general descriptive class of displaced persons and those who can legally be defined as refugees. Some forced migrants may, due to the country of residence's legal system, be unable to apply for asylum in that country. Thus, even though they meet the international law definition of a refugee, they are unable to claim asylum and become recognised by their host country as refugees. A displaced person crossing an international border without permission from the country they are entering or without subsequently applying for asylum may be considered an illegal immigrant. Forced migrants are always either IDPs or displaced people, as neither term requires a legal framework; the fact that they left their homes is sufficient. The distinction between the terms displaced person and forced migrant is minor; however, the term displaced person has an important historic context (e.g. World War II). ==== History of the term displaced person ==== The term displaced person (DP) was first widely used during World War II, following the refugee outflows from Eastern Europe. In this context, DP specifically referred to an individual removed from their native country as a refugee, prisoner, or slave laborer. Most war victims, political refugees, and DPs of the immediate post-Second World War period were Ukrainians, Poles, other Slavs, and citizens of the Baltic states (Lithuanians, Latvians, and Estonians) who refused to return to Soviet-dominated Eastern Europe. A. J. Jaffe claimed that the term was originally coined by Eugene M. Kulischer. The meaning has significantly broadened in the past half-century. == Causes and examples == Bogumil Terminski distinguishes two general categories of displacement: Displacement of risk: mostly conflict-induced displacement, deportations and disaster-induced displacement. Displacement of adaptation: associated with voluntary migration, development-induced displacement and environmentally-induced displacement. === Natural causes === Forced displacement may directly result from natural disasters and indirectly from the subsequent impact on infrastructure, food and water access, and local/regional economies. 
Displacement may be temporary or permanent, depending on the scope of the disaster and the area's recovery capabilities. Climate change is increasing the frequency of major natural disasters, possibly placing a greater number of populations in situations of forced displacement. Crop failures due to blight or pests also fall within this category, as they affect people's access to food. Additionally, the term environmental refugee represents people who are forced to leave their traditional habitat because of environmental factors which negatively impact their livelihood, or even environmental disruption, i.e. biological, physical or chemical change in the ecosystem. Migration can also occur as a result of slow-onset climate change, such as desertification or sea-level rise, or from deforestation or land degradation. ==== Examples of forced displacement caused by natural disasters ==== January 2025 Los Angeles wildfires: displaced approximately 200,000 people. 2004 Indian Ocean tsunami: Resulting from a magnitude 9.1 earthquake off the coast of North Sumatra, the Indian Ocean Tsunami claimed approximately 227,898 lives, heavily damaging coastlines throughout the Indian Ocean. As a result, over 1.7 million people were displaced, mostly from Indonesia, Sri Lanka, and India. Hurricane Katrina (2005): Striking New Orleans, Louisiana, in late August 2005, Hurricane Katrina inflicted approximately US$125 billion in damages, standing as one of the costliest storms in United States history. As a result of the damage inflicted by Katrina, over one million people were internally displaced. One month after the disaster, over 600,000 remained displaced. Immediately following the disaster, New Orleans lost approximately half of its population, with many residents displaced to cities such as Houston, Dallas, Baton Rouge, and Atlanta. According to numerous studies, displacement disproportionately impacted Louisiana's poorer populations, specifically African Americans. 2011 East Africa drought: Failed rains in Somalia, Kenya, and Ethiopia led to high livestock and crop losses, driving majority-pastoralist populations to surrounding areas in search of accessible food and water. In addition to seeking food and water, local populations' migration was motivated by an inability to maintain traditional lifestyles. According to researchers, although partly influenced by local armed conflict, the East African drought stands as an example of climate change impacts. === Human-made causes === Human-made displacement describes forced displacement caused by political entities, criminal organizations, conflicts, human-made environmental disasters, development, etc. Although impacts of natural disasters and blights/pests may be exacerbated by human mismanagement, human-made causes refer specifically to those initiated by humans. According to UNESCO, armed conflict stands as the most common cause behind forced displacement, reinforced by regional studies citing political and armed conflict as the largest contributing factors to migrant outflows from Latin America, Africa, and Asia. ==== Examples of forced displacement caused by criminal activity ==== Displacement in Mexico due to cartel violence: Throughout Mexico, drug cartel, paramilitary, and self-defense group violence drives internal and external displacement. According to a comprehensive, mixed-methodology study by Salazar and Álvarez Lobato, families fled their homes as a means of survival, hoping to escape homicide, extortion, and potential kidnapping. 
Drawing on a collection of available data and existing studies, they estimated the total number of displaced persons between 2006 and 2012 at approximately 740,000. Displacement in Central America due to cartel/gang violence: A major factor behind US immigrant crises in the early 21st century (such as the 2014 immigrant crisis), rampant gang violence in the Northern Triangle, combined with corruption and low economic opportunities, has forced many to flee their country in pursuit of stability and greater opportunity. Homicide rates in countries such as El Salvador and Honduras reached some of the highest in the world, with El Salvador peaking at 103 homicides per 100,000 people. Contributing factors include extortion, territorial disputes, and forced gang recruitment, resulting in some estimates of approximately 500,000 people displaced annually. Displacement in Colombia due to conflict and drug-related violence: According to researchers Mojica and Eugenia, Medellín, Colombia around 2013 exemplified crime and violence-induced forced displacement, standing as one of the most popular destinations for IDPs while also producing IDPs of its own. Rural citizens fled from organized criminal violence, with the majority pointing to direct threats as the main driving force, settling in Medellín in pursuit of safety and greater opportunity. Within Medellín, various armed groups battled for territorial control, forcing perceived opponents from their homes and pressuring residents to abandon their livelihoods, among other methods. All in all, criminal violence forced Colombians to abandon their possessions, way of life, and social ties in pursuit of safety. ==== Examples of forced displacement caused by political conflict ==== 1949–1956 Palestinian exodus 1950–1951 exodus of Turks from Bulgaria: attributed by some to Turkish support of the USA during the Korean War. Communist ideologies, Islamophobia and anti-Turkism also played a role. Jewish exodus from the Muslim world Vietnam War: Throughout the Vietnam War and in the years preceding it, many populations were forced out of Vietnam and the surrounding countries as a result of armed conflict and/or persecution by their governments, such as the Socialist Republic of Vietnam. This event is referred to as the Indochina refugee crisis, with millions displaced across Asia, Australia, Europe, and North America. Salvadoran Civil War: Throughout and after the 12-year conflict between the Salvadoran government and the FMLN, Salvadorans faced forced displacement as a result of combat, persecution, and deteriorating quality of life/access to socioeconomic opportunities. Overall, one in four Salvadorans were internally and externally displaced (over one million people). 2021 Myanmar coup d'état: Since the coup d’état of 1 February 2021, the Burmese military's ascendancy into power has resulted in widespread chaos and violence, aggravated by the refusal of large sections of the public to accept a military regime given the country's experiences during the second half of the 20th and early years of the 21st century. As a result, many in the public sector have initiated strikes, and the country has seen elevated levels of forced displacement, both internally displaced persons (IDPs) (208,000 since 1 February 2021) and refugees fleeing abroad (an estimated 22,000 since 1 February 2021). The particular political conflict causing the displacement has been flagged as symptomatic of a state on the brink of collapse. 
Two key indicators of this that have been highlighted are, firstly, that levels of security have been severely reduced to the point where citizens are no longer protected from violence by the state; and, secondly, that goods and services are not being reliably supplied to citizens either by the ousted government or by the new military leadership, primarily as a result of the instability created and the strikes triggered. These internal problems are further reflected by the withdrawal of international recognition by both governmental and non-governmental bodies. Gaza Strip evacuations: since the start of the Israeli invasion of the Gaza Strip on 27 October 2023, over 85% of the population has been displaced. ==== Examples of forced displacement caused by human-made environmental disasters ==== 2019 Amazon rainforest wildfires: Although human-made fires are a normal part of Amazonian agriculture, the 2019 dry season saw an internationally noted increase in their rate of occurrence. The rapidly spreading fires, combined with efforts from agricultural and logging companies, have forced Brazil's indigenous populations off their native lands. Chernobyl disaster: A nuclear meltdown on April 26, 1986, near Pripyat, Ukraine, contaminated the city and surrounding areas with harmful levels of radiation, forcing the displacement of over 100,000 people. Great Famine of Ireland: Between 1845 and 1849, potato blight, exacerbated by policy decisions and mismanagement by the U.K. government, caused millions of Irish people, largely potato-dependent tenant farmers, to starve or eventually flee the country. Over one million perished from subsequent famine and disease, and another million fled the country, reducing the overall Irish population by at least a quarter. ==== Other human-made displacement ==== Human trafficking/smuggling: Migrants displaced through deception or coercion with the purpose of exploitation fall under this category. Due to its clandestine nature, data on this type of forced migration are limited. A disparity also exists between the data for male trafficking (such as for labor in agriculture, construction, etc.) and female trafficking (such as for sex work or domestic service), with more data available for males. The International Labour Organization considers trafficking an offense against labor protection, as it denies migrants the opportunity to utilize their labor potential. ILO's Multilateral Framework includes principle no. 11, recommending that "Governments should formulate and implement, in consultation with the social partners, measures to prevent abusive practices, migrant smuggling and people trafficking; they should also work towards preventing irregular labor migration." Slavery: Historically, slavery has led to the displacement of individuals for forced labor, with the Middle Passage of the 15th through 19th century Atlantic slave trade standing as a notable example. Of the 20 million Africans captured for the trade, half died in their forced march to the African coast, and another ten to twenty percent died on slave ships carrying them from Africa to the Americas. Ethnic cleansing: The systematic removal of ethnic or religious groups from a territory with the intent of making it ethnically homogeneous. Examples include the Catholic removal of Salzburg Protestants, the removal of Jewish people during the Holocaust, and the deportation of North American indigenous peoples (e.g., Trail of Tears). 
Suppressing political opposition: For example, the forced settlements in the Soviet Union and population transfer in the Soviet Union, including the deportation of the Crimean Tatars, deportation of the Chechens and Ingush, deportation of Koreans in the Soviet Union, deportation of the Soviet Greeks, and deportations of the Ingrian Finns. Aligning ethnic composition with artificial political borders: For example, the flight and expulsion of Germans (1944–1950), Polish population transfers (1944–1946), and Operation Vistula. Colonization: For example, the British government's transportation of convicts to Australia, the American Colonization Society and others' attempt to create a country for African Americans in Africa as Liberia, Japanese settlers in Manchukuo following the Japanese invasion of Manchuria, and the Chinese military's settlement of the Xinjiang Production and Construction Corps in Xinjiang. == Conditions faced by displaced persons == Displaced persons face adverse conditions when taking the decision to leave, traveling to a destination, and sometimes upon reaching their destination. Displaced persons are often forced to place their lives at risk, travel in inhumane conditions, and may be exposed to exploitation and abuse. These risk factors may increase through the involvement of smugglers and human traffickers, who may exploit them for illegal activities such as drug/weapons trafficking, forced labor, or sex work. Displaced persons may also seek the assistance of human smugglers (such as coyotes in Latin America) throughout their journey. Given the illegal nature of smuggling, smugglers may use dangerous methods to reach their destination without capture, exposing displaced persons to harm and sometimes resulting in deaths. Examples include abandonment, exposure to exploitation, dangerous transportation conditions, and death from exposure to harsh environments. In most instances of forced migration across borders, migrants do not possess the required documentation for legal travel. The states where migrants seek protection may consider them a threat to national security. As a result, displaced persons may face detainment and criminal punishment, as well as physical and psychological trauma. Various studies focusing on migrant health have specifically linked migration to increased likelihood of depression, anxiety, and other psychological troubles. For example, the United States has faced criticism for its recent policies regarding migrant detention, specifically the detention of children. Critics point to poor detention conditions, unstable contact with parents, and high potential for long-term trauma as reasons for seeking policy changes. Displaced persons risk greater poverty than before displacement, financial vulnerability, and potential social disintegration, in addition to other risks related to human rights, culture, and quality of life. Forced displacement has varying impacts, dependent on the means through which one was forcibly displaced, their geographic location, their protected status, and their ability to personally recover. Under the most common form of displacement, armed conflict, individuals often lose possession of their assets upon fleeing and possibly upon arrival in a new country, where they can also face cultural, social, and economic discontinuity. 
== Responses to forced displacement == === International response === Responses to situations of forced displacement vary across regional and international levels, with each type of forced displacement demonstrating unique characteristics and the need for a carefully considered approach. At the international level, international organizations (e.g. the UNHCR), NGOs (e.g. Doctors Without Borders), and national governments (e.g. through USAID) may work towards directly or indirectly ameliorating these situations. Means may include establishing internationally recognized protections, providing clinics to migrant camps, and supplying resources to populations. According to researchers such as Francis Deng, as well as international organizations such as the UN, an increase in IDPs compounds the difficulty of international responses, posing issues of incomplete information and questions regarding state sovereignty. State sovereignty especially becomes of concern when discussing protections for IDPs, who are within the borders of a sovereign state, making the international community reluctant to respond. Multiple landmark conventions aim to provide rights and protections to the different categories of forcibly displaced persons, including the 1951 Refugee Convention, the 1967 Protocol, the Kampala Convention, and the 1998 Guiding Principles. Despite international cooperation, these frameworks rely on the international system, which states may disregard. In a 2012 study, Young Hoon Song found that nations "very selectively" responded to instances of forced migration and internally displaced persons. World organizations such as the United Nations and the World Bank, as well as individual countries, sometimes directly respond to the challenges faced by displaced people, providing humanitarian assistance or forcibly intervening in the country of conflict. Disputes related to these organizations' neutrality and limited resources have affected the capacity of international humanitarian action to mitigate mass displacement and its causes. These broad forms of assistance sometimes do not fully address the multidimensional needs of displaced persons. Regardless, calls for multilateral responses echo across organizations in the face of falling international cooperation. These organizations propose more comprehensive approaches, calling for improved conflict resolution and capacity-building in order to reduce instances of forced displacement. === Local response === Responses at multiple levels and across sectors are vital. Research has, for instance, highlighted the importance of collaboration between businesses and non-governmental organizations to tackle resettlement and employment issues. Lived experiences of displaced persons will vary according to the state and local policies of their country of relocation. Policies reflecting national exclusion of displaced persons may be undone by inclusive urban policies. Sanctuary cities are an example of spaces that regulate their cooperation or participation with immigration law enforcement. The practice of urban membership upon residence allows displaced persons to have access to city services and benefits, regardless of their legal status. Sanctuary cities have been able to provide migrants with greater mobility and participation by limiting the collection of personal information, issuing identification cards to all residents, and providing access to crucial services such as health care. 
Access to these services can ease the hardships of displaced people by allowing them to healthily adjust to life after displacement. === Criminal prosecution === Forced displacement has been the subject of several trials in local and international courts. For an offense to qualify as a war crime, the civilian victim must be a "protected person" under international humanitarian law. Originally referring only to categories of individuals explicitly protected under one of the four Geneva Conventions of 1949, "protected person" now refers to any category of individuals entitled to protection under specific law of war treaties. In Article 49, the Fourth Geneva Convention, adopted on 12 August 1949, specifically forbade forced displacement: "Individual or mass forcible transfers, as well as deportations of protected persons from occupied territory to the territory of the Occupying Power or to that of any other country, occupied or not, are prohibited, regardless of their motive." The Rome Statute of the International Criminal Court defines forced displacement as a crime within the jurisdiction of the court: "'Deportation or forcible transfer of population' means forced displacement of the persons concerned by expulsion or other coercive acts from the area in which they are lawfully present, without grounds permitted under international law." Following the end of World War II, the Krupp trial was held with a specific charge concerning the forced displacement of enemy civilian populations for the purpose of forced labor. The US Military Tribunal concluded that "[t]here is no international law that permits the deportation or the use of civilians against their will for other than on reasonable requisitions for the need of the army, either within the area of the army or after deportation to rear areas or to the homeland of the occupying power". At the Nuremberg trials, Hans Frank, chief jurist in occupied Poland, was found guilty, among other charges, of the forced displacement of the civilian population. Several people were tried and convicted by the International Criminal Tribunal for the former Yugoslavia (ICTY) in connection with forced displacement during the Yugoslav Wars in the 1990s. On 11 April 2018, the Appeals Chamber sentenced Vojislav Šešelj to 10 years in prison under Counts 1, 10, and 11 of the indictment for instigating deportation, persecution (forcible displacement), and other inhumane acts (forcible transfer) as crimes against humanity due to his speech in Hrtkovci on 6 May 1992, in which he called for the expulsion of Croats from Vojvodina. Other convictions for forced displacement included ex-Bosnian Serb politician Momčilo Krajišnik, ex-Croatian Serb leader Milan Martić, former Bosnian Croat paramilitary commander Mladen Naletilić, and Bosnian Serb politician Radoslav Brđanin. On 17 March 2023, the International Criminal Court (ICC) issued arrest warrants for Vladimir Putin and Russia's Commissioner for Children's Rights Maria Lvova-Belova for war crimes of deportation and illegal transfer of children from occupied Ukraine to Russia. 
== See also == Asylum seekers Climate migrant Development-induced displacement Deportation Displaced persons camps in post-World War II Europe Divided family Ethnic cleansing Forced displacement in popular culture International Association for the Study of Forced Migration Kampala Convention Population cleansing Population transfer Refugees Refugee employment Refugee women Refugee roulette List of areas depopulated due to climate change List of diasporas == References == == Further reading == Betts, Alexander: Forced Migration and Global Politics. Wiley-Blackwell. James, Paul (2014). "Faces of Globalization and the Borders of States: From Asylum Seekers to Citizens". Citizenship Studies. 18 (2): 208–23. doi:10.1080/13621025.2014.886440. S2CID 144816686. Luciuk, Lubomyr Y.: "Ukrainian Displaced Persons, Canada, and the Migration of Memory," University of Toronto Press, 2000. Migration of people from Mirpur (AJK) for construction of Mangla Dam Sundhaussen, Holm (2012). Forced Ethnic Migration. Retrieved June 13, 2012. == External links == International Network on Displacement and Resettlement Pictures of Refugees in Europe – Features by Jean-Michel Clajot, Belgian photographer Oukloof forced removals in the Western Cape of South Africa – A community web site documenting the known history of the forced removal of the residents of Oukloof in the 1960s Forced Migration Online provides access to a diverse range of relevant information resources on forced migration, including a searchable digital library consisting of full-text documents. Back issues of migration journals (Disasters, Forced Migration Review, International Journal of Refugee Law, International Migration Review and Journal of Refugee Studies) Eurasylum Many relevant documents on asylum and refugee policy, immigration and human trafficking/smuggling internationally IDP Voices Forced migrants tell their life stories Internal Displacement Monitoring Centre (IDMC), Norwegian Refugee Council The leading international body monitoring conflict-induced internal displacement worldwide. The International Association for the Study of Forced Migration brings together academics, practitioners and decision-makers working on forced migration issues. The International Organization for Migration is a non-governmental organization with a major role mediating modern migration. The Journal of Refugee Studies from Oxford University provides a forum for exploration of the complex problems of forced migration and national, regional and international responses. Program for the Study of Global Migration, Graduate Institute of International and Development Studies, Geneva. The Refugee Studies Centre, University of Oxford: a leading multidisciplinary centre for research and teaching on the causes and consequences of forced migration. What is Forced Migration?, an introductory guide for those who are new to the subject. Wits Forced Migration Studies Programme, Africa's leading centre for teaching and research on displacement, migration, and social transformation. Women's Commission for Refugee Women and Children
Wikipedia/Forced_displacement
A strategic move in game theory is an action taken by a player outside the defined actions of the game in order to gain a strategic advantage and increase one's payoff. Strategic moves can either be unconditional moves or response rules. A strategic move has two key characteristics: it involves a commitment from the player, meaning the player can only restrict their own choices; and the commitment must be credible, meaning that once employed it must be in the player's interest to follow through with the move. Credible moves should also be observable to the other players. Strategic moves are not warnings or assurances. Warnings and assurances are merely statements of a player's interest, rather than an actual commitment from the player. (A worked example of how a credible commitment can change an opponent's behaviour is sketched after the references below.) The term was coined by Thomas Schelling in his 1960 book, The Strategy of Conflict, and has gained wide currency in political science and industrial organization. == References == Thomas Schelling: The Strategy of Conflict, Harvard University Press (1960). ISBN 0-674-84031-3 Avinash Dixit & Barry Nalebuff: Thinking Strategically: The Competitive Edge in Business, Politics, and Everyday Life, W.W. Norton (1991) ISBN 0-393-31035-3
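To illustrate the logic of commitment, the following is a minimal sketch in Python of a classic entry-deterrence game. The payoff numbers, player labels, and function names are hypothetical, chosen only for illustration; the point is that a credible commitment, which restricts the incumbent's own later options, changes the entrant's best response.

# Sequential entry game: an entrant moves first, then an incumbent responds.
# Payoffs are (entrant, incumbent); the numbers are illustrative assumptions.
PAYOFFS = {
    ("out", None): (0, 4),          # entrant stays out; incumbent keeps its monopoly
    ("in", "accommodate"): (2, 2),  # market is shared peacefully
    ("in", "fight"): (-1, 1),       # a price war hurts both players
}

def incumbent_response(committed_to_fight):
    # Without commitment the incumbent picks its best response ex post;
    # a credible commitment restricts its own choice set to "fight".
    if committed_to_fight:
        return "fight"
    return max(("accommodate", "fight"), key=lambda a: PAYOFFS[("in", a)][1])

def entrant_choice(committed_to_fight):
    # The entrant anticipates the incumbent's response (backward induction).
    response = incumbent_response(committed_to_fight)
    return "in" if PAYOFFS[("in", response)][0] > PAYOFFS[("out", None)][0] else "out"

for committed in (False, True):
    print("commitment =", committed, "-> entrant plays", entrant_choice(committed))
# commitment = False -> entrant plays in   (incumbent would accommodate ex post)
# commitment = True  -> entrant plays out  (credible threat to fight deters entry)

Note that under these assumed payoffs the incumbent ends up better off with the commitment (payoff 4 instead of 2), even though the commitment did nothing but remove one of its own options, which is Schelling's central observation.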
Wikipedia/Strategic_move
Military strategy is a set of ideas implemented by military organizations to pursue desired strategic goals. Derived from the Greek word strategos, the term strategy, when first used during the 18th century, was seen in its narrow sense as the "art of the general", or "the art of arrangement" of troops, and deals with the planning and conduct of campaigns. The father of modern Western strategic studies, Carl von Clausewitz (1780–1831), defined military strategy as "the employment of battles to gain the end of war." B. H. Liddell Hart's definition put less emphasis on battles, defining strategy as "the art of distributing and applying military means to fulfill the ends of policy". Hence, both gave preeminence to political aims over military goals. Sun Tzu (544–496 BC) is often considered the father of Eastern military strategy and greatly influenced Chinese, Japanese, Korean and Vietnamese historical and modern war tactics. The Art of War by Sun Tzu grew in popularity and saw practical use in Western society as well. It continues to influence many competitive endeavors in Asia, Europe, and America, including culture, politics, and business, as well as modern warfare. Eastern military strategy differs from the Western tradition by focusing more on asymmetric warfare and deception. Chanakya's Arthashastra has been an important strategic and political compendium in Indian and Asian history as well. == Fundamentals == Military strategy is the planning and execution of the contest between groups of armed adversaries. It is a subdiscipline of warfare and of foreign policy, and a principal tool to secure national interests. Its perspective is larger than military tactics, which involve the disposition and maneuver of units on a particular sea or battlefield, but less broad than grand strategy (or "national strategy"), which is the overarching strategy of the largest of organizations such as the nation state, confederation, or international alliance and involves using diplomatic, informational, military and economic resources. Military strategy involves using military resources such as people, equipment, and information against the opponent's resources to gain supremacy or reduce the opponent's will to fight, developed through the precepts of military science. NATO's definition of strategy is "presenting the manner in which military power should be developed and applied to achieve national objectives or those of a group of nations." Field Marshal Viscount Alanbrooke, Chief of the Imperial General Staff and co-chairman of the Anglo-US Combined Chiefs of Staff Committee for most of the Second World War, described the art of military strategy as: "to derive from the [policy] aim a series of military objectives to be achieved: to assess these objectives as to the military requirements they create, and the preconditions which the achievement of each is likely to necessitate: to measure available and potential resources against the requirements and to chart from this process a coherent pattern of priorities and a rational course of action." Field-Marshal Montgomery summed it up thus: "Strategy is the art of distributing and applying military means, such as armed forces and supplies, to fulfill the ends of policy. Tactics means the dispositions for, and control of, military forces and techniques in actual fighting. Put more shortly: strategy is the art of the conduct of war, tactics the art of fighting." 
=== Background === Military strategy in the 19th century was still viewed as one of a trivium of "arts" or "sciences" that govern the conduct of warfare; the others being tactics, the execution of plans and maneuvering of forces in battle, and logistics, the maintenance of an army. The view had prevailed since Roman times, and the borderline between strategy and tactics at this time was blurred; sometimes the categorization of a decision is almost a matter of personal opinion. Carnot, during the French Revolutionary Wars, thought it simply involved concentration of troops. As French statesman Georges Clemenceau said, "War is too important a business to be left to soldiers." This gave rise to the concept of grand strategy, which encompasses the management of the resources of an entire nation in the conduct of warfare. On this issue Clausewitz stated that a successful military strategy may be a means to an end, but it is not an end in itself. == Principles == Many military strategists have attempted to encapsulate a successful strategy in a set of principles. Sun Tzu defined 13 principles in his The Art of War while Napoleon listed 115 maxims. American Civil War General Nathan Bedford Forrest had only one: to "[get] there first with the most men". The concepts given as essential in the United States Army Field Manual of Military Operations (FM 3–0) are: Objective (direct every military operation towards a clearly defined, decisive, and attainable objective) Offensive (seize, retain, and exploit the initiative) Mass (concentrate combat power at the decisive place and time) Economy of force (allocate minimum essential combat power to secondary efforts) Maneuver (place the enemy in a disadvantageous position through the flexible application of combat power) Unity of command (for every objective, ensure unity of effort under one responsible commander) Security (never permit the enemy to acquire an unexpected advantage) Surprise (strike the enemy at a time, at a place, or in a manner for which they are unprepared) Simplicity (prepare clear, uncomplicated plans and clear, concise orders to ensure thorough understanding) According to Greene and Armstrong, some planners assert that adhering to the fundamental principles guarantees victory, while others claim that war is unpredictable and the strategist must be flexible. Others argue predictability could be increased if the protagonists were to view the situation from the perspectives of the other sides in a conflict. == Development == === Antiquity === The principles of military strategy emerged at least as far back as 500 BC in the works of Sun Tzu and Chanakya. The campaigns of Alexander the Great, Chandragupta Maurya, Hannibal, Qin Shi Huang, Julius Caesar, Zhuge Liang, Khalid ibn al-Walid and, in particular, Cyrus the Great demonstrate strategic planning and movement. Early strategies included the strategy of annihilation, exhaustion, attrition warfare, scorched earth action, blockade, guerrilla campaign, deception and feint. Ingenuity and adeptness were limited only by imagination, accord, and technology. Strategists continually exploited ever-advancing technology. The word "strategy" itself derives from the Greek "στρατηγία" (strategia), "office of general, command, generalship", in turn from "στρατηγός" (strategos), "leader or commander of an army, general", a compound of "στρατός" (stratos), "army, host" + "ἀγός" (agos), "leader, chief", in turn from "ἄγω" (ago), "to lead". 
=== Middle Ages === Through maneuver and continuous assault, Chinese, Persian, Arab and Eastern European armies were stressed by the Mongols until they collapsed, and were then annihilated in pursuit and encirclement. === Early Modern era === In 1520, Niccolò Machiavelli's Dell'arte della guerra (Art of War) dealt with the relationship between civil and military matters and the formation of grand strategy. In the Thirty Years' War (1618–1648), Gustavus Adolphus of Sweden demonstrated advanced operational strategy that led to his victories on the soil of the Holy Roman Empire. It was not until the 18th century that military strategy was subjected to serious study in Europe. The word was first used in German as "Strategie" in a translation of Leo VI's Tactica in 1777 by Johann von Bourscheid. From then onwards, the use of the word spread throughout the West. === Napoleonic === ==== Waterloo ==== === Clausewitz and Jomini === Clausewitz's On War has become a famous reference for strategy, dealing with political, as well as military, leadership, his most famous assertion being: "War is not merely a political act, but also a real political instrument, a continuation of policy by other means." Clausewitz saw war first and foremost as a political act, and thus maintained that the purpose of all strategy was to achieve the political goal that the state was seeking to accomplish. As such, Clausewitz famously argued that war was the "continuation of politics by other means". Clausewitz and Jomini are widely read by US military personnel. === World War I === === Interwar === Technological change had an enormous effect on strategy, but little effect on leadership. The use of telegraph and later radio, along with improved transport, enabled the rapid movement of large numbers of men. One of Germany's key enablers in mobile warfare was the use of radios, which were put into every tank. However, the number of men that one officer could effectively control had, if anything, declined. The increases in the size of the armies led to an increase in the number of officers. Although the officer ranks in the US Army did swell, in the German army the ratio of officers to total men remained steady. === World War II === Interwar Germany had as its main strategic goals the reestablishment of Germany as a European great power and the complete annulment of the Versailles treaty of 1919. After Adolf Hitler and the Nazi party took power in 1933, Germany's political goals also included the accumulation of Lebensraum ("Living space") for the Germanic "race" and the elimination of communism as a political rival to Nazism. The destruction of European Jewry, while not strictly a strategic objective, was a political goal of the Nazi regime linked to the vision of a German-dominated Europe, and especially to the Generalplan Ost for a depopulated east which Germany could colonize. === Cold War === Soviet strategy in the Cold War was dominated by the desire to prevent, at all costs, the recurrence of an invasion of Russian soil. The Soviet Union nominally adopted a policy of no first use, which in fact was a posture of launch on warning. 
Other than that, the USSR adapted to some degree to the prevailing changes in NATO strategic policy, divided by period as follows: Strategy of massive retaliation (1950s) (Russian: стратегия массированного возмездия) Strategy of flexible reaction (1960s) (Russian: стратегия гибкого реагирования) Strategies of realistic threat and containment (1970s) (Russian: стратегия реалистического устрашения или сдерживания) Strategy of direct confrontation (1980s) (Russian: стратегия прямого противоборства), one element of which was new, highly effective precision-guided weapons. Strategic Defense Initiative (also known as "Star Wars") during its 1980s development (Russian: стратегическая оборонная инициатива – СОИ), which became a core part of a strategic doctrine based on defensive containment. All-out nuclear World War III between NATO and the Warsaw Pact did not take place. In April 2010, the United States acknowledged a new approach to its nuclear policy which describes the weapons' purpose as "primarily" or "fundamentally" to deter or respond to a nuclear attack. === Post–Cold War === Strategy in the post–Cold War era is shaped by the global geopolitical situation: a number of potent powers in a multipolar array which has arguably come to be dominated by the hyperpower status of the United States. Parties to conflict which see themselves as vastly or temporarily inferior may adopt a strategy of "hunkering down" – witness Iraq in 1991 or Yugoslavia in 1999. The major militaries of today are usually built to fight the "last war" (previous war) and hence have huge armored and conventionally configured infantry formations backed up by air forces and navies designed to support or prepare for these forces. === Netwar === A main point in asymmetric warfare is the nature of paramilitary organizations such as Al-Qaeda which are involved in guerrilla military actions but which are not traditional organizations with a central authority defining their military and political strategies. Organizations such as Al-Qaeda may exist as a sparse network of groups lacking central coordination, making them more difficult to confront following standard strategic approaches. This new field of strategic thinking is tackled by what is now defined as netwar. == See also == == References == === Notes === === Bibliography === Brands, Hal, ed. The New Makers of Modern Strategy: From the Ancient World to the Digital Age (2023) excerpt, 46 essays by experts on ideas of famous strategists; 1200 pp Carpenter, Stanley D. M., Military Leadership in the British Civil Wars, 1642–1651: The Genius of This Age, Routledge, 2005. Chaliand, Gérard, The Art of War in World History: From Antiquity to the Nuclear Age, University of California Press, 1994. Gartner, Scott Sigmund, Strategic Assessment in War, Yale University Press, 1999. Heuser, Beatrice, The Evolution of Strategy: Thinking War from Antiquity to the Present (Cambridge University Press, 2010), ISBN 978-0-521-19968-1. Matloff, Maurice, (ed.), American Military History: 1775–1902, volume 1, Combined Books, 1996. May, Timothy. The Mongol Art of War: Chinggis Khan and the Mongol Military System. Barnsley, UK: Pen & Sword, 2007. ISBN 978-1844154760. Wilden, Anthony, Man and Woman, War and Peace: The Strategist's Companion, Routledge, 1987. 
== Further reading == The US Army War College Strategic Studies Institute publishes several dozen papers and books yearly focusing on current and future military strategy and policy, national security, and global and regional strategic issues. Most publications are relevant to the International strategic community, both academically and militarily. All are freely available to the public in PDF format. The organization was founded by General Dwight D. Eisenhower after World War II. Black, Jeremy, Introduction to Global Military History: 1775 to the Present Day, Routledge Press, 2005. D'Aguilar, G.C., Napoleon's Military Maxims, free ebook, Napoleon's Military Maxims. Freedman, Lawrence. Strategy: A History (2013) excerpt Holt, Thaddeus, The Deceivers: Allied Military Deception in the Second World War, Simon and Schuster, June, 2004, hardcover, 1184 pages, ISBN 0-7432-5042-7. Tomes, Robert R., US Defense Strategy from Vietnam to Operation Iraqi Freedom: Military Innovation and the New American Way of War, 1973–2003, Routledge Press, 2007.
Wikipedia/Military_strategy
Military science fiction is a subgenre of science fiction and military fiction that depicts the use of science fiction technology, including spaceships and weapons, for military purposes, and principal characters who are usually members of a military organization, typically during a war; the stories sometimes occur in outer space or on a different planet or planets. It exists in a range of media, including literature, comics, film, television and video games. A detailed description of the conflict, belligerents (which may involve extraterrestrials), tactics and weapons used for it, and the role of a military service and the individual members of that military organization forms the basis for a typical work of military science fiction. The stories often use features of actual past or current Earth conflicts, with countries being replaced by planets or galaxies with similar characteristics, battleships replaced by space battleships, small arms and artillery replaced by lasers, soldiers replaced by space marines, and certain events changed so the author can extrapolate what might have occurred. == Characteristics == Traditional military values of courage under fire, sense of duty, honor, sacrifice, loyalty, and camaraderie are often emphasized. The action is typically described from the point of view of a soldier in a science fictional setting of or near battle. Typically, the technology is more advanced than that of the present and described in detail. In some stories, however, technology is fairly static, and weapons that would be familiar to present-day soldiers are used, but other aspects of society have changed. Technology may not be emphasized in such stories as much as other aspects of the characters' military lives, cultures, or societies. For example, women may be accepted as equal partners for combat roles, or preferred over men. When the "extravagan[t]" depictions of war in space operas faded along with pulp fiction more generally, military science fiction developed with a "more disciplined and more realistic notion of the kind of armies which might fight interplanetary and interstellar wars, and the kinds of weapons they might use". In many stories, the usage or advancement of a specific technology plays a role in advancing the plot, such as deploying a new weapon or spaceship. Some works draw heavy parallels to human history and how a scientific breakthrough or new military doctrine can significantly change how war is fought, the outcome of a battle, and the fortunes of the combatants. Many works explore how human progress, discovery, and suffering affect military doctrine or battle, and how the protagonists and antagonists reflect on and adapt to such changes. Many authors have either used a galaxy-spanning fictional empire as a background for the story, or have explored the growth and/or decline of such an empire. The capital of a galactic empire is sometimes a "core world," such as a planet relatively near a galaxy's centrally-located supermassive black hole, which has advanced considerably in science and technology compared to current human civilization. Characterizations of these empires can vary wildly from malevolent forces that attack sympathetic victims, to apathetic or amoral bureaucracies, to more reasonable entities focused on social progress. A writer may posit a form of faster-than-light travel in order to facilitate the enormous scale of interstellar war. 
The long spans of time (e.g., decades or centuries) required for human soldiers to travel interstellar distances, even at relativistic speeds, and the consequences for the characters, pose a dilemma examined by authors such as Joe Haldeman and Alastair Reynolds. Other writers such as Larry Niven have created plausible interplanetary conflict based on human colonization of the asteroid belt and outer planets by means of technologies utilizing the laws of physics as currently understood. == Definitions by contrast == Several subsets of military science fiction share characteristics of the space opera subgenre, concentrating on large-scale space battles with futuristic weapons in an interstellar war. Many stories can be considered to be in one or both of the military science fiction and space opera subgenres, such as The Sten Chronicles by Allan Cole and Chris Bunch, Ender's Game series by Orson Scott Card, Honorverse by David Weber, Deathstalker by Simon R. Green, and Armor by John Steakley. At one extreme, a military science fiction story can speculate about war in the future, in space, or involving space travel, or the effects of such a war on humans; at the other, a story with a fictional military plot may have relatively superficial science fictional elements. The term "military space opera" may occasionally denote this latter style, as used for example by critic Sylvia Kelso when describing Lois McMaster Bujold's Vorkosigan Saga. Examples that feature aspects of both military science fiction and space opera include the Battlestar Galactica franchise and Robert A. Heinlein's 1959 novel Starship Troopers. A key distinction of military science fiction from space opera is that space operas focus more on adventurous stories and melodrama, while military science fiction focuses more on warfare and technical aspects. The principal characters in a space opera are also not military personnel, but civilians or paramilitary. Stories in both subgenres often concern an interstellar war in which humans fight themselves and/or nonhuman entities. Military science fiction, however, is not necessarily set in outer space or on multiple worlds, as in space opera and the space Western. Both military science fiction and the space Western may consider an interstellar war and oppression by a galactic empire as the story's backdrop. They may focus on a lone gunslinger, soldier, or veteran in a futuristic space frontier setting. Western elements and conventions in military science fiction may be explicit, such as cowboys in outer space, or more subtle, as in a space colony requiring defense against attack out on the frontier. Gene Roddenberry described Star Trek: The Original Series as a Space Western (or more poetically, as "Wagon Train to the stars"). The TV series Firefly and its cinematic follow-up Serenity literalized the Western aspects of the space Western subgenre as popularized by Star Trek: it features frontier towns, horses, and a visual style evocative of classic John Ford Westerns. Worlds that have been terraformed may be depicted as presenting similar challenges as that of a frontier settlement in a classic Western. Six-shooters and horses may be replaced by ray guns and rockets. A "thematic subdivision" of MSF is works in which "ex-military protagonists [are] drawing on their battle experience for tough and violent operations in (more or less) civilian life", typically in a law enforcement setting. 
Some examples include Richard Morgan's Takeshi Kovacs books, such as Altered Carbon (2002), and Elizabeth Bear's Jenny Casey books, such as Hammered (2004). == History == === 19th century and up to early 20th century === Precursors for military science fiction can be found in "future war" stories dating back at least to George Chesney's story "The Battle of Dorking" (1871). Written just after the Prussian victory in the Franco-Prussian War, it describes an invasion of Britain by a German-speaking country in which the Royal Navy is destroyed by a futuristic wonder-weapon ("fatal engines"). Other works of military science fiction followed, including H.G. Wells's "The Land Ironclads". It described tank-like "land ironclads," 80-to-100-foot-long (24 to 30 m) armoured fighting vehicles that carry riflemen, engineers, and a captain, and are armed with semi-automatic rifles. === Post-WWII era === Eventually, as science fiction became an established and separate genre, military science fiction established itself as a subgenre. One such work is H. Beam Piper's Uller Uprising (1952) (based on the events of the Sepoy Mutiny). Robert A. Heinlein's Starship Troopers (1959) is another work of military science fiction, along with Gordon Dickson's Dorsai (1960), and these are thought to be largely responsible for the subgenre's popularity among young readers of the time. The Vietnam War led to the "polarization of the sf community", which can be seen in the June 1968 issue of Galaxy Science Fiction, in which pro-war sf authors listed their names on one page and anti-war sf authors on another. The Vietnam War has been noted by the Encyclopedia of Science Fiction as having impacted anthologies such as In the Field of Fire (1987) and novels such as The Healer's War (1988) by Elizabeth Ann Scarborough and Dream Baby (1989) by Bruce McAllister. The Encyclopedia of Science Fiction states that the Vietnam War's influence can be seen indirectly in novels such as Joe Haldeman's The Forever War (published in Analog over 1972–1975) and Lucius Shepard's Life During Wartime (1987). The Vietnam War resulted in veterans with combat experience deciding to write science fiction, including Joe Haldeman and David Drake. Throughout the 1970s, works such as Haldeman's The Forever War and Drake's Hammer's Slammers helped increase the popularity of the genre. Short stories also were popular, collected in books such as Combat SF, edited by Gordon R. Dickson. This anthology includes one of the first Hammer's Slammers stories, as well as one of the BOLO stories by Keith Laumer and one of the Berserker stories by Fred Saberhagen. This anthology seems to have been the first time these stories specifically dealing with war as a subject were collected and marketed as such. The series of anthologies with the group title There Will Be War edited by Jerry Pournelle and John F. Carr (nine volumes from 1983 through 1990) helped keep the category active, and encouraged new writers to add to it. David Drake wrote stories about future mercenaries, including the Hammer's Slammers series (1979), which follows the career of a future mercenary tank regiment. Drake's series "helped initiate a fashion for sf about mercenaries" that includes The Warrior's Apprentice (1986) by Lois McMaster Bujold. 
A twist was introduced in Harry Turtledove's Worldwar series depicting an alternate history in which WWII is disrupted by extraterrestrials invading Earth in 1942, forcing humans to stop fighting each other and unite against this common enemy. Turtledove depicts the tactics and strategy of this new course of the war in detail, showing how American, British, Soviet, and German soldiers and Jewish guerrillas (some of them historical figures) deal with this extraordinary new situation, as well as providing a not unsympathetic detailed point of view of individual invader warriors. In the war situation posited by Turtledove, the invaders have superior arms, but the gap is not too wide for the humans to bridge. For example, the invaders have more advanced tanks, but the German Wehrmacht's tank crews facing them – a major theme in the series – are more skilled and far more experienced. The Encyclopedia of Science Fiction lists three notable women authors of MSF: Lois McMaster Bujold, Elizabeth Moon (particularly her Familias Regnant stories such as Hunting Party (1993)), and Karen Traviss. == Political themes == Several authors have presented stories with political messages of varying types as major or minor themes of their works. David Drake has often written of the horrors and futility of war. He has said, in the afterwords of several of his Hammer's Slammers books (1979 and later), that one of his reasons for writing is to educate those people who have not experienced war, but who might have to make the decision to start or endorse a war (as policymakers or as voters), about what war is really like, and what the powers and limits of the military as an instrument of policy are. David Weber has said: For me, military science fiction is science fiction which is written about a military situation with a fundamental understanding of how military lifestyles and characters differ from civilian lifestyles and characters. It is science fiction which attempts to realistically portray the military within a science-fiction context. It is not 'bug shoots'. It is about human beings, and members of other species, caught up in warfare and carnage. It isn't an excuse for simplistic solutions to problems. == Practical applications by military == In 1980 and 1981, two science fiction authors inspired President Ronald Reagan's vision for a Strategic Defense Initiative, in which satellites would be set up to shoot down nuclear missiles. The two authors were Larry Niven, the author of the Ringworld series, and Jerry Pournelle. Along with like-minded colleagues, they formed a committee to lobby the United States government on space issues and influence Reagan's space policies. Pournelle advocated a "robust, technocratic military state". In addition to his science fiction writing, Pournelle wrote a "paper for the Air Force on stability's role in national security". President Reagan read the space advice that Niven, Pournelle, and their colleagues prepared, which influenced Reagan's 1983 Strategic Defense Initiative. "Niven and Pournelle saw an opportunity to shape the great void in their political image, and Reagan viewed space as yet another tool to defend America against the communist superpower...". Science fiction authors such as Arthur C. Clarke and Isaac Asimov criticized the Strategic Defense Initiative. After the 9/11 terrorist attacks, a group of sci-fi authors called Sigma, including Pournelle and Niven, advised the "Department of Homeland Security on technological strategies for defeating terrorist threats."
In 2021, Worldcrunch reported that the French military has hired fiction writers to develop futuristic warfare scenarios, including situations that the military cannot directly study for "ethical reasons, such as Autonomous Lethality Weapon Systems (ALWS), or augmented humans." The French military says the authors are asked to imagine warfare situations that "destabilize us, scare us, blame, or even beat us", in order to provide the army with a "fresh set of practice scenarios". Military planners use the science fiction authors' scenarios to "prepare for previously unthought of situations", "boos[t] creativity" and help the military become "more resourceful." The German military also uses science fiction to assist its armed forces, but in its approach it does not hire science fiction writers to develop scenarios. Instead, it uses existing science fiction to help the army predict "the world's next potential conflict." The UK Ministry of Defence (MOD) hired two science fiction writers to pen short stories about "what the wars of tomorrow will look like." The MOD hired Peter Warren Singer and August Cole to write eight short stories about threats from "emerging technologies" including "artificial intelligence (AI), data modeling, drone swarms, quantum computing and human enhancement" in a battlefield context. The MOD hired sci-fi writers because they have a "unique ability to imagine the unimaginable." In addition, both authors are knowledgeable about "security subjects and modern warfare." They advocate the use of Fictional Intelligence (FicInt), which they define as "useful fictions". FicInt, a concept developed by Cole in 2015, combines "fiction writing with intelligence to imagine future scenarios in ways grounded in reality." == See also == List of military science fiction works and authors Space warfare in science fiction Weapons in science fiction War novel == References ==
Wikipedia/Military_science_fiction
In game theory, Deadlock is a game where the action that is mutually most beneficial is also dominant. This provides a contrast to the Prisoner's Dilemma, where the mutually most beneficial action is dominated. This makes Deadlock of rather less interest, since there is no conflict between self-interest and mutual benefit. On the other hand, the Deadlock game can also affect economic behaviour and changes to equilibrium outcomes in society. == General definition == Any game that satisfies the following two conditions constitutes a Deadlock game: (1) e > g > a > c and (2) d > h > b > f. These conditions require that d and D be dominant, that (d, D) be of mutual benefit, and that one prefer one's opponent to play c rather than d. Like the Prisoner's Dilemma, this game has one unique Nash equilibrium: (d, D). == Example == In this Deadlock game, if Player C and Player D cooperate, they will each get a payoff of 1. If they both defect, they will each get a payoff of 2. However, if Player C cooperates and Player D defects, then C gets a payoff of 0 and D gets a payoff of 3 (and symmetrically, if C defects while D cooperates, C gets 3 and D gets 0). == Deadlock and social cooperation == Even though the Deadlock game can satisfy group and individual interests at the same time, it can be influenced by a dynamic one-sided-offer bargaining deadlock model. As a result, negotiation deadlocks may arise for buyers. To deal with such deadlocks, three types of strategies have been proposed to break through deadlocked buyer negotiations. Firstly, a power move puts a price on the status quo in order to create a win-win situation. Secondly, a process move is used to overcome the deadlocked negotiation. Lastly, appreciative moves can help buyers satisfy their own perspectives and lead to successful cooperation. == References == == External links and offline sources == GameTheory.net C. Hauert: "Effects of space in 2 x 2 games". International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 12 (2002) 1531–1548. Hans‐Ulrich Stark (August 3, 2010). "Dilemmas of partial cooperation". Evolution. 64 (8): 2458–2465. doi:10.1111/j.1558-5646.2010.00986.x. PMID 20199562. S2CID 205782687. Ilwoo Hwang (May 2018). "A Theory of Bargaining Deadlock". Games and Economic Behavior. 109: 501–522. doi:10.1016/j.geb.2018.02.002. Ayça Kaya; Kyungmin Kim (October 2018). "Trading Dynamics with Private Buyer Signals in the Market for Lemons". The Review of Economic Studies. 85 (4): 2318–2352. doi:10.1093/restud/rdy007.
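The conditions above are easy to verify mechanically. Below is a minimal sketch in Python (illustrative only; the payoff numbers are those of the example above) that encodes the Deadlock matrix and confirms by brute force that defection is strictly dominant for both players and that mutual defection is the unique pure-strategy Nash equilibrium.

# Encode the example Deadlock payoffs and verify the definition's claims.
from itertools import product

# payoffs[(row, col)] = (row player's payoff, column player's payoff)
# Strategies: 0 = cooperate, 1 = defect (numbers from the example above).
payoffs = {
    (0, 0): (1, 1),  # both cooperate
    (0, 1): (0, 3),  # row cooperates, column defects
    (1, 0): (3, 0),  # row defects, column cooperates
    (1, 1): (2, 2),  # both defect
}

def dominant_strategy(player):
    """Return a strictly dominant strategy for `player` (0=row, 1=col), or None."""
    for s in (0, 1):
        other = 1 - s
        if all(payoffs[(s, t) if player == 0 else (t, s)][player] >
               payoffs[(other, t) if player == 0 else (t, other)][player]
               for t in (0, 1)):
            return s
    return None

def pure_nash_equilibria():
    eqs = []
    for r, c in product((0, 1), repeat=2):
        row_ok = payoffs[(r, c)][0] >= max(payoffs[(rr, c)][0] for rr in (0, 1))
        col_ok = payoffs[(r, c)][1] >= max(payoffs[(r, cc)][1] for cc in (0, 1))
        if row_ok and col_ok:
            eqs.append((r, c))
    return eqs

print(dominant_strategy(0), dominant_strategy(1))  # 1 1  (defect, defect)
print(pure_nash_equilibria())                      # [(1, 1)]

Because the dominant outcome (2, 2) is also the mutually best one, the check makes the contrast with the Prisoner's Dilemma explicit: swapping in Prisoner's Dilemma payoffs would leave defection dominant while making mutual cooperation Pareto-superior.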
Wikipedia/Deadlock_(game_theory)
In game theory, asynchrony refers to a gameplay structure where interactions and decisions do not occur in uniformly timed rounds. Unlike synchronous systems, where agents act in coordination with a shared timing mechanism, asynchronous systems lack a global clock, allowing agents to operate at independent and arbitrary speeds relative to one another. This flexibility introduces unique strategic dynamics and complexities to the study of decision-making in such environments. For example, in an asynchronous online auction, bidders may place bids at any time before the auction ends, rather than in fixed, simultaneous turns, leading to unpredictable timing strategies and outcomes. == External links == Abraham, I., Alvisi, L., & Halpern, J. Y. (2011). Distributed computing meets game theory: combining insights from two fields. ACM SIGACT News, 42(2), 69–76. Ben-Or, M. (1983). Another Advantage of Free Choice: Completely Asynchronous Agreement Protocols. In Proc. 2nd ACM Symp. on Principles of Distributed Computing, pp. 27–30. Solodkin, L., & Oshman, R. (2021). Truthful Information Dissemination in General Asynchronous Networks. In 35th International Symposium on Distributed Computing (DISC 2021). Schloss Dagstuhl-Leibniz-Zentrum für Informatik. https://drops.dagstuhl.de/opus/volltexte/2021/14839/pdf/LIPIcs-DISC-2021-37.pdf Yifrach, A., & Mansour, Y. (2018, July). Fair leader election for rational agents in asynchronous rings and networks. In Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing (pp. 217–226). https://arxiv.org/pdf/1805.04778.pdf == References ==
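Timing is the strategically interesting ingredient here, so a small simulation can make the contrast concrete. The following minimal sketch in Python (illustrative only; the bidder values, arrival rate, and bid increment are all hypothetical) compares a synchronous sealed-bid round with an asynchronous deadline auction in which bidders receive bidding opportunities at independent random times and react to the standing high bid.

import random

random.seed(1)

def synchronous_round(values):
    # All bidders act simultaneously: one sealed bid each, here simply
    # truthful (for illustration only, not an equilibrium claim).
    bids = {i: v for i, v in values.items()}
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

def asynchronous_auction(values, deadline=1.0, rate=5.0):
    # No global clock: each bidder gets opportunities at independent
    # random times; at each one they outbid the standing high bid by a
    # small increment if their value allows.
    high_bid, high_bidder = 0.0, None
    events = []
    for i in values:
        t = 0.0
        while True:
            t += random.expovariate(rate)
            if t >= deadline:
                break
            events.append((t, i))
    for t, i in sorted(events):          # process arrivals in time order
        if high_bid + 0.01 <= values[i] and i != high_bidder:
            high_bid, high_bidder = high_bid + 0.01, i
    return high_bidder, high_bid

values = {"A": 0.8, "B": 0.6, "C": 0.9}
print(synchronous_round(values))     # ('C', 0.9)
print(asynchronous_auction(values))  # winner and price depend on random timing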
Wikipedia/Asynchrony_(game_theory)
Market design is an interdisciplinary, engineering-driven approach to economics and a practical methodology for the creation of markets with certain properties, which is partially based on mechanism design. In market design, the focus is on the rules of exchange, meaning who gets allocated what and by what procedure. Market design is concerned with the workings of particular markets in order to fix them when they are broken or to build markets when they are missing. Practical applications of market design theory have included labor market matching (e.g. the national residency match program), organ transplantation, school choice, university admissions, and more. == Auction theory == Early research on auctions focused on two special cases: common value auctions, in which buyers have private signals of an item's true value, and private value auctions, in which values are independently and identically distributed. Milgrom and Weber (1982) present a much more general theory of auctions with positively related values. Each of $n$ buyers receives a private signal $x_i$. Buyer $i$'s value $\phi(x_i, x_{-i})$ is strictly increasing in $x_i$ and is an increasing symmetric function of $x_{-i}$. If signals are independently and identically distributed, then buyer $i$'s expected value $v_i = E_{x_{-i}}[\phi(x_i, x_{-i})]$ is independent of the other buyers' signals. Thus, the buyers' expected values are independently and identically distributed. This is the standard private value auction. For such auctions the revenue equivalence theorem holds; that is, expected revenue is the same in the sealed first-price and second-price auctions. Milgrom and Weber assumed instead that the private signals are "affiliated". With two buyers, the random variables $v_1$ and $v_2$ with probability density function $f(v_1, v_2)$ are affiliated if $f(v_1', v_2')f(v_1, v_2) \geq f(v_1, v_2')f(v_1', v_2)$ for all $v$ and all $v' < v$. Applying Bayes' rule it follows that $f(v_2' \mid v_1')f(v_2 \mid v_1) \geq f(v_2 \mid v_1')f(v_2' \mid v_1)$ for all $v$ and all $v' < v$. Rearranging this inequality and integrating with respect to $v_2'$, it follows that $F(v_2 \mid v_1')/f(v_2 \mid v_1') \geq F(v_2 \mid v_1)/f(v_2 \mid v_1)$ for all $v_2$ and all $v_1' < v_1$. (1) It is this implication of affiliation that is critical in the discussion below. For more than two symmetrically distributed random variables, let $V = \{v_1, \ldots, v_n\}$ be a set of random variables that are continuously distributed with joint probability density function $f(v)$.
The $n$ random variables are affiliated if $f(x', y')f(x, y) \geq f(x, y')f(x', y)$ for all $(x, y)$ and $(x', y')$ in $X \times Y$ where $(x', y') < (x, y)$. === Revenue Ranking Theorem (Milgrom and Weber) === Suppose each of $n$ buyers receives a private signal $x_i$. Buyer $i$'s value $\phi(x_i, x_{-i})$ is strictly increasing in $x_i$ and is an increasing symmetric function of $x_{-i}$. If signals are affiliated, the equilibrium bid $b_i = B(x_i)$ in a sealed first-price auction is smaller than the equilibrium expected payment in the sealed second-price auction. The intuition for this result is as follows: in the sealed second-price auction the expected payment of a winning bidder with value $v$ is based on their own information. By the revenue equivalence theorem, if all buyers had the same beliefs, there would be revenue equivalence. However, if values are affiliated, a buyer with value $v$ knows that buyers with lower values have more pessimistic beliefs about the distribution of values. In the sealed high-bid auction such low-value buyers therefore bid lower than they would if they had the same beliefs. Thus the buyer with value $v$ does not have to compete so hard and bids lower as well. The informational effect therefore lowers the equilibrium payment of the winning bidder in the sealed first-price auction. === Equilibrium bidding in the sealed first- and second-price auctions === We consider here the simplest case, in which there are two buyers and each buyer's value $v_i = \phi(x_i)$ depends only on his own signal. Then the buyers' values are private and affiliated. In the sealed second-price (or Vickrey) auction, it is a dominant strategy for each buyer to bid his value. If both buyers do so, then a buyer with value $v$ has an expected payment of $e(v) = \frac{\int_0^v y f(y \mid v)\,dy}{F(v \mid v)}$. (2) In the sealed first-price auction, the increasing bid function $B(v)$ is an equilibrium if bidding strategies are mutual best responses. That is, if buyer 1 has value $v$, their best response is to bid $b = B(v)$ if they believe that their opponent is using this same bidding function. Suppose buyer 1 deviates and bids $b = B(z)$ rather than $B(v)$. Let $U(z)$ be their resulting payoff. For $B(v)$ to be an equilibrium bid function, $U(z)$ must take on its maximum at $z = v$. With a bid of $b = B(z)$, buyer 1 wins if $B(v_2) < B(z)$, that is, if $v_2 < z$. The win probability is then $w = F(z \mid v)$, so that buyer 1's expected payoff is $U(z) = w\,(v - B(z)) = F(z \mid v)(v - B(z))$. Taking logs and differentiating with respect to $z$, $\frac{U'(z)}{U(z)} = \frac{w'(z)}{w(z)} - \frac{B'(z)}{v - B(z)} = \frac{f(z \mid v)}{F(z \mid v)} - \frac{B'(z)}{v - B(z)}$. (3)
The first term on the right-hand side is the proportional increase in the win probability as the buyer raises his bid from $B(z)$ to $B(z + \Delta z)$. The second term is the proportional drop in the payoff if the buyer wins. We have argued that, for equilibrium, $U(z)$ must take on its maximum at $z = v$. Substituting for $z$ in (3) and setting the derivative equal to zero yields the following necessary condition: $B'(v) = \frac{f(v \mid v)}{F(v \mid v)}(v - B(v))$. (4) === Proof of the revenue ranking theorem === Buyer 1 with value $x$ has conditional p.d.f. $f(v_2 \mid x)$. Suppose that he naively believes that all other buyers have the same beliefs. In the sealed high-bid auction he computes the equilibrium bid function using these naive beliefs. Arguing as above, condition (3) becomes $\frac{U'(z)}{U(z)} = \frac{f(z \mid x)}{F(z \mid x)} - \frac{B'(z)}{v - B(z)}$. (3') Since $x > v$, it follows by affiliation (see condition (1)) that the proportional gain to bidding higher is bigger under the naive beliefs, which place higher mass on higher values. Arguing as before, a necessary condition for equilibrium is that (3') must be zero at $z = v$. Therefore, the equilibrium bid function $B_x(v)$ satisfies the following differential equation: $B_x'(v) = \frac{f(v \mid x)}{F(v \mid x)}(v - B_x(v))$. (5) Appealing to the revenue equivalence theorem, if all buyers have values that are independent draws from the same distribution, then the expected payment of the winner is the same in the two auctions. Therefore, $B_x(x) = e(x)$. Thus, to complete the proof we need to establish that $B(x) \leq B_x(x)$. Appealing to (1), it follows from (4) and (5) that, for all $v < x$, $B_x'(v) \geq \left(\frac{v - B_x(v)}{v - B(v)}\right) B'(v)$. Therefore, for any $v$ in the interval $[0, x]$, $B(v) - B_x(v) > 0 \Rightarrow B'(v) - B_x'(v) < 0$. Suppose that $B(x) > B_x(x)$. Since the equilibrium bid of a buyer with value 0 is zero, there must be some $y < x$ such that $B(y) - B_x(y) = 0$ and $B(v) - B_x(v) > 0$ for all $v \in (y, x]$. But this is impossible, since we have just shown that over such an interval $B(v) - B_x(v)$ is decreasing. Since $B_x(x) = e(x)$, it follows that the winning bidder's expected payment is lower in the sealed first-price (high-bid) auction.
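Condition (4) pins down the equilibrium bid function once the signal distribution is specified. As a concrete check, here is a minimal Python sketch (illustrative only) that integrates (4) numerically for the textbook case of two bidders with independent uniform values on $[0, 1]$, where $F(z \mid v) = z$ and $f(z \mid v) = 1$, so the ODE becomes $B'(v) = (v - B(v))/v$ with known solution $B(v) = v/2$.

# Forward-Euler integration of the first-price equilibrium ODE (4)
# for two bidders with independent uniform[0, 1] values.

def solve_bid_function(n_steps=100_000, v_max=1.0):
    dv = v_max / n_steps
    v, B = dv, dv / 2          # start just above 0, where B(v) ~ v/2
    for _ in range(n_steps - 1):
        B += dv * (v - B) / v  # Euler step on B'(v) = (v - B(v)) / v
        v += dv
    return v, B

v, B = solve_bid_function()
print(f"B({v:.3f}) = {B:.4f}  (theory: {v / 2:.4f})")  # ~0.5 at v = 1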
=== Ascending auctions with package bidding === Milgrom has also contributed to the understanding of combinatorial auctions. In work with Larry Ausubel (Ausubel and Milgrom, 2002), auctions of multiple items, which may be substitutes or complements, are considered. They define a mechanism, the “ascending proxy auction,” constructed as follows. Each bidder reports his values to a proxy agent for all packages that the bidder is interested in. Budget constraints can also be reported. The proxy agent then bids in an ascending auction with package bidding on behalf of the real bidder, iteratively submitting the allowable bid that, if accepted, would maximize the real bidder's profit (value minus price), based on the reported values. The auction is conducted with negligibly small bid increments. After each round, provisionally winning bids are determined that maximize the total revenue from feasible combinations of bids. All of a bidder's bids are kept live throughout the auction and are treated as mutually exclusive. The auction ends after a round occurs with no new bids. The ascending proxy auction may be viewed either as a compact representation of a dynamic combinatorial auction or as a practical direct mechanism, the first example of what Milgrom would later call a “core selecting auction.” They prove that, with respect to any reported set of values, the ascending proxy auction always generates a core outcome, i.e. an outcome that is feasible and unblocked. Moreover, if bidders' values satisfy the substitutes condition, then truthful bidding is a Nash equilibrium of the ascending proxy auction and yields the same outcome as the Vickrey–Clarke–Groves (VCG) mechanism. However, the substitutes condition is robustly a necessary as well as a sufficient condition: if just one bidder's values violate the substitutes condition, then with appropriate choice of three other bidders with additively-separable values, the outcome of the VCG mechanism lies outside the core; and so the ascending proxy auction cannot coincide with the VCG mechanism and truthful bidding cannot be a Nash equilibrium. They also provide a complete characterization of substitutes preferences: goods are substitutes if and only if the indirect utility function is submodular. Ausubel and Milgrom (2006a, 2006b) exposit and elaborate on these ideas. The first of these articles, entitled "The Lovely but Lonely Vickrey Auction", made an important point in market design. The VCG mechanism, while highly attractive in theory, suffers from a number of possible weaknesses when the substitutes condition is violated, making it a poor candidate for empirical applications. In particular, the VCG mechanism may exhibit: low (or zero) seller revenues; non-monotonicity of the seller's revenues in the set of bidders and the amounts bid; vulnerability to collusion by a coalition of losing bidders; and vulnerability to the use of multiple bidding identities by a single bidder. This may explain why the VCG auction design, while so lovely in theory, is so lonely in practice. Additional work in this area by Milgrom together with Larry Ausubel and Peter Cramton has been particularly influential in practical market design. Ausubel, Cramton and Milgrom (2006) together proposed a new auction format that is now called the combinatorial clock auction (CCA), which consists of a clock auction stage followed by a sealed-bid supplementary round. All of the bids are interpreted as package bids, and the final auction outcome is determined using a core-selecting mechanism. The CCA was first used in the United Kingdom's 10–40 GHz spectrum auction of 2008. Since then, it has become a new standard for spectrum auctions: it has been utilized for major spectrum auctions in Austria, Denmark, Ireland, the Netherlands, Switzerland and the UK; and it is slated to be used in forthcoming auctions in Australia and Canada.
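The low-revenue weakness of the VCG mechanism mentioned above can be seen in a very small example. The following minimal Python sketch (illustrative only; the valuations are hypothetical and not taken from the papers cited) computes VCG payments by brute force for two items and three bidders: one bidder wants only the pair, and the other two each want a single item. The efficient allocation sells both items, yet total VCG revenue is zero.

# Brute-force VCG for a tiny combinatorial auction with items {A, B}.
from itertools import product

ITEMS = ("A", "B")

def value(bidder, bundle):
    bundle = frozenset(bundle)
    if bidder == 0:
        return 2.0 if bundle == {"A", "B"} else 0.0   # wants only the pair
    if bidder == 1:
        return 2.0 if "A" in bundle else 0.0          # wants item A
    return 2.0 if "B" in bundle else 0.0              # wants item B

def best_allocation(bidders):
    # Assign each item independently to one of the bidders (or to no one).
    best, best_welfare = None, -1.0
    for assign in product(list(bidders) + [None], repeat=len(ITEMS)):
        bundles = {b: [it for it, who in zip(ITEMS, assign) if who == b]
                   for b in bidders}
        welfare = sum(value(b, bundles[b]) for b in bidders)
        if welfare > best_welfare:
            best, best_welfare = bundles, welfare
    return best, best_welfare

bidders = (0, 1, 2)
alloc, welfare = best_allocation(bidders)
for b in bidders:
    _, w_without = best_allocation(tuple(x for x in bidders if x != b))
    others_now = welfare - value(b, alloc[b])
    pay = w_without - others_now          # VCG payment of bidder b
    print(b, alloc[b], pay)
# Bidders 1 and 2 win the two items, and every payment is 0.0:
# an efficient outcome with zero seller revenue.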
At the 2008 Nemmers Prize conference, Penn State University economist Vijay Krishna and Larry Ausubel highlighted Milgrom's contributions to auction theory and their subsequent impact on auction design. == Matching theory == According to economic theory, under certain conditions, the voluntary exchanges of all economic agents will lead to the maximum welfare of those engaged in the exchanges. In reality, however, the situation is different; we usually face market failures, and we sometimes face conditions or constraints such as congested markets, repugnant markets, and unsafe markets. This is where market designers try to create interactive platforms with specific rules and constraints to achieve optimal outcomes. It is claimed that such platforms provide maximum efficiency and benefit to society. Matching refers to the idea of establishing a proper relationship between the two sides of the market: the demanders of a good or service and its suppliers. This theory explores who achieves what in economic interactions. The idea of matching emerged from theoretical efforts by mathematicians such as Shapley and Gale. It matured with the efforts of economists such as Roth, and market design and matching are now among the most important branches of microeconomics and game theory. Milgrom has also contributed to the understanding of matching market design. In work with John Hatfield (Hatfield and Milgrom, 2005), he shows how to generalize the stable marriage matching problem to allow for "matching with contracts", where the terms of the match between agents on either side of the market arise endogenously through the matching process. They show that a suitable generalization of the deferred acceptance algorithm of David Gale and Lloyd Shapley finds a stable matching in their setting; moreover, the set of stable matchings forms a lattice, and similar vacancy chain dynamics are present. The observation that stable matchings form a lattice was a well-known result that provided the key to their insight into generalizing the matching model. They observed (as did some other contemporary authors) that the lattice of stable matchings was reminiscent of the conclusion of Tarski's fixed point theorem, which states that an increasing function from a complete lattice to itself has a nonempty set of fixed points that form a complete lattice. But it was not apparent what the lattice was, nor what the increasing function was. Hatfield and Milgrom observed that the accumulated offers and rejections formed a lattice, and that the bidding process in an auction and the deferred acceptance algorithm were examples of a cumulative offer process that was an increasing function in this lattice. Their generalization also shows that certain package auctions (see also: Paul Milgrom: Policy) can be thought of as a special case of matching with contracts, where there is only one agent (the auctioneer) on one side of the market and contracts include both the items to be transferred and the total transfer price as terms. Thus, two of market design's great success stories, the deferred acceptance algorithm as applied to the medical match, and the simultaneous ascending auction as applied to the FCC spectrum auctions, have a deep mathematical connection.
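For readers unfamiliar with the algorithm, here is a minimal Python sketch (illustrative only; the preference lists are hypothetical) of Gale and Shapley's deferred acceptance in its simplest one-to-one form, without contracts: proposers move down their lists while receivers hold the best offer seen so far and reject the rest.

def deferred_acceptance(proposer_prefs, receiver_prefs):
    # proposer_prefs / receiver_prefs: dict name -> list of names, best first
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    held = {}                       # receiver -> currently held proposer
    next_choice = {p: 0 for p in proposer_prefs}
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        if next_choice[p] >= len(proposer_prefs[p]):
            continue                # p has exhausted their list
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in held:
            held[r] = p
        elif rank[r][p] < rank[r][held[r]]:
            free.append(held[r])    # r trades up; old holder is rejected
            held[r] = p
        else:
            free.append(p)          # r rejects p; p proposes again later
    return {p: r for r, p in held.items()}

doctors = {"d1": ["h1", "h2"], "d2": ["h1", "h2"], "d3": ["h2", "h1"]}
hospitals = {"h1": ["d2", "d1", "d3"], "h2": ["d1", "d3", "d2"]}
print(deferred_acceptance(doctors, hospitals))
# -> {'d1': 'h2', 'd2': 'h1'}; d3 is left unmatched, and no doctor-hospital
#    pair would rather be matched to each other than to their assignment.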
In addition, this work (in particular, the "cumulative offer" variation of the deferred acceptance algorithm) has formed the basis of recently proposed redesigns of the mechanisms used to match residents to hospitals in Japan and cadets to branches in the US Army. == Application == In general, the topics studied by market designers relate to various problems in matching markets. Alvin Roth has divided the obstacles to matching market participants into three main categories: Sometimes market participants do not know about each other because of "market thinness". In this case, the market suffers from a lack of sufficient thickness. In other cases, the cause of dysfunction is market congestion and the lack of opportunity for market participants to get to know each other. In these cases, excessive market thickness leaves the market parties without enough time to choose their preferred options. In some markets, due to special arrangements, there is the possibility of strategic behavior by market participants, and therefore people do not truly reveal their preferences. In these cases, the market is not safe for expressing actual preferences. The solution of market designers in the face of these problems is to propose the creation of a centralized clearing house to receive the preference information of market participants and use appropriate matching algorithms. The aggregation of information, the design of some rules, and the use of these algorithms lead to the appropriate matching of market participants, the safety of the market environment, and improved market allocation. In this formulation, the mechanism acts as a communication system between the parties to an economic interaction that determines the outcome of this interaction based on pre-determined rules and the signals received from market participants. Therefore, the purpose of market design is simply to determine the rules of the game so as to optimize the game's outcome. === Market design and matching in the labor market === As mentioned, in some markets the pricing mechanism may not allocate resources optimally. One such market is the labor market. Usually, employers or firms do not reduce the offered wage to such an extent that supply and demand in the labor market are equal. What is important for firms is to choose exactly "the most appropriate worker." In some labor markets, choosing "the most appropriate employer" is also important for job seekers. Since the process of informing market participants about each other's preferences is disrupted, rules should be designed to improve market performance. === Market design and matching in the kidney transplant market === Another important application of matching is the kidney transplant market. Kidney transplant applicants often face a shortage of compatible kidneys. Market designers try to make the kidney exchange market more efficient by designing systems to match kidney applicants and kidney donors. Two general structures connecting kidney applicants and donors are chain and cyclic systems of exchange. In a cyclic exchange, kidney donors and recipients form a cycle in which each donor gives a kidney to a compatible recipient in the next pair. === Simplifying participants' messages === Milgrom has contributed to the understanding of the effect of simplifying the message space in practical market design.
He observed, and developed as an important design element of many markets, the notion of conflation: the idea of restricting a participant's ability to convey rich preferences by forcing them to enter the same value for different preferences. An example of conflation arises in Gale and Shapley's deferred acceptance algorithm for hospital and doctor matching, when hospitals are allowed to submit only responsive preferences (i.e., a ranking of doctors and capacities) even though they could conceivably be asked to submit general substitutes preferences. In Internet sponsored-search auctions, advertisers are allowed to submit a single per-click bid, regardless of which ad positions they win. A similar, earlier idea of a conflated generic-item auction is an important component of the Combinatorial Clock Auction (Ausubel, Cramton and Milgrom, 2006), widely used in spectrum auctions including the UK's recent 800 MHz / 2.6 GHz auction, and has also been proposed for Incentive Auctions. Bidders are allowed to express only the quantity of frequencies in the allocation stage of the auction, without regard to the specific assignment (which is decided in a later assignment stage). Milgrom (2010) shows that, with a certain "outcome closure property", conflation adds no unintended new equilibrium outcomes, and argues that, by thickening markets, it may intensify price competition and increase revenue. As a concrete application of the idea of simplifying messages, Milgrom (2009) defines assignment messages of preferences. In assignment messages, an agent can encode certain nonlinear preferences involving various substitution possibilities into linear objectives, by describing multiple "roles" that objects can play in generating utility, with the utility thus generated being added up. The valuation over a set of objects is the maximum value that can be achieved by optimally assigning them to various roles. Assignment messages can also be applied to resource allocation without money; see, for example, the problem of course allocation in schools, as analyzed by Budish, Che, Kojima, and Milgrom (2013). In doing so, the paper has provided a generalization of the Birkhoff-von Neumann Theorem (a mathematical property of doubly stochastic matrices) and applied it to analyze when a given random assignment can be "implemented" as a lottery over feasible deterministic outcomes. A more general language, endowed assignment messages, is studied by Hatfield and Milgrom (2005). Milgrom provides an overview of these issues in Milgrom (2011). == See also == Designing Economic Mechanisms == References == == External links == Nemmers Prize Lecture, 2008 National Science Foundation's LiveScience Program Interview, 2012
Wikipedia/Market_design
From a legal point of view, a contract is an institutional arrangement for the way in which resources flow, which defines the various relationships between the parties to a transaction or limits the rights and obligations of the parties. From an economic perspective, contract theory studies how economic actors can and do construct contractual arrangements, generally in the presence of information asymmetry. Because of its connections with both agency and incentives, contract theory is often categorized within a field known as law and economics. One prominent application of it is the design of optimal schemes of managerial compensation. In the field of economics, the first formal treatment of this topic was given by Kenneth Arrow in the 1960s. In 2016, Oliver Hart and Bengt R. Holmström both received the Nobel Memorial Prize in Economic Sciences for their work on contract theory, covering many topics from CEO pay to privatizations. Holmström focused more on the connection between incentives and risk, while Hart focused on the unpredictability of the future, which creates holes in contracts. A standard practice in the microeconomics of contract theory is to represent the behaviour of a decision maker under certain numerical utility structures, and then apply an optimization algorithm to identify optimal decisions. Such a procedure has been applied in the contract theory framework to several typical situations, labeled moral hazard, adverse selection and signalling. The spirit of these models lies in finding theoretical ways to motivate agents to take appropriate actions, even under an insurance contract. The main results achieved through this family of models involve: mathematical properties of the utility structure of the principal and the agent, relaxation of assumptions, and variations of the time structure of the contract relationship, among others. It is customary to model people as maximizers of some von Neumann–Morgenstern utility functions, as stated by expected utility theory. == Development and origin == Contract theory in economics began with 1991 Nobel Laureate Ronald H. Coase's 1937 article "The Nature of the Firm". Coase notes that "the longer the duration of a contract regarding the supply of goods or services due to the difficulty of forecasting, then the less likely and less appropriate it is for the buyer to specify what the other party should do." This suggests two points: the first is that Coase already understood transactional behaviour in terms of contracts, and the second is that Coase implies that if contracts are less complete, then firms are more likely to substitute for markets. Contract theory has since evolved in two directions: one is complete contract theory and the other is incomplete contract theory. === Complete contract theory === Complete contract theory states that there is no essential difference between a firm and a market; they are both contracts. Principals and agents are able to foresee all future scenarios and develop optimal risk-sharing and revenue-transfer mechanisms to achieve second-best efficiency under constraints. It is equivalent to principal-agent theory. Armen Albert Alchian and Harold Demsetz disagree with Coase's view that the nature of the firm is a substitute for the market, arguing instead that both the firm and the market are contracts and that there is no fundamental difference between the two.
They believe that the essence of the firm is team production, and that the central issue in team production is the measurement of agents' effort, namely the moral hazard of single agents and of multiple agents. Michael C. Jensen and William Meckling believe that the nature of a business is a contractual relationship. They defined a business as an organisation which, like the majority of other organisations, is a legal fiction whose function is to act as a connecting point for a set of contractual relationships between individuals. James Mirrlees and Bengt Holmström et al. developed a basic framework for single-agent and multi-agent moral hazard models in a principal-agent setting with the help of game-theoretic tools. Eugene F. Fama et al. extended static contract theory to dynamic contract theory, thus introducing the issues of principal commitment and the agent's reputation effect into long-term contracts. Eric Brousseau and Jean-Michel Glachant believe that contract theory should include incentive theory, incomplete contract theory and the new institutional transaction costs theory. == Main models of agency problems == === Moral hazard === The moral hazard problem refers to the extent to which an employee's behaviour is concealed from the employer: whether they work, how hard they work and how carefully they do so. In moral hazard models, the information asymmetry is the principal's inability to observe and/or verify the agent's action. Performance-based contracts that depend on observable and verifiable output can often be employed to create incentives for the agent to act in the principal's interest. When agents are risk-averse, however, such contracts are generally only second-best, because incentivization precludes full insurance. The typical moral hazard model is formulated as follows. The principal solves $\max_{w(\cdot)} E\left[y(\hat{e}) - w(y(\hat{e}))\right]$ subject to the agent's "individual rationality (IR)" constraint, $E\left[u(w(y(\hat{e}))) - c(\hat{e})\right] \geq \bar{u}$, and the agent's "incentive compatibility (IC)" constraint, $E\left[u(w(y(\hat{e}))) - c(\hat{e})\right] \geq E\left[u(w(y(e))) - c(e)\right]$ for all $e$, where $w(\cdot)$ is the wage for the agent as a function of output $y$, which in turn is a function of effort $e$; $c(e)$ represents the cost of effort; and reservation utility is given by $\bar{u}$. $u(\cdot)$ is the utility function, which is concave for a risk-averse agent, convex for a risk-prone agent, and linear for a risk-neutral agent. If the agent is risk-neutral and there are no bounds on transfer payments, the fact that the agent's effort is unobservable (i.e., that it is a "hidden action") does not pose a problem. In this case, the same outcome can be achieved that would be attained with verifiable effort: the agent chooses the so-called "first-best" effort level that maximizes the expected total surplus of the two parties. Specifically, the principal can give the realized output to the agent, but let the agent make a fixed up-front payment. The agent is then a "residual claimant" and will maximize the expected total surplus minus the fixed payment. Hence, the first-best effort level maximizes the agent's payoff, and the fixed payment can be chosen such that in equilibrium the agent's expected payoff equals his or her reservation utility (which is what the agent would get if no contract was written).
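To see the program above in action, here is a minimal numerical sketch in Python (all parameters are hypothetical): two effort levels, two output levels, a risk-averse agent with square-root utility, and a grid search over output-contingent wage schedules satisfying the IR and IC constraints for high effort.

import math

P_HIGH = {1: 0.8, 0: 0.2}      # P(y = 4 | effort e): effort raises success odds
Y = {"lo": 0.0, "hi": 4.0}
COST = {1: 1.0, 0: 0.0}        # c(e)
U_BAR = 0.0                    # reservation utility

def agent_utility(w_lo, w_hi, e):
    p = P_HIGH[e]
    return p * math.sqrt(w_hi) + (1 - p) * math.sqrt(w_lo) - COST[e]

best = None
grid = [i / 100 for i in range(0, 401)]
for w_lo in grid:
    for w_hi in grid:
        ir = agent_utility(w_lo, w_hi, 1) >= U_BAR
        ic = agent_utility(w_lo, w_hi, 1) >= agent_utility(w_lo, w_hi, 0)
        if ir and ic:
            p = P_HIGH[1]
            profit = p * (Y["hi"] - w_hi) + (1 - p) * (Y["lo"] - w_lo)
            if best is None or profit > best[0]:
                best = (profit, w_lo, w_hi)

profit, w_lo, w_hi = best
print(f"wage if y=0: {w_lo:.2f}, wage if y=4: {w_hi:.2f}, profit: {profit:.2f}")
# The optimal schedule pays more after high output (incentives) but gives
# no full insurance: the risk-averse agent bears risk, so the outcome is
# only second-best, as the text explains.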
Yet, if the agent is risk-averse, there is a trade-off between incentives and insurance. Moreover, if the agent is risk-neutral but wealth-constrained, the agent cannot make the fixed up-front payment to the principal, so the principal must leave a "limited liability rent" to the agent (i.e., the agent earns more than his or her reservation utility). The moral hazard model with risk aversion was pioneered by Steven Shavell, Sanford J. Grossman, Oliver D. Hart, and others in the 1970s and 1980s. It has been extended to the case of repeated moral hazard by William P. Rogerson and to the case of multiple tasks by Bengt Holmström and Paul Milgrom. The moral hazard model with risk-neutral but wealth-constrained agents has also been extended to settings with repeated interaction and multiple tasks. While it is difficult to test models with hidden action empirically (since there is no field data on unobservable variables), the premise of contract theory that incentives matter has been successfully tested in the field. Moreover, contract-theoretic models with hidden actions have been directly tested in laboratory experiments. === Example of possible solution to moral hazard === A study on the solution to moral hazard concludes that adding moral sensitivity to the principal–agent model increases its descriptiveness, prescriptiveness, and pedagogical usefulness, because it induces employees to work at the appropriate effort for which they receive a wage. The theory suggests that as employees' work effort increases, the premium wage should increase proportionally to encourage productivity. === Adverse selection === In adverse selection models, the principal is not informed about a certain characteristic of the agent at the time the contract is written. The characteristic is called the agent's "type". For example, health insurance is more likely to be purchased by people who are more likely to get sick. In this case, the agent's type is his or her health status, which is privately known by the agent. Another prominent example is public procurement contracting: the government agency (the principal) does not know the private firm's cost. In this case, the private firm is the agent and the agent's type is the cost level. In adverse selection models, there is typically too little trade (i.e., there is a so-called "downward distortion" of the trade level compared to a "first-best" benchmark situation with complete information), except when the agent is of the best possible type (which is known as the "no distortion at the top" property). The principal offers a menu of contracts to the agent; the menu is called "incentive-compatible" if the agent picks the contract that was designed for his or her type. In order to make the agent reveal the true type, the principal has to leave an information rent to the agent (i.e., the agent earns more than his or her reservation utility, which is what the agent would get if no contract was written). Adverse selection theory was pioneered by Roger Myerson, Eric Maskin, and others in the 1980s. More recently, adverse selection theory has been tested in laboratory experiments and in the field. Adverse selection theory has been expanded in several directions, e.g.
by endogenizing the information structure (so the agent can decide whether or not to gather private information) and by taking into consideration social preferences and bounded rationality. === Signalling === In signalling models, one party chooses how and whether or not to present information about itself to another party in order to reduce the information asymmetry between them. In these models, the signalling party (the agent) and the receiving party (the principal) have access to different information. The challenge for the receiving party is to decipher the credibility of the signalling party so as to assess their capabilities. The formulation of this theory began in 1973 with Michael Spence's job-market signalling model. In his model, job applicants are tasked with signalling their skills and capabilities to employers to reduce the probability of the employer choosing a less qualified applicant over a qualified one. This is because potential employers lack the knowledge to discern the skills and capabilities of potential employees. == Incomplete contracts == Contract theory also utilizes the notion of a complete contract, which is thought of as a contract that specifies the legal consequences of every possible state of the world. More recent developments, known as the theory of incomplete contracts, pioneered by Oliver Hart and his coauthors, study the incentive effects of parties' inability to write complete contingent contracts. In fact, it may be the case that the parties to a transaction are unable to write a complete contract at the contracting stage because it is either difficult to reach an agreement to get it done or too expensive to do so, e.g. concerning relationship-specific investments. A leading application of the incomplete contracting paradigm is the Grossman-Hart-Moore property rights approach to the theory of the firm (see Hart, 1995). Because it would be impossibly complex and costly for the parties to an agreement to make their contract complete, the law provides default rules which fill in the gaps in the actual agreement of the parties. During the last 20 years, much effort has gone into the analysis of dynamic contracts. Important early contributors to this literature include, among others, Edward J. Green, Stephen Spear, and Sanjay Srivastava. == Expected utility theory == Much of contract theory can be explained through expected utility theory. This theory indicates that individuals will measure their choices based on the risks and benefits associated with a decision. One study found that agents' anticipatory feelings are affected by uncertainty; this is why principals need to form contracts with agents in the presence of information asymmetry, in order to understand each party's motives and benefits more clearly. == Examples of contract theory == George Akerlof described adverse selection in the market for used cars. In certain models, such as Michael Spence's job-market model, the agent can signal his type to the principal, which may help to resolve the problem. Leland and Pyle (1977) proposed an IPO theory in which agents (companies) reduce adverse selection in the market by sending clear signals before going public. == Incentive Design == In contract theory, the goal is to motivate employees by giving them rewards conditioned on service level or quality, results, performance, or goals. The reward thus determines whether the incentive mechanism can fully motivate employees.
Given the large number of contract-theoretic models, the design of compensation differs across contract conditions. === Rewards on Absolute Performance and Relative Performance === Absolute performance-related reward: the reward is in direct proportion to the absolute performance of employees. Relative performance-related reward: the rewards are arranged according to the performance of the employees, from the highest to the lowest. Absolute performance-related reward is an incentive mechanism widely recognized in practical economics, because it provides employees with necessary and effective incentives. However, absolute performance-related rewards have two drawbacks: there will be people who cheat, and the mechanism is vulnerable to recessions or sudden growth. === Design contracts for multiple employees === Absolute performance-related compensation is a popular way for employers to design contracts for more than one employee at a time, and one of the most widely accepted methods in practical economics. There are also other forms of absolute rewards linked to employees' performance, for example dividing employees into groups and rewarding each whole group based on its overall performance. One drawback of this method is that some people will free-ride while others work hard, and still be rewarded together with the rest of the group. It is better to set the reward mechanism up as a competition, in which better performance obtains higher rewards. === Information elicitation === A particular kind of principal-agent problem arises when the agent can compute the value of an item that belongs to the principal (e.g. an assessor can compute the value of the principal's car) and the principal wants to incentivize the agent to compute and report the true value. == See also == Agency cost Allocative efficiency Efficient contract theory Clawback Complete contract Contract Contract awarding Default rule First-order approach Incomplete contracts Mechanism design New institutional economics Perverse incentive == References == == External links == Bolton, Patrick and Dewatripont, Mathias, 2005. Contract Theory. MIT Press. Description and preview. Laffont, Jean-Jacques, and David Martimort, 2002. The Theory of Incentives: The Principal-Agent Model. Description, "Introduction," Archived 2016-06-12 at the Wayback Machine, & chapter-preview links. (Princeton University Press, 2002) Martimort, David, 2008. "contract theory," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract. Salanié, Bernard, 1997. The Economics of Contracts: A Primer. MIT Press, Description (2nd ed., 2005) and chapter-preview links.
Wikipedia/Contract_theory
In two-or-more-player sequential games, a ply is one turn taken by one of the players. The word is used to clarify what is meant when one might otherwise say "turn". The word "turn" can be a problem since it means different things in different traditions. For example, in standard chess terminology, one move consists of a turn by each player; therefore a ply in chess is a half-move. Thus, after 20 moves in a chess game, 40 plies have been completed: 20 by white and 20 by black. In the game of Go, by contrast, a ply is the normal unit of counting moves; so for example to say that a game is 250 moves long is to imply 250 plies. In poker with n players, the word "street" is used for a full betting round consisting of n plies; each dealt card may sometimes also be called a "street". For instance, in heads-up Texas hold'em, a street consists of 2 plies, with possible plays being check/raise/call/fold: the first by the player at the big blind, and the second by the dealer, who posts the small blind; and there are 4 streets: preflop, flop, turn, river (the latter 3 corresponding to community cards). The terms "half-street" and "half-street game" are sometimes used to describe, respectively, a single bet in a heads-up game, and a simplified heads-up poker game where only a single player bets. The word "ply" used as a synonym for "layer" goes back to the 15th century. Arthur Samuel first used the term in its game-theoretic sense in his seminal paper on machine learning in checkers in 1959, but with a slightly different meaning: the "ply", in Samuel's terminology, is actually the depth of analysis ("Certain expressions were introduced which we will find useful. These are: Ply, defined as the number of moves ahead, where a ply of two consists of one proposed move by the machine and one anticipated reply by the opponent"). In computing, the concept of a ply is important because one ply corresponds to one level of the game tree. The Deep Blue chess computer, which defeated Kasparov in 1997, would typically search to a depth of between six and sixteen plies, and to a maximum of forty plies in some situations. == See also == Minimax algorithm == References == == Further reading == Levy, David; Newborn, Monty (1991), How Computers Play Chess, Computer Science Press, ISBN 978-0-7167-8121-9 == External links == The dictionary definition of ply at Wiktionary
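The correspondence between plies and levels of the game tree can be made concrete with a toy search. The following minimal Python sketch (illustrative only; the game is a simple Nim variant, not chess) implements minimax with its depth parameter counted in plies, so each recursive call descends exactly one level of the game tree.

# Minimax with depth measured in plies: each recursive call is one ply
# (one player's turn), so searching to "four plies" means two full moves
# in the chess sense.  The game: players alternately remove 1 or 2
# stones; whoever takes the last stone wins.

def minimax(stones, depth_plies, maximizing):
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else +1
    if depth_plies == 0:
        return 0                      # search horizon reached: unknown
    moves = [m for m in (1, 2) if m <= stones]
    scores = [minimax(stones - m, depth_plies - 1, not maximizing)
              for m in moves]
    return max(scores) if maximizing else min(scores)

# Searching to a depth of 6 plies (three full "moves") is enough here to
# see that the side to move loses when the pile size is a multiple of 3.
for pile in range(1, 7):
    print(pile, minimax(pile, depth_plies=6, maximizing=True))
# Piles 3 and 6 score -1 (a loss for the mover); the others score +1.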
Wikipedia/Ply_(game_theory)